AI Mind Reading Experiment!


Two Minute Papers · 23.08.2023 · 103,010 views · 4,470 likes


Video description
❤️ Check out Weights & Biases and take their great course for free: https://wandb.me/papercourse

📝 The mind video paper "Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity" is available here: https://mind-video.com/

My latest paper on simulations that look almost like reality is available for free here: https://rdcu.be/cWPfD
Or this is the original Nature Physics link with clickable citations: https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

Table of contents (6 segments)

Intro

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I can't believe I am saying this, but today we are going to read human minds by using AI. Yes, that's right. That really happened.

So how do we read brains? Well here, we are mainly interested in a peaceful,

FMRI

non-invasive method, and that is going to be an fMRI. Functional magnetic resonance imaging: this is where you go into a tube, are asked to stay still, and your brain activity is read. We get a result that looks something like this.

So what is this good for? Well, these are cross-sectional views of the brain, slices if you will, and not only that, but blood flow within the brain can also be highlighted. This gives us an idea of which parts of the brain are activated and when. So for instance, we can show someone an image of a familiar face and an unfamiliar one, and see if there is a difference in the brain activity, and what that difference is.

That sounds great. However, not so fast. With these peaceful, non-invasive techniques,

Mind Reading

the readings are not as precise, and there is so much more noise. And now, hold on to your papers, because this paper did something that I did not think humanity would ever be able to do.

So here is the mind reading experiment they propose. Chuck some folks into an fMRI machine, show them images, and then make a brain reading. Now this brain reading would have to be converted into an image, not just this noisy blood flow information, which sounds almost impossible. And it is almost impossible. A previous work from just 5 years ago did something like that.

Brain Reconstruction

These images went in, and this came out. This is a reconstruction of what scientists think the brain saw. In terms of shapes, there is perhaps some correlation between the two, perhaps they are related, but I feel like I am grasping at straws here. And then, four years later, two really promising works appeared that could reconstruct these images from the brain. An incredible scientific achievement; however, still very blurry images.

In this new work, at the risk of simplifying a little, the idea is to take these brain readings and plug them into the legendary text-to-image AI system, Stable Diffusion. So what do we get as a result? And now is the point where I fell off the chair when reading this paper. Here are the new results. Wow! Now we're talking! These are so much better. The readings show a strong correlation with the real images that were shown to the subjects. And wait, it gets better!

It also works for video! That means that we can show these subjects all kinds of videos

Evaluation

and after the brain reading reconstruction, we don't get back exactly what was shown, but the results are so much higher quality than before.

Now, evaluating these results is a tough matter. There are fundamental questions: for a perfect reconstruction, are these two supposed to be the same? Good question. Also, the fMRI machine and the reconstruction algorithm both have their limits. And the list goes on. A healthy dose of skepticism is a hallmark of the Wise Scholar.

However, for now, we can compare the reconstructed results to the ground truth through, one, visual inspection, or, two, mathematical ways of comparing images to each other. With both, this new technique has been shown to be so much better than the previous papers.

And as every good paper does, this one also raises important new questions. For instance, does it work for a diverse set of topics? It does. And what I really want to know: how similar would these images be for different people? What do you think? Well, I am delighted to tell you that there was an experiment for that too. Absolutely lovely. It almost feels like we are living in a science fiction world. My brain is also bubbling with ideas. I wonder what that would look like in video form?

So, is this work perfect? The answer is no. Not perfect. Not even close. There are still lots of limitations and many failure cases, and we should all exercise a little critical thinking for works like this. But once again, the help of AI research makes the almost impossible seem possible. What a time to be alive!
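As a hedged sketch of that second comparison route, here is one common mathematical image-similarity score, PSNR (peak signal-to-noise ratio); the "images" below are synthetic placeholders, not data from the paper, which uses its own evaluation metrics.

```python
import numpy as np

def psnr(ground_truth, reconstruction, max_val=1.0):
    """Higher PSNR (in dB) means the reconstruction is closer to the truth."""
    mse = np.mean((ground_truth - reconstruction) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
truth = rng.random((64, 64))                                        # shown image
close = np.clip(truth + 0.01 * rng.normal(size=truth.shape), 0, 1)  # good reconstruction
far = rng.random((64, 64))                                          # unrelated image

# A faithful reconstruction scores markedly higher than an unrelated one.
print(psnr(truth, close) > psnr(truth, far))  # True
```

Metrics like this complement visual inspection: they are objective and repeatable, but they can disagree with what a human judges to be a good reconstruction.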

Outro

Now, note that this video came way later than it could have. That is not great for views, but that's not what we are after. We wait until the paper is sufficiently reviewed, the evidence is thoroughly checked, and I only want to show it to you then. That is the only thing that matters.

Thanks for watching and for your generous support, and I'll see you next time!
