# This AI Senses Humans Through Walls 👀

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=kBFMsY5ZP0o
- **Date:** 12.10.2018
- **Duration:** 3:38
- **Views:** 245,641

## Description

Pick up cool perks on our Patreon page:
› https://www.patreon.com/TwoMinutePapers

Crypto and PayPal links are available below. Thank you very much for your generous support!
› PayPal: https://www.paypal.me/TwoMinutePapers
› Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
› Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A
› LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg

The paper "Through-Wall Human Pose Estimation Using Radio Signals" is available here:
http://rfpose.csail.mit.edu/

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga.
https://www.patreon.com/TwoMinutePapers

Thumbnail background image credit: https://pixabay.com/en/texture-wall-gray-wall-texture-1033755/
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=kBFMsY5ZP0o) Untitled Chapter 1

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Pose estimation is an interesting area of research where we typically have a few images or video footage of humans, and we try to automatically extract the pose a person is taking. In short, the input is one or more photos, and the output is typically a skeleton of the person. So what is this good for? A lot of things. For instance, we can use these skeletons to cheaply transfer the gestures of a human onto a virtual character, to detect falls for the elderly, to analyze the motion of athletes, and much more. This work showcases a neural network that measures how wifi radio signals bounce around in the room and reflect off of the human body, and from these murky waves, it estimates where we are. Not only that, but it is also accurate enough to tell us our pose. Since the wifi signal also travels in the dark, this pose estimation works really well in poor lighting conditions. That is a remarkable feat. But now, hold on to your papers, because that's nothing compared to what you are about to see. Have a look here. We know that wifi signals go through walls. So perhaps, this means that... that can't be true, right? It tracks the pose of this human as he enters the room, and now, as he disappears, look, the algorithm still knows where he is. That's right!

### [1:31](https://www.youtube.com/watch?v=kBFMsY5ZP0o&t=91s) Through-wall poses using only RF

This means that it can also detect our pose through walls! What kind of wizardry is that? Now, note that this technique doesn't look at the video feed we are now looking at.

### [1:41](https://www.youtube.com/watch?v=kBFMsY5ZP0o&t=101s) It estimates poses of multiple people

It is there for us for visual reference. It is also quite remarkable that the signal being sent out is a thousand times weaker than an actual wifi signal, and that it can also detect multiple humans. This is not much of a problem with color images, because we can clearly see everyone in an image, but the radio signals are more difficult to read when they reflect off of multiple bodies in the scene. The whole technique works by using a teacher-student network structure. The teacher is a standard pose estimation neural network that looks at a color image and predicts the pose of the humans therein. So far, so good, nothing new here. However, there is a student network that looks at the correct decisions of the teacher, but has the radio signal as an input instead. As a result, it learns what the different radio signal distributions mean and how they relate to human positions and poses. As the name says, the teacher shows the student neural network the correct results, and the student learns how to produce them from radio signals instead of images. If anyone said that they were working on this problem ten years ago, they would have likely

### [2:48](https://www.youtube.com/watch?v=kBFMsY5ZP0o&t=168s) It handles various occlusions

ended up in an asylum. Today, it's reality. What a time to be alive! Also, if you enjoyed this episode, please consider supporting the show at Patreon.com/TwoMinutePapers. You can pick up really cool perks like getting your name shown as a key supporter in the video description and more. Because of your support, we are able to create all of these videos smooth and creamy, in 4k resolution and 60 frames per second and with closed captions. And we are currently saving up for a new video editing rig to make better videos for you. We also support one-time payments through PayPal and the usual cryptocurrencies. More details about all of these are available in the video description. And as always, thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/14404*