# This AI Creates Virtual Fingers! 🤝

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=_9Bli4zCzZY
- **Date:** 04.09.2021
- **Duration:** 5:50
- **Views:** 156,974

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

📝 The paper "ManipNet: Neural Manipulation Synthesis with a Hand-Object Spatial Representation" is available here:
https://github.com/cghezhang/ManipNet

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background image credit: https://pixabay.com/images/id-5859606/

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

#vr

## Contents

### [0:00](https://www.youtube.com/watch?v=_9Bli4zCzZY) Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how we can use our hands, but not our fingers, to manipulate objects in virtual worlds. The promise of virtual reality, or VR, is truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, astronauts could be exposed to virtual zero-gravity simulations, teams could work together through telepresence applications, you name it.

The dream is getting closer and closer, but something is still missing. For instance, a previous work uses a learning-based algorithm to teach a head-mounted camera to track the orientation of our hands at all times. One more paper down the line, a technique appeared that can deal with challenging hand-hand interactions, deformations, and lots of self-contact and self-occlusion. This was absolutely amazing, because these are not gloves. No-no. This is the algorithm's reconstruction of the hand. Absolutely amazing. However, it is slow, and manipulating other objects is still quite limited. So, what is missing? What is left to be done here? Let's have a look at today's paper and find out together.

This is its output…yes, manipulation that looks very natural. But what is so interesting here? The interesting part is that it has realistic finger movements. Well, that must mean it just reads the data from sensors on the fingers, right? Now, hold on to your papers, and we'll find out once we look at the input…oh my! Is this really true? No sensors on the fingers anywhere! What kind of black magic is this? And with that, we can now make the most important observation in the paper: it reads information from only the wrist and the object in the hand. Look, the sensors are on these gloves, but none are on the fingers. Once again: the sensors have no idea what we are doing with our fingers; the system only reads the movement of our wrist and the object, and all the finger movement is synthesized automatically. Whoa!

And with this, we can not only have a virtual version of our hand, but we can also manipulate virtual objects with very few sensor readings. The rest is up to the AI to synthesize. This means that we can have a drink with a friend online, or use a virtual hammer to, depending on our mood, fix or destroy virtual objects. This is very challenging because the finger movements have to follow the geometry of the object. Look, here the same hand is holding different objects, and the AI knows how to synthesize the appropriate finger movements for both of them. This is especially apparent when we change the scale of the object. You see, the small one requires small and precise finger movements to turn it around; these motions need to be completely re-synthesized for the bigger object. So cool.

And now comes the key question: does this only work on objects it has been trained on? No, not at all! For instance, the method has not seen this kind of teapot before, and still, it knows how to use its handle, and how to hold it from the bottom too, even though both of these parts look different. Be careful though, who knows, maybe virtual teapots can get hot too! What's more, it also handles the independent movement of the left and right hands. Now, how fast is all this? Can we have coffee together in virtual reality? Yes, absolutely! All this runs in close to real time!
There is a tiny bit of delay, but a result like this is already amazing, and this is typically the kind of thing that gets fixed one more paper down the line. However, not even this technique is perfect. It still might miss small features on an object. For instance, a very thin handle might confuse it. Or, if it gets an inaccurate reading of the hand pose and distances, artifacts like this can appear. But for now, having a virtual coffee together…yes please, sign me up!
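
To make the input/output relationship concrete, here is a minimal NumPy sketch of a ManipNet-style interface: finger joint angles are predicted from the wrist trajectory, the object trajectory, and distances from virtual sensor points on the hand to the object surface. The sensor count, trajectory window, joint count, and the toy untrained two-layer network are all illustrative assumptions for this sketch, not the authors' actual architecture.

```python
# Minimal sketch (not the authors' code) of a ManipNet-style interface:
# finger pose synthesis from wrist + object motion alone, with no finger
# sensors. Sizes and the toy network below are illustrative assumptions.
import numpy as np

N_SENSORS = 64   # assumed number of virtual distance sensors on the hand
N_JOINTS = 20    # assumed number of finger joints per hand
WINDOW = 10      # assumed number of trajectory frames used as context

def distance_sensor_features(sensor_points, object_points):
    """For each virtual sensor point on the hand, return the distance to the
    nearest point on the object surface, a simple stand-in for the paper's
    hand-object spatial representation."""
    # (N_SENSORS, 1, 3) - (1, M, 3) -> pairwise distances, min over object
    diffs = sensor_points[:, None, :] - object_points[None, :, :]
    return np.linalg.norm(diffs, axis=-1).min(axis=1)

def synthesize_fingers(wrist_traj, object_traj, sensor_feats, weights):
    """One synthesis step: concatenate the wrist and object trajectories with
    the spatial features and run a toy two-layer MLP. The real model is a
    trained neural network; this only illustrates the data flow."""
    x = np.concatenate([wrist_traj.ravel(), object_traj.ravel(), sensor_feats])
    h = np.maximum(0.0, weights["W1"] @ x + weights["b1"])  # ReLU hidden layer
    return weights["W2"] @ h + weights["b2"]                # joint angles

# Dummy example: random inputs and untrained weights, just to show the shapes.
rng = np.random.default_rng(0)
in_dim = WINDOW * 6 + WINDOW * 6 + N_SENSORS  # wrist 6-DoF + object 6-DoF + sensors
weights = {
    "W1": rng.normal(size=(128, in_dim)) * 0.01, "b1": np.zeros(128),
    "W2": rng.normal(size=(N_JOINTS, 128)) * 0.01, "b2": np.zeros(N_JOINTS),
}
sensors = rng.normal(size=(N_SENSORS, 3))        # sensor points on the hand
obj_surface = rng.normal(size=(500, 3))          # sampled object surface points
feats = distance_sensor_features(sensors, obj_surface)
angles = synthesize_fingers(rng.normal(size=(WINDOW, 6)),
                            rng.normal(size=(WINDOW, 6)), feats, weights)
print(angles.shape)  # (20,) -- one predicted angle per finger joint
```

In the real system, a trained network replaces these random weights, and the paper's hand-object spatial representation is richer than a single nearest-distance value per sensor; see the ManipNet repository linked above for the actual details.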

### [5:00](https://www.youtube.com/watch?v=_9Bli4zCzZY&t=300s) Segment 2 (05:00 - 05:50)

Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/13827*