# Finally, Robotic Telekinesis is Here! 🤖

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=fVrcBY0lOWw
- **Date:** 22.07.2022
- **Duration:** 6:39
- **Views:** 65,674

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 
❤️ Their mentioned post is available here (thank you Soumik!): http://wandb.me/robotic-telekinesis

📝 The paper "Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube" is available here:
https://robotic-telekinesis.github.io/

❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: 
- https://www.patreon.com/TwoMinutePapers
- https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Ivo Galic, Jace O'Brien, Jack Lukic, John Le, Jonas, Jonathan, Kenneth Davis, Klaus Busse, Kyle Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Michael Tedder, Nevin Spoljaric, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=fVrcBY0lOWw) <Untitled Chapter 1>

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to perform something that seems like a true miracle. Oh yes, you are seeing it

### [0:07](https://www.youtube.com/watch?v=fVrcBY0lOWw&t=7s) Pickup Dinosaur Doll

correctly. This is robotic telekinesis. Moving objects from afar. So, what is going on here?

### [0:17](https://www.youtube.com/watch?v=fVrcBY0lOWw&t=17s) Pickup Dice Toy

Well, we take a human operator, who performs these gestures, which are then transferred to a robot arm, and then, the magic happens. This is unbelievable! So now, hold on to your papers, and let's see how well it can pull off all this!

Level 1 - Oh my, it comes out guns blazing! Look at that! It is not a brute, not at all! With delicate movements, it can pick up these plush toys, or even rotate a box. That is a fantastic start.

### [0:54](https://www.youtube.com/watch?v=fVrcBY0lOWw&t=54s) Box Rotation

But, it can do even better!

Level 2 - Let's try to pick up those scissors. That is going to require some dexterous hand

### [1:02](https://www.youtube.com/watch?v=fVrcBY0lOWw&t=62s) Scissor Pickup

movements from the operator and… wow! Can you believe that? By the time I am saying this, it is already done. And, it can also stack these cups, which is a difficult matter as its

### [1:19](https://www.youtube.com/watch?v=fVrcBY0lOWw&t=79s) Cup Stack 2

own finger might get in the way. This was a little more of a close call, but it still managed. Bravo!

So, with all these amazing tasks, what could possibly be level 3? Well, check this out! Yes, we are going to attempt to open this drawer. That is quite a challenge. Now note that it is slightly ajar to make sure the task is not too hard for its fingers, and, let's see! Oh yeah! Good job, little robot!

But what good is opening a drawer if we aren't doing anything with it, so, here comes +1, the final boss level - open the drawer, and pick up the cup.

### [2:07](https://www.youtube.com/watch?v=fVrcBY0lOWw&t=127s) Open Drawer and Pickup Cup

There is no way that this is possible through telekinesis. And… oh my goodness. I love it.

And note that this human has done extremely well with the hand motions, but is this person a robot whisperer, or can anyone pull this off? That is what I would like to know. And, wow! This works with other operators too, which is good news, because this means that it is relatively easy and intuitive to use. So much so, and get this, that these people are completely untrained operators. So cool!

So, now is the part where we expect the bad news to come. So, where is the catch? Maybe we need some sinfully expensive camera gear to look at our hand to make all this happen, right? Well, if you have been holding on to your papers so far, now, squeeze those papers, because we don't need to buy anything crazy at all! What we need is just one uncalibrated color camera. Today, this is available in almost every single household. How cool is that!

So, if it can do that, all we need is a bunch of training footage, right? But wait a second… are you thinking what I am thinking? If we need just one uncalibrated color camera, can it be that? Yes! That is exactly right. We don't need to create any training data at all, we can just use YouTube videos that already exist out there in the wild!

In a previous paper, scientists at DeepMind harnessed the power of YouTube by having their AI watch humans play games, and then, they would ask the AI to solve hard exploration games, and it just ripped through these levels in Montezuma's Revenge and other games too. And here comes the best part - what was even more surprising there is that it didn't just perform an imitation of the teacher, no-no. It even outperformed its human teacher! Wow.

I wonder if we could somehow do a variant of that in a future paper? Imagine what good we could do with that. Virtual surgeries? Any surgeon could perform a life-saving operation on anyone, from anywhere else in the world? Wow. What a time to be alive! Now, for that, the success rate needs to be much closer to a hundred percent here, but still, this is incredible progress in AI research.

And of course, you are an experienced Fellow Scholar, so you don't forget to apply The First Law Of Papers here, which says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. And by then, I bet this will not only be significantly more accurate, but, I would dare to say that even full-body retargeting will be possible. Yes, we could move around, and have a real robot replicate our movements. Just let the computer graphics people in with their motion capture knowledge, and this might really become a real thing soon.

So, does this get your mind going? What would you use this for? Let me know in the comments below!

Thanks for watching and for your generous support, and I'll see you next time!
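The pipeline narrated here - estimate a hand pose from a single uncalibrated color camera, then retarget it onto a robot hand - can be sketched with a toy example. Everything below is an illustrative assumption: the finger names, the scalar "curl" representation, and the joint limits are hypothetical, and a simple linear map stands in for the paper's actual learned pose estimation and retargeting.

```python
# Toy retargeting sketch: map normalized human finger "curl" values
# (0.0 = fully open, 1.0 = fully closed), as they might be derived from
# hand keypoints in a single RGB frame, onto joint angles of a
# hypothetical robot hand. Names and limits are illustrative only.

ROBOT_JOINT_LIMITS = {          # radians, per finger: (open, closed)
    "thumb":  (0.0, 1.3),
    "index":  (0.0, 1.6),
    "middle": (0.0, 1.6),
    "ring":   (0.0, 1.6),
    "pinky":  (0.0, 1.5),
}

def retarget(curls: dict) -> dict:
    """Linearly map clamped human finger curls onto robot joint angles."""
    angles = {}
    for finger, (lo, hi) in ROBOT_JOINT_LIMITS.items():
        c = min(1.0, max(0.0, curls.get(finger, 0.0)))  # clamp to [0, 1]
        angles[finger] = lo + c * (hi - lo)
    return angles

# A half-closed index finger maps to half of its joint range:
print(retarget({"index": 0.5})["index"])  # 0.8
```

In the real system, a learned model infers the operator's hand pose from the webcam image and an optimization step finds robot joint angles that best match it; this sketch only conveys the shape of that human-to-robot mapping.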

---
*Source: https://ekstraktznaniy.ru/video/13509*