# Google’s New Tech: This Isn’t a Photo!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=q2b_8XpGte0
- **Date:** 12.06.2025
- **Duration:** 6:57
- **Views:** 77,537
- **Source:** https://ekstraktznaniy.ru/video/12318

## Description

❤️ Check out the Fully Connected conference from Weights & Biases on June 17-18th in SF:
https://wandb.me/fc2025
Use the code FCSF2WP to get a ticket for free!

📝 The paper "Practical Inverse Rendering of Textured and Translucent Appearance" is available here:
https://weiphil.github.io/portfolio/practical_reconstruction

📝 Separable Subsurface Scattering:
https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/

Free rendering course!
https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Benji Rabhan, B Shang, Christian Ahlin, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Michael Tedder, Owen Skarpness, Richard Sundvall, Steef, Sven Pfiffner, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

## Transcript

### Segment 1 (00:00 - 05:00)

This snail does not exist. I mean, this is not a real photo, but it kind of is. Okay, so where do I even start? This is Google's magical new research tech that is able to look at camera views with different lighting conditions and recreate the whole scene digitally, as if it were a computer game. Well, count me out then, because that is something we have seen a hundred times before. Many previous techniques can do that. However, wait a second. This can do something that none of the previous ones can do. Wow, look at that. And here comes the key: it can also recreate a digital version, a video game version of the scene, with high-frequency details. Just look at the shell in all its glorious detail. That is incredible.

But it gets better. Oh my, that is my favorite, and I hope it will soon be yours too: subsurface scattering. Translucent stuff, where light enters the object and scatters around in there, creating this beautiful look. Your skin does that too, by the way, as you will see in a later simulation here.

Now wait a minute. We did some research in this area before. These are translucent objects rendered with our technique called separable subsurface scattering. I've heard people call it a textbook industry-standard technique for rendering beautiful materials like human skin, milk, marble and more in real time. However, to accomplish this, we needed to take hundreds of pages of measurements of real objects.

So, are you saying that this one just looks at some itty bitty footage and it reconstructs every material, all the translucency, and in high quality? Yes, that is exactly what this paper is saying. Absolutely incredible. And before we get stunned by how much more it can do, I mean, just look at this. This is coming in a moment. And now, another fun experiment. Amazingly, it starts with blank geometry, then learns the object's physics directly from the photos.
Instead of simply copying the colors of a snail's shell, it reverse engineers its actual material properties: its texture, its glossiness, and even how light scatters beneath its translucent skin. That is incredible. This is called inverse rendering. Previous techniques cannot do this with this kind of quality. Look, they either give us a splotchy result or there is simply too much noise in the output. Okay, but how close is this to the real result? Now hold on to your papers, fellow scholars, and oh holy mother of papers. Are you seeing what I am seeing? It is a nearly pixel-perfect, real version of our input, but with real materials. I did not think this would ever be possible. Absolutely stunning.

And here comes the twist: it can also do it with human faces, creating a virtual avatar of us. Now, while it progressively grows out of nothing, we ask, okay, but what is all this good for? Well, after reconstructing the face, you can change the lighting, and the incredible details appear. Just look at that. And with this, we can also reimagine ourselves in completely new environments and talk to each other on the internet as if we were there, or put ourselves in a cool video game as a hero. What a time to be alive.

Now, not even this technique is perfect. Here you see some artifacts around the eyes. We also have to have some information about the scene; for instance, the geometry and lighting have to be known. I say that is a small price to pay for something of this quality. And also, it is so far beyond previous techniques, it is not even funny. But that is not even the biggest surprise. I'll give you that in a moment.

So, what does this do differently? How is all this possible? Dear fellow scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Well, the old way, called diffusion, was a shortcut. Imagine light hitting skin like a pebble in milky water. It just assumes a blurry circle will come out, which is a fast shortcut, but not reality.

Now, instead, this new technique uses path tracing. Now, that is the real deal. It simulates the actual journey of individual light rays. It's like a virtual game of pinball.
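The pinball analogy can be sketched as a minimal Monte Carlo random walk inside a translucent slab. This is a toy illustration, not the paper's actual renderer: the function name, the isotropic scattering model, and all parameter values here are assumptions for clarity.

```python
import math
import random

def random_walk_sss(sigma_t=10.0, albedo=0.9, thickness=0.5,
                    n_paths=100_000, seed=42):
    """Estimate how much light escapes a translucent slab by tracing
    random walks of individual light rays (isotropic scattering).

    sigma_t:   extinction coefficient (how quickly light interacts)
    albedo:    probability a scattering event is not absorption
    thickness: slab thickness, in the same units as 1/sigma_t
    """
    rng = random.Random(seed)
    exited = 0
    for _ in range(n_paths):
        depth = 0.0  # position along the slab normal
        dz = 1.0     # heading straight into the material
        while True:
            # Sample the free-flight distance from an exponential.
            t = -math.log(1.0 - rng.random()) / sigma_t
            depth += dz * t
            if depth < 0.0 or depth > thickness:
                exited += 1  # the ray found its way back out
                break
            if rng.random() > albedo:
                break  # absorbed inside the material
            # Scatter: pick a new direction cosine uniformly.
            dz = 2.0 * rng.random() - 1.0
    return exited / n_paths
```

Averaging millions of such walks is what builds up the final image; a brighter (higher-albedo) material lets more walks escape, which is exactly the translucent glow the diffusion shortcut only approximates.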

### Segment 2 (05:00 - 06:00)

Like a ball, each ray of light penetrates the skin, goes on a journey, scattering inside the material until it finds its way out. The technique does this for millions of light rays to build up the final image. Why is it better? Well, because it does not guess. It is a direct simulation of reality. So, it is super accurate. Loving it.

This is not an ad: if you want to learn how this is done, I have a full master-level course for you in the video description. Yes, it's free of charge. I won't accept anything in return. And no, it doesn't make sense to just give it away for free, but I do it anyway because I love it and it's the right thing to do.

So, building on this knowledge, here comes the huge surprise. It uses automatic differentiation and gradient descent. These are important parts of a modern AI algorithm toolbox. However, get this: no neural networks were used here, but tons of human ingenuity instead. Bravo. This is exactly what we have here in this corner of the internet. If you enjoy it too, subscribe and hit the bell for more.

And check out our sponsor, because it is about something that I will run just this once on June 17 and 18th in San Francisco. The Weights and Biases Fully Connected conference is taking place again, with people from NVIDIA, Windsurf, Adobe, AWS and more. Fantastic for networking and learning for AI developers, researchers or business leaders. Yes, you, fellow scholars. And get this, you can get tickets for 100% off. Yes, go to wandb.me/fc2025 or click the link in the description and use this code to get yours for free.
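The "automatic differentiation and gradient descent, no neural networks" recipe mentioned above can be sketched in miniature. Everything here is a hypothetical illustration, not the paper's code: a one-pixel Lambertian "renderer" plus a tiny forward-mode autodiff class, used to recover a material parameter (albedo) by descending the gradient of a photo-matching loss.

```python
class Dual:
    """Minimal forward-mode automatic differentiation: each value
    carries its derivative with respect to one chosen parameter."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: d(uv) = u'v + uv'
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)

def render(albedo, light=2.0):
    # Toy differentiable renderer: one Lambertian pixel.
    return albedo * light

def recover_albedo(target_pixel, steps=200, lr=0.05):
    """Inverse rendering by gradient descent, no neural network:
    adjust the material until the rendered pixel matches the photo."""
    albedo = 0.1  # initial guess
    for _ in range(steps):
        a = Dual(albedo, 1.0)  # seed derivative: d(albedo)/d(albedo) = 1
        diff = render(a) - target_pixel
        loss = diff * diff
        albedo -= lr * loss.dot  # follow the gradient downhill
    return albedo
```

With `target_pixel = 1.2` and `light = 2.0`, the descent converges to an albedo of 0.6, since 0.6 × 2.0 reproduces the observed pixel; the real technique does the same thing with a full path tracer and millions of material, texture, and geometry parameters.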
