# NVIDIA’s Insane AI Found The Math Of Reality

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=WNsSzX0L4Es
- **Date:** 15.02.2026
- **Duration:** 9:09
- **Views:** 197,324

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambda.ai/papers

📝 The paper is available here: https://research.nvidia.com/labs/sil/projects/ppisp/

Our Patreon if you wish to support us: https://www.patreon.com/TwoMinutePapers

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Adam Bridges, Benji Rabhan, B Shang, Cameron Navor, Christian Ahlin, Eric T, Fred R, Gordon Child, Juan Benet, Michael Tedder, Owen Skarpness, Richard Sundvall, Ryan Stankye, Steef, Taras Bobrovytsky, Tazaur Sagenclaw, Tybie Fitzhugh, Ueli Gallizzi
 
My research: https://cg.tuwien.ac.at/~zsolnai/

#nvidia

## Contents

### [0:00](https://www.youtube.com/watch?v=WNsSzX0L4Es) Segment 1 (00:00 - 05:00)

Here’s a bunch of photos. It is very choppy, not great to look at. And now, hold on to your papers, Fellow Scholars, and check this out. Wow! What happened? Did we just take a video, this one, and take out a few images to make it look choppy?

Nope. Exactly the other way around! This is an AI-based technique that looks at a bunch of photos and learns the world from these photos. So much so that it is able to synthesize what is in between these photos! Magic! Absolutely incredible. Scientists call it NeRF.

These are amazing for training self-driving cars in a virtual world, creating movies, video games, and more. These would be amazing. If it worked. But it doesn’t.

You see, NeRFs are not new. Previous techniques were able to do this kind of thing. But look… this was the quality that was possible before. Do you see those floaters? They are kind of ruining the whole scene. Why did they appear?

Well, imagine trying to buy a house. Well, let’s be honest here, based on how the economy looks today, that’s probably the closest any of us is getting to buying a house. Alright, so you go and you check the house on Monday. It’s a blue house. Then you go check again on Tuesday, and it is fiery red. What is going on? Then, on Wednesday, dark and shadowy. And understandably, you get quite confused. What happened? Is this some sort of magic house? Is this why a tiny shoebox costs more than a million dollars in California? Károly, come on. Okay, okay. So then, you realize, it’s not magic.

It’s just a normal house, but each day you showed up with a different pair of sunglasses. Now that is the exact problem we have in 3D reconstruction today. We take thousands of photos of a scene, but each photo is a bit different. We get them from a different time of day, from different angles, but it gets even worse. Cameras choose a bunch of parameters like exposure automatically, based on how much light they see.
This can change a lot from one frame to the next. That leads to a disaster. Why? Because the reconstruction algorithms think that the objects are suddenly changing color. They really think that this house is blue on Monday and red on Tuesday. And yes, this is what creates the floater problem. We get ghostly 3D models because the AI tries to paint these lighting errors onto the 3D object itself. The result is this blurry nightmare.

Now enter NVIDIA’s new technique, called PPISP. This is a master detective who says: I am not going to look at the house. I am going to meet the buyers and look at their sunglasses instead. Genius! It actually understands what the house looks like. And the result: yup, the ghostly floaters are finally gone!

Okay, so how do they do that? Well, the master detective shows up and looks at the first photo. He finds that the exposure and the white balance values are weird. Now he says: okay, this buyer is wearing blue-colored glasses, and they are standing in a dark spot. And here comes the magic part. He now surgically removes the blue tint and the darkness to reveal the true color of the wall. Oh yes!

Then, when a new video is created, you are actually seeing reality. And then, you can choose what colored sunglasses you want to use, if any.

You can see here how it is predicting different exposure and color correction values for each frame. This is super tough, because you have to do it in a way that gives you a convincing video where the colors don’t start flashing like crazy from frame to frame.

The mathematical tool that the master detective is using is called a color correction matrix. It sounds amazing, but all this means is a 3×3 grid that is the prescription for your sunglasses. It tells you how the colors were changed by the sunglasses. The camera, that is. By solving for this matrix, the technique can revert colors back to reality. Absolutely amazing.
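The color correction matrix idea can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not the paper’s actual estimation: once you know the 3×3 “prescription” of the sunglasses, you simply invert it to get the true colors back.

```python
import numpy as np

# A hypothetical 3x3 color correction matrix: a mild blue tint.
# The values are illustrative, not from the paper.
ccm = np.array([
    [0.8, 0.0, 0.1],
    [0.0, 0.8, 0.1],
    [0.1, 0.0, 1.1],
])

true_color = np.array([0.7, 0.5, 0.3])   # the wall's real RGB color
observed = ccm @ true_color              # what the "sunglasses" record

# Solving for the matrix lets us invert it and revert to reality:
recovered = np.linalg.inv(ccm) @ observed
print(recovered)  # matches true_color up to floating-point error
```

The hard part in the real method is not this inversion, it is estimating a matrix per frame that stays consistent across thousands of frames, so the video doesn’t flicker.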
Would you like to see the master at work? Look. This is going to be incredible. Watch how it peels the layers off the image, one by one. First, it solves the exposure offset - basically figuring out how bright the scene was. Then,

### [5:00](https://www.youtube.com/watch?v=WNsSzX0L4Es&t=300s) Segment 2 (05:00 - 09:00)

it figures out the white balance, removing those colored sunglasses.

But here is the most impressive part: look at the corners. It learns the vignetting effect too. Real camera lenses are imperfect! They make the image darker near the edges. The AI learned this behavior of the physical lens just by looking at the photos! It is like it reverse-engineered the camera that took the picture. That is insane!

And finally, it solves the camera response curve. Digital sensors distort light in a weird, non-linear way. The AI figured out that distortion curve and flattened it out. And now, by solving these four specific puzzles separately, it doesn't just paint a pretty picture - it mathematically reconstructs the reality that was hiding behind the camera's messy lens.

Now here is something wild that I realized. The controller they built, the thing that fixes the exposure for new views, works almost exactly like the auto-exposure system in your smartphone camera. Yes. They essentially re-invented the digital camera's brain inside a neural network! That is kind of genius. But it is still not perfect; I’ll tell you why in a moment.

But there is more to be learned from this paper. For instance, remember that the AI separates the object's true color from the camera's biased image. This is great life advice! Separate facts from your feelings. Don't confuse a bad mood with a bad life.

Then, the AI learns the flaws of the camera to correct the final image. You can do that too! Try to find your own biases and try to correct them. Acknowledging your flaws is the only way to see the world clearly. So cool!

This work is coming from a team of scientists at NVIDIA who are known to do great computational photography work. And they took this amazing piece of work and gave it to all of us for free. Thank you so much! This is a great gift to humanity. What a time to be alive!
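The four-layer peeling described above (exposure offset, white balance, vignetting, response curve) can be sketched as a toy inverse camera model. Everything here is an assumption for illustration: the real method learns these parameters per frame from the photos, while this snippet just inverts a hand-made simplified model - a gamma curve standing in for the camera response, and a radial polynomial standing in for vignetting.

```python
import numpy as np

def undo_camera(img, gain, ccm, vignette_k, gamma):
    """Peel a simplified camera model off an image, layer by layer.

    All parameters are illustrative stand-ins: `gamma` models the
    camera response curve, `vignette_k` the radial corner darkening,
    `ccm` the 3x3 white-balance/tint matrix, `gain` the exposure.
    """
    h, w, _ = img.shape
    # 1. Undo the camera response curve (modeled as a simple gamma).
    linear = np.clip(img, 0.0, 1.0) ** gamma
    # 2. Undo vignetting: divide out a radial falloff toward corners.
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) / (w / 2)) ** 2 + ((yy - h / 2) / (h / 2)) ** 2
    falloff = 1.0 / (1.0 + vignette_k * r2)
    devignetted = linear / falloff[..., None]
    # 3. Undo the color tint with the inverse 3x3 matrix.
    untinted = devignetted @ np.linalg.inv(ccm).T
    # 4. Undo the exposure gain.
    return untinted / gain
```

Pushing a scene through the forward model and then through `undo_camera` recovers the original pixels, which is the whole point: once the four distortions are solved for, they can be peeled off in reverse order.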
Even the  master detective has its limits. So what are   the limits? Dear Fellow Scholars, this is Two  Minute Papers with Dr. Károly Zsolnai-Fehér. Wow,   that was a long cold open. Now, the paper mentions  that the method ignores spatially-adaptive   effects. Okay, what does that mean? Well,  our master detective assumes that the camera   follows strict global rules. Uh-oh. But modern  smartphone cameras are sneaky! They use techniques   to brighten just a face or darken just a bright  window. This is called local tone mapping. These   tricks break the global rules. When the detective  sees these, he gets confused because they don't   fit his physical equations. He thinks the whole  room should be bright, not just the window! So a really advanced paper explained  in simple words. Hope you enjoyed it,   if you did, consider subscribing and hitting  the bell icon. It would be great because   there is doom and gloom everywhere you  look, and so few people are talking about   these amazing works of human brilliance.   More people need to hear about this! So,   save the snails, save the beavers,  subscribe to Two Minute Papers!

---
*Source: https://ekstraktznaniy.ru/video/11384*