# Wow, NVIDIA’s Rendering, But 10X Faster! (3D Gaussian Splatting)

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=TLK3TDDcJFU
- **Date:** 02.09.2023
- **Duration:** 6:53
- **Views:** 147,396

## Description

❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 

📝 The paper "3D Gaussian Splatting for Real-Time Radiance Field Rendering" is available here:
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

Unofficial implementation: https://jatentaki.github.io/portfolio/gaussian-splatting/

Community showcase links:
https://twitter.com/alexcarliera/status/1693755522636206494
https://www.linkedin.com/posts/divesh-naidoo-48809934_vfx-nerf-3d-activity-7099439156354781185-Dswa
https://www.linkedin.com/posts/huguesbruyere_gaussiansplatting-3d-ml-activity-7098993947112300544-qjCi
https://twitter.com/jonstephens85/status/1692281505526235373?t=v1hiWNMGBSUhcUDCkCihLg&s=19
https://twitter.com/8Infinite8/status/1694322101744738706
https://twitter.com/JonathonLuiten/status/1692346451668636100

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or, here is the original Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, Kenneth Davis, Klaus Busse, Kyle Davis, Lukas Biewald, Martin, Matthew Valle, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Richard Sundvall, Steef, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=TLK3TDDcJFU) Segment 1 (00:00 - 05:00)

Because so many of you Fellow Scholars have requested it, today we are going to look at one of the best papers of the year. An incredible work that I really hope will revolutionize the creation of virtual worlds for movies, video games, and more.

Just imagine grabbing your smartphone, taking a few images or videos of real-world places, and then creating a virtual version of them to play in, in real time. That would be amazing. Just imagine that we could drive on the streets of San Francisco and LA, go on a virtual hike to see what it would look like, and do a ton of other really cool things.

Techniques that involve rendering radiance fields, for instance NeRFs, have this as a goal, and a ton of them already exist. But with those, we have two huge problems.

One, they are not fast enough. For huge scenes in full HD or higher resolution, getting a real-time solution is tough. It typically goes slower than that.

And two, of course, thin structures. It's always the thin structures that are a problem. These are quite plentiful in real-world scenes: plant leaves, cables, fences, bicycle spokes, that kind of thing. All thin structures. Yes, all of those are problems.

Now, I see big words here, because this new technique promises rendering times more than 10 times faster than NVIDIA's excellent Instant NeRF technique, so of course, the quality has got to be lower, right?

Wait a second. Look here. Are you seeing what I am seeing? The thin structures indicate that this is not only more than 10 times faster in rendering time, but it also promises higher-quality results than the previous methods. Wow!

And upon further inspection… this is so much cleaner, so much better. And it is ridiculous how much better it is in quality than some of the other, quite capable previous methods from just a year ago!

And yes, that's right.
Now we can get a virtual copy of the real world with difficult thin structures, going in full HD and, yes, real time. Much, much faster than real time. My goodness, so good!

And here comes the punchline. Hold on to your papers, because we should likely not call this technique a NeRF variant, because it does not use neural networks. Yes, this is a good old handcrafted computer graphics technique. Wow.

So how does all this wizardry work? We take out the super capable neural networks, replace them with a little human ingenuity, and it gets better? How? Well, this has two key ideas.

Idea number one: we have a 3D world at hand which has to be represented on our 2D screen. The Gaussian part means that objects in this scene will be represented as a sum of many little lumps. It is a bit like throwing a small pebble into water: the pebble might be a small little point, but the waves it creates spread out around this point. Sounds good, right? Well, not so fast. Unfortunately, in 3D space, many of these waves overlap, and it is unclear how exactly they can be transferred onto our 2D screens. Splatting is a way of throwing these little waves from the 3D virtual world at our 2D screen and computing what they should really look like. As an additional advantage, these waves concentrate around solid objects, and thus, this technique does not have to waste time computing parts of the scene with lots of empty space. Loving it!

Idea number two: typical NeRF algorithms are interested in going through every single pixel on your screen, while this one is interested not in pixels, but in the primitives, the objects in the scene. Now, the concept is not new, it has been used in computer graphics for decades, but how it is applied to this problem is the cool and interesting part.

All in all, this is a simple algorithm, it is really fast, and we don't have to sacrifice quality. I mean, look at that!
I would like to send huge congratulations to the entire team, and to the shared first author Bernhard, who is my former colleague and a good friend.
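To make the splatting idea above a little more concrete, here is a minimal NumPy sketch of the core rendering step: project each 3D Gaussian's center onto the image plane, push its 3D covariance to 2D through the Jacobian of the projection, and alpha-composite the resulting 2D "splats" front to back. This is only an illustrative toy, not the paper's optimized tile-based CUDA rasterizer; the function name, the pinhole camera model, and all parameters here are my own assumptions.

```python
import numpy as np

def splat_gaussians(means, covs, colors, opacities, H=64, W=64, focal=50.0):
    """Toy renderer: splat 3D Gaussians onto a 2D image.

    means:     (N, 3) Gaussian centers in camera space (camera looks down +z)
    covs:      (N, 3, 3) 3D covariance matrices
    colors:    (N, 3) RGB color per Gaussian
    opacities: (N,) base opacity per Gaussian
    """
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))  # fraction of light still passing through

    # Sort by depth so nearer Gaussians are composited first (front-to-back).
    order = np.argsort(means[:, 2])

    ys, xs = np.mgrid[0:H, 0:W]  # per-pixel row / column coordinates
    for i in order:
        x, y, z = means[i]
        # Perspective projection of the Gaussian's center onto the image plane.
        u = focal * x / z + W / 2
        v = focal * y / z + H / 2
        # Jacobian of the projection: a local affine approximation that maps
        # the 3D covariance to a 2D covariance on the screen.
        J = np.array([[focal / z, 0.0, -focal * x / z**2],
                      [0.0, focal / z, -focal * y / z**2]])
        cov2d = J @ covs[i] @ J.T
        inv = np.linalg.inv(cov2d)
        # Evaluate the 2D Gaussian (the "splat") at every pixel at once.
        dx, dy = xs - u, ys - v
        power = -0.5 * (inv[0, 0] * dx**2
                        + 2.0 * inv[0, 1] * dx * dy
                        + inv[1, 1] * dy**2)
        alpha = opacities[i] * np.exp(power)
        # Alpha-composite, attenuating by the light already absorbed in front.
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha
    return image
```

Note how empty space costs nothing here: pixels far from every splat are simply never lit, which is the advantage mentioned above. The real method also sorts and rasterizes per screen tile and optimizes the Gaussians' parameters from photos, neither of which this sketch attempts.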

### [5:00](https://www.youtube.com/watch?v=TLK3TDDcJFU&t=300s) Segment 2 (05:00 - 06:00)

Now, not even this technique is perfect. For instance, rendering specular reflections is not perfect; these light transport effects look much more crisp in the real world. This first, unoptimized version also consumes quite a bit of memory. However, just imagine what we will be capable of just one more paper down the line. I am a computer graphics researcher by trade, I specialize in light transport, ray tracing if you will, and I am always so happy to see a good graphics paper rip. And this one, this one is something different. Also, it just came out a few days ago, and look how it has already captured the imagination of some Fellow Scholars out there. What a time to be alive!

If you enjoyed this episode and would like to see many more amazing papers, consider subscribing to the channel and hitting the bell icon.

Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/13048*