# Disney's AI Learns To Render Clouds | Two Minute Papers #204

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=7wt-9fjPDjQ
- **Date:** 09.11.2017
- **Duration:** 4:07
- **Views:** 62,373

## Description

The paper "Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks" is available here:
http://drz.disneyresearch.com/~jnovak/publications/DeepScattering/
http://simon-kallweit.me/deepscattering/
https://tom94.net/data/publications/kallweit17deep/interactive-viewer/

Our Patreon page with the details:
https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Andrew Melnychuk, Brian Gilman, Christoph Jadanowski, Dave Rushton-Smith, Dennis Abts, Eric Haddad, Esa Turkulainen, Evan Breznyik, Kaben Gabriel Nanlohy, Malek Cellier, Michael Albrecht, Michael Jensen, Michael Orenstein, Steef, Steve Messina, Sunil Kim, Torsten Reil.
https://www.patreon.com/TwoMinutePapers

Two Minute Papers Merch:
US: http://twominutepapers.com/
EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/

Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)
Artist: http://audionautix.com/ 

Thumbnail background image credit: https://pixabay.com/photo-2920167/
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook: https://www.facebook.com/TwoMinutePapers/
Twitter: https://twitter.com/karoly_zsolnai
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=7wt-9fjPDjQ) <Untitled Chapter 1>

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a fully in-house Disney paper on how to teach a neural network to capture the appearance

### [0:06](https://www.youtube.com/watch?v=7wt-9fjPDjQ&t=6s) Sunrise / Sunset

of clouds. This topic is one of my absolute favorites because it is at the intersection of the two

### [0:15](https://www.youtube.com/watch?v=7wt-9fjPDjQ&t=15s) Camera Orbiting Cloud

topics I love most - computer graphics and machine learning. Hell yeah! Generally, we use light simulation programs to render these clouds, and the difficult part of this is that we have to perform something that is called volumetric path tracing.

### [0:28](https://www.youtube.com/watch?v=7wt-9fjPDjQ&t=28s) Light Orbiting Cloud

This is a technique where we have to simulate rays of light that do not necessarily bounce off of the surface of objects, but may penetrate their surfaces and undergo many scattering events. Understandably, in the case of clouds, capturing volumetric scattering properly is a key element in modeling their physical appearance. However, we have to simulate millions and millions of light paths with potentially hundreds
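The random walk described above can be sketched in a few lines. This is a minimal, hypothetical 1D toy model (not the paper's renderer): free-flight distances through a homogeneous medium are sampled from the Beer-Lambert law, and each step inside the cloud counts as one scattering event.

```python
import math
import random

def sample_scattering_distance(sigma_t, rng=random):
    """Sample a free-flight distance in a homogeneous medium with
    extinction coefficient sigma_t (inverse-CDF of Beer-Lambert)."""
    return -math.log(1.0 - rng.random()) / sigma_t

def transmittance(sigma_t, distance):
    """Fraction of light surviving `distance` without being
    absorbed or scattered out of the ray."""
    return math.exp(-sigma_t * distance)

def trace_path(sigma_t, cloud_extent, max_bounces=100, rng=random):
    """Walk a single light path through a 1D toy medium, counting
    scattering events until the ray exits the cloud. A full
    volumetric path tracer would also sample a new direction from
    the phase function at every event."""
    position = 0.0
    bounces = 0
    while bounces < max_bounces:
        position += sample_scattering_distance(sigma_t, rng)
        if position > cloud_extent:
            return bounces  # ray left the cloud
        bounces += 1
    return bounces
```

Denser media (larger `sigma_t`) yield more scattering events per path, which is exactly why bright, thick clouds are so expensive to render.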

### [0:57](https://www.youtube.com/watch?v=7wt-9fjPDjQ&t=57s) Gradual Density Change

of scattering events, which is a computationally demanding task even in the age of rapidly improving hardware. As you can see here, the more we bump up the number of possible simulated scattering events, the closer we get to reality, but the longer it takes to render an image. In the case of the bright clouds here, rendering an image like this can take up to 30 hours.

In this work, a nice hybrid approach is proposed where a neural network learns the concept of in-scattered radiance and predicts it rapidly, so this part we don't have to compute ourselves. It is a hybrid because some parts of the renderer still use the traditional algorithms. The dataset used for training the neural network contains 75 different clouds, some of which are procedurally generated by a computer and some drawn by artists, to expose the learning algorithm to a large variety of cases. As a result, these images can be rendered in a matter of seconds to minutes. Normally, this would take many, many hours on a powerful computer.

Here's another result with traditional path tracing. And now the same with deep scattering. Yep, that's how long it takes.

The scattering parameters can also be interactively edited without us having to wait for hours to see whether the new settings are better than the previous ones. Dialing in the perfect result typically takes an extremely lengthy trial-and-error phase, which can now be done almost instantaneously. The technique also supports a variety of different scattering models.

As with all results, they have to be compared to the ground-truth renderings, and as you can see here, they seem mostly indistinguishable from reality. It is also temporally stable, so animation rendering can take place flicker-free, as is
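The hybrid idea, stripped to its essence: instead of simulating hundreds of scattering events per shading point, describe the cloud density around that point and let a small network map the description to an in-scattered radiance value. The sketch below is a hypothetical, untrained stand-in (the descriptor, layer sizes, and `RadianceMLP` name are all assumptions, loosely mimicking the paper's hierarchical density stencil), meant only to show the shape of the pipeline.

```python
import numpy as np

def density_descriptor(density_field, point, levels=3):
    """Hypothetical multi-scale descriptor: average density in
    progressively larger neighborhoods around the shading point."""
    x, y, z = point
    features = []
    for level in range(levels):
        r = 2 ** level
        block = density_field[max(0, x - r):x + r + 1,
                              max(0, y - r):y + r + 1,
                              max(0, z - r):z + r + 1]
        features.append(block.mean())
    return np.array(features)

class RadianceMLP:
    """Tiny fully connected net mapping a density descriptor to a
    scalar in-scattered radiance estimate. Weights are random here;
    in the paper the network is trained on path-traced clouds."""
    def __init__(self, in_dim, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def predict(self, features):
        h = np.maximum(0.0, features @ self.w1 + self.b1)  # ReLU
        return float(h @ self.w2 + self.b2)

# One network evaluation replaces a long Monte Carlo random walk,
# which is where the seconds-instead-of-hours speedup comes from.
field = np.random.default_rng(1).random((16, 16, 16))
net = RadianceMLP(in_dim=3)
radiance = net.predict(density_descriptor(field, (8, 8, 8)))
```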

### [2:54](https://www.youtube.com/watch?v=7wt-9fjPDjQ&t=174s) Temporally Varying Cloud

demonstrated here in the video. I think this work is also a great testament to how these incredible learning algorithms can accelerate progress in practically all fields of science. And given that this work was done by Disney, I am pretty sure we can expect tons of photorealistic clouds in their upcoming movies in the near future. There are tons more details discussed in the paper, which is remarkably well produced.

### [3:18](https://www.youtube.com/watch?v=7wt-9fjPDjQ&t=198s) Modeling Directional Light

Make sure to have a look; the link is in the video description. This is a proper, proper paper, you don't want to miss out on this one. And, if you enjoyed this episode and you feel that the series provides you value in the form of enjoyment or learning, please consider supporting us on Patreon. You can pick up cool perks there like deciding the order of the next few episodes, and you also help us make better videos in the future. Details are available in the description. Thanks for watching and for your generous support, and I'll see you next time!

---
*Source: https://ekstraktznaniy.ru/video/14560*