Separable Subsurface Scattering | Two Minute Papers #66
4:38


Two Minute Papers · 15.05.2016 · 49,144 views · 1,172 likes


Video description
Separable Subsurface Scattering is a novel technique that adds real-time subsurface light transport calculations to computer games and other real-time applications.

The paper "Separable Subsurface Scattering" and its implementation are available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/
http://www.iryoku.com/separable-sss/

Recommended for you:
Ray Tracing / Subsurface Scattering @ Function 2015 - https://www.youtube.com/watch?v=qyDUvatu5M8
Separable Subsurface Scattering Unofficial Talk - https://www.youtube.com/watch?v=mU-5CsaPfsE

Separable Subsurface Scattering implementation in Blender (thank you Lubos Lenco!):
http://www.blendernation.com/2016/05/02/separable-subsurface-scattering-game-engine-cycles/
http://luboslenco.com/notes/ssss/

We would like to thank our generous supporters who make Two Minute Papers possible: Sunil Kim.
https://www.patreon.com/TwoMinutePapers

Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz

Image credits:
Leaves - https://flic.kr/p/fGie2L
Snail - https://flic.kr/p/8wXFiC
Skin - Wikipedia

Extended credits (copied from the Acknowledgements section of the mentioned paper): The authors want to thank the reviewers for their insightful comments; Infinity Realities, in particular Lee Perry-Smith, for his head model and for the Lauren model; the Institute of Creative Technologies at USC, in particular Paul Debevec, for the Ari and Bernardo models; and Bernardo Antoniazzi for letting us use his likeness. Furthermore, we want to thank the Stanford University Computer Graphics Laboratory for the Dragon model, and the following contributors from Blend Swap under CC-BY licence: longrender for the Dish model, metalix for the Green apple model, betomo16 for the Plant model, and PickleJones for the Grapes model. We also thank Felícia Fehér for editing the figures. This research has been partially funded by the European Commission, 7th Framework Programme, through projects GOLEM and VERVE, the Spanish Ministry of Economy and Competitiveness through projects LIGHTSLICE and TAMA, and the Austrian Science Fund (FWF) through project no. P23700-N23.

The thumbnail background image was taken from the corresponding paper linked above.
Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook → https://www.facebook.com/TwoMinutePapers/
Twitter → https://twitter.com/karoly_zsolnai
Web → https://cg.tuwien.ac.at/~zsolnai/

Table of contents (1 segment)

Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Subsurface scattering means that a portion of incoming light penetrates the surface of a material. Our skin is a little-known but nonetheless great example of that, and so are plant leaves, marble, milk, or snails, to have a wackier example. Subsurface scattering looks unbelievably beautiful, but at the same time, it is very expensive to compute, because we have to simulate up to thousands and thousands of light scattering events for every ray of light. And we have to do this for millions of rays. It really takes forever. The lack of subsurface scattering is the reason why we've seen so many lifeless, rubber-looking human characters in video games and animated movies for decades now. This technique is a collaboration between the Activision Blizzard game development company, the University of Zaragoza in Spain, and the Technical University of Vienna in Austria. And it can simulate this kind of subsurface light transport in half a millisecond per image. Let's stop for a minute and think about this. Earlier, we talked about subsurface scattering techniques that were really awesome, but still took, let's say, at least four hours on a scene before they became useful. This one is half a millisecond per image. Almost nothing. In one second, it can do this calculation two thousand times. Now, this has to be a completely different approach than just simulating many millions of rays of light, right? We can't take a four-hour-long algorithm, do some magic, and get something like this. The first key thought is that we can set up a cool experiment where we play around with light sources and big blocks of translucent materials, and record how light bounces off of these materials. Cool thing number one: we only need to do it once per material. Number two: the results can be stored in an image. This is what we call a diffusion profile, and this is what it looks like.
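To make the "stored in an image" idea concrete, here is a minimal NumPy sketch that tabulates a radially symmetric diffusion profile as a small 2D array. A sum of Gaussians is a common way earlier skin-rendering work approximated such profiles; the weights and variances below are made-up illustrative numbers, not values from the paper.

```python
import numpy as np

def gaussian_profile(r, variance):
    """Isotropic 2D Gaussian evaluated at radial distance r."""
    return np.exp(-r**2 / (2.0 * variance)) / (2.0 * np.pi * variance)

def diffusion_profile_kernel(size=21, weights=(0.6, 0.4), variances=(0.5, 2.0)):
    """Tabulate a radially symmetric diffusion profile as a small image.

    The sum-of-Gaussians form is a standard approximation for skin-like
    profiles; the specific weights/variances here are illustrative only.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r = np.hypot(x, y)  # distance of each pixel from the profile center
    kernel = sum(w * gaussian_profile(r, v) for w, v in zip(weights, variances))
    return kernel / kernel.sum()  # normalize so the filter preserves energy

kernel = diffusion_profile_kernel()
```

Because the profile depends only on distance from the entry point, one small image like this captures the material's scattering response, and it only needs to be measured or fitted once per material.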
So we have an image of the diffusion profile, and one image of the material that we would like to add subsurface scattering to. This is a convolution-based technique, which means that it enables us not to add these two images together, but to mix them together in a way that carries the optical properties of the diffusion profile over to the image. If we add the optical properties of an apple to a human face, it will look more like a face that has been carved out of a giant apple. A less asinine application is, of course, that if we mix it with the appropriate skin profile image, then we'll get photorealistic-looking faces, as is demonstrated quite aptly by this animation. This apple-to-skin example, by the way, you can actually try for yourself, as the source code and an executable demo are freely available for everyone to experiment with. Convolutions have so many cool applications, I don't even know where to start. In fact, I think we should have an episode solely on that. Can't wait, it's going to be a lot of fun! These convolution computations are great, but they are still too expensive for real-time video games. What this work gives us is a set of techniques that compute this convolution not on these original images, but on much smaller, tiny one-dimensional strips, which are much cheaper, and the results of the computations look barely distinguishable. Another cool thing is that the quality of the results is not only scientifically provable, but this technique also opens up the possibility of artistic manipulation. It is done in a way that we can start out with a physically plausible result and tailor it to our liking. You can see some exaggerated examples of that. The entire technique is so simple, a computer program that executes it can fit on your business card. It also seems to have appeared in Blender recently. Also, a big hello and a shoutout to the awesome people at Intel, who recently invited my humble self to chat a bit about this technique.
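The separability trick behind those "tiny strips" can be sketched in a few lines of NumPy: a full 2D convolution costs on the order of k×k operations per pixel, but if the kernel is (approximately) an outer product of two 1D kernels, the same filter becomes two cheap 1D passes at roughly 2k operations per pixel. This is an illustrative sketch only; the actual paper fits its separable kernels far more carefully than a plain SVD truncation and runs on the GPU in screen space.

```python
import numpy as np

def full_convolve2d(image, kernel):
    """Brute-force 'same' 2D filtering: O(k*k) work per pixel.
    Written as correlation, which equals convolution for the
    symmetric kernels used here."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def rank1_factors(kernel):
    """Best separable (rank-1) approximation of a 2D kernel via SVD."""
    U, s, Vt = np.linalg.svd(kernel)
    return U[:, 0] * np.sqrt(s[0]), Vt[0, :] * np.sqrt(s[0])

def separable_convolve2d(image, col, row):
    """The same filter as two cheap 1D passes: O(2k) work per pixel."""
    tmp = np.stack([np.convolve(image[:, j], col, mode='same')
                    for j in range(image.shape[1])], axis=1)   # vertical pass
    return np.stack([np.convolve(tmp[i, :], row, mode='same')
                     for i in range(tmp.shape[0])], axis=0)    # horizontal pass

# An outer-product Gaussian-like kernel: here the rank-1 split is exact.
g = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
g /= g.sum()
kernel = np.outer(g, g)
col, row = rank1_factors(kernel)

rng = np.random.default_rng(0)
image = rng.random((16, 16))
reference = full_convolve2d(image, kernel)  # slow 2D filtering
fast = separable_convolve2d(image, col, row)  # two 1D strips, same result
```

For a truly separable kernel the two results match to floating-point precision; real diffusion profiles are only approximately separable, which is exactly the gap the paper's fitting techniques are designed to close while keeping the result barely distinguishable from the full convolution.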
If you would like to hear more about the details of how this algorithm works, I have put some videos in the description box. The most important take-home message from this project, at least for me, is that it is possible to conduct academic research projects together with companies, and create results that can make it into multi-million-dollar computer games, while also producing proven results that are useful for the scientific community. Thanks for watching, and for your generous support, and I'll see you next time!
