TU Wien Rendering #35 - Stochastic Progressive Photon Mapping
3:42

Two Minute Papers · 05.06.2015 · 6,766 views · 99 likes

Video description
Photon mapping works great for a variety of scenes. Ideally, we would like to have a large number of photons for caustics, indirect illumination, etc., but having only a finite number of photons in our photon maps introduces problems. To remedy this, Toshiya Hachisuka came up with Stochastic Progressive Photon Mapping, a technique where we progressively discard and re-generate the photon maps with fresh samples. This way we are not stuck with the single photon map we have, and we get more and more information about the scene as time goes by.

About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping, and many other algorithms are introduced, and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light, and camera models, are outlined. The apparatus of Monte Carlo methods, which is heavily used in several algorithms, is introduced, and its refinements in the form of stratified sampling and the Metropolis-Hastings method are explained. At the end of the course, students should be familiar with common techniques in rendering and find their way around the current state of the art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves.

These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger.

Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/
Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz
Web → https://cg.tuwien.ac.at/~zsolnai/
Twitter → https://twitter.com/karoly_zsolnai
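The discard-and-regenerate idea from the description can be sketched in a few lines. The sketch below is a toy stand-in, not the lecture's implementation: "photons" are uniform random points on a unit square and gathering is a simple disc count, but the structure is the one described above, with a fresh photon map built and thrown away every pass while only accumulated statistics survive.

```python
import math
import random

def trace_photon_batch(n, rng):
    # Hypothetical stand-in for a photon-tracing pass: each "photon"
    # is just a random 2-D position on the unit square.
    return [(rng.random(), rng.random()) for _ in range(n)]

def gather(photons, point, radius):
    # Disc estimate: count photons landing within `radius` of `point`.
    px, py = point
    return sum(1 for (x, y) in photons
               if (x - px) ** 2 + (y - py) ** 2 <= radius ** 2)

def sppm_loop(point, passes, photons_per_pass, radius, seed=0):
    """Sketch of the SPPM outer structure: per pass, build a fresh
    photon map, gather, then discard the map. Only the running
    statistics (photons traced, photons found) persist, so the
    estimate keeps improving without storing unbounded photons."""
    rng = random.Random(seed)
    total_photons = 0
    total_found = 0
    for _ in range(passes):
        photon_map = trace_photon_batch(photons_per_pass, rng)  # fresh map
        total_found += gather(photon_map, point, radius)
        total_photons += photons_per_pass
        # photon_map goes out of scope here: the map is discarded;
        # only the accumulated statistics reach the next pass.
    area = math.pi * radius ** 2
    # Density estimate at `point`; for uniform photons on the unit
    # square this converges to 1.0 as passes accumulate.
    return total_found / (total_photons * area)
```

In a real renderer the eye pass would store hit points and the gather would accumulate flux per hit point, but the bookkeeping pattern, fresh samples per pass plus persistent accumulators, is the same.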

Table of contents (1 segment)

Segment 1 (00:00 - 03:00)

Stochastic progressive photon mapping: what is this thing about? Well, you would need an infinite amount of photons to ensure consistency. You cannot do that, but what you can do is, from time to time, generate a new photon map and use that. This means discarding previous samples and creating new ones. So we start out with a regular ray tracing pass that we call the eye pass, and we use the photon map that we have; then we generate a new photon map, and we use that from the next pass on. There is also an addition: you start out with bigger photons, so to say, and the size, or the radius, of these photons shrinks over time. Why is this useful? Because you have practically an infinite number of photons, and you can see how the rendered image evolves over time with progressive photon mapping. So this method is consistent. This is a big deal, because you can make photon mapping consistent in practical cases.

This is our preview scene with heavy SDS transport, and you can see how it converges in the first 10 minutes of the rendering process with SPPM. Here is another set of results with the classical algorithms that we all know and love. You can see that photon mapping kind of works: you don't have high-frequency noise, but it over-blurs many of the important features of the image. And this is the result with PPM: much sharper images, slightly more noise, but it is practically consistent. What about this difficult previous scene with lots of SDS transport? Well, photon mapping kind of worked, but it again over-blurred many of the important features. Progressive photon mapping takes care of this. You can read the papers here.

So SPPM doesn't just render SDS light paths, it does so efficiently. It is a wonderful previewing algorithm: you can just fire it up, and in a matter of seconds you can get a good idea of how your scene is actually going to look. However, if you set the starting radius too high, then you are going to have large photons for a very long time, and this means that the image will again be over-blurred for a long stretch of the rendering process. If you set it too low, you will get a very sharp image, but it will take a very long time to fill the image in. So, as you can see, this is a more complex technique that can possibly outperform the algorithms you have seen previously, but this comes at a cost: it is a more complex algorithm, it is slightly more difficult to implement, and it has more parameters than previous methods. You can see that this is not like the large mutation probability with Metropolis light transport: if you set one of the parameters incorrectly, you may have to wait for way too long, and if you set up a simple photon mapper (not SPPM) incorrectly, you may even get an incorrect image, because you don't have enough photons at the most important regions of the image. This work was created by Toshiya Hachisuka and his colleagues, and it is a brilliant piece of work.
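The shrinking radius mentioned in the lecture follows the update rule from the progressive photon mapping papers: after a pass finds M new photons, only a fraction alpha of them is kept, and the radius is scaled so that the kept photon density stays consistent. A minimal sketch (function and parameter names are mine, not from the lecture):

```python
def shrink_radius(radius, n_accum, m_new, alpha=0.7):
    """One radius-reduction step, R' = R * sqrt((N + a*M) / (N + M)),
    the update from Hachisuka et al.'s progressive photon mapping.
    Keeping only a fraction `alpha` of each pass's photons drives the
    radius (and hence the blur) to zero while the accumulated photon
    count still grows without bound, which is what makes the method
    consistent."""
    n_next = n_accum + alpha * m_new          # photons kept so far
    radius_next = radius * ((n_accum + alpha * m_new) /
                            (n_accum + m_new)) ** 0.5
    return radius_next, n_next

# Illustrates the trade-off from the lecture: a large starting radius
# stays large (over-blur) for many passes; a tiny one gathers few
# photons per pass, so the image fills in slowly.
```

Iterating this from any starting radius produces a monotonically shrinking gather radius, which is exactly the "big photons that shrink over time" behavior described above.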
