TU Wien Rendering #31 - Unbiased, Consistent Algorithm Classes
Duration: 14:12


Two Minute Papers · 29.05.2015


Video description
We consider photorealistic rendering a mature subfield of computer graphics, and as many global illumination algorithms exist, it'd be great to classify them according to what behavior we can expect from them. Such an algorithm can be biased/unbiased and consistent/inconsistent. Choose your poison!

About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced, and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light, and camera models are outlined. The apparatus of Monte Carlo methods, which is heavily used in several algorithms, is introduced, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state of the art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves.

These videos are the recordings of the 2015 lectures at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger.

Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/
Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz
Web → https://cg.tuwien.ac.at/~zsolnai/
Twitter → https://twitter.com/karoly_zsolnai

Table of contents (4 segments)

<Untitled Chapter 1>

Let's talk briefly about the pbrt architecture. pbrt is not exactly the renderer that we are going to use; we're going to use LuxRender, but LuxRender was built upon pbrt, and therefore the basic structure remained completely intact. This is a really good architecture, and you will see that most of the global illumination rendering engines out there use the very same one. We have a main render task that asks the sampler to provide random samples, so the sampler you can imagine as a random number generator. We need a lot of different random numbers. Which pixel we choose to be sampled is usually deterministic, going from pixel to pixel, but the displacement within the pixel is random: we would be sampling the pixels not only at the midpoint like recursive ray tracing does; you would take completely random samples nearby and use filtering to sum them up in a meaningful way. This requires random numbers, and they come from the sampler. You would also send outgoing rays in the hemisphere around different objects, and you need random numbers for this too. So these random numbers arrive, and this sample you would send to the camera, and the camera would give you back a ray. You tell the camera: please give me a ray that points to this pixel, and the camera gives you back a ray which starts from the camera's position and points exactly there. Now all you need to do is give this ray to the integrator, and the integrator will tell you how much radiance is coming along this ray. What you can do after that is write it to a film, and this is not necessarily trivial, because for instance you could just simply write it to a PPM or a PNG file and be done with it. In contrast, what LuxRender does is it has a film class, and what you can do is save different contributions in different buffers. So you could, for instance, separate direct and indirect illumination into different films, different images, and then sum them up in the end. But maybe you could say: I don't need caustics on this image, and then you would just cut that image. So you can do tricky things if you have a correctly implemented film class.

Okay, so LuxRender, as I have been saying, is built upon pbrt and uses the very same architecture. This is how it looks: it has a graphical user interface, you can manipulate different tone mapping algorithms in there, different denoising algorithms, and you can even manipulate light groups. This is another tricky thing done with the film class. Basically, what this means is that you save the contributions of different light sources into different films; by films you can imagine image files. So every single light source has a different PNG file, if you will, and the final image comes up as the sum of these individual films. But you could say that one of the light sources is a bit too bright, and I would like to tone it down. Normally you would have to re-render your image, because you changed the physical properties of what's going on. But you can do this if you have the light groups option, because the contributions are stored in individual buffers, so you can just dim one of these images and then add them all up together, and you would have the effect of that light source being a bit dimmer. You can, for instance, completely turn off sunlight, or a television that you don't want to use in the scene (it sounded like a good idea, but it wasn't), without re-rendering the scene. And you can operate all of these things through the LuxRender GUI. Now, before we go into algorithms, let's talk
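The render loop described here (sampler → camera → integrator → film, with per-light-group buffers) can be sketched in a few lines. This is a minimal illustrative sketch; the class and method names below are made up for the example and are not the actual pbrt or LuxRender API, and the integrator is a stub rather than a real light-transport computation.

```python
import random

class Sampler:
    """Supplies the random numbers: sub-pixel jitter, hemisphere samples, ..."""
    def get_sample(self):
        return (random.random(), random.random())  # jitter inside the pixel

class Camera:
    """Turns a pixel position plus jitter into a ray from the eye point."""
    def generate_ray(self, px, py, jitter):
        jx, jy = jitter
        return ("ray from eye through", px + jx, py + jy)

class Integrator:
    """Answers: how much radiance arrives along this ray? (stubbed here)"""
    def radiance(self, ray, group):
        return random.random()  # stand-in for the real computation

class Film:
    """Keeps one buffer per light group so each can be rescaled afterwards."""
    def __init__(self, width, height, groups):
        self.buffers = {g: [[0.0] * width for _ in range(height)] for g in groups}
    def add_sample(self, group, px, py, value):
        self.buffers[group][py][px] += value
    def develop(self, scales):
        # Final image = weighted sum of the per-light-group buffers, which is
        # what lets you dim one light source without re-rendering the scene.
        first = next(iter(self.buffers.values()))
        h, w = len(first), len(first[0])
        return [[sum(scales[g] * self.buffers[g][y][x] for g in self.buffers)
                 for x in range(w)] for y in range(h)]

sampler, camera, integrator = Sampler(), Camera(), Integrator()
film = Film(4, 4, groups=["sun", "lamp"])
for py in range(4):
    for px in range(4):
        for group in ("sun", "lamp"):
            ray = camera.generate_ray(px, py, sampler.get_sample())
            film.add_sample(group, px, py, integrator.radiance(ray, group))

image_full = film.develop({"sun": 1.0, "lamp": 1.0})
image_dim = film.develop({"sun": 1.0, "lamp": 0.3})  # lamp dimmed, no re-render
```

Developing the same film twice with different scales is exactly the light-groups trick: the per-light buffers are rendered once and only the cheap weighted sum is redone.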

Algorithm Classes

about algorithm classes. What kinds of algorithms are we interested in? First,

Consistent Algorithms

what we are interested in is consistent algorithms. Consistent means that if I use an infinite number of Monte Carlo samples, then I converge exactly to the right answer; I get back the exact integral of the function. Intuitively, it says: if I run this algorithm, sooner or later it will converge. It is also important to note that no one said anything about when this "sooner or later" happens. So if an algorithm is consistent, that doesn't mean it is fast or slow; it can be anything, absolutely anything. There may be an algorithm that is theoretically consistent, so after an infinite number of samples you would get the right answer, but it really feels like infinity; it may be that after two weeks you still don't get the correct image. There are algorithms like that, and theoretically that's consistent, that's fine, because you can prove that it's going to converge sooner or later. The more difficult class, the one that many people seem to mess up, is unbiased
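A quick numerical illustration of consistency, as a sketch rather than anything from the lecture: a plain Monte Carlo estimator of a known integral, where the error shrinks toward zero as the sample count grows, while consistency itself promises nothing about how fast that happens.

```python
import random

# Monte Carlo estimate of the known integral of x^2 over [0, 1], which is 1/3.
# Consistency: as n -> infinity the estimate converges to the true value.
random.seed(1)

def mc_estimate(n):
    """Average of f(x) = x^2 at n uniform random points in [0, 1]."""
    return sum(random.random() ** 2 for _ in range(n)) / n

for n in (10, 1000, 100000):
    est = mc_estimate(n)
    print(f"n = {n:6d}  estimate = {est:.5f}  error = {abs(est - 1/3):.5f}")
```

The error column trends toward zero, but an individual step can get temporarily worse; consistency is only a statement about the limit.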

Unbiased Algorithms

algorithms. Now, what does it mean? If you just read the formula, you can see that the expected error of the estimation is zero, and we have to note that this is completely independent of n, the number of samples that we have taken. Now, "the expected error of the estimation is zero" doesn't mean that the error is zero. Because it's independent of the number of samples, it doesn't mean that after one sample per pixel I get the right result; it says that the expected error is zero. I will give you many intuitions for this, because it is very easy to misunderstand and misinterpret: in statistics there is a difference between expected value and variance, and this says nothing about the variance; it only tells you about the expected value. So for instance, if you're a mathematician and you think a bit about this, you could say: if I have an unbiased algorithm and I have two noisy images (you render something on your machine, I render something on my machine, that's two noisy images), then I could merge them together. I could average them, because they are unbiased samples and it doesn't matter where they come from. I would add these samples together, average them, and I would get a better solution. We will see an example of that. My favorite intuition is that the algorithm has the very same chance of over- and underestimating the integrand. It means that if I were to estimate the outcome of a dice roll (you can roll from 1 to 6 with equal probabilities, so the expected value is 3.5), then I would have the very same probability of saying four as I would have of saying three: the very same chance to under- and overestimate the integrand. And I'll give you my other favorite intuition, the one that journalists tend to like the best: it means that there is no systematic error. The algorithm doesn't cut corners, and if there are errors in the image, they can only be noise; this noise comes from not having enough samples, and if you add more, you are guaranteed to get better.

Now let's take another look at this really good intuition: I can combine two noisy images together. This means that I should be able to do network rendering without actually using a network, which sounds a bit mind-boggling. I really like the parallel to this, which is a famous saying attributed to Einstein from long ago, when they talked about sending electromagnetic waves out, and about the telephone, and people could not grasp the idea of a telephone. He said: imagine we have a super long cat; the tail of the cat is in Manhattan and the front of the cat is in New York, and if you pull the tail in Manhattan, then she says meow in New York. And he asked the people: is this understandable? Yes, this is understandable. Okay, perfect, we're almost there. Now imagine that there's no cat; this is the exact same thing. So this is network rendering without an actual network.

Well, okay, mathematical theories, but let's actually give it a try. What I did here is I rendered this interior scene, and this is how it looks after 2 minutes: it's really noisy. Then I ran 10 of these rendering processes and saved the images 10 times. So I didn't run one rendering process for long; I ran many completely independent rendering processes for two minutes each, and then I merged the images together, which means I added them up and averaged them. Basically, this means you could do this on completely independent computers that have never heard of each other. Now let's take a look: this is the noisy image that we had, and now let's merge 10 of these together; this is what we will get. Look closely. Look at that! One more time: this is the noisy image after 2 minutes, and this is the result of merging some of these noisy images together. It is almost unbelievable that this actually works. So if you have unbiased algorithms, you can expect this kind of behavior, and you don't need to write sophisticated networking code to use your path tracer on a network, because you don't need the network at all. And this is really awesome, no? Because if you don't add any kind of fixed seed to your computations, then you are computing completely independent samples, and it doesn't matter whether a sample is computed on the same machine or on a different machine. If you have some kind of determinism, then it may be possible that the same paths are computed by multiple machines, and that's indeed wasted time, but otherwise it works just fine.

Now, let's practice a bit. But first, is there a question? Yes: "Just how big is the difference between one picture rendered for 20 minutes and 10 pictures rendered for 2 minutes each and then combined?" Nothing; in terms of samples, nothing. The only difference is that you actually need to fire up that scene on multiple machines, so if there are, say, 10 GB of textures, then it takes longer to load everything on multiple machines and maybe transfer the data together. But if you think only in terms of samples, it doesn't matter where they come from.

Okay, let's practice a bit. We have different techniques, and this is how their error evolves over time. The intuition for consistent is that the error tends to zero over time: if I render for long enough, the error goes to zero. Is this black one a consistent algorithm? No, because it converges to the dashed line and not to zero. Now what about the other two? Are they consistent or not? They are consistent. Why? Because their error seems to converge to zero. Okay, now are these techniques biased or unbiased, and which is which? What about this one, the darker gray: is it biased or unbiased? If we have the intuition that rendering for longer guarantees the image gets better, or at least not worse, then this dark gray is definitely not unbiased, because it is possible that I'm rendering for 10 minutes (that's this point, for instance), I say "okay, I almost have a good enough image", I render for another five minutes expecting it to get better, and then I get this: maybe a completely garbled image full of artifacts and errors. That is entirely possible with biased algorithms. No one said it's likely, but it is possible, so you cannot really predict how the error will evolve over time. And if you take a look at the other two lines, you can see that they are unbiased algorithms: as you render for longer, you are guaranteed to get a better image.
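The merging experiment can also be tried numerically. The sketch below is a deliberately tiny stand-in, not the lecture's actual renders: a single number plays the role of a whole image, and a short noisy Monte Carlo estimate of a known integral plays the role of a 2-minute render. Because the estimator is unbiased, averaging ten independent short runs behaves like one ten-times-longer run; only the total sample count matters.

```python
import random
import statistics

# "Network rendering without a network", in miniature. One short_render() is
# a noisy, unbiased estimate of the integral of x^2 over [0, 1] (true value
# 1/3), standing in for a 2-minute render of a pixel's radiance.
random.seed(7)

def short_render(n=2000):
    """One independent noisy 'image' (a single-pixel stand-in)."""
    return sum(random.random() ** 2 for _ in range(n)) / n

ten_short = [short_render() for _ in range(10)]  # ten machines, 2 minutes each
merged = statistics.fmean(ten_short)             # average the noisy images
one_long = short_render(20000)                   # one machine, 20 minutes

print("single short run error:", abs(ten_short[0] - 1/3))
print("merged (10x) error:    ", abs(merged - 1/3))
print("one long run error:    ", abs(one_long - 1/3))
```

The merged average and the single long run land comparably close to 1/3, and both are typically much closer than any individual short run, mirroring the answer to the student's question: 10 × 2 minutes merged equals 20 minutes on one machine, sample for sample.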
