# TU Wien Rendering #25 - Path Tracing, Next Event Estimation

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=i6KDgYk5Nzg
- **Date:** 15.05.2015
- **Duration:** 17:57
- **Views:** 10,515
- **Source:** https://ekstraktznaniy.ru/video/14987

## Description

We finally have every tool in our hands to solve the Holy Rendering Equation! Furthermore, we extend it with next event estimation (in other words, explicit light sampling) to handle occluded and point light sources well.

About the course:
This course aims to give an overview of basic and state-of-the-art rendering methods. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained.

The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. 

The apparatus of Monte Carlo methods, which is heavily used in several algorithms, is introduced, and its refinements in the form of stratified sampling and the Metropolis-Hastings method are explained.

At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should d

## Transcript

### <Untitled Chapter 1> [0:00]

Let's take a look at how the exact algorithm looks. I have a recursive function where the first thing I check is whether I have reached the maximum depth that I would like to render. This means that if I say I would like to render five bounces, then the question is: did I reach this number of bounces? If yes, then just stop and return the black color, because this ray is not going to continue.

Then what I'm looking for is the nearest intersection. You remember from the previous lecture that this means parametric equations which I solve for t. I am intersecting a lot of objects, and I am only interested in the very first intersection. If I didn't hit anything, then I return a black color, because there is no energy coming in along this ray.

Now, if I do have an intersection with an object, I will be interested in the emission and the material of this object. Emission means that if this is a light source, then it is going to have emission; and the material can be an arbitrary BRDF: diffuse, glossy, or some complicated multi-layer material. These I am going to store. What's next? I would like to construct a new ray, because I will trace the next ray. I instantiate a new ray that starts wherever I hit this object, so if I hit the table, I will create a ray that starts from the table

### set the outgoing direction [1:40]

and I will set the outgoing direction according to some sampling scheme that I have. Let's say that what I have here asks for a random unit vector in the hemisphere above the point where the object was hit, and this sounds like the diffuse case to me, so

### generate a random unit vector on the hemisphere [1:56]

I generate a random unit vector on this hemisphere, and this is going to be the outgoing direction. Now let's add together the elements of the rendering equation. I have the cosine theta term, which is the light attenuation; I have the BRDF term, and it seems that here they have also included this cosine theta part inside the BRDF term. This is the albedo of the material: how much light is absorbed and how much is reflected.

Then I call the very same function that you see in line of code number one, so this is a recursive function: I start the same process again with the new ray, the new starting point and the new direction. If I have traced a sufficient number of rays, I exit this recursion, and I collect the result in this variable called reflected. In the end, this is the elegant representation of the rendering equation: the emission, the Le, plus the integrated part, which is the BRDF times this reflected term, which is all the recursive terms. This means that I shoot out this ray into the hemisphere, there are going to be many subsequent bounces, and I add up all this energy into the reflected incoming light.

So this is pseudocode; it is not something that you should try to compile, but it is what we will code during the next lecture. It is just a schematic overview, and it is actually very true to what happens: we shoot a ray, we bounce it around the scene, and we hopefully hit a light source at some point. Even if we hit the light source, we continue, but hitting the light source is important, because this is where the emission term comes from.

Let me show you what is going on if we don't hit light sources. This Le is the emission term on the left side here, and we add it to the end result at every recursion step. The fundamental question is this: if we have a long light path that doesn't hit the light source, whether we are using completely random sampling or maybe some smart importance sampling, then we never get this emission term. What does this mean? The radiance that we get from the program is going to be zero. The corollary is that you will get an output only from light paths that hit a light source. If you don't hit the light source, you don't know where the light is coming from, so you will return a black pixel. This is obviously a really bad thing, because you are churning out samples and samples, perhaps on your GPU, but it doesn't return you anything. So a very peculiar fact about simple, naive path tracing is that if you have a small light source
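A minimal sketch of the recursive function described above, in Python, using scalar radiance and a diffuse-only material. The scene interface (an `intersect` method returning hit point, normal, emission and albedo) is my own assumption for illustration, not the lecture's actual code:

```python
import math
import random

MAX_DEPTH = 5  # stop after this many bounces


def sample_hemisphere(normal):
    """Random unit vector on the hemisphere around `normal` (a 3-tuple)."""
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        n2 = sum(c * c for c in v)
        if 0.0 < n2 <= 1.0:                      # rejection-sample the unit ball
            v = tuple(c / math.sqrt(n2) for c in v)
            if sum(a * b for a, b in zip(v, normal)) < 0.0:
                v = tuple(-c for c in v)         # flip into the upper hemisphere
            return v


def trace(origin, direction, depth, scene):
    if depth >= MAX_DEPTH:                       # reached the maximum depth
        return 0.0
    hit = scene.intersect(origin, direction)     # nearest intersection only
    if hit is None:                              # the ray escaped: no energy
        return 0.0
    point, normal, emission, albedo = hit
    new_dir = sample_hemisphere(normal)          # diffuse case: uniform hemisphere
    cos_theta = sum(a * b for a, b in zip(new_dir, normal))
    reflected = trace(point, new_dir, depth + 1, scene)
    # Rendering equation, one-sample Monte Carlo estimate:
    #   Le + BRDF * cos(theta) * Li / pdf,  BRDF = albedo/pi, pdf = 1/(2*pi)
    return emission + (albedo / math.pi) * cos_theta * reflected * 2.0 * math.pi
```

Note the two early exits the lecture mentions: a ray that hits nothing returns black, and a pure emitter (emission 1, albedo 0) returns exactly its emission.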

### light source [5:16]

the convergence of your final image is going to be slower. Why? Someone help me out. Smaller light source, more variance, slower convergence, because we need random rays to hit the light source; if it is small, we won't hit it as often. Exactly: the probability of hitting the light source is smaller for a small light source, all the way up to the extreme where we have a point light source. With a point light source we will be in real trouble, because what I would expect from my path tracer is to return something like this, imagining a point light source in here, but this is not what we end up with. I would expect it to return the correct result, but many people have reported on many forums on the internet: hey, I implemented it, but this is what I got, and wow, this doesn't work at all. Monte Carlo integration for a black image? I could generate this with five lines of C++. Why do we even bother? We get nothing.

Why is that? Point light source, black image, why? Yes, exactly: a point represents a location in mathematics; it does not have area. So hitting the point light source is impossible. This is the same as what you study in statistics and probability theory: if you have one number on a continuous scale, what is the probability of hitting exactly this number? Zero. Because it's a point, it has no surface area, it is infinitely small, you cannot hit it. This is the reason for your black image, and if you read the forums on the internet, you'll find plenty of this. Now we could also sum up our findings internet-meme style, if you will. So if you would like

### compute path tracing with a point light source [7:42]

to compute path tracing with a point light source without the technique called next event estimation, then you would usually expect a wonderful image, but this is what you will get instead. The first question is obviously how we overcome this. What we can do is that every time we hit something, some object in the scene, be it diffuse or anything else, but not a light source, we

### compute the direct effect of the light source on this point [8:11]

compute the direct effect of the light source on this point in the scene. Here is a schematic to show what is going on: I start from the viewer, I hit this sphere, and I don't just start tracing the new ray outwards, but I also connect this point to the light source and compute a direct illumination term. This is the schematic for path tracing without next event estimation, and this is with next event estimation: at every intersection I connect to the light source. In this case the connection is actually occluded, this one is debatable, and at the third bounce you will get some contribution from the light source.

The question is how we do this exactly. This was the topic of assignment zero, and the formula that you see in assignment zero is exactly what you should be using. What was in there? There was a term with the four pi, because if you have a light source that is a sphere and you want to know how much radiance is emitted in one direction, then you need to divide by the surface area of the sphere, which gives the division by four pi. And there is the attenuation term, the one over r squared, same as in the law of gravitation or the law of electric fields: the further away I am from the light source, the less light gets through.

This is a really good technique for multiple reasons, and one of them is that you get contributions from every bounce during the computation of a light path. Before I proceed, I would like to point out that this Le we are talking about, the emission term, is now added in parts at every bounce. If I hit p1, I add something; if I hit p2 or p3, I also add something; but when I hit the light source itself, I don't add the emission term anymore, because then I would be adding it twice. So the one Le that you would add when you hit the light source by chance is now distributed among the individual bounces.

Why is this great? One: you can render point light sources, because the direct effect you can actually compute, even though you cannot hit the light source itself by chance. Two: you will have less variance, because it's not that I either hit the light source or I don't; I statistically always pick the light source, unless there are occluders. So I'm adding many samples with small variance, not one lottery-like sample where you either win or get nothing back. I can lower the variance, which means that my images will converge faster. And the other thing is that, because there are contributions from every bounce, I can
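The direct lighting connection described above can be sketched as follows, again with scalar radiance. The function name and argument list are my own illustration, assuming a point or spherical emitter whose total power is spread over the 4πr² sphere, and a Lambertian surface:

```python
import math


def direct_light(point, normal, light_pos, light_power, albedo, occluded):
    """Next-event-estimation contribution of a point light at `light_pos`.

    `light_power` is the total emitted power; dividing by 4*pi*r^2 spreads
    it over the sphere of radius r (the inverse-square attenuation from
    the lecture). `occluded` is the result of the shadow-ray test.
    """
    if occluded:                                  # shadow ray was blocked
        return 0.0
    to_light = tuple(l - p for l, p in zip(light_pos, point))
    r2 = sum(c * c for c in to_light)             # squared distance to light
    r = math.sqrt(r2)
    wi = tuple(c / r for c in to_light)           # unit direction to the light
    cos_theta = max(0.0, sum(a * b for a, b in zip(wi, normal)))
    brdf = albedo / math.pi                       # Lambertian BRDF
    return brdf * cos_theta * light_power / (4.0 * math.pi * r2)
```

This term is added at every bounce of the path; when a path later hits the light source by chance, its emission must not be added again, exactly as the lecture warns.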

### separate direct and indirect illumination [11:27]

separate direct and indirect illumination. A lot of people do this in the industry, because the movie industry is nowadays using path tracing. I cannot say that as an all-encompassing statement, but for instance Disney is now using global illumination in most of their movies. Why? Because it looks insanely good and it is conceptually very simple. It took them more than 20 years to replace their old system, which they really liked, it was called Reyes, and now they are using global illumination and path tracing. It has taken a long time, but the benefits of global illumination are now too big to ignore.

What they get is a physically based result, but this is not always what the artists are looking for. If you have worked together with artists, they will say: okay, you have computed a beautiful image, but I would like the shadows to be a bit brighter. As the engineer, you will say that this is not possible, I computed what would happen in physical reality, and that's it. But the artists are not interested in physical reality; they are interested in their own thoughts and their own artistic vision, and they would like to change the shadows. So you could technically make one of the

### make one of the light sources brighter [12:55]

light sources brighter, and then the shadows would get brighter, but then the artist says: hey, don't change anything else in the scene, just change the shadows.

### pull out your knowledge of the rendering equation [13:06]

and then you could pull out your knowledge of the rendering equation: look, the radiance coming out of this point depends on its surroundings, so you cannot just make something brighter without the nearby things also getting brighter. You cannot circumvent that. But what you can do with next event estimation is generate an image from the first bounce, so you get one image in which you deposit the radiance that you measured at p1, and then you create another image which contains only the second bounces, p2 and upwards. So you would have multiple images, and you could technically just add up all of these images with simple addition, and you would get physical reality. Great. But if the artist says, I want

### indirect illumination [13:55]

stronger indirect illumination, then you would grab this buffer, the image that holds the second and higher-order bounces, and you could do some Photoshop on it, or whatever you want, without touching anything else. So you have a nice separation of direct and indirect illumination. The movie industry loves this; they are playing with it all the time. Later you will see some algorithms that behave differently on indirect illumination than on direct illumination, and you can only do that if you separate these terms.

So let's see path tracing with next event estimation. I have the very first bounce, and before I continue my ray, I send the classical, super classical shadow ray to a randomly chosen point of the light source, and I add this direct contribution of the light source at this point, and then I continue. Let's go back to the terms. Sorry, we use many terms for the very same thing; this is why I am writing all of them down, because if you read the forums, if you read papers, you will see these terms, and they all mean the same: explicit light sampling, next event estimation, the very same thing. So I continue my ray
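The buffer separation described above comes down to a trivial compositing step. The flat-list buffer layout and the `indirect_scale` knob are my own illustration of the artist workflow, not an actual production API:

```python
def composite(direct_buf, indirect_buf, indirect_scale=1.0):
    """Recombine separately stored direct and indirect illumination buffers.

    With indirect_scale == 1.0 the simple addition recovers the physically
    based result; other values let an artist boost or dim the indirect
    (second-and-higher-bounce) light without touching direct illumination.
    """
    return [d + indirect_scale * i for d, i in zip(direct_buf, indirect_buf)]
```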

### hit the light source with a shadow ray [15:20]

and I also connect to the light source with a shadow ray, and then I continue on and on. Imagine that this third one is an outgoing ray that actually hits the light source; if it does, I don't add the Le term there, because I already added it in the previous bounces. That is very important to know.

Now, you have seen the results for a point light source: nothing versus something, and that's pretty hefty. But even with a reasonably big light source, I told you that you get a variance suppression effect as well. So this is some number of samples per pixel, I think it's two, maybe three samples per pixel, which means that I grab one pixel and send three rays through it: three Monte Carlo samples.

You can do this in two different ways, and if you start using renderers, you will see how exactly this happens. Some renderers render tiles: they start with some pixels, and if you say you want 1000 samples per pixel, they take one, or four, or however many threads you have on your machine, that many pixels, and shoot more and more samples at them; after they get to 1000 samples, they stop and show you a really well-converged pixel. What we call progressive rendering is the opposite: you pick one pixel, you shoot a ray through it, but only one, and then you go to the next, and then you see an image that has some amount of noise, and progressively you get less and less noise. What you see here is progressive rendering in motion, without next event estimation, so we only get contributions if we hit the light source at the end somewhere; if we don't, we get a black sample. Now look closely: this is with next event estimation. There is a huge difference. Such a simple technique

### speed up the rendering of many scenes [17:36]

can speed up the rendering of many scenes by orders of magnitude. You can also play with this program, by the way; it is implemented on Shadertoy, so when you read this at home, just click on the link and play with it. It's amazingly fun.
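The progressive rendering described above boils down to keeping a running average per pixel, so the displayed image simply gets less noisy as samples arrive. A minimal sketch, where the class name and interface are my own illustration:

```python
class ProgressivePixel:
    """Accumulates one radiance sample at a time; `value` is always the
    mean of the samples seen so far, without storing them individually."""

    def __init__(self):
        self.count = 0
        self.value = 0.0

    def add_sample(self, radiance):
        # incremental mean update: mean += (x - mean) / n
        self.count += 1
        self.value += (radiance - self.value) / self.count
```

A tiled renderer would instead loop `add_sample` 1000 times on one pixel before moving on; progressive rendering sweeps the whole image once per sample.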
