# TU Wien Rendering #23 - Monte Carlo Integration: The Solution

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=Su6mJp6NYY4
- **Date:** 15.05.2015
- **Duration:** 17:01
- **Views:** 7,418

## Description

In segment #17, we encountered a problem with Monte Carlo integration: in some cases it seemed to work well, in others not so much. What went wrong? Instead of just giving you the answer, I'll try to shed light on the nature of the problem from multiple angles, and we will then successfully crack this nut together. The important lesson here is not only the solution to this problem; you can also learn how to approach and formalize a problem that you only have an intuitive understanding of. I hope this will help you along your journey of expanding your knowledge.

About the course:
This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced, and various refinements are explained.

The basics of the involved physics, such as geometric optics, surface and media interaction with light and camera models are outlined. 

The apparatus of Monte Carlo methods is introduced which is heavily used in several algorithms and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. 

At the end of the course students should be familiar with common techniques in rendering and find their way around the current state-of-the-art of the field. Furthermore the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves.

These videos are the recordings of the 2015 lectures at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger.

Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/
Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz
Web → https://cg.tuwien.ac.at/~zsolnai/
Twitter → https://twitter.com/karoly_zsolnai

## Contents

### [0:00](https://www.youtube.com/watch?v=Su6mJp6NYY4) Recap

We encountered some problems: we wanted to integrate the function 2 sin²(x) from zero to π, and through engineering, or through mathematics, we realized that the result should be π. But when we ran the code that integrates this through Monte Carlo integration, we got 1 instead. So there is a problem, some insufficient knowledge that we have to remedy in some way. Why don't we take a look at another example, which will reveal what we did wrong? Let's integrate this unbelievably difficult function, f(x) = x, from one to five. Obviously the antiderivative is x²/2, and the brackets show that we have to substitute from one to five; what we get in the end is twelve. Now let's do Monte Carlo integration.

### [0:52](https://www.youtube.com/watch?v=Su6mJp6NYY4&t=52s) Monte Carlo Integration

Let's pretend that we cannot integrate this function analytically as we would like to, so I take three completely random samples of this function. What does it mean to evaluate f(x) at one? Since f(x) = x, I obviously get one; I evaluate it at three and get three; at five and get five. So I have three samples now, and what I do is simply average them: (1 + 3 + 5) / 3. The end result is three. But it shouldn't be, right? The result through analytic integration is exactly four times that, so something is definitely wrong with this Monte Carlo integration scheme. What we know is that three is exactly one fourth of 12, so there is a discrepancy of a factor of four, and if you take a closer look at the integration domain, you will see that four is exactly the size of the integration domain: we are integrating from one to five. This is one angle from which to look at the problem, and to solve it we will look from multiple angles. This is more like the engineering way of solving things: you don't know how to derive a full and correct solution, but you see that this factor of four is the size of the integration domain, so why don't we multiply with that and see what happens? And obviously it works: if we multiply with the size of the integration domain, we get the result we're looking for. So let's change the code: I multiply the previous function with the size of the integration domain, which is from 0 to π, and now I get π as my result, which is the correct solution for the previous example. This is great, it looks very simple, and apparently this technique seems to work, but we still don't really know what is happening here. So we should use some black magic, or mathematics if you will, to see what is going on under the hood.

Imagine that we sample a function with the uniform distribution on zero to π. What does that mean? I have an interval from 0 to π, and I generate random numbers on it, and every single number has the very same probability. So this density looks like 1/π regardless of x, because it doesn't matter which part of the domain I choose, it has the same probability of being chosen. Now, what we are essentially doing is integrating a function f(x) multiplied by this sampling probability. Why? Imagine that some regions of the function had zero probability of being sampled: say I'm integrating from 0 to π, but I only take samples from 0 to 2. Then there's a region of the function that I'm never going to visit, and I don't integrate that part. That's one intuition. The other intuition is that if I draw samples not with the uniform distribution but with a different distribution, then in the average that I compute some regions of the function will be overrepresented, because I have a higher chance of sampling those. So what we are doing is multiplying this f(x) by the sampling probability p(x). Now, this p(x) is in this case 1/π, the uniform distribution, which is obviously a constant, so it can come out of the integral, and in the end we have the integral of the function divided by π. But this is not what I'm looking for: I just want to integrate the function itself, so I need to make this π disappear. I have this 1/π multiplier; what do I need to multiply with to get only the function? What should the question mark be? Exactly: π. So I just killed this 1/π multiplier, which is this p(x) sampling distribution, and if you take a look, yes, this is also the size of the integration domain. This is a more rigorous way to understand what is going on, through a derivation rather than just empirical observation: we know a bit more about what is happening. I have a sampling distribution that I need to get rid of; with the 1/π multiplier I got 1 incorrectly, and if I use the scalar multiplier that I'm looking for, I get the correct solution. Let's examine the whole thing a bit more deeply from different angles; I would like to show you how to solve the same problem in multiple different ways. So, a super quick
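The empirical fix described here, averaging the samples and then multiplying by the size of the integration domain, can be sketched in a few lines of Python (function and variable names are my own, not from the lecture code):

```python
import math
import random

def mc_integrate_uniform(f, a, b, n):
    """Monte Carlo estimate of the integral of f on [a, b] with uniform samples.

    Averaging alone gives the mean value of f on the domain; multiplying by
    (b - a), the size of the integration domain, turns it into an integral.
    """
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

random.seed(1)
# Integral of f(x) = x from 1 to 5: the exact value is 12.
est_linear = mc_integrate_uniform(lambda x: x, 1.0, 5.0, 100_000)
# Integral of 2 sin^2(x) from 0 to pi: the exact value is pi.
est_sine = mc_integrate_uniform(lambda x: 2.0 * math.sin(x) ** 2, 0.0, math.pi, 100_000)
print(est_linear, est_sine)  # both land close to the exact values
```

Dropping the `(b - a)` factor reproduces the bug from the lecture: the first estimate hovers around 3 instead of 12, and the second around 1 instead of π.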

### [6:06](https://www.youtube.com/watch?v=Su6mJp6NYY4&t=366s) Probability Theory

probability theory recap. We have an expected value: this is what we're looking for. What is an expected value? It means that there are values of something and probabilities of getting those values. Let's take the expected value of a dice roll. How does it work? I can roll from 1 to 6, and all rolls have the same probability, 1/6. So the values are 1, 2, up to 6, the probabilities are all the same 1/6, and if I add this up, it says that the expected value of the dice roll is 3.5. This means that if I needed to guess what the next dice roll would be, this would be the best value in order to minimize the error from the expected outcome. Now, if we would like to compute the expected value of something, this means that I take the values that this something can take and multiply them with the probabilities of those events. For instance, it is impossible to roll a seven with a die: theoretically you could put a seven in there, but it would have zero probability and therefore could never show up in the sum. This is the discrete case; for the continuous case we don't need to do anything very serious, we just change the summation to integration. We are no longer using the discrete sum; we are integrating continuous functions and using continuous sampling distributions. Now let's introduce this notation: what
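The dice computation can be written out directly; the expected value is just the probability-weighted sum of the outcomes:

```python
# Expected value of a fair six-sided die: sum of value * probability.
values = [1, 2, 3, 4, 5, 6]
probabilities = [1 / 6] * 6  # every outcome is equally likely
expected = sum(v * p for v, p in zip(values, probabilities))
print(expected)  # ≈ 3.5
```

Adding an impossible outcome, say a 7 with probability 0, leaves the sum unchanged, which is exactly the point made above.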

### [7:48](https://www.youtube.com/watch?v=Su6mJp6NYY4&t=468s) Notation

I'm looking for is the expected value of this function f(x) after an N amount of samples, because in Monte Carlo integration you need to add more and more samples to get a more faithful representation of the integral. What this means is that f is the "something" and p is the sampling distribution, and what we can do is create a discrete sum that takes samples of this function and then multiplies by the size of the domain. Obviously, since we are taking a sum, we need to divide by N, the number of samples: the more samples we take from the function, the larger the sum would get, so this is the averaging part. You have to always keep looking at the relevant quantities: the expected value of this f(x) means that in the integral I multiply it with this sampling probability, and on the right side, in the Monte Carlo estimate, I will have the same quantity as on the left side. So if I'm looking for the expected value of f(x), then I will sample f(x). If you take a look, you can see that this is just an approximation; it is not exactly the integral that we're looking for. But there is a multitude of theorems in probability theory showing that if you could use an infinite amount of samples, you would approach the actual integral, and most courses on Monte Carlo integration show you different ways of proving this. That is not what we are interested in; we will just believe that this is what is happening. It's actually very intuitive why: remember the sine wave that we sampled with all those blue and red dots; you could see that with a lot of samples you get to the destination, the area under the curve. Now let's try to use different sampling distributions, and in a few minutes you will see why this would be a good idea in some cases. I would like to integrate this f(x), and I am now doing a transformation that is the identity transformation: I didn't do anything to my f(x), I multiplied it by p(x) and then divided by p(x). This is almost like a scalar multiplier: by multiplying and then dividing by the same number I get the very same thing. But if I would like to write this as the expected value of something, it looks a bit different, because f/p is the "something" and p(x) is the sampling distribution. So what we have now is the expected value of f over p.
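The f/p construction is not tied to the uniform distribution. A minimal sketch, assuming an illustrative linear density p(x) = 2x/π² on [0, π], chosen by me only because its inverse CDF is easy (the CDF is x²/π², so x = π√u):

```python
import math
import random

def mc_integrate_fp(f, pdf, sample, n):
    """Average f(x) / p(x) over n samples x drawn from the density p."""
    return sum(f(x) / pdf(x) for x in (sample() for _ in range(n))) / n

# Target: the integral of 2 sin^2(x) on [0, pi], whose exact value is pi.
f = lambda x: 2.0 * math.sin(x) ** 2
# Illustrative non-uniform density p(x) = 2x / pi^2 on [0, pi].
pdf = lambda x: 2.0 * x / math.pi ** 2
# Inverse-CDF sampling: x = pi * sqrt(u). Using 1 - random() keeps u in (0, 1],
# so we never draw x = 0, where pdf(x) would be zero.
sample = lambda: math.pi * math.sqrt(1.0 - random.random())

random.seed(2)
estimate = mc_integrate_fp(f, pdf, sample, 200_000)
print(estimate)  # close to pi
```

Dividing each sample by its own probability density undoes the overrepresentation of the regions that p favors, which is the intuition given in the text.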

### [10:41](https://www.youtube.com/watch?v=Su6mJp6NYY4&t=641s) What Is the Monte Carlo Estimator

and the question is: what is the Monte Carlo estimator for this? What we concluded in the previous slides is that it should be the very same quantity as what I see in the expected value, so I will be sampling f over p. I'm not only sampling f; I'm sampling f over an arbitrarily chosen probability distribution. Now, there are some good readings on how to do this well and why it is useful, so if you would like to know more, please read some of these documents. They are really well written, and that's a rare thing nowadays, because I've seen lots of not-so-well-written guides on Monte Carlo integration; I needed to look for a very long time to find something of the quality that I would give out for other people to study. Now let's solve the actual example that we had previously with this formula, f/p times p. I am still integrating only f: 2 sin²(x) is the function that we wanted to integrate, and 1/π is the sampling probability, the uniform distribution over 0 to π, so I'm getting back the integral of the original function. I'm looking for the expected value of f/p, so I'm going to sample f/p in my code. Let's put this in source code. If you look here, I now divide by this sampling distribution, so it's 1/(b - a), which means 1/π. This a should have been zero in this case, so I apologize for the differences in the code; I put the 2.5 in there because if a is always zero, you may write code that works for integration from zero to something but not from one to something, so this is a good thing to check if you have implemented this. So again, this a should be zero, but if you compute the actual result, you will get your π. This is the f, the first term in the sum in line 36, and after the division we have the p. Okay, wonderful, so this works, and from multiple angles we now understand how exactly this thing is working.

Now, if you write a good Monte Carlo integration routine and you solve the rendering equation with it, what you will see is that as you add samples, you first get a really noisy image, and then, as you add more and more samples, this noise slowly clears up. If you think back to my previous lecture, we talked about over- and underestimations of the integral, and this is exactly what shows up in images too. If we are sampling a function, I am interested in the radiance, but as I add more and more samples, before I converge I will get values that are larger than the actual intensities and values that are smaller, and this is what shows up visually as noise. So what you are always looking for is this samples-per-pixel metric: when you have a noisy image, you need to know how many samples have been used per pixel, and if it's still noisy, you need to add more samples. There is also a visualization of the evolution of an image after hundreds and then 100,000 samples. Depending on the algorithm, there are multiple ways of solving the rendering equation: smarter algorithms may take longer to compute one sample because they are doing some smart magic, but this means you need fewer samples per pixel to get a converged image. And the first algorithm that we will study is actually the naive
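The claim that the noise clears up as samples accumulate can be checked numerically: the error of the uniform estimator shrinks roughly like 1/√N. A small sketch (the sample counts are chosen arbitrarily for illustration):

```python
import math
import random

def mc_estimate(f, a, b, n):
    """Uniform Monte Carlo estimate of the integral of f on [a, b]."""
    return (b - a) * sum(f(random.uniform(a, b)) for _ in range(n)) / n

f = lambda x: 2.0 * math.sin(x) ** 2  # exact integral on [0, pi] is pi

random.seed(3)
errors = {}
for n in (100, 10_000, 1_000_000):
    # Average the absolute error over a few runs to smooth out lucky draws.
    runs = [abs(mc_estimate(f, 0.0, math.pi, n) - math.pi) for _ in range(3)]
    errors[n] = sum(runs) / len(runs)
    print(n, errors[n])
```

Each 100-fold increase in the number of samples cuts the typical error by roughly a factor of 10, which is the numerical counterpart of a rendered image denoising as the samples-per-pixel count grows.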

### [14:55](https://www.youtube.com/watch?v=Su6mJp6NYY4&t=895s) Naive Algorithm

algorithm. It's called path tracing, and it usually needs a tremendous amount of samples to compute an image, but since it is a simple algorithm, you can use your GPU or CPU to dish out a lot of samples per pixel every second. Now, a bit of a beauty break: this is what we can get if we implement such a path tracer. This was rendered with LuxRender, a recent example. Does anyone know who this is? Just raise your hand. Okay, half of the people, excellent. This is actually Margaery Tyrell from Game of Thrones, and please, no spoilers, because Game of Thrones is currently airing and obviously we all love the show. There is also skin being rendered here, so there's tons of subsurface scattering, and you can solve this with a simple path tracer; we will put together the theoretical part in the second half of this lecture and then implement it in the next lecture. When I see renders like this, what I feel is only comparable to religious, spiritual wonder. It is absolutely amazing that we can compute something like this using only mathematics and the very simple tools that I have shown. And the other really cool thing is that we write these algorithms, we create products that use these algorithms, and these are given to world-class artists who are just as good at art as we are at engineering. They are also giving it their best to create more and more faithful models, and we can work together to create stuff like that. This is absolutely amazing.

---
*Source: https://ekstraktznaniy.ru/video/14989*