TU Wien Rendering #29 - Path Tracing Implementation & Code Walkthrough


Two Minute Papers · 18.05.2015 · 18,429 views · 295 likes


Video description
Now that we know how path tracing works, we put into code close to everything we've learned so far and implement a full global illumination path tracer from scratch in just 250 lines of C++ code. Imagine that all this knowledge we've amassed can be compressed into such a small program! The full implementation can be downloaded here: http://cg.tuwien.ac.at/~zsolnai/gfx/smallpaint/ About the course: This course aims to give an overview of basic and state-of-the-art methods of rendering. Offline methods such as ray and path tracing, photon mapping and many other algorithms are introduced and various refinements are explained. The basics of the involved physics, such as geometric optics, surface and media interaction with light, and camera models are outlined. The apparatus of Monte Carlo methods, which is heavily used in several algorithms, is introduced, and its refinement in the form of stratified sampling and the Metropolis-Hastings method is explained. At the end of the course students should be familiar with common techniques in rendering and find their way around the current state of the art of the field. Furthermore, the exercises should deepen the attendees' understanding of the basic principles of light transport and enable them to write a simple rendering program themselves. These videos are the recordings of the lectures of 2015 at the Technische Universität Wien by Károly Zsolnai and Thomas Auzinger. Course website and slides → http://www.cg.tuwien.ac.at/courses/Rendering/ Subscribe → http://www.youtube.com/subscription_center?add_user=keeroyz Web → https://cg.tuwien.ac.at/~zsolnai/ Twitter → https://twitter.com/karoly_zsolnai

Table of contents (17 segments)

Intro

What a wonderful day we have today, and what a wonderful time it is to write a path tracer! So why don't we get

Overview

started. What we are going to be looking at is a program called smallpaint, which is a small path tracer in effectively 250 lines of code, and it contains everything that we have learned during this course. We're going to be able to compute soft shadows, anti-aliasing, Monte Carlo integration, and even quasi-Monte Carlo sampling, which basically means low-discrepancy sampling. This version of the program is going to be able to compute refractions, color bleeding and caustics, and in the end the binary you compile from the code can be compressed into less than 4 kilobytes. This is how the end result

End Result

looks. It has a beautiful painterly look, which actually comes from a bug, and you can also see that the light source up there, the whitish-looking sphere, is, you could say, perfectly anti-aliased. In order to achieve this with a recursive ray tracer and no global illumination algorithm, you would need to compute the very same image at a much larger resolution and then scale it down to a smaller image like this. This anti-aliasing effect you get for free if you compute Monte Carlo path tracing. So the question is, how is this exactly done? And now is the best time to put everything into use that we have learned so far. So let's get started. We have a

Vector class

completely standard vector class. It is a three-dimensional vector; it has its own constructor, the classical operators that you would expect, and we also have a dot product and a cross product for the vector. It is also obviously possible to compute the length of this vector. So nothing too exciting or important here, but we definitely need to build on a solid vector class. Now, the representation of a ray has an origin and a direction, and if you take a close look at the constructor, you can see that when you initialize a ray with a direction, then this direction is normed. The reason is that when we compute the dot products between these vectors, most of this information needs to be directional information: we are not interested in the magnitude of the vector, only in its direction. A good way to get rid of problems where you would initialize your ray with directions that are not vectors of unit length is to normalize this input in the constructor, so you will never have headaches about incorrect results where you have no idea what is really happening. What is the representation of
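A minimal sketch of such a vector and ray class might look as follows; the names and operator set are illustrative, not the verbatim smallpaint code:

```cpp
#include <cmath>

// A bare-bones 3D vector with the operations mentioned above.
struct Vec {
    double x, y, z;
    Vec(double x = 0, double y = 0, double z = 0) : x(x), y(y), z(z) {}
    Vec operator+(const Vec& b) const { return Vec(x + b.x, y + b.y, z + b.z); }
    Vec operator-(const Vec& b) const { return Vec(x - b.x, y - b.y, z - b.z); }
    Vec operator*(double s) const { return Vec(x * s, y * s, z * s); }
    double dot(const Vec& b) const { return x * b.x + y * b.y + z * b.z; }
    Vec cross(const Vec& b) const {
        return Vec(y * b.z - z * b.y, z * b.x - x * b.z, x * b.y - y * b.x);
    }
    double length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec normalized() const { double l = length(); return Vec(x / l, y / l, z / l); }
};

// The ray normalizes its direction in the constructor, so every dot
// product downstream can assume a unit-length direction.
struct Ray {
    Vec o, d;
    Ray(const Vec& o, const Vec& d) : o(o), d(d.normalized()) {}
};
```

The key design point is the normalization in the Ray constructor: whatever direction you pass in comes out unit length, which the intersection and shading math relies on.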

Object representation

an object? Well, an object has a color, denoted by cl. This is actually very sloppy notation, and you can imagine it as the albedo, but it is not a double, so it's not a number; it is actually a vector. The reason for this is the fact that we need to define the albedos: how much light at different wavelengths is being reflected and how much is being absorbed by the given object. Now, objects may also have emission; if they have some nonzero emission, then they are light sources. And by type we have an integer that specifies what kind of BRDF we have. It is also important to have an intersection routine and some other function that can compute the normal of the object. Now, these are of course virtual functions; we don't define them for an abstract object, but they have to be implemented in other classes that inherit from the object. Let's take a look at the sphere

Sphere representation

representation. So a sphere has a c and an r: c is the center of the object and r is the radius. The constructor is trivial. We have the intersection function; now, you hopefully remember all three of the coefficients of the quadratic equation that we need to solve, but if you take a good look, you will see that a is missing from here, and the question is why. The answer is that a comes from multiplying d, the direction vector of the ray, with itself, and since it is a vector that is normed, so it is of length one, this a will always be one. After that we can deal with the discriminant, which is normally b² − 4ac; remember a is one here, so it's omitted. But it is behind a square root, and this square root is completely omitted here, which looks like a mistake, but it is not: it is essentially an optimization step. You will see that we are testing the discriminant: if it's less than zero, then we don't need to deal with this equation, because the solutions of the quadratic equation exist only in the plane of complex numbers, and that's not a valid t, not a valid distance where we would intersect the sphere. If this is not happening, then we can compute the square root of the discriminant. Why only after that? Because if the discriminant is bigger than zero, then the square root is not going to make a difference for the test, so what we can essentially do is postpone the square root calculation until after the test. Note that square roots are really expensive, so we can save lots of computation time if we omit this calculation. There is a really nice square root hack in the source code of Quake 3, which is, by the way, open source; take a look at how people are trying to hack together functions in order to work better and faster than they should, because square roots are super expensive and there are some really interesting hacks to speed them up. We have the plus and minus terms, and the division by 2a is again postponed, and that's also another interesting question: why is this postponed? You can see that the second solution is divided by two, and the first one is also divided by two, but only after the test. So if the second solution is bigger than epsilon, then we return the first expression after the question mark; if it's not, then we are looking at the second expression after it, which is another test; and if the answer is no for that as well, then we return zero. This would mean that we don't have any hits, or the hits are behind us, and we are not interested in intersections that are behind our ray. There is a possibility that we encounter this case, and then I don't want to waste my time dividing these solutions by two, because I'm not going to use them. Why am I splitting hairs here? That's an important question: why do we need to optimize so much? Because if you grab a profiler, a program that is able to show you in which functions you are spending most of your time, this profiler would show you that more than 90% of the execution time is spent in the intersection routines. So you have to have a really well optimized intersection routine; some programs have replaced these expressions with assembly in order to speed them up even more. So how do we compute the normal of a sphere? Well, very simply: what we have here is p minus c. Now what does that mean? If I have a minus b, then this means a vector that points from b to a. Look at the figure here: if I had a circle, then this would mean that from the center it is pointing towards the given point of the sphere. Now, this is also divided by r, because you could imagine a sphere that is not of unit radius, and then this normal vector would have a length. You could compute a normalization (we have a normalized function in our vector implementation), but it also has a square root, so it would be much better to have something faster. Well, if you just divide by the radius of the sphere, then you immediately get a vector of unit length, so in the end we can get the correct result by just simply dividing by
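The intersection routine described above can be sketched like this, under the assumption that the ray direction is unit length (so a = 1); this is a reconstruction of the idea, not the exact smallpaint source:

```cpp
#include <cmath>

struct Vec {
    double x, y, z;
    Vec operator-(const Vec& b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const Vec& b) const { return x * b.x + y * b.y + z * b.z; }
};
struct Ray { Vec o, d; };  // d is assumed unit length, so a = 1

// Intersection distance of a ray with a sphere (center c, radius r),
// or 0 if there is no hit in front of the ray.
double sphereIntersect(const Ray& ray, const Vec& c, double r) {
    Vec oc = ray.o - c;
    double b = 2 * oc.dot(ray.d);     // a = d.dot(d) = 1 is omitted
    double cc = oc.dot(oc) - r * r;
    double disc = b * b - 4 * cc;     // discriminant, sqrt postponed
    if (disc < 0) return 0;           // complex roots: no real hit
    disc = std::sqrt(disc);           // only taken once a hit is possible
    double sol1 = -b + disc, sol2 = -b - disc;
    const double eps = 1e-6;
    // The division by 2a is postponed too: we only pay for it when
    // the solution is actually used.
    return (sol2 > eps) ? sol2 / 2 : ((sol1 > eps) ? sol1 / 2 : 0);
}

// Dividing by r normalizes the normal without a square root.
Vec sphereNormal(const Vec& p, const Vec& c, double r) {
    return (p - c) * (1.0 / r);
}
```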

Perspective camera

R. Excellent. Now we have a perspective camera here; hopefully you remember this from the first lecture. We are basically just copy-pasting the expression here; we have derived it rigorously and analyzed how it exactly works. A simple intuition: basically, we have as input an x and a y, which means give me a pixel with this displacement, and what it gives you back are the world space coordinates of
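As a rough illustration, a pinhole-camera mapping from pixel coordinates to a point on the image plane could look like the sketch below; the field-of-view handling and the parameter names are assumptions, since the actual smallpaint expression is only referenced, not shown, in the lecture:

```cpp
#include <cmath>

struct Vec { double x, y, z; };

// Map pixel (x, y) of a width*height image to a point on the image
// plane at z = 1 in camera space. fovDeg is an illustrative default.
Vec camcr(double x, double y, int width, int height, double fovDeg = 60) {
    const double PI = 3.14159265358979323846;
    double fovScale = std::tan(fovDeg * 0.5 * PI / 180.0);
    double aspect = static_cast<double>(width) / height;
    // Map pixel coordinates to [-1, 1]; flip y so the image is upright.
    double px = (2 * (x + 0.5) / width - 1) * fovScale * aspect;
    double py = (1 - 2 * (y + 0.5) / height) * fovScale;
    return {px, py, 1.0};
}
```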

Uniform sampling

these pixels. Uniform sampling of a hemisphere: what is this for? If we encounter a diffuse object, what we would like to do is send a ray out on the hemisphere of this object, and this we need to sample uniformly. This means that the diffuse BRDF is 1/π, or ρ/π if you take the albedo into consideration, and you need transformations in order to do it; there is a reading behind this link. How it works exactly is by drawing uniform samples on a plane, which is simple, and then projecting them onto the hemisphere. That's basically all there
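The plane-to-hemisphere transform mentioned here is, in its standard form, the following; the actual code additionally rotates the sample into the frame of the surface normal, which is omitted in this sketch:

```cpp
#include <cmath>

struct Vec { double x, y, z; };

// Uniform sampling of the unit hemisphere around +z from two uniform
// random numbers u1, u2 in [0, 1): uniform in (cos(theta), phi) is
// uniform in area on the hemisphere.
Vec hemisphere(double u1, double u2) {
    double r = std::sqrt(1.0 - u1 * u1);  // radius of the circle at height u1
    double phi = 2.0 * 3.14159265358979323846 * u2;
    return {std::cos(phi) * r, std::sin(phi) * r, u1};  // unit length by construction
}
```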

Trace function

is. What about the trace function? As you can see here, in the first line this code says that there is a maximum depth. Now, clamping after a maximum depth value is not really optimal, because whatever number you put in there, the higher-order bounces are going to be completely omitted. The real solution would be Russian roulette path termination, which we fortunately also have: after a depth of an arbitrary number like five, you start the Russian roulette routine, which basically says there is a probability for stopping the light path right there. We generate a random number and compare it to this probability; if we don't hit this probability, then we continue our path, but we multiply the output, the contribution of this light path, by the factor that we have specified in one of the previous lectures. This was implemented by Christian Mahek, and kind thanks to him; you can see that later we are going to use this RR factor in order to multiply the contribution of the ray. Now, what about the intersection
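The Russian roulette logic described can be sketched as follows; the depth threshold and the stopping probability are illustrative numbers, not necessarily the ones in the shipped code:

```cpp
#include <cstdlib>

// After a fixed depth, stop the path with probability stopProb;
// surviving paths are reweighted by 1/(1 - stopProb) so the Monte
// Carlo estimate stays unbiased. Returns true when the path dies.
bool russianRoulette(int depth, double& rrFactor) {
    const int minDepth = 5;       // start roulette after this many bounces
    const double stopProb = 0.1;  // illustrative termination probability
    rrFactor = 1.0;
    if (depth >= minDepth) {
        if (static_cast<double>(std::rand()) / RAND_MAX <= stopProb)
            return true;                     // terminate this light path
        rrFactor = 1.0 / (1.0 - stopProb);   // compensate the survivors
    }
    return false;  // continue; contribution is scaled by rrFactor
}
```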

Intersection routine

routine? This is definitely not the best way to do it, but it is sure as hell the easiest way to do it. We specify this t, which is going to be the intersection distance: how far we are from the start of the ray, how far this intersection exactly is. The id is basically which object we hit. Then we iterate through all of the objects in the scene, and what we are interested in is calling the intersection routine; this returns a t, how far the intersection is, and I am interested in the intersection with the smallest t, which means the closest intersection, and also something that is larger than epsilon. Because if I tolerated zero, that would mean self-intersections are accepted, and therefore every single object that I am on would be the first intersection. I'm not interested in this; I know where I am, I just want to continue my path. If there's no intersection, then we return: there is zero contribution. Where is the intersection in world space? We denote this by hp, which means hit point: where we started the ray, we traveled along the direction of the ray by this t amount, and this is where we ended up. The origin of the new ray is going to be this hit point. What is the normal going to be? Well, we just grab the object that we intersected and take the normal with the given function. What is the returned radiance? We simply add the emission term, this is the emission on all three wavelengths (there is a magic multiplier, disregard that), and then we continue evaluating the second part of the rendering equation: the first part is the emission and the second is the reflected amount of

Diffuse

light. Let's continue with the inside of the trace function. If we hit an object with a type of one, then this is a diffuse BRDF. The next functions compute the next random number for the low-discrepancy Halton sampler, and the direction is going to be a completely uniform random sample on the hemisphere of this object. What we have here is this n plus the hemisphere function; this is intuition, this is not exactly what is happening. I've just shortened the code slightly in order to simplify what is going on here; the code that you will download will have the real deal in there. Then we compute the cosine term, very straightforward, and in tmp we are going to instantiate a new vector, and this is going to hold the recursion: the subsequent samples that we shoot out on the hemisphere are going to be added to this tmp. Now is the time for recursion: we pass the ray and the scene to the trace function. The ray is actually not the current one, it is the new one; basically we set up the new hit point and the new direction of the ray, and this is what we're going to pass to the trace function. We increment the depth variable because we have computed the bounce, the tmp is going to be a variable where we gather all this radiance, and we put in every other parameter that is needed to compute one more bounce. Now the color is going to contain the cosine term and all this radiance that is collected from the recursion, and we multiply it with this cl.x, cl.y, cl.z, which is basically the BRDF. So this is the right side of the rendering equation for a diffuse BRDF. This is multiplied by 0.1, which is just a magic
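The accumulation step for the diffuse branch, written out schematically: the componentwise albedo product plays the role of the BRDF, the recursion result tmp is weighted by the cosine term, and 0.1 is the magic constant from the lecture. This is a sketch of the arithmetic, not the exact smallpaint statement:

```cpp
// Minimal per-channel vector with only what this step needs.
struct Vec {
    double x, y, z;
    Vec operator+(const Vec& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec mult(const Vec& b) const { return {x * b.x, y * b.y, z * b.z}; }  // componentwise
};

// color += tmp * cost * albedo * 0.1, componentwise in the albedo.
Vec diffuseShade(const Vec& color, const Vec& tmp, const Vec& albedo, double cost) {
    return color + tmp.mult(albedo) * (cost * 0.1);
}
```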

specular

constant. Now what about a specular BRDF, what if we hit a mirror? Well, very simple: we compute the perfect reflection direction, you can see the ray.d, and we again set up this variable to collect the radiance in there, and we are not doing anything else; we are just going to add the radiance as we get reflected off of this mirror. Then we are going to compute the subsequent bounces, and this is going to be stored in this tmp, so this is what we're going to add to this radiance. What about a refractive material?
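The perfect reflection direction is the standard formula d − 2(d·n)n; as a sketch:

```cpp
struct Vec {
    double x, y, z;
    Vec operator-(const Vec& b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const Vec& b) const { return x * b.x + y * b.y + z * b.z; }
};

// Mirror reflection of incoming direction d about surface normal n
// (n assumed unit length): the component along n is flipped.
Vec reflect(const Vec& d, const Vec& n) {
    return d - n * (2.0 * d.dot(n));
}
```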

refraction

Well, we have every bit of knowledge that we need for this, because essentially this is the vector version of Snell's law. What does that mean? Well, the original Snell's law that we have computed is in 1D, so it only gives you one angle, but if you are in 3D, you are interested in angles in two different dimensions; this is nothing but the extension of the very same law into a higher dimension. Now where is this implemented exactly? You can see the cosine of theta 2. Note that n1 and n2 are treated differently, because one of these media is always going to be air, therefore one of the indices of refraction is always going to be one. The rest is just copy-paste, and again you can see that the square root is missing: we postpone it until after the test of cosine theta 2, because if it is actually not larger than zero, then we are not going to need this variable at all, so we can again postpone it until after the test. What about the direction of the outgoing ray? Well, this is just copy-paste from the formula that we have derived before, as simple as that. Obviously we again need the recursion, because if we go inside a glass sphere, then we are going to compute the refraction, so we are going to be inside of the sphere. What does that mean? One thing is that we have to invert the normal, because we are inside, so the normals are flipped; and again we need to call the trace function, which is the recursion, because we are also interested in the higher-order bounces. Onwards to Fresnel's law: what is
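The vector form of Snell's law with the postponed square root can be sketched as follows; variable names are illustrative:

```cpp
#include <cmath>

struct Vec {
    double x, y, z;
    Vec operator+(const Vec& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec operator*(double s) const { return {x * s, y * s, z * s}; }
    double dot(const Vec& b) const { return x * b.x + y * b.y + z * b.z; }
};

// d: unit incoming direction; n: unit normal facing the incoming side;
// eta = n1/n2. Returns false on total internal reflection, i.e. when
// cos^2(theta2) < 0; the square root is only taken after that test.
bool refract(const Vec& d, const Vec& n, double eta, Vec& out) {
    double cost1 = -d.dot(n);
    double cost2sq = 1.0 - eta * eta * (1.0 - cost1 * cost1);
    if (cost2sq < 0) return false;       // total internal reflection
    double cost2 = std::sqrt(cost2sq);   // postponed square root
    out = d * eta + n * (eta * cost1 - cost2);
    return true;
}
```

At normal incidence the ray passes straight through, which makes a handy sanity check.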

reflection

the probability of reflection and refraction when rays are bouncing off of refractive surfaces in different directions, at different angles? This was implemented by Christian Huffner, so a big thanks to him. It is very simple: you can see that it is exactly the same as what we have learned in mathematics. This is the R0 term, the probability of reflection at normal incidence, and we are interested in the square of that. Note that you don't see the n1 and n2; this is because one of them is always going to be air or vacuum, so it is going to have an index of refraction of one. Now what about the final probability of reflection? It is also coming from the formula; we have every bit of information we need, so we just put in this term with the cosine
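What is described here is Schlick's approximation of the Fresnel term; a scalar sketch with n1 fixed to 1 (air or vacuum), as the lecture notes:

```cpp
#include <cmath>

// Schlick's approximation: R0 is the reflectance at normal incidence
// for an air/medium boundary, and the (1 - cos)^5 term raises the
// reflection probability towards 1 at grazing angles.
double fresnel(double cost, double n2) {
    double R0 = (1.0 - n2) / (1.0 + n2);
    R0 = R0 * R0;  // squared, as mentioned in the lecture
    return R0 + (1.0 - R0) * std::pow(1.0 - cost, 5.0);
}
```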

main function

attenuation. How does the main function look? Well, we have some wizardry with C++11 lambda functions, but basically this is just a shortcut in order to be able to add a new sphere or a new plane to the scene in one line of code. Spheres are given by their radius, position, color (by color we obviously mean albedo), emission, and type, which means what kind of BRDF we have: a diffuse, a specular, or a refractive BRDF. For planes we have position, normal, color, emission and obviously type, so what kind of material we have. So using just one line of code you can add a new object and specify every piece of information that you would need for it. We also add the light source, we specify the index of refraction for the refractive BRDFs, and we also specify how many samples per pixel we would like to compute.
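The one-line-per-object trick can be sketched with a C++11 lambda like this; the types and the scene values are illustrative stand-ins, not the real smallpaint scene:

```cpp
#include <vector>

struct Vec { double x, y, z; };
struct Object {
    Vec cl;           // "color", i.e. the albedo
    double emission;  // nonzero emission makes it a light source
    int type;         // e.g. 1 = diffuse, 2 = specular, 3 = refractive
    virtual ~Object() {}
};
struct Sphere : Object {
    double r; Vec c;
    Sphere(double r, const Vec& c) : r(r), c(c) {}
};

int buildScene() {
    std::vector<Object*> scene;
    // The lambda captures the scene, so each object is one line:
    // create it, set its material, store it.
    auto add = [&scene](Object* o, const Vec& cl, double e, int t) {
        o->cl = cl; o->emission = e; o->type = t; scene.push_back(o);
    };
    add(new Sphere(1.05, {-0.75, -1.45, -4.4}), {4, 8, 4}, 0, 2);   // a specular sphere
    add(new Sphere(0.5, {-1.75, -3.85, -8.0}), {10, 10, 1}, 0, 1);  // a diffuse sphere
    int n = static_cast<int>(scene.size());
    for (Object* o : scene) delete o;
    return n;
}
```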

main loop

Onwards to the main loop. We have two for loops that iterate through the width and the height of the image plane. Now, Vector c is color; it's again very sloppy, what it actually means is the radiance that we compute. We instantiate a ray. What is going to be the origin of the ray? It is going to be the zero vector, this is where the camera is placed. What is going to be the direction of the ray? Well, we connect this ray to the camera plane, and we specify which pixel we are computing with i and j, and then we add this weird random number to it. What this means is actually filtering. In recursive ray tracing you would only send the ray through the midpoint of a pixel, and that's it: you would compute one sample per pixel. In Monte Carlo path tracing you are computing many samples per pixel, and they don't have to go through the midpoint of the pixel; you sample the area of the pixel, and this gives you anti-aliasing effects for free if you use it correctly. What is going to be the direction of the ray? Well, this is again the same a minus b: b is the origin of the ray and a is the camera coordinate, so it is pointing from zero to the camera plane, and we normalize this expression to have a ray of unit length. Now we obviously call the trace function: the number of bounces is zero, and we pass every piece of information that needs to be known in order to compute these bounces, so we provide this initial ray and the scene and everything else. Obviously we also pass the c, and this is going to collect all the radiance there is in the subsequent bounces. Then, after this recursion is done, we deposit all this energy, all this radiance, into the individual pixels, and then we divide by the number of samples. Because if we didn't do this (you remember the 1/N multiplier everywhere in Monte Carlo integration), then the more samples you compute, the brighter the image you would get, and this is obviously not what we're looking for.
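The per-pixel logic (jittering the sample inside the pixel's area, then dividing by the sample count) can be sketched as follows; the function names are illustrative:

```cpp
#include <cstdlib>

// Average `samples` radiance estimates for pixel (i, j). The random
// offset in [0, 1) means the rays no longer go through the pixel
// midpoint, which is what gives the free anti-aliasing; the final
// division is the 1/N of the Monte Carlo estimator.
double pixelEstimate(int samples, double (*radianceAt)(double, double),
                     int i, int j) {
    double sum = 0;
    for (int s = 0; s < samples; ++s) {
        double dx = static_cast<double>(std::rand()) / RAND_MAX;
        double dy = static_cast<double>(std::rand()) / RAND_MAX;
        sum += radianceAt(i + dx, j + dy);
    }
    return sum / samples;  // without 1/N, more samples = brighter image
}
```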

ppm file

At the very end we create a file; this is the PPM file format, where you can easily write all your contributions. We also start a stopwatch in order to measure how long we have been tracing all these rays, so very simple, very trivial. When we are done, we close the file (it has the image in there), and we then write out how long the rendering algorithm has been running. And basically that's it: this is effectively 250 lines of code that can compute indirect illumination, caustics and every global illumination effect, and it can compute images like this. This is one student submission from previous years, absolutely gorgeous; this is the fixed version of smallpaint where there are no errors in the sampling. Another one from Mik Kama; this actually looks, I don't know if you are into the music band Boards of Canada, but this looks exactly like one of their album covers, love it, really cool. And also Sierpinski triangles from Christian Kusa. You can find the link to the code in there, so take a crack at it: just try it, build different scenes, try to understand what is going on in there, try to mess the code up. I wonder what happens if I don't normalize this vector? Play with it; it's a really small, concise and really understandable path tracer, so take your time and play with it. It's lots of fun and you can create lots of beautiful images with global illumination. Thank you!
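A minimal plain-text PPM (P3) writer, as a sketch of the output step; the exact smallpaint writer may differ in details:

```cpp
#include <cstdio>

// Plain PPM: a text header ("P3", resolution, max value), then one
// RGB triple per pixel. pix holds width*height*3 values in [0, 255].
bool writePPM(const char* path, const int* pix, int width, int height) {
    FILE* f = std::fopen(path, "w");
    if (!f) return false;
    std::fprintf(f, "P3\n%d %d\n255\n", width, height);
    for (int i = 0; i < width * height; ++i)
        std::fprintf(f, "%d %d %d ", pix[3 * i], pix[3 * i + 1], pix[3 * i + 2]);
    std::fclose(f);
    return true;
}
```

The format is simple enough to inspect in a text editor, which is exactly why tiny renderers like this one favor it.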
