# Real-Time Hair Rendering With Deep Opacity Maps | Two Minute Papers #171

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=7x2UvvD48Fw
- **Date:** 16.07.2017
- **Duration:** 3:57
- **Views:** 39,839
- **Source:** https://ekstraktznaniy.ru/video/14625

## Description

The paper "Deep Opacity Maps" is available here:
http://www.cemyuksel.com/research/deepopacity/

Unofficial implementation:
http://prideout.net/blog/?p=69

Recommended for you:
The Dunning-Kruger Effect - https://www.youtube.com/watch?v=4Y7RIAgOpn0
Are We Living In a Computer Simulation? - https://www.youtube.com/watch?v=ATN9oqMF_qk

Two Minute Papers Merch:
US: http://twominutepapers.com/
EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/

WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
Andrew Melnychuk, Christian Lawson, Dave Rushton-Smith, Dennis Abts, e, Esa Turkulainen, Kaben Gabriel Nanlohy, Michael Albrecht, Michael Orenstein, Sunil Kim, Torsten Reil, VR Wizard.
https://www.patreon.com/TwoMinutePapers

Music: Antarctica by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)
Artist: http://audionautix.com/ 

Thumbnail background image credit: https://pixabay.com/ph

## Transcript

### Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In earlier episodes, we've seen plenty of video footage about hair simulations and rendering, and today we are going to look at a cool new technique that produces self-shadowing effects for hair and fur. In this image pair, you can see the drastic difference that shows how prominent this effect is in the visual appearance of hair. Just look at that. Beautiful. But computing such a thing is extremely costly: since we have a dense piece of geometry, for instance, hundreds of thousands of hair strands, we have to know how each one occludes the other ones. This would take hopelessly long to compute. To even get a program that executes in a reasonable amount of time, we clearly need to simplify the problem further.

An earlier technique takes a few planes that cut the hair volume into layers. These planes are typically regularly spaced outwards from the light source, and it is much easier to work with a handful of these volume segments than with the full geometry. The more planes we use, the more layers we obtain, and the higher quality results we can expect. However, even if we can do this in real time, we will produce unrealistic images when using around 16 layers. Well, of course, we should then crank up the number of layers some more. If we do that, for instance, by now using 128 layers, we can expect better quality results, but we'll be able to process an image only twice a second, which is far from competitive. And even then, the final results still contain layering artifacts and are not very close to the ground truth. There has to be a better way to do this.

And with this new technique, called Deep Opacity Maps, these layers are chosen more wisely, and this way we can achieve higher quality results using only three layers, and it runs easily in real time. It is also more memory efficient than previous techniques. The key idea is that if we look at the hair from the light source's point of view, we can record how far away different parts of the geometry are from the light source. Then, we can create the new layers further and further away according to this shape. This way, the layers are not planar anymore; they adapt to the scene that we have at hand and contain significantly more useful occlusion information. As you can see, this new technique blows all previous methods away and is incredibly simple.

I have found an implementation from Philip Rideout; the link to this is available in the video description. If you have found more, let me know and I'll include your findings in the video description for the fellow tinkerers out there. The paper is ample in comparisons, so make sure to have a look at that too.

Sometimes I get messages saying, "Károly, why do you bother covering papers from so many years ago? It doesn't make any sense." And here you can see that part of the excitement of Two Minute Papers is that the next episode can be about absolutely anything. The series has been mostly focusing on computer graphics and machine learning papers, but don't forget that we also have episodes on whether we are living in a simulation, or the Dunning-Kruger effect, and so much more. I've put a link to both of them in the video description for your enjoyment. The other reason for covering older papers is that a lot of people don't know about them, and if we can help just a tiny bit to make sure these incredible works see more widespread adoption, we've done our job well. Thanks for watching and for your generous support, and I'll see you next time!
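To make the key idea above concrete, here is a minimal NumPy sketch of the depth-adapted layering, not the paper's GPU implementation: all names, the grid size, and the constants (`K`, `alpha`, `layer_size`) are illustrative assumptions. It records the nearest hair depth per pixel as seen from the light, then accumulates opacity into a few layers whose boundaries start at that per-pixel depth, so the layers follow the hair's shape instead of being flat planes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scene: hair strand sample points already projected into
# the light's view, as (pixel_x, pixel_y, depth_from_light) triples.
W, H = 4, 4
points = np.column_stack([
    rng.integers(0, W, 200),
    rng.integers(0, H, 200),
    rng.uniform(1.0, 2.0, 200),
])

K = 3              # number of opacity layers (the technique needs only a few)
alpha = 0.1        # opacity contributed by one strand sample (assumed)
layer_size = 0.15  # thickness of each layer along the light ray (assumed)

# Pass 1: depth map from the light -- nearest hair depth per pixel.
z0 = np.full((H, W), np.inf)
for x, y, z in points:
    xi, yi = int(x), int(y)
    z0[yi, xi] = min(z0[yi, xi], z)

# Pass 2: accumulate opacity into K layers that begin at z0 per pixel,
# so layer boundaries adapt to the hair volume's shape.
layers = np.zeros((K, H, W))
for x, y, z in points:
    xi, yi = int(x), int(y)
    k = int((z - z0[yi, xi]) / layer_size)
    k = min(k, K - 1)  # clamp: everything deeper falls into the last layer
    layers[k, yi, xi] += alpha

# Cumulative opacity up to each layer boundary; a shading pass would look up
# (and interpolate between) these values to estimate how much light reaches
# a given point inside the hair.
cumulative = np.cumsum(layers, axis=0)
```

On a GPU this would be two render passes over the hair geometry from the light's point of view (a depth pass, then an opacity-accumulation pass), with the shading pass interpolating between layer values for smooth self-shadowing.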
