# Sound Propagation With Bidirectional Path Tracing | Two Minute Papers #111

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=DzsZ2qMtEUE
- **Date:** 04.12.2016
- **Duration:** 5:06
- **Views:** 35,751
- **Source:** https://ekstraktznaniy.ru/video/14742

## Description

The paper "Interactive Sound Propagation with Bidirectional Path Tracing" is available here:
http://gaps-zju.org/bst/

Veach's paper on Multiple Importance Sampling:
http://www.cs.jhu.edu/~misha/ReadingSeminar/Papers/Veach95.pdf
http://dl.acm.org/citation.cfm?id=218498

I am also holding a full course on light transport simulations at the Technical University of Vienna. There is plenty of discussion on path tracing and bidirectional path tracing therein:
https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi

WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
Sunil Kim, Julian Josephs, Daniel John Benton, Dave Rushton-Smith, Benjamin Kang.
https://www.patreon.com/TwoMinutePapers

Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz

Music: Dat Groove by Audionautix is licensed under a Creative Commons Attribution license (https://creativecommons.org/licenses/by/4.0/)

## Transcript

### Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Imagine if we had an accurate algorithm to simulate how different sound effects would propagate in a virtual world. We would find computer games exhibiting gunfire in open areas, or a pianist inside a castle courtyard, to be way more immersive, and we've been waiting for efficient techniques for this for quite a while now.

This is a research field where convolutions enjoy quite a bit of attention, because they are a reliable and efficient way to approximate how a given signal would sound in a room with a given geometry and material properties. However, the keyword is approximate. This paper presents one of those path sampling techniques that gives us the real deal, so I'm quite excited for that!

So what is this path sampling thing? It means an actual simulation of sound waves. We have a vast literature and decades of experience in simulating how rays of light bounce and reflect around in a scene, and leaning on this knowledge, we can create beautiful photorealistic images. The first idea is to adapt the mathematical framework of light simulations to do the very same with sound waves.

Path tracing is a technique where we build light paths from the camera, bounce them around in the scene, and hope that these rays hit a light source. If this happens, we compute the amount of energy that is transferred from the light source to the camera. Note that energy is the more popular, journalistic term here; what researchers actually measure is a quantity called radiance.

The main contribution of this work is adapting bidirectional path tracing to sound. This is a technique originally designed for light simulations that builds light paths from both the light source and the camera at the same time, and it is significantly more efficient than the classical path tracer on difficult indoor scenes.
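The path-sampling idea described above can be sketched as a plain Monte Carlo random walk. The toy scene below (a constant per-bounce probability of reaching the source and a constant per-bounce reflectance) is a made-up illustration, not the paper's actual propagation model:

```python
import random

def trace_path(rng, max_bounces=8, hit_prob=0.3, reflectance=0.7, source_energy=1.0):
    """One random walk from the listener/camera: at every bounce the path
    either reaches the source (with probability hit_prob) or loses some
    energy at a surface and keeps bouncing."""
    throughput = 1.0  # fraction of the source's energy the path still carries
    for _ in range(max_bounces):
        if rng.random() < hit_prob:      # the path found the source
            return throughput * source_energy
        throughput *= reflectance        # attenuation at each surface bounce
    return 0.0                           # path escaped with no contribution

def estimate_energy(n_paths=20000, seed=42):
    """Monte Carlo estimate: average the contribution over many paths."""
    rng = random.Random(seed)
    return sum(trace_path(rng) for _ in range(n_paths)) / n_paths
```

Averaging over many such paths converges to the expected transported energy; the difficulty mentioned in the video is exactly that most of these paths contribute little, which is what bidirectional sampling addresses.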
And of course, the main issue with these methods is that they have to simulate a large number of rays to obtain a satisfactory result, and many of these rays don't really contribute anything to the final result; only a small subset of them are responsible for most of the image we see or the sound we hear. It is a bit like the Pareto principle, or the 80/20 rule, on steroids. This is ice cream for my ears. Love it!

This work also introduces a metric that not only makes it possible to compare similar sound synthesis techniques in the future; the proposed technique is built around minimizing this metric, which leads to an idea of which rays carry important information and which ones we are better off discarding.

I also like the minimap in the upper left that shows exactly what we hear in this footage: where the sound sources are and how they change their positions. Looking forward to seeing and listening to similar presentations in future papers in this area!

A typical execution time for the algorithm is between 15 and 20 milliseconds per frame on a consumer-grade processor. That is about 50-65 frames per second. The position of the sound sources makes a great deal of difference for the classical path tracer. The bidirectional path tracer, however, is not only more effective but also offers significantly more consistent results. This new method is especially useful in these cases.

There are way more details explained in the paper; for instance, it also supports path caching and borrows the all-powerful multiple importance sampling from photorealistic rendering research. Have a look!

Thanks for watching and for your generous support, and I'll see you next time!
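The multiple importance sampling mentioned above (from Veach's paper linked in the description) combines samples drawn from several sampling strategies. A standard weighting scheme is the balance heuristic: a sample is weighted by its own strategy's pdf relative to the sum of all strategies' pdfs at that sample. A minimal two-strategy sketch, with made-up pdf values for illustration:

```python
def balance_heuristic(pdf_used, pdf_other):
    """Veach's balance heuristic for two strategies: the weight for a
    sample is the pdf of the strategy that produced it, divided by the
    sum of both strategies' pdfs at that sample."""
    return pdf_used / (pdf_used + pdf_other)

def mis_estimate(value, pdf_used, pdf_other):
    """Weighted contribution of one sample: apply the MIS weight, then
    divide by the pdf of the strategy that actually drew the sample."""
    return balance_heuristic(pdf_used, pdf_other) * value / pdf_used
```

The weights for the two strategies always sum to one at any given sample, so combining estimators this way stays unbiased while damping the high-variance cases where one strategy alone would do poorly.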
