# Bundlefusion: 3D Scenes from 2D Videos | Two Minute Papers #81

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=zLzhsyeAie4
- **Date:** 25.07.2016
- **Duration:** 2:20
- **Views:** 12,924

## Description

This piece of work enables us to walk around in a room with a camera, and create a complete 3D computer model from the video footage. Note that the title says "2D", but since RGB-D cameras are relatively new, they are both referred to as 2D and 3D (I've heard 2.5D as well before). We went with the 2D for now and I hope it won't raise any confusion! :)

____________________________

The paper "BundleFusion: Real-time Globally Consistent 3D Reconstruction using Online Surface Re-integration" is available here:
http://graphics.stanford.edu/projects/bundlefusion/

WE WOULD LIKE TO THANK OUR GENEROUS PATREON SUPPORTERS WHO MAKE TWO MINUTE PAPERS POSSIBLE:
David Jaenisch, Sunil Kim, Julian Josephs, Daniel John Benton.
https://www.patreon.com/TwoMinutePapers

We also thank Experiment for sponsoring our series. - https://experiment.com/

The thumbnail background image was created by gregzaal - http://www.blendswap.com/blends/view/74382
Subscribe if you would like to see more of these! - http://www.youtube.com/subscription_center?add_user=keeroyz

Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu

Károly Zsolnai-Fehér's links:
Facebook → https://www.facebook.com/TwoMinutePapers/
Twitter → https://twitter.com/karoly_zsolnai
Web → https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=zLzhsyeAie4) Segment 1 (00:00 - 02:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This piece of work enables us to walk around in a room with a camera and create a complete 3D computer model from the video footage. The technique has a really cool property: the 3D model is continuously refined as we obtain more and more data by walking around with our camera. This is a very difficult problem, and a good solution opens up a set of cool potential applications.

If we have a 3D model of a scene, what can we do with it? Well, of course, we can assign different materials to its surfaces and run a light simulation program for architectural visualization applications, animation movies, and so on. We can also easily scan a lot of different pieces of furniture and create a useful database out of them. There are tons more applications, but I think this should do for starters.

Normally, if one has to create a 3D model of a room or a building, the bottom line is that it requires several days or weeks of labor. Fortunately, with this technique, we obtain a 3D model in real time and won't have to go through these tribulations. However, I'd like to note that the models are still by far not perfect. If we are interested in the many small intricate details, we have to add them back by hand.

Previous methods were able to achieve similar results, but they suffer from a number of drawbacks. For instance, most of them don't support traditional consumer cameras, or they take minutes to hours to perform the reconstruction. To produce the results presented in the paper, an NVIDIA Titan X video card was used, which is currently one of the pricier pieces of equipment for consumers, but not so much for companies, who are typically the ones interested in these applications. If we take into consideration the rate at which graphics hardware is improving, anyone will be able to run this at home in real time in a few years' time.

The comparisons to previous works reveal that this technique is not only real-time, but the quality of the results is mostly comparable, and in some cases it surpasses previous methods. Thanks for watching and for your generous support, and I'll see you next time.
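The continuous refinement described above rests on volumetric depth-map fusion: each incoming depth frame is projected into a voxel grid storing a truncated signed distance function (TSDF), and each voxel keeps a weighted running average of its distance to the nearest surface. This is not BundleFusion's full pipeline (which adds global pose optimization and on-the-fly surface re-integration); the sketch below only illustrates the basic fusion step, with all function names, grid sizes, and parameters chosen for illustration.

```python
import numpy as np

def integrate_frame(tsdf, weights, depth, pose, K, voxel_size, trunc):
    """Fuse one depth frame into a TSDF voxel grid via a weighted running average.

    tsdf, weights : (nx, ny, nz) float arrays, updated in place
    depth         : (h, w) depth image in meters (0 = no measurement)
    pose          : 4x4 camera-to-world transform
    K             : 3x3 camera intrinsics
    """
    nx, ny, nz = tsdf.shape
    # World coordinates of every voxel center (grid origin at the world origin)
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny),
                             np.arange(nz), indexing="ij")
    pts = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centers into the camera frame: p_cam = R^T (p_world - t)
    R, t = pose[:3, :3], pose[:3, 3]
    cam = (pts - t) @ R
    z = cam[:, 2]
    z_safe = np.where(z > 0, z, 1.0)  # avoid division by zero; masked out below
    # Pinhole projection into the depth image
    u = np.round(cam[:, 0] / z_safe * K[0, 0] + K[0, 2]).astype(int)
    v = np.round(cam[:, 1] / z_safe * K[1, 1] + K[1, 2]).astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.where(valid, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    valid &= d > 0
    # Signed distance along the viewing ray, truncated to [-1, 1]
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    upd = valid & (sdf > -1.0)  # skip voxels far behind the observed surface
    flat_t, flat_w = tsdf.reshape(-1), weights.reshape(-1)  # views into the grids
    w_old = flat_w[upd]
    flat_t[upd] = (flat_t[upd] * w_old + sdf[upd]) / (w_old + 1.0)
    flat_w[upd] = w_old + 1.0
```

As more frames are fused, the running average smooths sensor noise, which is why the model visibly sharpens as the camera revisits a region; the surface itself can then be extracted at the TSDF's zero crossing (e.g. with marching cubes).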

---
*Source: https://ekstraktznaniy.ru/video/14799*