# This AI Learned To See In The Dark! 👀

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=bcZFQ3f26pA
- **Date:** May 29, 2018
- **Duration:** 3:02
- **Views:** 344,021
- **Source:** https://ekstraktznaniy.ru/video/14462

## Description

The paper "Learning to See in the Dark" and its source code are available here:
http://cchen156.web.engr.illinois.edu/paper/18CVPR_SID.pdf
https://github.com/cchen156/Learning-to-See-in-the-Dark

Our Patreon page: https://www.patreon.com/TwoMinutePapers

We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
313V, Andrew Melnychuk, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Esa Turkulainen, Geronimo Moralez, Kjartan Olason, Lorin Atzberger, Malek Cellier, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Nader Shakerin, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Torsten Reil.

One-time payment links are available below. Thank you very much for your generous support!
PayPal: https://www.paypal.me/TwoMinutePapers
Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh
Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53

## Transcript

### Segment 1 (00:00 - 03:00)

Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If you start watching reviews of some of the more recent smartphones, you will almost always see a dedicated section on low-light photography. The result is almost always that cameras that work remarkably well in well-lit scenes produce almost unusable results in dim environments. So unless we have access to a super expensive camera, what can we really do to obtain more usable low-light images? Well, of course, we could try brightening the image up by increasing the exposure. This would help maybe a tiny bit, but would also mess up our white balance and also amplify the noise within the image. I hope that by now you are getting the feeling that there must be a better AI-based solution. Let's have a look!

This is an image of a dark indoor environment, I am sure you have noticed. This was taken with a relatively high light sensitivity that can be achieved with a consumer camera. This footage is unusable. And this image was taken by an expensive camera with extremely high light sensitivity settings. This footage is kinda usable, but is quite dim and is highly contaminated by noise. And now, hold on to your papers, because this AI-based technique takes sensor data from the first, unusable image, and produces this. Holy smokes! And you know what the best part is? It produced this output image in less than a second.

Let's have a look at some more results. These look almost too good to be true, but luckily, we have a paper at our disposal so we can have a look at some of the details of the technique! It reveals that we have to use a Convolutional Neural Network to learn the concept of this kind of image translation, but that also means that we require some training data. The input should contain a bunch of dark images, these are the before images, this can hardly be a problem, but the output should always be the corresponding image with better visibility. These are the after images.
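As a rough illustration of the point about naive brightening (this sketch is not from the video or the paper, and all numbers in it are made up): multiplying a dark image by a gain brings the signal up to a usable level, but it multiplies the sensor noise by exactly the same factor, so the signal-to-noise ratio does not improve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model of a dim capture: a darkened scene plus sensor noise.
# None of these numbers come from the paper; they are illustrative only.
scene = rng.uniform(0.0, 1.0, size=(4, 4))       # a well-lit scene in [0, 1]
noise_sigma = 0.01                               # assumed noise level
dark = 0.05 * scene + rng.normal(0.0, noise_sigma, scene.shape)

# Naive brightening: multiply by a gain, like raising exposure in post.
gain = 20.0
brightened = np.clip(gain * dark, 0.0, 1.0)

# The signal returns to roughly the right level, but the noise standard
# deviation is scaled by the same gain, so the image stays just as noisy
# relative to the signal.
amplified_sigma = gain * noise_sigma
print(amplified_sigma)
```

This is why the video argues for a learned mapping instead of a fixed gain: the network can raise brightness while suppressing, rather than amplifying, the noise.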
So how do we obtain them? The key idea is to use different exposure times for the input and output images. A short exposure time means that when taking a photograph, the camera's shutter is only open for a short amount of time. This means that less light is let in, therefore the photo will be darker. This is perfect for the input images as these will be the ones to be improved. And the improved versions are going to be the images with a much longer exposure time. This is because more light is let in, and we'll get brighter and clearer images. This is exactly what we're looking for!

So now that we have the before and after images that we referred to as input and output, we can start training the network to learn how to perform low-light photography well. And as you see here, the results are remarkable. Machine learning research at its finest. I really hope we get a software implementation of something like this in the smartphones of the near future, that would be quite amazing. And as we have only scratched the surface, please make sure to look at the paper as it contains a lot more details. Thanks for watching and for your generous support, and I'll see you next time!
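The training setup described above can be sketched in a few lines. This is a minimal stand-in, not the paper's pipeline: the real method trains a U-Net-style CNN on raw sensor data, whereas here a single learned gain parameter plays the role of the network, just to illustrate the objective of mapping a short-exposure "before" image to its long-exposure "after" counterpart. All values below are simulated, and the L1 loss matches the kind of pixel-wise objective the paper uses.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training pair for one scene: the long-exposure capture is the
# bright, clean ground truth; the short-exposure capture is a darkened, noisy
# version of the same scene. (Simulated 8x8 grayscale patches.)
long_exposure = rng.uniform(0.2, 1.0, size=(8, 8))                 # "after"
short_exposure = 0.05 * long_exposure + rng.normal(0.0, 0.005, (8, 8))  # "before"

# Stand-in for the convolutional network: one learnable gain parameter,
# trained by subgradient descent on an L1 loss between the prediction and
# the long-exposure target.
gain = 1.0
learning_rate = 5.0
for _ in range(200):
    prediction = gain * short_exposure
    # Subgradient of mean |prediction - target| with respect to gain.
    grad = np.mean(np.sign(prediction - long_exposure) * short_exposure)
    gain -= learning_rate * grad

# After training, the learned gain should roughly invert the 0.05 darkening,
# i.e. land somewhere near 20.
print(gain)
```

The real network does far more than apply a global gain (it also denoises and corrects color), but the data-collection trick is the same: the exposure-time difference itself provides the supervision signal, with no manual labeling needed.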
