# This is What Abraham Lincoln May Have Looked Like! 🎩

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=2wcw_O_19XQ
- **Date:** 24.02.2021
- **Duration:** 6:50
- **Views:** 847,342
- **Source:** https://ekstraktznaniy.ru/video/13970

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers 
❤️ Their mentioned post is available here: https://wandb.ai/wandb/instacolorization/reports/Overview-Instance-Aware-Image-Colorization---VmlldzoyOTk3MDI

📝 The paper "Time-Travel Rephotography" is available here:
https://time-travel-rephotography.github.io/

📝 Our "Separable Subsurface Scattering" paper with Activision Blizzard is available here:
https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, 

## Transcript

### Introduction [0:00]

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to travel in time.

With the ascendancy of neural-network-based learning techniques, a previous method enables us to take an old black-and-white movie that suffers from a lot of problems, like missing data, flickering, and more, give it to a neural network, and have it restore it for us. And here, you can not only see how much better this restored version is, but it took things one step further: it also performed colorization! Essentially, here, we could produce 6 colorized reference images, and the neural network uses them as art direction and propagates all this information to the remainder of the frames. So this work did restoration and colorization at the same time. This was absolutely amazing, and now comes something even better: today, we have a new piece of work that performs not only restoration and colorization, but super-resolution as
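The reference-guided color propagation described above can be illustrated with a deliberately naive stand-in: each grayscale pixel borrows the color of the reference pixel whose luminance is closest to its own. The real system uses a learned network to propagate the art direction; the function and names below are purely illustrative assumptions.

```python
import numpy as np

def propagate_color(gray_frame, ref_gray, ref_color):
    """Toy color propagation: each grayscale pixel borrows the color of
    the reference pixel whose luminance is closest to its own. A crude
    stand-in for the learned propagation described in the video."""
    lum = ref_gray.ravel()
    order = np.argsort(lum)
    sorted_lum = lum[order]
    q = gray_frame.ravel()
    idx = np.clip(np.searchsorted(sorted_lum, q), 1, lum.size - 1)
    # pick the nearer of the two neighbouring reference luminances
    nearer = np.where(q - sorted_lum[idx - 1] <= sorted_lum[idx] - q,
                      idx - 1, idx)
    colors = ref_color.reshape(-1, 3)[order[nearer]]
    return colors.reshape(*gray_frame.shape, 3)
```

A learned propagator would also use spatial and temporal context, not just luminance, which is exactly why nearest-luminance matching alone produces the flickering the video mentions.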

### Algorithm output [1:01]

well! What this means is that we can take an antique photo, which suffers from a lot of issues. Look, these old films exaggerate wrinkles a great deal; they even darken the lips and do funny things with red colors. Subsurface scattering is also missing. This is light penetrating our skin and bouncing inside before coming out again, and the lack of this effect is why the skin looks a little plasticky here. Luckily, we can simulate all these phenomena on our computers. I am a light transport researcher by trade, and this is from our earlier paper with the Activision Blizzard game development company: the same phenomenon, a simulation without subsurface scattering, and this one with the effect simulated. Beautiful. You can find a link to this paper in the video description.

So, with all these problems with the antique photos, our question is: what did Lincoln really look like? Well, let's try an earlier framework for restoration, colorization, and super-resolution… and, well, unfortunately, most of our issues still remain. Lots of exaggerated wrinkles, a plasticky look, lots of missing detail. Can we do better? Well, hold on to your papers, and observe the output of the new technique.
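The screen-space flavour of this effect can be approximated, very roughly, as a separable per-channel blur in which red light, which penetrates skin deepest, gets the widest kernel. This is only a toy sketch of the idea, not the actual diffusion profiles from the Separable Subsurface Scattering paper; the sigmas below are made-up illustrative values.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def separable_sss(image, sigmas=(2.0, 1.0, 0.5), radius=6):
    """Approximate subsurface scattering as a separable per-channel blur.

    Red scatters furthest under skin, so the red channel gets the widest
    blur. `image` is an HxWx3 float array; sigmas are illustrative."""
    out = np.empty_like(image)
    for c, sigma in enumerate(sigmas):
        k = gaussian_kernel(sigma, radius)
        # Separable filter: blur rows, then columns, with edge padding.
        ch = np.apply_along_axis(
            lambda r: np.convolve(np.pad(r, radius, mode="edge"), k, "valid"),
            1, image[:, :, c])
        ch = np.apply_along_axis(
            lambda col: np.convolve(np.pad(col, radius, mode="edge"), k, "valid"),
            0, ch)
        out[:, :, c] = ch
    return out
```

The separability is the whole point of the paper's approach: two 1D passes per channel instead of a full 2D convolution, which is what makes it cheap enough for games.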

### New technique [2:33]

Wow. The restoration indeed took place properly: it brought the wrinkles down to a much more realistic level, the skin looks like skin because of subsurface scattering, and the super-resolution part is responsible for a lot of new detail everywhere, but especially around the lips. Outstanding. It truly feels like this photo has been rephotographed with a modern camera. And with that, please meet Time-Travel Rephotography.

And the curious thing is that all this sounds flat-out impossible. Why is that? Since we don't have old and new image pairs of Lincoln and many other historic figures, the question naturally arises in the mind of the curious Fellow Scholar: how do we train a neural network to perform this? And the answer is that we need to use their siblings. Now, this doesn't mean that Lincoln had a long-lost sibling that we don't know about. What this means is that as the input image is fed through our neural network, we can generate a photorealistic image of someone, and this someone kind of resembles the target subject and has all the details filled in. Then, in the next step, we can start morphing the sibling until it starts resembling the target subject. With the previously existing StyleGAN2 technique, morphing is now easy to do, but restoration is hard. So, essentially, with this, we can skip the difficult restoration part and just do the easier morphing instead. Trading a difficult problem for an easier one. Absolutely brilliant idea. And if you have been holding on to your papers so far, now squeeze that paper, because it can do even more. Age progression!
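The sibling-then-morph idea boils down to optimizing a latent code so that a *degraded* render of the generated image matches the antique photo. Below is a minimal sketch with toy linear stand-ins for both StyleGAN2 and the film degradation model; every matrix, size, and name here is an illustrative assumption, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins (assumptions): the real method uses StyleGAN2 as
# the generator and a model of the antique-film degradation process.
G = rng.normal(size=(64, 16))        # latent (16-d) -> "image" (64-d)
D = rng.normal(size=(32, 64)) / 8.0  # image -> degraded observation

def fit_sibling(y, steps=5000, lr=3e-3):
    """Gradient-descend a latent code w so that the degraded render
    D @ (G @ w) matches the antique photo y. The clean render G @ w
    is then the restored 'sibling' image."""
    w = np.zeros(G.shape[1])
    M = D @ G
    for _ in range(steps):
        w -= lr * 2.0 * M.T @ (M @ w - y)  # gradient of ||M w - y||^2
    return w
```

The key trick the transcript describes is visible even in this toy: we never need paired old/new training photos, because the degradation model is applied to the generator's output *inside* the loss, so matching the degraded render is enough.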

### Age progression [4:28]

Look. If we have only a few target photos of Thomas Edison throughout his life, these will be our yardsticks, and the algorithm is able to generate his aging process between these yardstick images. And the best part is that these images have different lighting and poses, and none of this is an issue for the technique. It just doesn't care, and it still works beautifully. Wow.

So, we saw earlier that there are other methods that attempt to do this too, at least the colorization part. Yes, we have colorization and other techniques in abundance. So how does this compare to them? It appears to outpace all of them really convincingly. The numbers from the user study and the algorithmically generated scores also favor the new technique. This is a huge leap forward. Do you have some other applications in mind for this new technique? Let me know in the comments what you would do with this or how you would like to see it improved.

Now, of course, not even this technique is perfect. Blurry and noisy regions can still appear here and there. And note that StyleGAN2, the basis for this algorithm, came out just a little more than a year ago. It is amazing that we are witnessing such incredible progress in so little time. My goodness. And just imagine what the next paper down the line will bring! What a time to be alive! Thanks for watching and for your generous support, and I'll see you next time!
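The in-between ages come from interpolating between the yardstick photos' embeddings in the generator's latent space. A minimal sketch, assuming simple linear interpolation in a StyleGAN2-style latent space; the function name and the choice of plain linear interpolation are assumptions for illustration.

```python
import numpy as np

def age_interpolate(w_young, w_old, n_frames=5):
    """Linearly interpolate between two 'yardstick' latent codes.

    Decoding each intermediate code with the generator would yield an
    in-between age; real systems interpolate in StyleGAN2's W space."""
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1 - t) * w_young + t * w_old for t in ts]
```

Because lighting and pose are also absorbed into the latent code, interpolating there glosses over those differences, which matches the transcript's observation that mismatched lighting and pose are not an issue.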
