# AI-Based Style Transfer For Video…Now in Real Time!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=UiEaWkf3r9A
- **Date:** 29.09.2020
- **Duration:** 4:35
- **Views:** 85,699

## Description

❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers 
❤️ Their mentioned post is available here: https://app.wandb.ai/stacey/yolo-drive/reports/Bounding-Boxes-for-Object-Detection--Vmlldzo4Nzg4MQ

📝 The paper "Interactive Video Stylization Using Few-Shot Patch-Based Training" is available here:
https://ondrejtexler.github.io/patch-based_training/

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers

Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/

## Contents

### [0:00](https://www.youtube.com/watch?v=UiEaWkf3r9A) Segment 1 (00:00 - 04:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to super fun and really good-looking results. We have seen plenty of papers doing variations of style transfer, but I always wonder: can we push this concept further? And the answer is yes. For instance, few people know that style transfer can also be done for video. First, we record a video with our camera, then take a still image from the video and apply our artistic style to it; then our style will be applied to the entirety of the video. The main advantage of this new method compared to previous ones is that they either take too long or require an expensive pre-training step; with this new one, we can just start drawing and see the output results right away.

But it gets even better. Due to the interactive nature of this new technique, we can even do this live: all we need to do is change our input drawing, and it transfers the new style to the video as fast as we can draw. This way, we can refine our input style for as long as we wish, or until we find the perfect way to stylize the video. And there is even more: if this works interactively, then it has to be able to offer an amazing workflow where we can capture a video of ourselves live and mark it up as we go. Let's see. Oh wow, just look at that! It is great to see that this new method also retains temporal consistency over a long time frame, which means that even if the marked-up keyframe is from a long time ago, it can still be applied to the video and the outputs will show minimal flickering. And note that we can not only play with the colors, but with the geometry too. Look, we can warp the style image, and it will be reflected in the output as well. I bet there is going to be a follow-up paper on more elaborate shape modifications as well.

And this new work improves upon previous methods in even more areas. For instance, this is a method from just one year ago, and here you see how it struggled with contour-based styles. Here is a keyframe of the input video, and here is the style that we wish to apply to it. This method from last year seems to lose not only the contours, but a lot of visual detail is also gone. So how did the new method do in this case? Look, it not only retains the contours better, but a lot more of the sharp details remain in the outputs. Amazing. Now, note that this technique also comes with some limitations: for instance, there is still some temporal flickering in the outputs, and in some cases, separating the foreground and the background is challenging. But really, such incredible progress in just one year, and I can only imagine what this method will be capable of two more papers down the line. What a time to be alive!

Make sure to have a look at the paper in the video description, and you will see many additional details, for instance, how you can just partially fill in some of the keyframes with your style and still get an excellent result.

This episode has been supported by Weights & Biases. In this post, they show you how to test and explore putting bounding boxes around objects in your photos. Weights & Biases provides tools to track the experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open-source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
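The core idea behind the paper's speed, as described above, is few-shot *patch-based* training: instead of learning from whole frames or a large dataset, the network is trained on many small patches cut from a single hand-stylized keyframe, paired with the matching patches of the original frame. A minimal sketch of that patch-sampling step is below; the function name, the list-of-lists image representation, and the toy data are illustrative assumptions, not the authors' code:

```python
import random

def extract_patch_pairs(content_frame, styled_frame, patch_size, n_patches, seed=0):
    """Sample aligned (content, style) training patch pairs from one keyframe.

    In few-shot patch-based training, a translation network is trained on
    small co-located crops from the original frame and its stylized
    counterpart. Images here are 2-D lists of pixel values; a real
    implementation would use image tensors.
    """
    rng = random.Random(seed)
    h, w = len(content_frame), len(content_frame[0])
    pairs = []
    for _ in range(n_patches):
        # Pick a random top-left corner so the patch fits inside the frame.
        y = rng.randrange(0, h - patch_size + 1)
        x = rng.randrange(0, w - patch_size + 1)
        crop = lambda img: [row[x:x + patch_size] for row in img[y:y + patch_size]]
        # The two crops share the same location, so they stay aligned.
        pairs.append((crop(content_frame), crop(styled_frame)))
    return pairs

# Toy 8x8 "keyframe": content and a fake hand-stylized version of it.
content = [[i * 8 + j for j in range(8)] for i in range(8)]
styled = [[(i * 8 + j) * 2 for j in range(8)] for i in range(8)]
pairs = extract_patch_pairs(content, styled, patch_size=4, n_patches=16)
print(len(pairs), len(pairs[0][0]), len(pairs[0][0][0]))  # → prints: 16 4 4
```

Because training examples are patches rather than frames, a single marked-up keyframe yields thousands of them, which is what makes retraining fast enough for the interactive, draw-as-you-go workflow shown in the video.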

---
*Source: https://ekstraktznaniy.ru/video/14062*