# Photoshop’s New AI Feature Is Amazing!

## Metadata

- **Channel:** Two Minute Papers
- **YouTube:** https://www.youtube.com/watch?v=Y119ZaHIPp0
- **Date:** June 13, 2023
- **Duration:** 6:21
- **Views:** 85,341
- **Source:** https://ekstraktznaniy.ru/video/13137

## Description

❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers

Generative fill is now available in the beta versions of Photoshop.

Links and papers:
https://vcai.mpi-inf.mpg.de/projects/DragGAN/
https://github.com/Zeqiang-Lai/DragGAN
https://clipdrop.co/stable-diffusion-reimagine
https://github.com/SHI-Labs/Prompt-Free-Diffusion
https://www.reddit.com/r/midjourney/comments/13rxtmq/adobe_photoshops_beta_generative_fill_my_samples/

My latest paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD 

Or this is the orig. Nature Physics link with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Benji Rabhan, Bret Brizzee, Bryan Learn, B Shang, Christian Ahlin, Eric Martel, Geronimo Moralez, Gordon Child, Jace O'Brien, Jack Lukic, John Le, K

## Transcript

### Segment 1 (00:00 - 05:00)

Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to have a look at how Photoshop just got transformed by the generative AI revolution. And more. Previously, if we had an image with an object that we wanted to cut out, but in the meantime, also fill in the gap with sensible information, we could use the content-aware fill feature, and then, this happens. It has done a passable job, and a little fiddling with the parameters probably yields an okay solution. So, can we do even better? Well, through the power of the papers, yes, now we can!

Now, experiment number one. Let’s try the new generative fill feature, where we can add a text prompt, and the selection shall be filled with exactly that. So that is a wooden table; little AI, please fill in a wooden table. Wait a minute. That is not exactly what I was thinking about, but perhaps it did exactly what it was asked. That is a wooden table, can’t argue with that.

Alright, experiment two. This one was a sharp object in focus, but can it deal with blurred objects in the background? Let’s change this bottle to a flower pot. Does it know how this needs to be done? Will it be out of focus? That is an excellent solution, great job! I like that we can choose from several different variants.

Now, experiment number three. I am a light transport researcher by trade, so I would love to see some caustics, that is, light refracted through the glass here onto the table. And… ouch. Okay, this one is a little rough around the edges at the moment. Although if we are looking for reflections, it can do that reasonably well.

Now, experiment number four. When trying this not just on some small thing, but on higher-quality images created with some of the best generative AIs out there, the quality differences are apparent. This will not always be able to keep up.

Experiment five. This is going to be very interesting and relevant.
It can do not only generative inpainting, but outpainting as well. Oh yes, if we say that we wish to extend an image outwards, it can do that really well in this case. I don’t see any seams, or in other words, I don’t see the traces of the algorithm anywhere. If I were shown this image, I would not be able to tell that some wizardry was going on. Others also seemed to get really good results with this already.

However, don’t take these results and flaws for granted; these models are improving at an incredibly rapid pace. For instance, a new paper describes a technique that is capable of something even more incredible. And that is, magically changing the posture, proportions and the pose of already existing images. This is incredible. And do you know what is even more incredible? You are not going to believe this - the paper has only been out for a few days, and an unofficial implementation of it already exists, which means that with a little technical expertise, you can already try it yourself. The link is available in the video description.

And get this, there are even two new techniques for generating images without even writing a text prompt. How is that even possible? Well, have a look at this. Here, we just drop in an image, and it generates beautiful variants for it.

But we are not done yet. Not even close. If we are looking for a bit more control, but we still don’t want to enter a text prompt, which basically sounds almost impossible, well, we have another new tool. Dear Fellow Scholars, have a look at this prompt-free diffusion paper. Yes, this generates images, and we don’t even need to enter text. All we need to do is add an image and a little scribble, and a new image shall be born that follows both the scribble and the style of the input image.

And everything that you saw here just happened not within the last few years, but days. Wow. That is the amazing time we live in today.
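To make the contrast concrete: the older content-aware fill family of techniques works by propagating surrounding pixel information into the hole, whereas generative fill synthesizes genuinely new content from a model. Below is a minimal, illustrative sketch of the propagation idea only, written as a naive neighbour-averaging fill in NumPy. The function name and parameters are my own for illustration; this is not Photoshop's actual algorithm, which is far more sophisticated (patch-based synthesis, and now a generative diffusion model).

```python
import numpy as np

def naive_fill(image, mask, iterations=200):
    """Toy 'content-aware fill' sketch: repeatedly replace each masked
    pixel with the average of its four neighbours, diffusing known
    pixel values into the hole. Known pixels are never modified.
    This smears surroundings inward -- it cannot invent new content
    the way a generative model can."""
    img = image.astype(float).copy()
    img[mask] = 0.0          # unknown region starts blank
    for _ in range(iterations):
        # four axis-aligned neighbours (wrap-around is harmless here
        # because the mask does not touch the image border)
        avg = (np.roll(img, -1, axis=0) + np.roll(img, 1, axis=0) +
               np.roll(img, -1, axis=1) + np.roll(img, 1, axis=1)) / 4.0
        img[mask] = avg[mask]  # update only the hole
    return img

# usage: a flat grey image with a square hole is filled back to grey
img = np.full((16, 16), 0.5)
mask = np.zeros((16, 16), dtype=bool)
mask[6:10, 6:10] = True
result = naive_fill(img, mask)
```

On a flat background this converges to a seamless fill, but on textured content it produces the blurry patches the video pokes fun at; that gap is exactly what the generative approach closes.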
What a time to be alive! The generative Photoshop extension is a bit behind the state of the art, that much is clear, but it is super easy to use and will now be handed out to millions and millions of people. You see,

### Segment 2 (05:00 - 06:00)

research scientists are writing these amazing papers to go beyond what is possible today, and these are the next wave of techniques that you might see in industry-standard tools like Photoshop. So as you see, the papers are real. As real as it gets.

Thanks for watching and for your generous support, and I'll see you next time!
