New AI Just Made Fashion In Games Real
10:00


Two Minute Papers · 24.10.2025 · 59,662 views · 2,848 likes


Video description
❤️ Check out the Fully Connected Conference by Weights & Biases - https://wandb.me/fclon2025-2min (20% discount code: FCLON2025-2MIN)
📝 The paper is available here: https://dress-1-to-3.github.io/
❤️ Get cool perks and support The Papers on Patreon: https://www.patreon.com/c/TwoMinutePapers
📝 My paper on simulations that look almost like reality is available for free here: https://rdcu.be/cWPfD
Or the original Nature Physics link with clickable citations: https://www.nature.com/articles/s41567-022-01788-5
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Benji Rabhan, B Shang, Christian Ahlin, Gordon Child, Juan Benet, Michael Tedder, Owen Skarpness, Richard Sundvall, Steef, Taras Bobrovytsky, Tybie Fitzhugh, Ueli Gallizzi
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers
My research: https://cg.tuwien.ac.at/~zsolnai/
X/Twitter: https://twitter.com/twominutepapers
Thumbnail design: Felícia Zsolnai-Fehér - http://felicia.hu

Table of contents (2 segments)

Segment 1 (00:00 - 05:00)

This new work is a beautiful marriage of AI and human ingenuity, and I am loving it. And it has self-healing underwear too. Yeah, I know, I'll try to explain it. So what is this? Well, when you see someone dressed so sharply that even your GPU blushes - in real life, or in a game like Cyberpunk - you're thinking: I wish I could look like that. Possible? Yes! With "image-to-3D" models, you can turn a picture into a 3D person. Problem solved? Kind of… but not really. This is from only 5 years ago, and it's quite rough. No thanks. Today, we can do way better. Good enough for us to actually recognize the problem: oh boy, these reconstructions usually glue the clothes and the body into one thing. Sure, it's one piece, but unless you want to become a demon - and, oh goodness, worse, a demon that is almost naked - this is not for you.

It also has other problems. Since the body and clothes are not separated, no simulation. Did you hear what I said? No simulation. Can't do the Two Minute Papers thing. You can't make it flutter and wrinkle when your hero does a cool spin. I am about to cry.

So the dream of true digital fashion - physics-ready, wearable, separable garments, god almighty, I only want the clothes - remains out of reach. Until now. Get this: this new paper from UCLA and the University of Utah claims that from just a single photo, it can reconstruct not just a 3D human, but physically accurate, simulation-ready clothes, separated and ready to move. Bold claim! Why? Because this is one of the hardest problems in virtual human modeling - a geometry, a physics, and an AI nightmare all at once. So, good luck!

Okay, so how does this work? Well, you see, in goes your image, and it guesses an initial sewing pattern. It's like a digital tailor cutting fabric pieces based on what it sees. Then, those flat panels are thrown onto a 3D human model. And, oof. I am sorry, but this is not even close.
The man's clothes are all over the place. Okay, let's try another one. Wow, this is somehow even worse. This is a fitted T-shirt, the output is not even close, and the skirt needs to go below the knee and is not shaped like this. This is a total disaster. Not working at all. But wait! We are not done yet!

Now, the system uses differentiable physics and multi-view diffusion guidance to refine the shapes of the sewing panels. That means it adjusts the curves and seams so that the simulated garment better matches the character. Let's try that on. Oh my, now we're talking! But the textures are still missing; this is just the shape. Not a problem: the system then looks at the input image again and paints the correct material and color onto the 3D garment.

Okay, I say this is ready for simulation. Now hold on to your papers, Fellow Scholars, and let's see… holy mother of papers. That looks absolutely incredible.

And not only that, but these characters also have some incredible dance chops.

So how does it do all this magic? It is explained in the paper in detail, but that is for experts. Well, I am not one, but I'll try my best to explain it to you, and I promise, it gets properly insane. Let us dive in! Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér.

First, the AI part - they use something called multi-view diffusion guidance. You show the model one picture, and it imagines what you'd look like from every angle - left, right, back, top - as if the AI walked around you taking photos. It's basically an AI fashion paparazzi - but one that doesn't shout your name while you're eating. No. Instead, you can think of it like a team of tiny artists walking in circles around the mannequin, each one sketching what they think they see, and then arguing until they all agree on a consistent shape.

Now comes the human ingenuity part.
Here is the secret weapon: Codimensional Incremental Potential Contact. If you want to sound really cool, just call it CIPC and disregard the blank stare of that poor cashier in Costco. Bonus points if you

Segment 2 (05:00 - 10:00)

explain this energy term to them while buying socks. Okay, so what do all these hieroglyphs do? This is a beautiful optimization-based cloth simulator. The math is so good it's frightening - we're talking about minimizing something called the total system energy.

Now what does that mean? In plain terms, it's like the universe trying to find the most comfortable resting position for every thread in the fabric. The first term keeps the cloth near where it was supposed to be, the second makes it elastic and lets it bend nicely, and the last one - the barrier term - screams "don't you dare penetrate the body with the clothes!" We've seen that problem before, and those results are unusable.

And this is not fake physics - it's fully differentiable. That means the AI can feel how wrong it is and learn which way to pull or stretch each seam to fix it. Imagine the tailor not just seeing the mistake but feeling the tug on the fabric and adjusting instantly.

So, the AI part, multi-view diffusion, tells the system what it should look like, and the human ingenuity part, CIPC, tells it how it should behave in the real world. Together, they turn a single image into a beautiful, simulation-ready digital outfit.

So, a really advanced paper explained in a way anyone can understand. I hope. I am trying my best here. But the paper is, of course, not perfect. For instance, I think the sleeve is way too long here. And that is one of the few weak points of this method - it's brilliant, but it still struggles with out-of-distribution fashion. If you wear something exotic, like a feather jacket or a jellyfish costume - I know you do that, don't deny it - the AI just sighs, gets a drink, and sews with its eyes closed. And it really shows. It feels like the AI says, eh, I'll fix it tomorrow. Human-like intelligence? Checkmark! But you know what, I'll still take it over this naked demon.
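To make the energy talk above concrete, here is a deliberately tiny, hypothetical sketch in plain Python. This is not the paper's code, and every constant is made up: one cloth vertex with a "stay near the predicted position" term plus an IPC-style log barrier that blows up as the vertex approaches the body, minimized by finite-difference gradient descent - the same "feel which way to pull" idea as differentiable physics.

```python
import math

# Toy 1D caricature of the total system energy described above.
# All constants are illustrative, not the paper's.
BODY = 0.0    # body surface height
DHAT = 0.1    # barrier activation distance (IPC's d-hat)
KAPPA = 1.0   # barrier stiffness

def inertia(y, y_pred):
    # First term: keep the vertex near where it was supposed to be.
    return 0.5 * (y - y_pred) ** 2

def barrier(y):
    # Barrier term: grows without bound as the vertex approaches the
    # body, so the optimizer can never push the cloth inside it.
    d = y - BODY
    if d >= DHAT:
        return 0.0
    return -KAPPA * (d - DHAT) ** 2 * math.log(d / DHAT)

def energy(y, y_pred):
    return inertia(y, y_pred) + barrier(y)

def minimize(y_pred, y0=0.5, lr=0.01, steps=5000, eps=1e-7):
    # Plain gradient descent with finite-difference gradients,
    # standing in for the real differentiable simulator.
    y = y0
    for _ in range(steps):
        g = (energy(y + eps, y_pred) - energy(y - eps, y_pred)) / (2 * eps)
        y -= lr * g
        y = max(y, 1e-6)  # stay inside the barrier's log domain
    return y

# The "predicted" position tries to drag the vertex below the body,
# but the barrier keeps it strictly outside.
rest = minimize(y_pred=-0.2)
print(rest)  # a small positive height: near the body, never inside it
```

Note the design point: the barrier is smooth everywhere it is defined, which is exactly what lets gradients flow through the contact response instead of the simulation exploding at the first collision.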
And this work was written by a bunch of legends in computer graphics. These are the same brilliant minds behind the Incremental Potential Contact model that keeps digital fabrics from clipping through bodies and exploding into chaos. We talked about this approximately 500 paper videos ago. These folks are the quiet heroes of physics-based animation, and almost nobody talks about them. Works like these are the endangered species of research - complex, beautiful, and absolutely essential to make all this magic happen.

And we are trying to give these works a voice, because if we don't do it, I am heartbroken to say, no one else will. So before I tell you about this weird self-healing underwear, please like, subscribe, hit the bell icon, and leave a really kind comment.

And you can also help us save more papers - join our Patreon, get early access, and see your name in the credits below. Every supporter keeps this dream alive for beautiful works like this. Okay, so… I'm a bit reluctant to show you this weird underwear, but I'll do it… for science.

The system doesn't just optimize the sewing pattern - it can re-sew the clothes mid-process when things go wrong. Everywhere else, if the cloth mesh tangles, the whole simulation explodes. But here, when that happens, the AI tailor calmly pulls it back, irons it out, and re-fits it on the digital body - all automatically.

So, it takes a while: about two hours for the whole process. But it was impossible to do well before, and now it is possible. And it's one of the reasons their system can finish on a single RTX 3090 GPU without collapsing into a polygonal disaster.

From threads to fabrics, AI tailors without any wardrobe failures - subscribe to Two Minute Papers.
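As a recap, the pipeline the video walks through - guess a sewing pattern from the photo, drape it, refine it with gradients from a differentiable mismatch score, then texture - can be caricatured in a few lines of Python. This is a purely illustrative sketch; every function name is a stand-in invented here, not the paper's API, and the "pattern" is reduced to two scalar panel parameters.

```python
# Hypothetical stand-ins for the three stages described in the video.

def estimate_sewing_pattern(image):
    # Stage 1: guess flat 2D panels from the photo. Here a "pattern"
    # is just two made-up panel parameters.
    return {"torso_width": 1.0, "skirt_length": 0.5}

def drape_and_score(pattern, target):
    # Stage 2: simulate draping the panels and measure how far the
    # result is from what the multi-view guidance says it should look
    # like. A toy quadratic mismatch stands in for both losses.
    return sum((pattern[k] - target[k]) ** 2 for k in pattern)

def refine(pattern, target, lr=0.1, steps=200):
    # Stage 3: because the simulator is differentiable, gradients of
    # the mismatch tell the tailor how to re-cut each panel. Finite
    # differences stand in for real autodiff.
    eps = 1e-5
    for _ in range(steps):
        for k in pattern:
            base = drape_and_score(pattern, target)
            pattern[k] += eps
            grad = (drape_and_score(pattern, target) - base) / eps
            pattern[k] -= eps
            pattern[k] -= lr * grad
    return pattern

target = {"torso_width": 0.8, "skirt_length": 0.7}  # what the views imply
pattern = estimate_sewing_pattern("photo.jpg")
before = drape_and_score(pattern, target)
pattern = refine(pattern, target)
after = drape_and_score(pattern, target)
print(before, after)  # the mismatch shrinks to nearly zero
```

The real system adds the texturing pass and the CIPC contact model on top, but the shape of the loop - simulate, compare, differentiate, re-cut - is the core idea.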
