❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers
❤️ Their report about a previous paper is available here: https://app.wandb.ai/stacey/stargan/reports/Cute-Animals-and-Post-Modern-Style-Transfer%3A-Stargan-V2-for-Multi-Domain-Image-Synthesis---VmlldzoxNzcwODQ
📝 The paper "Detailed Rigid Body Simulation with Extended Position Based Dynamics" is available here:
- Paper: https://matthias-research.github.io/pages/publications/PBDBodies.pdf
- Talk video: https://www.youtube.com/watch?v=zzy6u1z_l9A&feature=youtu.be
Wish to see and hear the sound synthesis paper?
- Our video: https://www.youtube.com/watch?v=rskdLEl05KI
- Paper: https://research.cs.cornell.edu/Sound/mc/
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh.
If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers
Károly Zsolnai-Fehér's links:
Instagram: https://www.instagram.com/twominutepapers/
Twitter: https://twitter.com/twominutepapers
Web: https://cg.tuwien.ac.at/~zsolnai/
Table of Contents (2 segments)
Segment 1 (00:00 - 05:00)
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, with the power of computer graphics research, we can use our computers to run fluid simulations, simulate immersing a selection of objects into jelly, or tear meat in a way that, much like in reality, tears along the muscle fibers. If we look at the abstract of this amazing new paper, we see this, quoting: “This allows us to trace high speed motion of objects colliding against curved geometry, to reduce the number of constraints, to increase the robustness of the simulation, and to simplify the formulation of the solver.” What! This sounds impossible, or at the very least, outrageously good. Let’s look at three examples of what it can do and see for ourselves whether it can live up to its promise. One: it can simulate a steering mechanism full of joints and contacts. Yup, an entire servo steering mechanism is simulated with a prescribed mass ratio. Loving it. I hereby declare that it passes inspection, and now we can take off for some off-roading. All of the movement is simulated really well, and… wait a minute. Hold on to your papers! Are you seeing what I am seeing? Look. Even the tire deformations are part of the simulation! Beautiful. And now, let’s do a stress test, race through a bunch of obstacles, and see how well those tires can take it. At the end of the video, I will tell you how much time it takes to simulate all this… and note that I had to look three times because I could not believe my eyes. Two: restitution! Or, in other words, we can smash a single marble into a bunch of others and their resulting velocities will be correctly computed. We know for a fact that the computations are correct, because when I stop the video here, you can see that the marbles themselves are smiling. The joys of curved geometry and specular reflections.
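The restitution idea mentioned above can be sketched with the textbook one-dimensional collision formula: conserve momentum and scale the separating velocity by a coefficient of restitution. This is a generic illustration, not the solver from the paper, and the function name `resolve_collision` is ours:

```python
# Minimal 1D collision response with a coefficient of restitution.
# A generic textbook formula, not the algorithm from the paper.

def resolve_collision(m1, v1, m2, v2, e=1.0):
    """Return post-collision velocities of two colliding bodies.

    m1, m2 -- masses; v1, v2 -- pre-collision velocities;
    e      -- coefficient of restitution (1.0 = perfectly elastic).
    Momentum is always conserved; kinetic energy only when e == 1.
    """
    total = m1 + m2
    v1_new = (m1 * v1 + m2 * v2 + m2 * e * (v2 - v1)) / total
    v2_new = (m1 * v1 + m2 * v2 + m1 * e * (v1 - v2)) / total
    return v1_new, v2_new

# Equal masses, perfectly elastic: the velocities simply swap, which
# is why a marble hitting a resting one stops dead in its tracks.
print(resolve_collision(1.0, 2.0, 1.0, 0.0))  # -> (0.0, 2.0)
```

For e = 1 and equal masses the incoming marble transfers all of its velocity, matching the Newton's-cradle behavior seen in the marble scene.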
Of course, this is not true, because if we attempt to do the same with an earlier classical technique by the name of Position Based Dynamics, this would happen. Yes, the velocities become erroneously large and the marbles jump off of the wire. And they still appear to be very happy about it. Of course, with the new technique, the simulation is much more stable and realistic. Speaking of stability: is it stable only in a small-scale simulation, or can it take a huge scene with lots of interactions? Would it still work? Let’s run a stress test and find out. Ha-haa! This animation can run all day long and not one thing appears to behave incorrectly. Loving this. Three: it can also simulate these beautiful high-frequency rolls that we often experience when we drop a coin on a table. This kind of interaction is very challenging to simulate correctly because of the high-frequency nature of the motion and the curved geometry that interacts with the table. I would love to see a technique that algorithmically generates the sound for this. I can almost hear it in my head! Believe it or not, this is possible and has received some research attention in computer graphics. The animation was given, but the sounds were algorithmically generated. Listen! Let me know in the comments if you are one of our OG Fellow Scholars who were there when that episode was published hundreds of videos ago. So, how long do we have to wait to simulate all of these crazy physical interactions? We mentioned that the tires are stiff and take a great deal of computation to simulate properly. So, as always… all-nighters, right? Nope! Look at that! Holy mother of papers! The car example takes only 18 milliseconds to compute per frame, which means about 55 frames per second. Goodness! Not only do we not need an all-nighter, we don’t even need to leave for a coffee break! And the rolling marbles took even less, and, wo-hoo!
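To make the contrast concrete, here is the core of classic Position Based Dynamics, the older technique the marbles misbehave under: positions are projected directly to satisfy a constraint, with each body moving in proportion to its inverse mass. This is a minimal sketch of the classic distance-constraint projection, not the extended rigid-body formulation of the paper, and the function name is our own:

```python
# One Position Based Dynamics distance-constraint projection step.
# Sketches the classic PBD idea the video contrasts against, not
# the extended rigid-body solver presented in the paper.

def project_distance(p1, w1, p2, w2, rest_length):
    """Move two 2D points so they sit rest_length apart.

    p1, p2 -- positions as (x, y) tuples; w1, w2 -- inverse masses.
    Heavier bodies (smaller inverse mass) move less; an inverse
    mass of 0 pins a point in place entirely.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    d = (dx * dx + dy * dy) ** 0.5
    if d == 0.0 or w1 + w2 == 0.0:
        return p1, p2                      # degenerate: nothing to do
    c = d - rest_length                    # constraint violation C(p)
    nx, ny = dx / d, dy / d                # unit direction p1 -> p2
    s1 = w1 / (w1 + w2) * c                # each point takes a share
    s2 = w2 / (w1 + w2) * c                # proportional to its w
    return ((p1[0] + s1 * nx, p1[1] + s1 * ny),
            (p2[0] - s2 * nx, p2[1] - s2 * ny))

# Two equal-mass points 2 units apart, constrained to distance 1:
print(project_distance((0.0, 0.0), 1.0, (2.0, 0.0), 1.0, 1.0))
# -> ((0.5, 0.0), (1.5, 0.0))
```

A full PBD solver iterates projections like this over all constraints each substep; velocities are then derived from the position change, which is exactly where the erroneously large velocities in the marble scene can creep in.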
The high-frequency coin example needs only one third of a millisecond, which means that we can generate more than 3000 frames with it per second. We not only don’t need an all-nighter or a coffee break, we don’t even need to wait at all! Now, at the start of the video, I noted that the claim in the abstract sounds almost outrageous. That is because it promises to be able to do more than previous techniques, simplify the simulation algorithm itself, make it more robust, and do all this while being blazing fast. If someone told me that there is a work that does all this at the same time, I would say: give me that paper immediately, because I do not believe a word of it. And yet, it really lives up to its promise. Typically, as a research field matures, we see new techniques that can do more than previous methods,
Segment 2 (05:00 - 06:00)
but the price to be paid for it comes in the form of complexity. The algorithms get more and more involved over time, and with that, they often get slower and less robust. The engineers in the industry have to decide how much complexity they are willing to shoulder to be able to simulate all of these beautiful interactions. Don’t forget, these code bases have to be maintained and improved for many, many years, so choosing a simple base algorithm is of utmost importance. But here, none of these factors need to be considered, because there is nearly no tradeoff: it is simpler, more robust, and better at the same time. It really feels like we are living in a science fiction world. What a time to be alive! Huge congratulations to the scientists at NVIDIA and the University of Copenhagen for this. Don’t forget, they could have kept the results to themselves, but they chose to share the details of this algorithm with everyone, free of charge. Thank you so much for doing this. Thanks for watching and for your generous support, and I'll see you next time!