# ChatGPT judges your music taste

## Metadata

- **Channel:** Prompt Engineering
- **YouTube:** https://www.youtube.com/watch?v=WF7Z2AdCy5Q

## Contents

### [0:00](https://www.youtube.com/watch?v=WF7Z2AdCy5Q) Segment 1 (00:00 - 04:00)

Greetings, fellow human! A developer created an app called Snobify that judges your music taste with Spotify and ChatGPT. Plus, Nvidia's AI animation system brings characters to life with language! That's all coming up in today's roundup of AI news.

Well, even if you're a music snob, there's now an app for that! Snobify is a new Spotify app that uses OpenAI's davinci model and track metadata to categorize songs by popularity and judge a user's music taste. The app rewards users for diverse listening behaviors and criticizes them for the opposite. The metrics for good music taste were defined as a combination of diversity in listening history and the popularity of the songs [00:01:00] being listened to. To keep things fair, the app only tracks a user's last week of listening history.

But how does it actually work, you ask? Well, it's actually quite simple. The app first judges the popularity of a track using the Spotify track metadata, which is calculated based on the number of listens and their recency. The app also calculates the frequency of the listens in the user's recent history. And I do find it quite funny that despite the app's judgmental approach, OpenAI's behavioral filters prohibit the davinci model from generating negative or harmful responses. This was solved through prompt engineering, where a specific context and perspective were provided to the AI to generate more specific or relevant responses. This story is amazing because it not only shows the potential for [00:02:00] AI to be integrated into music apps, but also the creative ways that prompt engineering can be used to bypass limitations and provide more relevant responses.

Well, it turns out Nvidia has found a way to control computer characters using only language! No more fiddling with complicated controllers or even hand gestures. Now you can just talk to your computer and watch it come to life. But why would you want to do that, you might ask? Well, let me tell you. The goal of this technology is to develop AI systems that can generate natural-looking movements for a variety of simulated figures and, in the long term, replace manual animation and motion-capture processes. And what better way to do that than with language, right?

So, Nvidia's new model combines a language model with reinforcement learning to make animation controllable by [00:03:00] language. It's trained in three stages, starting with a shared embedding space that aligns skills seen in short motion clips with their text descriptions. This space is then used to learn multiple policies for solving simple tasks, such as moving towards a specific object. And finally, the different learned policies are merged to create the final model. The result? Users can assign a character a specific task and corresponding skill just by using text input. For example, they can say "sprint to the red block" or "face the target and hit it with a shield." And the best part? The characters automatically learn related movements by training with different motions and text descriptions.

Now, if I had to speculate, I'd say this could lead to some pretty interesting developments in the future. Who knows, maybe someday we'll be able to control robots just by talking to them. But for now, I'm just excited to see how this technology will be used in industrial workflows. So there you have it: Nvidia's new language-driven AI animation system that can control the behavior of physics-based characters. As always, thanks for watching. See you in the next video, have a good one.
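
To make the Snobify scoring idea concrete, here is a minimal sketch using the `spotipy` client for the Spotify Web API. Snobify's actual formula is not public, so the weighting and the `snob_score` name below are hypothetical; what is real is the track-level `popularity` field (0-100, derived from play counts and their recency) and the recently-played endpoint the transcript alludes to.

```python
# A minimal sketch of popularity + diversity scoring, assuming spotipy.
# The scoring weights are a hypothetical illustration, not Snobify's.
from collections import Counter

import spotipy
from spotipy.oauth2 import SpotifyOAuth

sp = spotipy.Spotify(
    auth_manager=SpotifyOAuth(scope="user-read-recently-played")
)

# Spotify caps this endpoint at the 50 most recent plays, so "last week
# of listening history" is approximated by whatever fits in that window.
recent = sp.current_user_recently_played(limit=50)["items"]
tracks = [item["track"] for item in recent]

# Spotify's 0-100 popularity score is computed from play counts and
# their recency -- the track metadata the transcript mentions.
avg_popularity = sum(t["popularity"] for t in tracks) / len(tracks)

# Diversity: share of distinct (primary) artists across recent plays.
artist_counts = Counter(t["artists"][0]["name"] for t in tracks)
diversity = len(artist_counts) / len(tracks)

# Hypothetical combination: reward diversity, penalize chart-chasing.
snob_score = diversity * 100 - avg_popularity * 0.5
print(f"popularity={avg_popularity:.0f} "
      f"diversity={diversity:.2f} score={snob_score:.1f}")
```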
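
The prompt-engineering trick, giving the model a context and perspective so its judgments stay colorful without tripping content filters, might look like the sketch below. It uses the legacy completions interface of the pre-1.0 `openai` Python package, which is what the davinci models shipped with; the critic persona is my own invention, not Snobify's actual prompt.

```python
# A sketch of persona-based prompt engineering, assuming the legacy
# (pre-1.0) openai package. The persona text is hypothetical.
import openai

openai.api_key = "sk-..."  # your API key here


def judge_taste(avg_popularity: float, diversity: float) -> str:
    # Framing the model as a fictional critic gives it license to be
    # snarky while staying inside the provider's content guidelines.
    prompt = (
        "You are a notoriously snobby music critic reviewing a friend's "
        "listening habits. Be witty and teasing, never hateful.\n\n"
        f"Their average track popularity is {avg_popularity:.0f}/100 and "
        f"their artist diversity is {diversity:.0%}.\n\n"
        "Your verdict:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=120,
        temperature=0.9,  # higher temperature for more colorful verdicts
    )
    return response["choices"][0]["text"].strip()


print(judge_taste(avg_popularity=88, diversity=0.2))
```

The key design choice is supplying the perspective ("snobby critic reviewing a friend") and an explicit boundary ("never hateful") in the prompt itself, which is exactly the "specific context and perspective" approach described in the segment.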
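
For the first of Nvidia's three training stages, the shared embedding space for motion clips and text descriptions, a schematic PyTorch sketch of the general technique is shown below: a CLIP-style dual encoder trained with a contrastive loss so that matching (motion, text) pairs land close together. Nvidia's actual architecture is not reproduced here; the encoders, dimensions, and random smoke-test data are purely illustrative placeholders.

```python
# Schematic sketch of stage one: a shared motion/text embedding space
# learned with a CLIP-style contrastive loss. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 128


class MotionEncoder(nn.Module):
    """Encodes a short motion clip (frames x joint features) to a vector."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, EMBED_DIM, batch_first=True)

    def forward(self, clips):  # clips: (batch, frames, feat_dim)
        _, hidden = self.rnn(clips)
        return F.normalize(hidden[-1], dim=-1)


class TextEncoder(nn.Module):
    """Encodes a tokenized description into the same embedding space."""

    def __init__(self, vocab=10_000):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, EMBED_DIM)

    def forward(self, tokens):  # tokens: (batch, seq_len)
        return F.normalize(self.emb(tokens), dim=-1)


def contrastive_loss(motion_z, text_z, temperature=0.07):
    # Matching (motion, text) pairs sit on the diagonal of the
    # similarity matrix; every off-diagonal pair is a negative.
    logits = motion_z @ text_z.T / temperature
    targets = torch.arange(len(logits))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2


# Tiny smoke test with random data standing in for a real dataset.
motion_enc, text_enc = MotionEncoder(), TextEncoder()
clips = torch.randn(8, 30, 64)              # 8 clips, 30 frames each
tokens = torch.randint(0, 10_000, (8, 12))  # 8 matching descriptions
loss = contrastive_loss(motion_enc(clips), text_enc(tokens))
print(loss.item())
```

The later stages described in the segment (per-task reinforcement-learning policies conditioned on this space, then merging them into one model) build on top of an embedding like this, but depend on the simulator and RL setup and are not sketched here.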

---
*Source: https://ekstraktznaniy.ru/video/22398*