# Google's New AI Can Talk To Dolphins...

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=oENYFPP3IBk
- **Date:** 16.04.2025
- **Duration:** 8:20
- **Views:** 16,322

## Description

Join my AI Academy - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:


Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=oENYFPP3IBk) Segment 1 (00:00 - 05:00)

This might be the craziest AI story this week. I can't believe I'm about to say this, but one day AI might actually be able to talk to dolphins, real dolphins in the ocean, using squeaks, clicks, and whistles. AI is learning to understand what they mean, and eventually it's going to talk back. This isn't sci-fi. It's called Dolphin Gemma, and it's being built by Google in partnership with a team of dolphin researchers who've been studying dolphin behavior in the Bahamas for nearly 40 years. So let me explain how crazy this is, because it's about to get pretty wild.

Dolphins are known for making all sorts of noises: clicks, whistles, and burst pulses. For decades, scientists have tried to figure out if these are just sounds or if there's a real language hidden in there, a system, a structure, maybe even meaning. But here's the thing: you can't just record dolphin sounds and throw them into Google Translate. Their communication is way more complex than that. It's fast, it's high-pitched, and it's three-dimensional, because it's happening underwater in open space. On top of that, every dolphin has its own name, a unique signature whistle, and they actually call each other using it.

So Google decided to step in with something AI does really well: pattern recognition. They took their new model, called Dolphin Gemma, which is based on the same tech as their Gemini models, and trained it on the largest data set of wild dolphin sounds ever collected. This data came from the Wild Dolphin Project. Since the 1980s, they've been diving underwater, recording dolphin pods, and keeping track of every individual dolphin they've encountered. They log what the dolphin was doing when it made a sound, whether it was reuniting with a calf, chasing a shark, or trying to flirt with another dolphin. In other words, they've been building a dolphin social database, and now AI can finally do something useful with it.

So how does this actually work in practice? Dolphin Gemma uses something called SoundStream, a special audio tool that breaks dolphin sounds into patterns the AI can understand. It then uses that to figure out which sounds tend to follow each other, just like how ChatGPT predicts the next word in your sentence. Only this time it's not text; it's whistles, squawks, and buzzes. And here's the crazy part: it can now generate new dolphin-like sounds that actually fit into the communication patterns scientists have observed. Now, let me be clear: it's not speaking dolphin fluently just yet, but it's starting to pick up the grammar, the musicality, the rhythm, the flow of conversation.

The goal here is simple. If we can understand dolphin sounds well enough, maybe we can talk back. That's where something called CHAT comes in: the Cetacean Hearing Augmentation Telemetry system. This is a wearable underwater computer, paired with a Pixel phone, that lets scientists associate made-up whistles with real-world objects that dolphins love, like floating seaweed or play scarves. So here's what they do: they play a new whistle, then they give the dolphin a scarf, and they repeat this a few times. Guess what happens? Sometimes the dolphins start whistling back using the same sound. Basically, the dolphins are asking for the scarf. Now imagine scaling that up: a full set of whistles, each tied to a different object or maybe even an idea. That's two-way communication.
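To make the modeling side concrete, here is a minimal sketch in Python of the tokenize-then-predict loop described above. Everything in it is a hypothetical stand-in: Google has not published Dolphin Gemma's architecture or API, so the tiny model, the class names, and the random token IDs below are purely illustrative; a real system would feed in tokens from a trained SoundStream-style codec rather than `torch.randint`.

```python
# Toy illustration of "LLM over dolphin sounds": discrete audio tokens in,
# next-token prediction out. All names and sizes here are assumptions,
# not Dolphin Gemma's real implementation.
import torch
import torch.nn as nn


class TinyDolphinLM(nn.Module):
    """Toy autoregressive model over discrete audio tokens.

    Stands in for the Gemma-based model: given a sequence of audio tokens
    (as a SoundStream-style codec would produce), predict a distribution
    over the next token, the same way a text LLM predicts the next word.
    """

    def __init__(self, vocab_size: int = 1024, dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Causal mask: each position may only attend to earlier sounds.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.shape[1])
        hidden = self.backbone(self.embed(tokens), mask=mask)
        return self.head(hidden)  # logits over the next audio token


def continue_vocalization(model: nn.Module, prompt: torch.Tensor,
                          n_new: int) -> torch.Tensor:
    """Greedily extend a tokenized whistle, analogous to text generation."""
    tokens = prompt
    for _ in range(n_new):
        logits = model(tokens)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=1)
    return tokens


# Pretend a codec already turned a short recording into 50 tokens;
# the model then proposes a plausible continuation of the vocalization.
model = TinyDolphinLM()
fake_whistle = torch.randint(0, 1024, (1, 50))
extended = continue_vocalization(model, fake_whistle, n_new=10)
print(extended.shape)  # torch.Size([1, 60])
```

The point is just the shape of the loop: tokenize audio, predict the next token, decode tokens back into sound. Swap in a real codec and a trained Gemma-scale model, and you have the workflow the video describes.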
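The CHAT loop just described is, at its core, "hear a whistle, match it against a small dictionary, hand over the matched object." Here is a toy illustration of that matching step; the cosine-similarity matcher and the random "whistle signatures" are our own stand-ins, since the real CHAT detection pipeline is not public and would use learned embeddings or spectrogram templates.

```python
# Hypothetical sketch of CHAT-style whistle matching: map an incoming
# dolphin sound to the closest known synthetic whistle, so the researcher
# knows which object to offer. Not the real CHAT algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in whistle "signatures" for each trained object.
whistle_catalog = {
    "scarf": rng.standard_normal(64),
    "seaweed": rng.standard_normal(64),
    "rope": rng.standard_normal(64),
}


def match_whistle(features: np.ndarray, catalog: dict[str, np.ndarray],
                  threshold: float = 0.8) -> str | None:
    """Return the object whose whistle best matches, or None if nothing is close."""
    best_name, best_score = None, -1.0
    for name, template in catalog.items():
        # Cosine similarity between the heard sound and the stored template.
        score = np.dot(features, template) / (
            np.linalg.norm(features) * np.linalg.norm(template))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None


# Simulate a dolphin mimicking the "scarf" whistle with a little noise.
heard = whistle_catalog["scarf"] + 0.1 * rng.standard_normal(64)
print(match_whistle(heard, whistle_catalog))  # -> scarf
```

In the real setup, a predictive model like Dolphin Gemma would help a matcher like this commit to an answer sooner, which is exactly the "right item at the right time" speedup described next.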
And now that Dolphin Gemma can help predict what a dolphin might say next, scientists using the CHAT system can respond faster, offering the right item at the right time, reinforcing the meaning, and turning this into something more like a real conversation.

Even cooler, all of this runs on a phone. The upcoming version uses a Google Pixel 9 plugged into a headset, with enough processing power to run AI models and analyze dolphin mimicry in real time, underwater. That means no custom servers, no massive equipment, just AI, a waterproof case, and a curious dolphin.

Now, the best part is that Google plans to open-source Dolphin Gemma this summer, meaning anyone studying whales, bottlenose dolphins, or other species can start training it on their own data. Different species will sound different, but the framework is there. We're moving towards a future where AI helps us understand other species, not just through translation, but through pattern recognition, generation, and interaction. We might not be chatting about the meaning of life with dolphins anytime soon. But asking a dolphin, "Do you want the seaweed or the scarf?" and getting a whistle back? That's not fiction anymore. It's actually happening, and it's powered by AI.

So with all this talk about AI and dolphins, some people have been asking: what if dolphins have been trying to talk to us all along?

### [5:00](https://www.youtube.com/watch?v=oENYFPP3IBk&t=300s) Segment 2 (05:00 - 08:00)

Long before Google got involved, a scientist named John Lilly was already obsessed with dolphin language back in the 1960s. He actually built special flooded living spaces where researchers and dolphins could live together. His methods were questionable by today's standards, but he pioneered the idea that these animals weren't just making noise, that they were intelligent beings trying to communicate.

And if we're being honest, dolphins do have serious brain power to back this up. Their brains are larger than ours in some areas, especially those related to emotional processing, and they have an extra lobe that humans don't. They process acoustic information so precisely that they can see through sound, building detailed 3D images of their environment through echolocation. In other words, they don't just hear things, they can visualize them through sound, which raises a fascinating question: what if dolphin language isn't just sounds representing things, but actual acoustic pictures that they're sharing with each other? This is where AI has an advantage over humans. Our brains simply aren't wired to process sound the way dolphins do, but a neural network doesn't have that limitation. Dolphin Gemma might be hearing patterns we physically cannot.

And that brings us to the ethical questions that keep marine biologists up at night. If we can actually talk to another intelligent species, should we? What happens the first time a dolphin asks us why we've polluted their home, or why we catch their food, or, worse, asks about their cousin who was captured for a marine park? These aren't scientific questions anymore. They're philosophical ones.

The flip side is just as profound. Imagine what we could learn. Dolphins have been adapting to ocean ecosystems for over 50 million years. They navigate social structures we're just beginning to understand, and they might have knowledge about the deep ocean that would take us centuries to discover on our own.

It isn't just about dolphins, though. Once we perfect this technology, the same approach could work with other vocal species. Elephants communicate through low-frequency rumbles that travel for miles. Whales sing songs that evolve over generations and travel across entire ocean basins. AI could help us tune into these conversations, too. Some researchers are already collecting those sounds: Dr. Katy Payne has been recording elephant infrasound for decades, and the Interspecies Internet project is building a framework for humans to interact with great apes and elephants through interfaces designed for their cognitive abilities. This is way bigger than just one species. It's about bridging the gap between minds that evolved on completely different biological paths.

Now, the real question is: when does this go from cool tech demo to actual communication? The researchers are careful not to overpromise. True back-and-forth conversation with deep meaning might still be decades away. We're still at the "point at things and name them" stage, which is basically like teaching a toddler basic vocabulary. But every technological revolution looks pretty simple at the beginning. The first telegraph messages were just beeps. The first phone call was just "Mr. Watson, come here." The first text message was just "Merry Christmas." Now look where we are. Twenty years from now, we might look back at CHAT and Dolphin Gemma as those first primitive steps: the moment when we realized that the ocean isn't just filled with other animals, but with other intelligences and other cultures.

---
*Source: https://ekstraktznaniy.ru/video/13038*