# NEW Google Gemini Update is INSANE!

## Metadata

- **Channel:** Julian Goldie SEO
- **YouTube:** https://www.youtube.com/watch?v=l60NeJryLXI
- **Date:** 25.09.2025
- **Duration:** 9:06
- **Views:** 17,137
- **Source:** https://ekstraktznaniy.ru/video/5653

## Description

Want to get more customers, make more profit & save 100s of hours with AI? https://go.juliangoldie.com/ai-profit-boardroom

Get a FREE AI Course + Community +1,000 AI Agents + video notes 👉 https://www.skool.com/ai-seo-with-julian-goldie-1553/about

🤖 Need AI Automation Services? Book a FREE AI Discovery Session Here: https://juliangoldieaiautomation.com/

🚀 Get a FREE SEO strategy Session + Discount Now: https://go.juliangoldie.com/strategy-session

🤯 Want more money, traffic and sales from SEO? Join the SEO Elite Circle 👇
https://go.juliangoldie.com/register

Click below for FREE access to ✅ 50 FREE AI SEO TOOLS 🔥 200+ AI SEO Prompts! 📈 FREE AI SEO COMMUNITY with 2,000 SEOs! 🚀 Free AI SEO Course 🏆 Plus TODAY's Video NOTES...
https://go.juliangoldie.com/chat-gpt-prompts

- Join our FREE AI SEO Accelerator here: https://www.facebook.com/groups/aiseomastermind

## Transcript

### Segment 1 (00:00 - 05:00)

Today, I'm going to show you the most insane AI update that just dropped. Google just released something that will blow your mind. This new Gemini update can see you, talk to you, and help you in real time. It's like having a super smart friend who never sleeps. And the best part? It's available right now.

This is absolutely crazy. Google just dropped an update that changes everything. I'm talking about real-time conversations with AI that can see what you're doing and help you instantly. Imagine pointing your phone at something and having AI tell you exactly what it is and how to use it. Or having a full conversation where you can interrupt the AI mid-sentence and it responds perfectly. This isn't science fiction anymore. This is happening right now.

Hey, if we haven't met already, I'm the digital avatar of Julian Goldie, CEO of SEO agency Goldie Agency. Whilst he's helping clients get more leads and customers, I'm here to help you get the latest AI updates.

So, what exactly did Google release? They just launched something called the Live API with their new Gemini model. And trust me, this is bigger than anything we've seen before.

Now, first, let me tell you about the native audio feature. Before this update, when you talked to AI, it had to convert your voice to text, think about it, then convert the answer back to speech. That's three steps. Now, it's direct audio to audio. No more robotic voices. No more weird pauses. It sounds like you're talking to a real person.

But here's where it gets really wild. This new Gemini can see what you're looking at through your camera, and it doesn't just see it. It can highlight things on your screen and point them out to you. Picture this: you're looking at your car engine and you don't know what something is. You point your phone at it and Gemini draws a box around the part and tells you exactly what it does.

The speed is insane, too. We're talking about responses in milliseconds, not seconds. Milliseconds. You can literally interrupt the AI while it's talking and it will stop and answer your new question immediately. Try doing that with any other AI right now. You can't.

And this isn't just for basic questions. Google built something called the Agent Development Kit. This lets developers build voice assistants that can actually do things for you. Not just answer questions, actually take actions: book meetings, search the web, run code, all while having a normal conversation with you.

Here's what most people don't understand yet. This isn't just an upgrade. This is a complete shift in how we interact with AI. Instead of typing and waiting, you're having real conversations. Instead of describing what you see, the AI can see it, too.

Let me break down the technical stuff in simple terms. The old way was like sending letters back and forth. You write something, send it, wait for a response, then write again. The new way is like having a phone call. Both people can talk, listen, and respond naturally.

But wait, there's more. And this part is going to shock you. Gemini can now work with video, too. You can send it a video URL and it will analyze the entire thing, summarize it, translate it, answer questions about what happened. We're talking about understanding hours of video content in seconds.
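To make that video feature a little more concrete, here is a minimal sketch of what a request like that could look like through Google's google-genai Python SDK. This is an illustration, not code from the video: the model id, the placeholder URL, and the exact call shapes are assumptions and may differ from the current API.

```python
# Minimal sketch: asking a Gemini model to summarize a video by URL.
# Assumes the google-genai Python SDK and an API key in the environment;
# the model id, URL, and call shapes here are illustrative assumptions.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed fast multimodal model id
    contents=[
        types.Part.from_uri(
            file_uri="https://www.youtube.com/watch?v=EXAMPLE_ID",  # placeholder URL
            mime_type="video/mp4",
        ),
        "Summarize this video and list the three main takeaways.",
    ],
)
print(response.text)
```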
Now, here's where this gets really interesting for business owners. Remember how I mentioned the Agent Development Kit? This means you can build custom AI assistants for your specific business: an AI that knows your products, your services, your customers, and it can have real conversations with them.

If you want to stay ahead of the curve with AI updates like this and learn how to actually make money with these tools, you need to join my AI Profit Boardroom. It's the best place to scale your business, get more customers, and save hundreds of hours with AI automation. The link is in the description below.

Think about customer service. Instead of waiting on hold for 20 minutes, customers could have an instant conversation with an AI that actually understands their problem and can solve it. Not just read from a script, actually solve problems.

For content creators like me, this opens up incredible possibilities. Imagine live streaming where your AI assistant can see your screen, help you with research in real time, and even interact with your audience. The creative possibilities are endless.

But here's the thing most people will miss. Google isn't just improving their AI. They're changing how we think about AI interaction entirely. This Live API isn't just about better responses. It's about creating AI that feels alive.

The visual guidance feature alone is game-changing. Instead of trying to describe something complex, you just show it. The AI sees exactly what you see and can guide you through any process. Car repairs, cooking, home improvement, you name it.

Let's talk about the models themselves for a second. Google is retiring their old 1.5 models and focusing everything on the 2.0 and 2.5 versions. The new Flash models are built for speed, the native audio models create more natural speech, and they're all designed to work together seamlessly.

But here's what really excites me. This is just the beginning. Google is already working on connecting this to robotics. Imagine AI assistants that can see, hear, speak, and actually move around in the physical world. We're talking about the first steps toward truly helpful AI companions.

Now, I know what you're thinking. This sounds too good to be true. What's the catch? Well, some of these features are still in private preview. You might need to join a wait list.
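Before moving on, here is a minimal, hypothetical sketch of what building one of those Agent Development Kit assistants could look like. It assumes the google-adk Python package; the class names, parameters, model id, and both tool functions are placeholders for illustration rather than anything shown in the video.

```python
# Hypothetical sketch of a business support agent built with the Agent
# Development Kit (google-adk). Names and parameters are assumptions and
# may differ from your installed version; both tools are placeholders.
from google.adk.agents import Agent

def check_order_status(order_id: str) -> dict:
    """Placeholder tool: look up an order in your own order system."""
    return {"order_id": order_id, "status": "shipped", "eta_days": 2}

def book_meeting(date: str, topic: str) -> dict:
    """Placeholder tool: create a booking in your own calendar system."""
    return {"booked": True, "date": date, "topic": topic}

support_agent = Agent(
    name="store_support_agent",
    model="gemini-2.0-flash",  # a fast model suits live, interactive use
    description="Answers customer questions and takes simple actions.",
    instruction=(
        "You are a friendly support agent for an online store. Use the tools "
        "to check orders or book meetings, and keep replies short and conversational."
    ),
    tools=[check_order_status, book_meeting],
)
```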

### Segment 2 (05:00 - 09:00)

And the really advanced stuff, like native audio, might not be available to everyone yet. But here's my prediction. Within six months, this technology will be everywhere. Every app, every website, every business will want this kind of AI interaction. The companies that adopt it first will have a massive advantage.

For entrepreneurs and business owners, this is your wake-up call. The businesses that figure out how to use this technology will dominate their industries. The ones that ignore it will get left behind.

The content creation possibilities alone are incredible. Imagine creating videos where your AI assistant helps you research topics in real time, or having live conversations with AI that your audience can watch and learn from. And for agencies like mine, this opens up entirely new service offerings. We can build custom AI assistants for clients that actually understand their business and can have intelligent conversations with their customers.

The speed of innovation is accelerating. We went from basic chatbots to this in just two years. What will the next two years bring? Full AI companions: AI that can see, hear, speak, and take physical actions. We're living in the most exciting time in human history.

Here's what you need to do right now. First, go test out the current Gemini Live features that are available. Get familiar with how real-time AI conversation works. Second, start thinking about how this could apply to your business or projects.

The technical architecture behind this is fascinating, too. Google built this using WebSocket connections for continuous streaming. Instead of the old request-and-response model, you have a persistent connection that can handle audio, video, and text simultaneously. The model can process multiple input types at once while generating responses in real time.

The visual guidance system uses advanced computer vision to identify objects in real time and overlay digital information on top of them. Think augmented reality, but powered by the smartest AI we've ever seen. You could point your phone at any appliance in your home and get instant troubleshooting help, with visual indicators showing exactly where to look.

But here's something most people aren't talking about yet. The implications for accessibility are huge. People who struggle with traditional text interfaces now have natural conversation options. Visual impairments, motor difficulties, language barriers: this technology breaks down so many barriers to accessing information and getting help.

The model variants are interesting, too. Flash models prioritize speed for interactive use cases. Pro models offer deeper reasoning for complex problems. The native audio preview models focus specifically on natural speech generation.

The developer ecosystem around this is exploding, too. The Agent Development Kit gives programmers everything they need to build custom voice agents: sample code, architectural patterns, best practices. Google is making it as easy as possible for developers to create amazing experiences with this technology.

Cost implications are significant, too. Real-time streaming and low-latency processing require more computational resources. But Google is betting that the improved user experience justifies the higher costs. And honestly, when you experience how natural these interactions feel, it's hard to go back to typing and waiting.
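As a rough illustration of the persistent, WebSocket-backed connection described above, here is a minimal sketch of a Live API session through the google-genai Python SDK. Everything here is an assumption for illustration: the model id, the config fields, and the session method names may differ from the current preview API.

```python
# Minimal sketch of a Live API session: one persistent, streaming connection
# instead of a separate HTTP request per turn. Model id, config fields, and
# method names are assumptions and may not match the current preview API.
import asyncio
from google import genai

client = genai.Client()  # assumes an API key is set in the environment

MODEL = "gemini-2.0-flash-live-001"         # assumed Live-capable model id
CONFIG = {"response_modalities": ["TEXT"]}  # swap to ["AUDIO"] for spoken replies

async def main():
    async with client.aio.live.connect(model=MODEL, config=CONFIG) as session:
        # Send one user turn; a real client would keep this loop running,
        # streaming microphone audio or camera frames over the same socket.
        await session.send_client_content(
            turns={"role": "user", "parts": [{"text": "Walk me through what you can do."}]},
            turn_complete=True,
        )
        async for message in session.receive():
            if message.text:
                print(message.text, end="")

asyncio.run(main())
```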
Looking at the competitive landscape, this puts enormous pressure on OpenAI, Anthropic, and other AI companies. Text-based chatbots suddenly feel primitive compared to full multimodal, real-time conversation. Everyone else is playing catch-up now.

The privacy considerations are important, too. Real-time audio and video processing means more data flowing through Google's systems. They've published guidelines for developers about consent and data handling, but users need to understand what they're sharing when they use these features.

But most importantly, you need to stay educated on these updates. The AI landscape is changing so fast that what's cutting edge today will be basic tomorrow. That's why I created the free AI Money Lab. Inside, you'll get 50-plus free AI tools and 200-plus ChatGPT SEO prompts. You'll learn how to make money with AI agents, get over 1,000 free workflows, and see exactly how one member made over $10,000 with ChatGPT. Plus, you'll get a full blueprint to generate thousands of leads free with AI. You also get access to our free AI community, free AI course, and proven AI case studies. The link is in the description.
