Google's New PROJECT ASTRA Just CHANGED THE GAME! (All New Google AI Updates)
44:07


TheAIGRID · 14.05.2024 · 18,926 views · 479 likes


Video description
Google's New PROJECT ASTRA Just CHANGED THE GAME! (All New Google AI Updates) How To Not Be Replaced By AGI https://youtu.be/AiDR2aMye5M Stay Up To Date With AI Job Market - https://www.youtube.com/@UCSPkiRjFYpz-8DY-aF_1wRg AI Tutorials - https://www.youtube.com/@TheAIGRIDAcademy/ 🐤 Follow Me on Twitter https://twitter.com/TheAiGrid 🌐 Checkout My website - https://theaigrid.com/ Links From Today's Video: Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (9 segments)

Segment 1 (00:00 - 05:00)

So, Google just unveiled their Project Astra, which is absolutely incredible, as well as a bunch of other smaller updates that are really going to change the game. In this presentation, I've cut out the slower stretches and distilled it down to the most important key updates that you should be aware of. Without wasting any more time, let Google take it away.

You can experience Gemini directly through the app, now available on Android and iOS and through Gemini Advanced, which provides access to our most capable models. Over 1 million people have signed up to try it in just 3 months, and it continues to show strong momentum. One of the most exciting transformations with Gemini has been in Google Search. In the past year, we've answered billions of queries as part of our Search Generative Experience. People are using it to search in entirely new ways and asking new types of questions: longer and more complex queries, even searching with photos, and getting back the best the web has to offer. We've been testing this experience outside of Labs, and we are encouraged to see not only an increase in search usage but also an increase in user satisfaction. I'm excited to announce that we will begin launching this fully revamped experience, AI Overviews, to everyone in the US this week, and we'll bring it to more countries soon.

And people love using photos to search across their life. With Gemini, we're making that a whole lot easier. Say you're at a parking station ready to pay, but you can't recall your license plate number. Before, you could search Photos for keywords and then scroll through years' worth of photos looking for the right one. Now you can simply ask Photos. It knows the cars that appear often, it triangulates which one is yours, and it just tells you the license plate number. And Ask Photos can also help you search your memories in a deeper way. For example, you might be reminiscing about your daughter Lucia's early milestones. You can ask Photos:
When did Lucia learn to swim? You can even follow up with something more complex: show me how Lucia's swimming has progressed. Here, Gemini goes beyond a simple search, recognizing different contexts, from doing laps in the pool to snorkeling in the ocean to the text and dates on her swimming certificates, and packages it all up together in a summary, so you can really take it all in and relive amazing memories all over again. We are rolling out Ask Photos this summer, with more capabilities to come.

Today, we're making an improved version of Gemini 1.5 Pro available to all developers globally. In addition, today Gemini 1.5 Pro with 1 million tokens of context is now directly available for consumers in Gemini Advanced and can be used across 35 languages. One million tokens is opening up entirely new possibilities. It's exciting, but I think we can push ourselves even further. So, today we are expanding the context window to 2 million tokens. We are making it available for developers in private preview.

People are always searching their emails in Gmail, and we are working to make that much more powerful with Gemini. Let's look at how. As a parent, you want to know everything that's going on with your child's school. Okay, maybe not everything, but you want to stay informed. Gemini can help you keep up. Now, we can ask Gemini to summarize all recent emails from the school. In the background, it's identifying relevant emails, even analyzing attachments like PDFs, and you get a summary of the key points and action items. So helpful. Maybe you were traveling this week and you couldn't make the PTA meeting. The recording of the meeting is an hour long. If it's from Google Meet, you can ask Gemini to give you the highlights. There's a parents' group looking for volunteers. You're free that day. Of course, Gemini can draft a reply. There are countless other examples of how this can make life easier. Gemini 1.5 Pro is available today in Workspace Labs.
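The Gmail flow described above (identify the relevant emails, pull in attachment text, and ask a model for key points and action items) can be sketched as a small pipeline. Everything here is hypothetical: the `Email` shape, the school domain, and the `llm` callable are stand-ins for the real Gmail and Gemini APIs, which the keynote does not expose.

```python
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    subject: str
    body: str
    # Plain text already extracted from attachments such as PDFs.
    attachments: list = field(default_factory=list)

def summarize_school_mail(inbox, school_domain, llm):
    """Collect emails from the school (plus attachment text) and ask a
    model for key points and action items. `llm` is any callable that
    takes a prompt string and returns a summary string."""
    relevant = [m for m in inbox if m.sender.endswith(school_domain)]
    context = "\n\n".join(
        m.subject + "\n" + m.body + "\n" + "\n".join(m.attachments)
        for m in relevant
    )
    prompt = ("Summarize the key points and action items from these "
              "school emails:\n\n" + context)
    return llm(prompt)

# Stub inbox and an identity "model" so the sketch runs standalone.
inbox = [
    Email("office@pta.example-school.org", "Book fair", "Volunteers needed Friday."),
    Email("deals@shop.example.com", "Sale!", "50% off."),
]
summary = summarize_school_mail(inbox, "example-school.org", lambda p: p)
```

The filtering step is deliberately naive (sender domain match); the product presumably uses semantic retrieval, but the shape of the pipeline is the same.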
Last year at I/O, we introduced NotebookLM, a research and writing tool grounded in the information you give it. Since then, we've seen a lot of momentum, with students and teachers using it. And today, Gemini 1.5 Pro is coming to NotebookLM, and it's great. Let me show you. So, here we are in NotebookLM. You can load it up with all the materials here on the left. In this notebook, I've been using it with my younger son, and I've added some of his science worksheets, a few slide decks from his teacher, and even an open-source textbook full of charts and diagrams. With 1.5 Pro, it instantly creates this notebook guide with a helpful summary and can generate a study guide, an FAQ, or even quizzes. But my son, Jimmy, really learns best when he can listen to something. So, we've prototyped a new feature with Gemini, and it's called Audio Overviews. NotebookLM is going to take all the materials on the left as input and output them into a lively

Segment 2 (05:00 - 10:00)

science discussion personalized for him. Let's take a listen. So, let's, uh, let's dive into physics. What's on deck for today? Well, uh, we're starting with the basics: force and motion. Okay. And that, of course, means we have to talk about Sir Isaac Newton and his three laws of motion. Ah, yes, the foundation for understanding how objects move and interact.

This is where multimodal really shines. It generated this audio discussion based on that text material. And what's amazing is that my son and I can join the conversation and steer it in whichever direction we want. When I tap: Hold on, we have a question. What's up, Josh? Yeah, can you give my son Jimmy a basketball example? Hey, Jimmy, that's a fantastic idea. Basketball is actually a great way to visualize force and motion. Let's break it down. Okay, so first imagine a basketball just sitting there on the court. It's not moving, right? That's because all the forces acting on it are balanced: the downward pull of gravity... Pretty cool, right? I've got to say, the first time my son heard this, you should have seen how big his eyes got, because he was gripped. They were talking to him. He was learning science through the example of basketball, his favorite sport. Now, what's interesting is that under the hood, you saw that Gemini had used some of the concepts of gravity and Sir Isaac Newton, but nothing in there was about basketball. It connected the dots and created that age-appropriate example for him. And this is what's becoming possible with the power of Gemini. You can give it lots of information in any format, and it can be transformed in a way that's personalized and interactive for you.

Let me explain what I mean by that. I think about agents as intelligent systems that show reasoning, planning, and memory, are able to think multiple steps ahead, and work across software and systems, all to get something done on your behalf and, most importantly, under your supervision.
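That agent recipe (an ordered plan of steps, executed on your behalf, each gated by user approval) can be sketched minimally. The step functions, their names, and the values they produce are invented placeholders, not any real Gemini agent API:

```python
# Hypothetical step functions; a real agent would call Gmail and retailer
# APIs here. Each step reads and extends a shared state dict.
def find_receipt(state):
    state["receipt"] = "receipt-123.pdf"
    return state

def locate_order_number(state):
    state["order"] = "ORD-4567"
    return state

def fill_return_form(state):
    state["form_submitted"] = True
    return state

def schedule_pickup(state):
    state["pickup"] = "Tuesday 10:00"
    return state

PLAN = [find_receipt, locate_order_number, fill_return_form, schedule_pickup]

def run_plan(plan, approve):
    """Run steps in order, asking the user to approve each one first,
    so the user stays in control (the keynote's 'under your supervision')."""
    state = {}
    for step in plan:
        if not approve(step.__name__):
            break  # user declined; stop before this step runs
        state = step(state)
    return state

result = run_plan(PLAN, approve=lambda name: True)
```

The approval callback is the key design point: the plan halts the moment the user declines a step, which is one simple way to keep an autonomous workflow supervised.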
We are still in the early days, and you'll see glimpses of our approach throughout the day, but let me show you the kinds of use cases we are working hard to solve. Let's start with shopping. It's pretty fun to shop for shoes, and a lot less fun to return them when they don't fit. Imagine if Gemini could do all the steps for you: searching your inbox for the receipt, locating the order number from your email, filling out a return form, and even scheduling a pickup. That's much easier, right?

Let's take another example that's a bit more complex. Say you just moved to Chicago. You can imagine Gemini and Chrome working together to help you do a number of things to get ready: organizing, reasoning, synthesizing on your behalf. For example, you will want to explore the city and find services nearby, from dry cleaners to dog walkers. You'll have to update your new address across dozens of websites. Gemini can work across these tasks and will prompt you for more information when needed, so you're always in control. That part is really important. As we prototype these experiences, we are thinking hard about how to do it in a way that's private, secure, and works for everyone. These are simple use cases, but they give you a good sense of the types of problems we want to solve by building intelligent systems that think ahead, reason, and plan, all on your behalf.

The power of Gemini, with multimodality, long context, and agents, brings us closer to our ultimate goal: making AI helpful for everyone. Today we have some exciting new progress to share about the future of AI assistants, which we're calling Project Astra. For a long time, we've wanted to build a universal AI agent that can be truly helpful in everyday life. Our work making this vision a reality goes back many years. It's why we made Gemini multimodal from the very beginning. An agent like this has to understand and respond to our complex and dynamic world, just like we do.
It would need to take in and remember what it sees so it can understand context and take action. And it would have to be proactive, teachable, and personal, so you can talk to it naturally, without lag or delay. While we've made some great strides in developing AI systems that can understand multimodal information, getting response time down to something conversational is a difficult engineering challenge. Building on our Gemini model, we've developed agents that can process information faster by continuously encoding video frames, combining the video and speech input into a timeline of events, and caching this for efficient recall. We've also enhanced how they sound, with a wider range of intonations. These agents better understand the context you're in and can respond quickly in conversation, making the pace and quality of interaction feel much more natural.

Here's a video of our prototype, which you'll see has two parts. Each part was captured in a single take, in real time. Let's do some tests. Tell me when you see something that makes sound. I see a speaker, which makes sound. What is that part of the speaker called? That is the tweeter. It produces high-frequency sounds. Give me a creative alliteration about these. Creative crayons color cheerfully. They

Segment 3 (10:00 - 15:00)

certainly craft colorful creations. What does that part of the code do? This code defines encryption and decryption functions. It seems to use AES-CBC encryption to encode and decode data based on a key and an initialization vector (IV). That's right. What neighborhood do you think I'm in? This appears to be the King's Cross area of London. It is known for its railway station and transportation connections. Do you remember where you saw my glasses? Yes, I do. Your glasses were on the desk near a red apple. What can I add here to make this system faster? Adding a cache between the server and database could improve speed. What does this remind you of? Schrödinger's cat. All right. Uh, give me a band name for this duo. Golden Stripes. Nice. Thanks, Gemini.

I think you'll agree it's amazing to see how far AI has come, especially when it comes to spatial understanding, video processing, and memory. It's easy to envision a future where you can have an expert assistant by your side, through your phone or new exciting form factors like glasses. Some of these agent capabilities will come to Google products like the Gemini app later this year. For those of you on site today, you can try out a live demo version of it.

Over the past few months, we've been working hard to build a new image generation model from the ground up, with stronger evaluations, extensive red teaming, and state-of-the-art watermarking with SynthID. Today, I'm so excited to introduce Imagen 3. It's our most capable image generation model yet. Imagen 3 is more photorealistic (you can literally count the whiskers on its snout), with richer details, like this incredible sunlight in the shot, and fewer visual artifacts or distorted images. It understands prompts written the way people write. The more creative and detailed you are, the better. And Imagen 3 remembers to incorporate small details, like the wildflowers or the small blue bird in this longer prompt.
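The code the prototype identified in that demo used AES-CBC. As an illustration of what CBC chaining with a key and an initialization vector (IV) looks like, here is a toy sketch that substitutes plain XOR for AES; it shows only the chaining structure and is absolutely not secure encryption:

```python
def xor_block(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext_blocks, key, iv):
    """Toy CBC: each plaintext block is XORed with the previous
    ciphertext block (the IV for the first block), then 'encrypted'
    by XORing with the key. Real code would use AES here, not XOR."""
    prev, out = iv, []
    for block in plaintext_blocks:
        ct = xor_block(xor_block(block, prev), key)
        out.append(ct)
        prev = ct
    return out

def cbc_decrypt(cipher_blocks, key, iv):
    """Invert the toy cipher, then undo the chaining XOR."""
    prev, out = iv, []
    for ct in cipher_blocks:
        out.append(xor_block(xor_block(ct, key), prev))
        prev = ct
    return out

key = b"secretk!"
iv = b"initvec!"
blocks = [b"hello wo", b"rld padd"]
assert cbc_decrypt(cbc_encrypt(blocks, key, iv), key, iv) == blocks
```

The point of the chaining is visible even in the toy: identical plaintext blocks produce different ciphertext blocks, because each block's input depends on the previous ciphertext.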
Plus, this is our best model yet for rendering text, which has been a challenge for image generation models. In side-by-side comparisons, independent evaluators preferred Imagen 3 over other popular image generation models. In sum, Imagen 3 is our highest-quality image generation model so far. You can sign up today to try Imagen 3 in ImageFX, part of our suite of AI tools at labs.google, and it'll be coming soon to developers and enterprise customers in Vertex AI.

Another area full of creative possibility is generative music. I've been working in this space for over 20 years, and this is by far the most exciting year of my career. We're exploring ways of working with artists to expand their creativity with AI. Together with YouTube, we've been building Music AI Sandbox, a suite of professional music AI tools that can create new instrumental sections from scratch, transfer styles between tracks, and more. To help us design and test them, we've been working closely with incredible musicians, songwriters, and producers. Some of them even made entirely new songs in ways that would not have been possible without these tools. Let's hear from some of the artists we've been working with.

Today I'm excited to announce our newest, most capable generative video model, called Veo. Veo creates high-quality 1080p videos from text, image, and video prompts. It can capture the details of your instructions in different visual and cinematic styles. You can prompt for things like aerial shots of a landscape or a time-lapse, and further edit your videos using additional prompts. You can use Veo in our new experimental tool called VideoFX. We're exploring features like storyboarding and generating longer scenes. Veo gives you unprecedented creative control. Techniques for generating static images have come a long way, but generating video is a different challenge altogether.
Not only is it important to understand where an object or subject should be in space, it needs to maintain this consistency over time, just like the car in this video. Veo builds upon years of our pioneering generative video model work, including GQN, Phenaki, WALT, VideoPoet, Lumiere, and much more. We combined the best of these architectures and techniques to improve consistency, quality, and output resolution. To see what Veo can do, we put it in the hands of an amazing filmmaker. Let's take a look. Over the coming weeks, some of these features will be available to select creators through VideoFX at labs.google, and the waitlist is open now. Of course, these advances in generative video go beyond the beautiful visuals you've seen today. By teaching future AI models how to solve problems creatively, or in effect simulate the physics of our world, we can build more useful systems that can help people communicate in new ways and thereby advance the frontiers of AI. When we first began this journey to build AI

Segment 4 (15:00 - 20:00)

more than 15 years ago, we knew that one day it would change everything. Now that time is here, and we continue to be amazed by the progress we see and inspired by the advances still to come.

The result is a product that does the work for you. Google Search is generative AI at the scale of human curiosity, and it's our most exciting chapter of Search yet. To tell you more, here's Liz. With each of these platform shifts, we haven't just adapted; we've expanded what's possible with Google Search. And now, with generative AI, Search will do more for you than you ever imagined. So whatever's on your mind, and whatever you need to get done, just ask, and Google will do the Googling for you. All the advancements you'll see today are made possible by a new Gemini model customized for Google Search. What really sets this apart is our three unique strengths. First, our real-time information, with over a trillion facts about people, places, and things. Second, our unparalleled ranking and quality systems, trusted for decades to get you the very best of the web. And third, the power of Gemini, which unlocks new agentive capabilities right in Search. By bringing these three things together, we're able to dramatically expand what's possible with Google Search. Yet again, this is Search in the Gemini era. So, let's dig in.

You've heard today about AI Overviews and how helpful people are finding them. With AI Overviews, Google does the work for you. Instead of piecing together all the information yourself, you can ask your question and, as you see here, get an answer instantly, complete with a range of perspectives and links to dive deeper. As Sundar shared, AI Overviews will begin rolling out to everyone in the US starting today, with more countries soon. And by the end of the year, AI Overviews will come to over a billion people in Google Search. But this is just the first step. We're making AI Overviews even more helpful for your most complex questions.
The types that are really more like ten questions in one. You can ask your entire question, with all its sub-questions, and get an AI Overview in seconds. To make this possible, we're introducing multi-step reasoning in Google Search, so Google can do the researching for you. For example, let's say you've been trying to get into yoga and Pilates. Finding the right studio can take a lot of research; there are so many factors you need to consider. Soon, you'll be able to ask Search to find the best yoga or Pilates studios in Boston and show you details on their intro offers and the walking time from Beacon Hill. As you can see here, Google gets to work for you, finding the most relevant information and bringing it together into your AI Overview. You get some studios with great ratings and their introductory offers, and you can see the distance for each; like this one, it's just a 10-minute walk away. Right below, you see where they're located, laid out visually. And you got all this from just a single search.

Under the hood, our custom Gemini model acts as your AI agent, using what we call multi-step reasoning. It breaks your bigger question down into all its parts, and it figures out which problems it needs to solve and in what order. And thanks to our real-time info and ranking expertise, it reasons using the highest-quality information out there. So, since you're asking about places, it taps into Google's index of information about the real world, with over 250 million places updated in real time, including their ratings, reviews, business hours, and more. Research that might have taken you minutes or even hours, Google can now do on your behalf in just seconds.

Let me show you another way multi-step reasoning in Google Search can make your life that much easier. Take planning, for example. Dreaming up trips and meal plans can be fun, but doing the work of actually figuring it all out? No thank you. With Gemini in Search, Google does the planning with you.
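The multi-step reasoning described above (break the compound question into parts, decide an order, and answer each part against indexed data) might be sketched like this. The sub-task names, the `PLACES` index, and every figure in it are invented for illustration; in the real system a language model would produce the plan and Google's live index would supply the data.

```python
# Invented stand-in for Google's places index.
PLACES = {
    "Boston": [
        {"name": "Harbor Yoga", "rating": 4.8, "intro_offer": "$49/month",
         "walk_min_from_beacon_hill": 10},
        {"name": "Back Bay Pilates", "rating": 4.6, "intro_offer": "2 free classes",
         "walk_min_from_beacon_hill": 18},
    ]
}

def find_studios(city, acc):
    # Sub-task 1: rank candidate studios by rating.
    acc["studios"] = sorted(PLACES[city], key=lambda s: -s["rating"])
    return acc

def get_intro_offers(city, acc):
    # Sub-task 2: attach each studio's introductory offer.
    acc["offers"] = {s["name"]: s["intro_offer"] for s in acc["studios"]}
    return acc

def get_walking_times(city, acc):
    # Sub-task 3: attach walking time from the requested location.
    acc["walk"] = {s["name"]: f'{s["walk_min_from_beacon_hill"]} min'
                   for s in acc["studios"]}
    return acc

STEPS = {"find_studios": find_studios,
         "get_intro_offers": get_intro_offers,
         "get_walking_times": get_walking_times}

def decompose(question):
    # In the real system an LLM would produce this ordered plan.
    return ["find_studios", "get_intro_offers", "get_walking_times"]

def answer(question, city):
    acc = {}
    for step_name in decompose(question):
        acc = STEPS[step_name](city, acc)
    return acc

result = answer("best yoga or pilates studios in Boston with intro offers "
                "and walking time from Beacon Hill", "Boston")
```

Note the ordering constraint the keynote alludes to: the offer and walking-time steps consume the studio list produced by the first step, which is why the plan must be sequenced, not just fanned out.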
Planning is really hard for AI to get right. It's the type of problem that takes advanced reasoning and logic. After all, if you're meal planning, you probably don't want mac and cheese for breakfast, lunch, and dinner. Okay, my kids might, but say you're looking for a bit more variety. Now, you can ask Search to create a three-day meal plan for a group that's easy to prepare, and here you get a plan with a wide range of recipes from across the web. This one for overnight oats looks particularly interesting, and you can easily head over to the website to learn how to prepare them. If you want to get more veggies in, you can simply ask Search to swap in a vegetarian dish, and just like that, Search customizes your meal plan. And you can export your meal plan or get the ingredients as a list just by tapping here. Looking ahead, you can imagine asking Google to add everything to your preferred shopping cart. Then we're really cooking. These planning capabilities mean Search will be able to help plan everything from meals and trips to parties, dates, workout routines, and more. So, you can get all the fun of planning without any of the hassle.

You've seen how Google Search can help with increasingly complex questions and planning. But what about all those times when you don't know exactly what to ask and you need some help brainstorming? When you come to Search for ideas, you'll get more than an AI-generated answer. You'll get an entire AI-organized page, custom-built for you and your question. Say you're heading to Dallas to celebrate your anniversary and you're looking for the perfect restaurant. What you get here breaks AI out of the box and brings it to the whole page. Our Gemini model uncovers the most interesting angles for you to explore and organizes these results into helpful clusters. Like, you might never have considered restaurants with live music, or ones with

Segment 5 (20:00 - 25:00)

historic charm. Our model even uses contextual factors, like the time of year. So, since it's warm in Dallas, you can get rooftop patios as an idea, and it pulls everything together into a dynamic, whole-page experience. You'll start to see this new AI-organized search results page when you look for inspiration, starting with dining and recipes, and coming to movies, music, books, and hotels.

"Why will this not stay in place?" And in a near instant, Google gives me an AI Overview. It gives some reasons this might be happening and steps I can take to troubleshoot. So, it looks like, first, this is called a tonearm. Very helpful. And it looks like it may be unbalanced. And there are some really helpful steps here. And I love that, because I'm new to all this, I can check out this helpful link from Audio-Technica to learn even more. That was pretty quick. Let me walk you through what just happened. Thanks to a combination of our state-of-the-art speech models, our deep visual understanding, and our custom Gemini model, Search was able to understand the question I asked out loud and break down the video frame by frame. Each frame was fed into Gemini's long context window you heard about earlier today, so Search could then pinpoint the exact make and model of my record player and make sense of the motion across frames to identify that the tonearm was drifting. Search fanned out and combed the web to find relevant insights from articles, forums, videos, and more, and it stitched all of this together into my AI Overview. The result was music to my ears. Back to you, Liz.

And now, we're really excited that the new Gemini-powered side panel will be generally available next month. One of our customers is a local favorite right here in California: Sports Basement. They rolled out Gemini for Workspace to the organization, and this has helped improve the productivity of their customer support team by more than 30%.
Customers love how Gemini grows participation in meetings with automatic language detection and real-time captions, now expanding to 68 languages. We are really excited about what Gemini 1.5 Pro unlocks for Workspace and AI Premium customers. Let me start by showing you three new capabilities coming to Gmail. This is my Gmail account on mobile. Okay, there's an email up top from my husband: "Help me sort out the roof repair thing, please." Now, we've been trying to find a contractor to fix our roof, and with work travel, I have clearly dropped the ball. It looks like there's an email thread on this, with lots of emails that I haven't read. And luckily for me, I can simply tap the summarize option up top and skip reading this long back and forth. Now, Gemini pulls up this helpful mobile card as an overlay, and this is where I can read a nice summary of all the salient information that I need to know. So, I see here that we have a quote from Jeff at Green Roofing, and he's ready to start.

Now, I know we had other bids, and I don't remember the details. Previously, I would have had to do a number of searches in Gmail and then remember and compare information across different emails. Now, I can simply type out my question right here in the mobile card and say something like, "Compare my roof repair bids by price and availability." This new Q&A feature makes it so easy to get quick answers on anything in my inbox, for example, when are my shoes arriving, or what time do doors open for the Knicks game, without having to first search Gmail, then open the email, and then look for the specific information and attachments, and so on. Anyway, back to my roof. It looks like Gemini has found details that I got from two other contractors in completely different email threads, and I have this really nicely organized summary, and I can do a quick comparison. So, it seems like Jeff's quote was right in the middle, and he can start immediately. So, Green Roofing it is.
I'll open that last email from Jeff and confirm the project. And look at that: I see some suggested replies from Gemini. Now, what is really, really neat about this evolution of Smart Reply is that it's contextual. Gemini understood the back and forth in that thread, and that Jeff was ready to start, so it offers me a few customized options based on that context. So, you know, here I see I have: decline the service, suggest a new time. I'll choose "proceed and confirm time." I can even see a preview of the full reply simply by long-pressing. This looks reasonable, so I'll hit send. These new capabilities in Gemini and Gmail will start rolling out this month to Labs users. Okay, so one of the really neat things about Workspace apps like Gmail, Drive,

Segment 6 (25:00 - 30:00)

Docs, and Calendar is how well they work together. And in our daily lives, we often have information that flows from one app to another, like, say, adding a calendar entry from Gmail or creating reminders from a spreadsheet tracker. But what if Gemini could make these journeys totally seamless, perhaps even automate them for you entirely? So, let me show you what I mean with a real-life example. My sister is a self-employed photographer, and her inbox is full of appointment bookings, receipts, client feedback on photos, and so much more. Now, if you're a freelancer or a small business, you really want to focus on your craft and not on bookkeeping and logistics. So, let's go to her inbox and take a look. Lots of unread emails. Let's click on the first one. It's got a PDF attachment from a hotel; it's a receipt. And I see a suggestion in the side panel: "Help me organize and track my receipts." Let's click on this prompt. The side panel will now show me more details about what that really means, and as you can see, there are two steps here. Step one: create a Drive folder and put this receipt, and 37 others it's found, into that folder. Makes sense. Step two: extract the relevant information from those receipts in that folder into a new spreadsheet. Now, this sounds useful. Why not? I also have the option to edit these actions, or just hit OK. So, let's hit OK. Gemini will now complete the two steps described above.

And this is where it gets even better. Gemini offers you the option to automate this, so that this particular workflow is run on all future emails, keeping your Drive folder and expense sheet up to date with no effort from you. Now, we know that creating a complex spreadsheet like this can be daunting for most people, but with this automation, Gemini does the hard work of extracting all the right information from all the files in that folder and generates this sheet for you. So, let's take a look.
Okay, it's super well organized, and it even has a category for expense type. Now that we have this sheet, things can get even more fun. We can ask Gemini questions, like, "Show me where the money is spent." Gemini not only analyzes the data from the sheet but also creates a nice visual to help me see the complete breakdown by category. And you can imagine how this extends to all sorts of use cases in your inbox, like travel expenses, shopping, remodeling projects, you name it. All of that information in Gmail can be put to good use and help you work, plan, and play better. Now, I know this particular ability, to organize your attachments in Drive, generate a sheet, and do data analysis via Q&A, will be rolling out to Labs users this September, and it's just one of the many automations that we're working on in Workspace. Workspace in the Gemini era will continue to unlock new ways of getting things done. We're building advanced agentive experiences, including customizing how you use Gemini.

Now, as we look to 2025 and beyond, we're exploring entirely new ways of working with AI. With Gemini, you have an AI-powered assistant always at your side. But what if you could expand how you interact with AI? For example, when we work with other people, we mention them in comments in Docs, or we send them emails, or we have group chats with them. And it's not just how we collaborate with each other; we each have a specific role to play on the team. And as the team works together, we build a set of collective experiences and context to learn from each other, and we have a combined set of skills to draw from when we need help. So, how could we introduce AI into this mix and build on this shared expertise? Well, here's one way: we're prototyping a virtual, Gemini-powered teammate. This teammate has an identity and a Workspace account, along with a specific role and objective. Let me bring Tony up to show you what I mean. Hey, Tony. Hey, Aparna.
Hey, everyone. Okay, so let me start by showing you how we set up this virtual teammate. As you can see, the teammate has its very own account, and we can go ahead and give it a name. We'll do something fun, like Chip. Chip's been given a specific job role, with a set of descriptions on how to be helpful for the team. You can see that here. Some of the jobs are to monitor and track projects (we've listed a few out), to organize information and provide context, and a few more things. Now that we've configured our virtual teammate, let's go ahead and see Chip in action. To do that, I'll switch us over here to Google Chat. When planning for an event like I/O, we have a ton of chat rooms for various purposes. Luckily for me, Chip is in all of them. To quickly catch up, I might ask a question like, "Anyone know if our I/O storyboards are

Segment 7 (30:00 - 35:00)

approved?" Because we've instructed Chip to track this project, Chip searches across all the conversations and knows to respond with an answer. There it is. Simple, but very helpful. As the team adds Chip to more group chats, more files, and more email threads, Chip builds a collective memory of our work together. Let's look at an example to show you. I'll switch over to a different room, how about Project Sapphire over here, and here we are discussing a product release coming up. As usual, many pieces are still in flight, so I can go ahead and ask: are we on track for launch? Chip gets to work, not only searching through everything it has access to but also synthesizing what's found and coming back with an up-to-date response. There it is: a clear timeline, a nice summary, and notice, even in this first message here, Chip flags a potential issue the team should be aware of. Because we're in a group space, everyone can follow along, and anyone can jump in at any time, as you see someone just did, asking Chip to help create a doc to address the issue. A task like this could take me hours, even dozens of hours. Chip can get it all done in just a few minutes, sending the doc over right when it's ready. And so much of this practical helpfulness comes from how we've customized Chip to our team's needs and how seamlessly this AI is integrated directly into where we're already working. Back to you, Aparna. Thank you, Tony. Now, I can imagine a number of different types of virtual teammates configured by businesses to help them do what they need. We have a lot of work to do to figure out how to bring agentive experiences like virtual teammates into Workspace, including enabling third parties to make their very own versions of Chip. We're excited about where this is headed, so stay tuned.
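One way to picture a Chip-style teammate is a bot that accumulates every message from the rooms it has been added to and answers questions by searching that shared memory. This sketch uses naive keyword matching where the real product would use Gemini to retrieve and synthesize, and all names, rooms, and messages are made up:

```python
import re

class VirtualTeammate:
    """Minimal sketch of a 'Chip'-style teammate: it observes messages
    in every room it joins and answers questions by searching that
    collective memory."""

    def __init__(self, name, role):
        self.name, self.role = name, role
        self.memory = []  # (room, author, text) tuples across all rooms

    def observe(self, room, author, text):
        self.memory.append((room, author, text))

    def ask(self, question):
        # Keyword overlap stands in for semantic retrieval; a real
        # system would summarize the hits into a single reply.
        words = set(re.findall(r"\w+", question.lower())) - {"are", "is", "the"}
        return [t for (_, _, t) in self.memory
                if words & set(re.findall(r"\w+", t.lower()))]

chip = VirtualTeammate("Chip", "monitor and track projects")
chip.observe("io-planning", "maya", "Storyboards approved by design.")
chip.observe("project-sapphire", "raj", "Launch is on track for June 18.")
hits = chip.ask("Are the storyboards approved?")
```

Because the memory spans every room, a question asked in one chat can surface an answer that was only ever mentioned in another, which is the "collective memory" behavior the demo highlights.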
And as Gemini and its capabilities continue to evolve, we're diligently bringing that power directly into Workspace to make all our users more productive and creative, both at home and at work. And now, over to the team to tell you more about the Gemini app. Our vision for the Gemini app is to be the most helpful personal AI assistant by giving you direct access to Google's latest AI models. Gemini can help you learn, create, code, and anything else you can imagine. Over the past year, Gemini has put Google's AI in the hands of millions of people, with experiences designed for your phone and the web. We also launched Gemini Advanced, our premium subscription for access to the latest AI innovations from Google. Today, we'll show you how Gemini is delivering our most intelligent AI experience. Let's start with the Gemini app, which is redefining how we interact with AI. It's natively multimodal, so you can use text, voice, or your phone's camera to express yourself naturally. And this summer, you can have an in-depth conversation with Gemini using your voice. We're calling this new experience Live. Using Google's latest speech models, Gemini can better understand you and answer naturally. You can even interrupt while Gemini is responding, and it will adapt to your speech pattern. And this is just the beginning. We're excited to bring the speed gains and video-understanding capabilities from Project Astra to the Gemini app. When you go live, you'll be able to open your camera so Gemini can see what you see and respond to your surroundings in real time. Now, the way I use Gemini isn't the way you use Gemini. So, we're rolling out a new feature that lets you customize it for your own needs and create personal experts on any topic you want. We're calling these Gems. They're really simple to set up: just tap to create a Gem, write your instructions once, and come back whenever you need it. For example, here's a Gem that I created that acts as a personal writing coach.
It specializes in short stories with mysterious twists, and it even builds on the story drafts in my Google Drive. I call it the Cliffhanger Curator. Now, Gems are a great time-saver when you have specific ways that you want to interact with Gemini again and again. Gems will roll out in the coming months, and our trusted testers are already finding so many creative ways to put them to use. They can act as your yoga bestie, your personal sous-chef, a brainy calculus tutor, a peer reviewer for your code, and so much more. Next, I'll show you how Gemini is taking a step closer to being a true AI assistant by planning and taking actions for you. Now, we all know that chatbots can give you ideas for your next vacation, but there's a lot more that goes into planning a great trip. It requires reasoning that considers space, time, and logistics, and the intelligence to prioritize and make decisions. That reasoning and

Segment 8 (35:00 - 40:00)

intelligence all come together in the new trip-planning experience in Gemini Advanced. Now, it all starts with a prompt. Okay, so here, maybe you have a side hustle selling handcrafted products, but you're a better artist than accountant, and it's really hard to understand which products are worth your time. Simply upload all of your spreadsheets and ask Gemini to visualize your earnings and help you understand your profit. Gemini goes to work calculating your returns and pulling its analysis together into a single chart, so you can easily understand which products are really paying off. Now, behind the scenes, Gemini writes custom Python code to crunch these numbers. And of course, your files are not used to train our models. Oh, hi everyone. It's great to be back at Google I/O. Today, you've seen how AI is transforming our products across Gemini, Search, Workspace, and more. We're bringing all these innovations right onto your Android phone, and we're going even further to make Android the best place to experience Google AI. This new era of AI is a profound opportunity to make smartphones truly smart. Our phones have come a long way in a short time, but if you think about it, it's been years since the user experience has fundamentally transformed. This is a once-in-a-generation moment to reinvent what phones can do. So, we've embarked on a multi-year journey to reimagine Android with AI at the core. And it starts with three breakthroughs you'll see this year. First, we're putting AI-powered search right at your fingertips, creating entirely new ways to get the answers you need. Second, Gemini is becoming your new AI assistant on Android, there to help you anytime. And third, we're harnessing on-device AI to unlock new experiences that work as fast as you do, while keeping your sensitive data private. Let's start with AI-powered search. Earlier this year, we took an important first step at Samsung Unpacked by introducing Circle to Search.
It brings the best of Search directly into the user experience, so you can go deeper on anything you see on your phone without switching apps. Fashionistas are finding the perfect shoes. Home chefs are discovering new ingredients. And with our latest update, it's never been easier to translate whatever's on your screen, like a social post in another language. And there are even more ways Circle to Search can help. One thing we've heard from students is that they're doing more of their schoolwork directly on their phones and tablets. So we thought, could Circle to Search be your perfect study buddy? Let's say my son needs help with a tricky physics word problem. My first thought is, it's been a while since I've thought about kinematics. If he's stumped on this question, instead of putting me on the spot, he can circle the exact part he's stuck on and get step-by-step instructions right where he's already doing the work. Ah, of course: final velocity equals initial velocity plus acceleration times elapsed time. I was just about to say that. Seriously though, I love how it shows how to solve the problem, not just the answer. This new capability is available today, and later this year, Circle to Search will be able to tackle more complex problems involving symbolic formulas, diagrams, graphs, and more. Circle to Search is only on Android. It's available on more than 100 million devices today, and we're on track to double that by the end of the year. You've already heard about the incredible updates coming to the Gemini app on Android. Gemini is so much more: it's becoming a foundational part of the Android experience. Here's Dave to share more. Hey everyone, a couple of months ago we launched Gemini on Android, and like Circle to Search, Gemini works at the system level. So instead of going to a separate app, I can bring Gemini right to what I'm doing. Now we're making Gemini context-aware.
So it can anticipate what you're trying to do and provide more helpful suggestions in the moment. In other words, to be a more helpful assistant. So, let me show you how this works. I have my shiny new Pixel 8a here to help me. My friend Pete is asking if I want to play pickleball this weekend. And I know how to play tennis. Sort of. I had to say that for the demo. But I'm new to this pickleball thing. So, I'm going to reply and try to be funny, and I'll say, "Is that like tennis, but with pickles?" This would actually be a lot funnier with a meme, so let me bring up Gemini to help with that. I'll say, "Create an image of tennis with pickles." Now, one new thing you'll notice is that the Gemini window now hovers in place above the app, so that I stay in the flow. Okay, that generates some pretty good images. What's nice is I can then drag and drop any of these directly into the Messages app below. Like so. Cool. Let me send that.
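The "context-aware" behavior Dave describes amounts to mapping what's on screen to a proactive suggestion. A toy sketch, purely illustrative (the context labels and chip strings below are assumptions, not Android's real API):

```python
# Illustrative only: how a system-level assistant might pick a proactive
# suggestion chip based on the foreground content type. Labels invented.

def suggestion_chip(screen_context):
    """Map the foreground content type to a suggestion chip, or None."""
    chips = {
        "video": "Ask this video",
        "pdf": "Ask this PDF",
        "image": "Search this image",
    }
    return chips.get(screen_context)

print(suggestion_chip("video"))  # Ask this video
print(suggestion_chip("pdf"))    # Ask this PDF
```

The real feature presumably uses on-device models to classify the screen content rather than a fixed lookup, but the user-facing contract is the same: context in, one relevant suggestion out.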

Segment 9 (40:00 - 44:00)

All right, so Pete was typing, and he sent me a video on how to play pickleball. Thanks, Pete. Let's tap on that. That launches YouTube, but I only have one or two burning questions about the game, and I can bring up Gemini to help with that. And because it's context-aware, Gemini knows I'm looking at a video, so it proactively shows me an "Ask this video" chip. Let me tap on that. And now I can ask specific questions about the video. For example: what is the two-bounce rule? That's something I've heard about but don't quite understand in the game. By the way, this uses signals like YouTube's captions, which means you can use it on billions of videos. Give it a moment, and there I get a nice, succinct answer: the ball must bounce once on each side of the court after a serve. Okay, cool. Let me go back to Messages. Pete's followed up, and he says, "You're an engineer, so here's the official rule book for pickleball." Okay, thanks, Pete. Pete's very helpful, by the way. So we tap on that, and it launches a PDF. That's an 84-page PDF. I don't know how much time Pete thinks I have. Anyway, us engineers, as you all know, like to work smarter, not harder. So, instead of trawling through this entire document, I can pull up Gemini to help. And again, Gemini anticipates what I need and offers me an "Ask this PDF" option. If I tap on that, Gemini ingests all of the rules to become a pickleball expert. And that means I can ask very esoteric questions, like: are spin serves allowed? Let's hit that, because I've heard that rule may be changing. Now, because I'm a Gemini Advanced user, this works on any PDF and takes full advantage of the long context window, and there are just lots of times when that's useful. For example, let's say you're looking for a quick answer in an appliance user manual. And there you have it. It turns out, nope, spin serves are not allowed.
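With a long context window, the whole rule book can simply go into the model's prompt, but the underlying idea of grounding an answer in a specific part of the document can be shown with a minimal sketch. Everything below is hypothetical: the page numbers and rule text are invented stand-ins, not quotes from any real pickleball rule book.

```python
import re

# Minimal, hypothetical sketch of "Ask this PDF": score each page by
# word overlap with the question and point the user at the best match.
# A real long-context model reads every page at once; this just shows
# why the answer can cite "exactly where in the PDF to learn more".

pages = {
    4: "Serves must be made underhand. The ball must bounce once on each "
       "side of the court after a serve (the two-bounce rule).",
    61: "Spin serves are prohibited: putting spin on the ball with the "
        "non-serving hand is not allowed.",
}

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def best_page(question):
    """Return the page number whose text shares the most words with the question."""
    q = tokens(question)
    return max(pages, key=lambda p: len(q & tokens(pages[p])))

print(best_page("are spin serves allowed"))      # 61
print(best_page("what is the two-bounce rule"))  # 4
```

Word-overlap scoring is the crudest possible retriever; the demo's feature would rely on the model's own reading of the document, but the citation behavior (answer plus a pointer into the source) is the same shape.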
Gemini not only gives me a clear answer to my question, it also shows me exactly where in the PDF to learn more. Awesome. Okay, let me show you another example of what on-device AI can unlock. People lost more than $1 trillion to fraud last year. And as scams continue to evolve across texts, phone calls, and even videos, Android can help protect you from the bad guys, no matter how they try to reach you. So, let's say I get rudely interrupted by an unknown caller right in the middle of my presentation. Hello. "Hi, I'm calling from Safe More Bank's security department. Am I speaking to Dave?" Uh, yeah, this is Dave. Kind of in the middle of something. "We've detected some suspicious activity on your account. It appears someone is trying to make unauthorized charges." Oh yeah? What kind of charges? "We can't give you specifics over the phone, but to protect your account, I'm going to help you transfer your money to a secure account we've set up for you." And look at this: my phone gives me a warning that this call might be a scam. Gemini Nano alerts me the second it detects suspicious activity, like a bank asking me to move my money to keep it safe. And everything happens right on my phone, so the audio processing stays completely private to me and on my device. We're currently testing this feature, and we'll have more updates to share later this summer. And we're really just scratching the surface of the kinds of fast, private experiences that on-device AI unlocks. Later this year, Gemini will be able to more deeply understand the content of the screen without any information leaving your phone, thanks to the on-device model. So, remember that pickleball example earlier? Gemini and Android will be able to automatically understand the conversation and provide relevant suggestions, like where to find pickleball clubs near me. And this is a powerful concept that will work across many apps on your phone.
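The scam-call warning is driven by Gemini Nano running on the handset; the pattern it flags in the demo (a "bank" asking you to move money to a "secure account") can be caricatured with a simple heuristic. This is a toy, not Gemini Nano: the red-flag phrases below are invented for illustration, and a real detector would use a language model over the live audio, entirely on device.

```python
# Toy heuristic, NOT Gemini Nano: flag a call transcript when it pairs
# bank-impersonation language with a request to move money. Phrases are
# invented examples of the pattern shown in the demo.

RED_FLAGS = [
    ("security department", "transfer your money"),
    ("security department", "secure account"),
    ("gift card", "payment"),
]

def looks_like_scam(transcript):
    """True if any impersonation phrase co-occurs with a money-moving ask."""
    t = transcript.lower()
    return any(a in t and b in t for a, b in RED_FLAGS)

call = ("Hi, I'm calling from the bank security department. To protect your "
        "account I'm going to help you transfer your money to a secure account.")
print(looks_like_scam(call))  # True
```

The privacy point in the transcript is the key design choice: because the classifier (here a lookup, in reality a small model) runs locally, the call audio never has to leave the phone.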
In fact, later today at the developer keynote, you'll hear about how we're empowering our developer community with our latest AI models and tools, like Gemini Nano and Gemini in Android Studio. Also, stay tuned tomorrow for our upcoming Android 15 updates, which we can't wait to share with you.
