# Google's AI CEO Just Revealed AGI Details...

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=613OQ4hJic4
- **Date:** 18.10.2024
- **Duration:** 15:49
- **Views:** 33,019
- **Source:** https://ekstraktznaniy.ru/video/13972

## Description

Prepare for AGI with me - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:00:00 - AGI timeline
00:00:17 - Hassabis introduction
00:01:19 - Multimodal AI
00:02:20 - Timeline comparisons
00:03:39 - AGI progression
00:04:30 - AGI breakthroughs
00:05:33 - Consumer AI
00:07:01 - Astra introduction
00:07:43 - Astra demonstration
00:10:00 - Universal assistants
00:11:34 - AGI capabilities
00:13:05 - AGI approaches
00:14:19 - Tool integration

Links From Today's Video:


Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#Arti

## Transcript

### AGI timeline [0:00]

I think there are still two or three big innovations needed from here until we get to AGI, and that's why I'm more on a 10-year time scale than others. Some of my colleagues and peers, and some of our competitors, have much shorter timelines than that, but I think 10 years is about right.

### Hassabis introduction [0:17]

Demis Hassabis recently gave an interview where he gave us insights into how the AGI architecture is being built, and not only that, he also gave us the timeline on which he thinks AGI will happen. I think this is one of the most insightful interviews, because we get direct statements on the AGI architecture, and for some individuals this may reset their timelines, reset their expectations, and filter through the AI hype in terms of what is actually going to be there. Now, Demis Hassabis is of course the CEO of Google DeepMind, the research lab that essentially made Google Gemini, which is an outstanding model, and they've made tons of breakthroughs in AI. So this is clearly someone who knows a substantial amount about AI and has impacted the space more than you can imagine. I'm going to take a look at some of these statements, break them down, and try to understand where we are headed in terms of everything related to artificial general intelligence. Are LLMs closer to AGI? I

### Multimodal AI [1:19]

mean, it feels to me closer to interacting with a human, which feels to me like what AGI is, but is it actually? I think that multimodality... these days "LLM" is not even the right word, because they're not just large language models, they're multimodal. For example, our lighthouse model, Gemini, is multimodal from the beginning, so it can cope with any input: vision, audio, video, code, all of these things as well as text. So my view is that that's going to be a key component of an AGI system, but probably not enough on its own. I think there are still two or three big innovations needed from here until we get to AGI, and that's why I'm more on a 10-year time scale than others. Some of my colleagues and peers, and some of our competitors, have much shorter timelines than that, but I think 10 years is about right. Now, one of the things I actually wanted to talk about was the fact that Demis has a 10-year timeline. Later

### Timeline comparisons [2:20]

on, I'm going to talk about the fact that there are going to be different levels of AGI, but I think one of the most interesting things is that different companies have different internal models of what's happening. Recently I made an hour-long video dissecting Dario Amodei's views on artificial general intelligence and powerful AI. He is of course the CEO of Anthropic, the company that produces the chatbot Claude, and he actually thinks we could get AGI as early as 2026, although he explicitly states that he dislikes the term AGI and refers instead to what he calls powerful AI. Nevertheless, I think it's rather fascinating that he has such a shorter time frame than some of these other AI CEOs. Now, some people might argue that these companies need to say that this AI is just around the corner so that they can get increased funding, and that does make sense when you look at what other individuals are saying, like Sam Altman. Sam Altman has an even more extreme view, which is that superintelligence is going to be here in a few thousand days, and that's not even in the realm of AGI but of artificial superintelligence. So I think it's pretty interesting, although some people would argue that maybe these statements actually reflect where these companies are in terms of their real

### AGI progression [3:39]

research. The thing with this is that we don't actually know where these companies are in their product cycles; every piece of research is now closed, which just makes this even more interesting. Of course, I don't think AGI being 10 years away is a huge dampener on the AI space in terms of AI hype, but I still think what people might fail to understand is that as we move forward, the timelines are going to be pretty much a gray area, because what we will have is an increased level of capability year on year. So when AGI is actually achieved, it will be quite hard to say exactly when it occurred. There are different graphs and different classifications for different AI systems, but it remains a mystery, as many individuals have different definitions of AGI. But I

### AGI breakthroughs [4:30]

think that having an actual artificial general intelligence that doesn't hallucinate and doesn't make some of the fundamental mistakes of generative AI and LLMs in today's architecture is something that might need a few more architecture breakthroughs, which Demis Hassabis talks about. Now, what are those breakthroughs going to look like? We don't know just yet, but I do think that some of the companies working on this stuff may have already made those breakthroughs and are probably scaling through them, because one thing we have to remember is that AI research is now closed off. Previously we had an open research ecosystem where a lot of the research would propagate through the community and be widely shared, but with companies like OpenAI and Anthropic being closed, a lot of that research doesn't get shared around as much, so it will be interesting to see how those breakthroughs occur. Recently OpenAI had their breakthrough with test-time compute, and Google had some additional research supporting that kind of approach, so it will be interesting to

### Consumer AI [5:33]

see how these models evolve in 10 years' time, because I can really imagine these models being truly more advanced than they already are and working through some of today's limitations. This is where Demis Hassabis actually talks about how a lot of the products being built right now for consumers are essentially what is needed for AGI. When you build a product that has audio understanding, video understanding, image understanding, and you're building humanoid robots and that kind of stuff, it becomes easier to build AGI at the same time as providing value to your customers: what you need for products is actually about 90% similar to the type of research you would do anyway towards AGI, so those things have converged a lot. Whereas five or ten years ago, if you wanted to build AI into a product, you'd have to, because the general systems and the learning systems weren't quite good enough yet, go back to logic networks and expert systems. The Alexa era of assistants, for example, was all built on that old type of technology, which is why they are brittle, they don't generalize, and they're not that useful in the end, whereas the new generation of assistants built on these learning systems is going to be way more capable. So it's actually very exciting, and I actually see things like Gemini, and also our own vision of a multimodal assistant in the future, called Astra, as on the critical path towards an AGI system, because they actually push research towards that. Should we watch a video of

### Astra introduction [7:01]

Astra? We've got a video of Astra working. Now, if you don't remember, Google actually has a product called Project Astra. This is basically an AI agent that has some of the early things you would expect from an AGI-level system: it has the ability to see, and it can also memorize things. Although this product isn't released yet, I do think that once it's out there, and once we have more things that allow us to scale these kinds of products, it's going to get a really interesting reaction from the general public. "Okay, let's do some tests. Tell me when you see something that makes sound." "I see a speaker, which makes

### Astra demonstration [7:43]

sound." "What is that part of the speaker called?" "That is the tweeter. It produces high-frequency sounds." "Give me a creative alliteration about these." "Creative crayons color cheerfully. They certainly craft colorful creations." "What does that part of the code do?" "This code defines encryption and decryption functions. It seems to use AES-CBC encryption to encode and decode data based on a key and an initialization vector (IV)." "That's right. What neighborhood do you think I'm in?" "This appears to be the King's Cross area of London. It is known for its railway station and transportation connections." "Do you remember where you saw my glasses?" "Yes, I do. Your glasses were on the desk near a red apple." "What can I add here to make this system faster?" "Adding a cache between the server and database could improve speed." "What does this remind you of?" "Schrödinger's cat." "All right, give me a band name for this duo." "Golden Stripes." "Nice, thanks Gemini." So that's just the beginning of, basically, a universal assistant that can help you in your everyday life. I'm going to stop the video here. So, you know, there'll be different form factors too, so you can see it on your phone, the glasses, and so on. You know, if we went back 5

### Universal assistants [10:00]

years and you told me we would be at this point, where you just point at something with a camera and it fully understands the spatial context around you, it's pretty incredible, right? It's sort of got concepts, and it understands what objects are, and it even recognized the neighborhood we were in just from a random view out of the window. Things like memory for where you left something could be extremely useful in an assistant, as well as personalization; all of these things are coming in what I would call the next generation of assistant. I call it a kind of universal assistant, because I imagine you taking it around everywhere with you on different devices. It's the same assistant whether it's playing a game with you, helping with your work on your desktop, or traveling around with you on a mobile device. So I do think this is truly going to be an incredible use for the future in terms of how we interact with software and computers, because AI agents and AI assistants are completely getting rebranded. When you can interact with a computer and it knows all of your history, all of your memory, and your previous conversations, it's going to be a much more natural and human-like experience, and that's going to result in a much more enjoyable and fluid experience. I truly believe this is going to be the next stage in AI that allows people to realize the true power of these systems, and it's going to onboard a lot more people who are not as technically inclined. Of course, this is where Demis actually talks about the kinds of breakthroughs needed to actually achieve AGI, and currently these systems just don't have those capabilities.

### AGI capabilities [11:34]

Things like true reasoning, true planning, and true memory are things that humans can do pretty easily, but these systems really struggle with them. Well, we definitely need these systems. All of you, I'm sure, have used the various state-of-the-art chatbots today. They're very passive, these systems; they're Q&A systems. So they're pretty useful for answering a question, maybe doing a bit of research, summarizing some text, something like that. What we want next is more agent-based systems that are able to achieve certain goals or tasks that you give them. That's certainly what a useful digital assistant would need to do: plan a holiday, plan your trip around a city, book you tickets for something. So they need to be able to act in the world, carry out actions, and do planning. We need planning, reasoning, and actions; we need better memory; we need personalization, so it understands your preferences and remembers what you've told it and what you like. All of those technologies are needed. The shorthand I give for that is: some of our games programs, like AlphaGo, which beat the world champion at Go, have planning and reasoning in them, albeit in the narrow domain of a game. So we have to bring those technologies and apply them to multimodal models like Gemini, which are basically models of the world. As you've just seen, it understands the world around it, but how do you do planning in the messy real world, as opposed to a clean setting like a game? That's, I think, the next big breakthrough that's needed. So

### AGI approaches [13:05]

this is where we actually get some information on the current debate being had in the scientific community about how we get to truly intelligent models: do you cram everything into one model that can do everything, or do you have an approach where one AI acts as a sort of brain, with smaller, specialized AIs that go off and do different tasks? I think that's kind of what is already happening with the early approaches, which was the breakthrough that made GPT-4 so good. If you aren't familiar, GPT-4 reportedly used a mixture-of-experts approach, with (I think) 16 experts: essentially smaller models that were experts on things like math, writing, and coding, and any time a query was presented to the model, it was routed to one of these smaller experts. So I think this is potentially going to happen on a larger scale, and it will be interesting to see how that occurs combined with tool use. It's going to be pretty incredible to apply, you know, AlphaGo levels of game playing and AlphaFold's protein folding. Yes, exactly; there are two ways that could happen. There's a very interesting debate at the moment.
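To make the mixture-of-experts idea above concrete, here is a minimal, hypothetical sketch of expert routing. The expert names and the keyword-based gate are invented for illustration; in a real MoE model, such as GPT-4 is reported to use, the router is a learned layer inside the network that selects expert sub-networks per token, not a keyword match over whole queries.

```python
# Hypothetical sketch of mixture-of-experts routing: a gate scores each
# expert for an incoming query and the top-scoring expert handles it.

# Invented "experts": simple stand-ins for specialized sub-models.
EXPERTS = {
    "math":    lambda q: f"[math expert] solving: {q}",
    "coding":  lambda q: f"[coding expert] writing code for: {q}",
    "writing": lambda q: f"[writing expert] drafting: {q}",
}

# Invented gate: keyword overlap stands in for a learned routing layer.
KEYWORDS = {
    "math":    {"sum", "integral", "solve", "equation"},
    "coding":  {"python", "function", "bug", "compile"},
    "writing": {"essay", "poem", "draft", "letter"},
}

def route(query: str) -> str:
    """Score each expert against the query and dispatch to the best one."""
    words = set(query.lower().split())
    scores = {name: len(words & kws) for name, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)  # pick the highest-scoring expert
    return EXPERTS[best](query)

print(route("solve this equation"))  # routed to the math expert
```

The point of the sketch is only the shape of the idea: one cheap routing decision in front of several specialized components, so each query pays for just the expert it needs.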

### Tool integration [14:19]

We're having this debate internally and in the research community. You can imagine that one of the big things you want your general agent system to do is tool use: to use tools. Those tools could be hardware, like robotics or things in the physical world, but they could also be other pieces of software, maybe a calculator, something like that. But they could also be other AI systems. So you could imagine a general AI system acting like the brain and then calling something like AlphaFold or AlphaGo to fold a protein or play Go. Or, because it's all digital, you could imagine folding that capability into the general brain, into Gemini. But then that has trade-offs, because are you overloading it with specialized information, say too many chess games, and does that make it worse at language? So it's an open research question whether you want to separate a capability out into a tool, even an AI tool, that a general AI can use in that specialized situation, or whether you want to upstream it into the main system. For some things, like coding and mathematics, you do want to upstream it into the main system, because it turns out that putting it in the main system actually makes the model better at everything. So lots of people are studying theories of learning and child development and things like that, to think through what sorts of things may actually be general-purpose and better off in the main system than in the peripheral tools.
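The "general brain calling specialized tools" pattern Hassabis describes can be sketched roughly as follows. The calculator tool and the digit-based dispatch rule are invented stand-ins for illustration; in his example, systems like AlphaFold or AlphaGo would sit behind a similar interface.

```python
# Hypothetical sketch of the tool-use pattern: a general "brain" decides
# whether to answer a task itself or hand it to a specialized tool.

def calculator(expression: str) -> str:
    # Specialized tool: exact arithmetic the general model might get wrong.
    # eval is restricted to a bare namespace here; a real system would use
    # a proper expression parser.
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def general_agent(task: str) -> str:
    """Route to a tool when one clearly matches; otherwise answer directly."""
    # Invented dispatch rule: digits plus an arithmetic operator -> calculator.
    if any(ch.isdigit() for ch in task) and any(op in task for op in "+-*/"):
        return TOOLS["calculator"](task)
    return f"[general model] answering directly: {task}"

print(general_agent("12 * 34"))         # dispatched to the calculator tool
print(general_agent("plan a holiday"))  # handled by the general model
```

The trade-off discussed in the interview maps onto this structure directly: anything kept in `TOOLS` stays a separate, callable specialist, while "upstreaming" a capability would mean teaching the general model to handle those tasks itself and deleting the tool.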
