# Elon's NEW Prediction For AGI, META's New Agents, New SORA Demo, China Surpasses GPT4, and more

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=F6TfKE5Hbtc
- **Date:** 25.05.2024
- **Duration:** 21:09
- **Views:** 37,723

## Description

Join My Private Community - https://www.patreon.com/TheAIGRID
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/


Links From Today's Video:
https://x.com/AnthropicAI/status/1793741051867615494 
https://www.reddit.com/r/singularity/comments/1cyoun2/yilarge_catching_up_to_gpt4_overtakes_claude_3/ 
https://www.01.ai/
https://www.reddit.com/r/singularity/comments/1cyoun2/yilarge_catching_up_to_gpt4_overtakes_claude_3/
https://x.com/tsarnick/status/1793391127028191704
https://www.theinformation.com/articles/meta-is-working-on-a-paid-version-of-its-ai-assistant?rc=0g0zvw
https://x.com/elonmusk/status/1794240517875920935

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=F6TfKE5Hbtc) Segment 1 (00:00 - 05:00)

There are a few stories I want to cover, because a few news pieces dropped this Friday/Saturday that I want to make you all aware of. One of the first things you should know is that Meta is working on a paid version of its AI assistant, and the report says the service could resemble the paid chatbots offered by Google, OpenAI, Anthropic, and Microsoft. You know how Google, Microsoft, OpenAI, and Anthropic each offer $20-per-month subscriptions to their chatbots, which let people use them inside workplace apps such as Microsoft's? Basically, Meta is working on a paid version of its model.

Now, there's a lot of information here, because after this article there have been a few leaks that I think you'll want to hear about. It says here that Meta is also developing AI agents that can complete tasks without human supervision, so it seems Meta is putting its resources into the future of AI, which is of course AI agents. I know many people think we're currently in a situation where LLMs are the peak of what we're exploring and we're just trying to max out the benchmarks, but that couldn't be further from the truth. The next wave of AI that most of us will be looking at revolves around these agents. There are agent benchmarks, and I can guarantee they're going to be one of the key yardsticks for future systems, because when we see the first AI agent that is really good, one that can get on a computer, scroll up and down, write articles, do this and that, that's when you're going to realize how impressive AI agents are. And the thing is, there are different types of agent.

You can see they've decided to include an engineering agent to assist with coding and software development, similar to GitHub Copilot, according to the internal post. I'm intrigued as to why Meta is going after an engineering agent. While yes, there are already agents out there, I'm wondering, since Meta doesn't have a frontier-level model at the moment to build the agent on the back of, how good this agent is actually going to be. Although, to be fair, the recent Llama release had very surprising benchmarks: the 70-billion-parameter model, Llama 3 70B, was actually really good. So I'm guessing Meta may be planning to use the 400-billion-parameter model as the basis for an agent that can assist with coding and software development. In previous videos I've spoken about how these systems can really write code, and while it's not that impressive right now, I do think the limitations currently posed are going to be solved in the future.

The post also mentions monetization agents that, one current employee said, would help businesses advertise on Meta's apps; they could be for internal use and for customers, the employee said. This is a very clear sign of where we're moving, because these agents are probably coming out around late 2024 to early 2025, and that's when we'll have agents running around doing a bunch of things. I do think they're going to be very expensive, but I think they will change the game. If you're thinking about the future of AI and what actually comes next, it's agents, and I think OpenAI is probably going to show us a demo later this year or maybe next year; by mid-2025 I think we get a really impressive AI agent that can do a wide range of different things.

There is also a small leak regarding this Meta news, because some people have stated that Meta's new 400-billion-parameter Llama model might not actually be open. Jimmy Apples said around one to two weeks ago that Meta is planning not to open the Llama 400B model, and given the recent reports that Meta is now going to be charging for its future model, this might actually be true. It will be interesting to see if this changes, because I think what this has shown us

### [5:00](https://www.youtube.com/watch?v=F6TfKE5Hbtc&t=300s) Segment 2 (05:00 - 10:00)

is that the landscape of AI is changing. Yes, open source is quite good, but a lot of people are starting to realize that maybe, just maybe, they need to think about how to actually make money from this: for the 400-billion-parameter model, while they're putting millions and millions of dollars into training it, they have to make money from the model in some way in order to continue doing the work they're doing.

Then you can see Logan, someone at Google who previously worked at OpenAI, asked how long until AGI, just as a vague open question, and of course we got one of the most interesting responses: Elon Musk says we will have AGI by next year. The reason this is so interesting is that there are two ways you can interpret this tweet. Elon Musk is in so many different areas and niches, SpaceX, Tesla, xAI, all of these different things, so on one hand you have someone with a real understanding of the nature of AI, someone who has been calling this stuff for a very long time. On the other hand, a lot of people have pointed out that Elon Musk's predictions often don't come true on time: he said full self-driving would be here next year, then the next year, and the same with the Tesla Roadster. But I think this prediction is a little different. With his AGI prediction, I don't think he's stating that Tesla will achieve AGI next year, and he's not stating that xAI, his AI company, will achieve AGI next year. I think he's stating that maybe one of the top AI labs is going to make some kind of breakthrough that leads to the creation of artificial general intelligence. And yes, next year is 2025, which is a little before the Stargate phase, the supercomputers that will be needed to run and power such a system on the compute side. Looking at this tweet, it's important to note that it isn't actually tied to Elon Musk's own companies; I think a lot of people have realized how far ahead OpenAI is. One thing I'd keep in mind is that this AGI prediction might seem ridiculous, it might seem pretty crazy, but to see what it really means we'll need to see where OpenAI is. Once we see GPT-5, if GPT-5 is that crazy step up, then maybe people will decide it's not so surprising that AGI could arrive by next year. Of course, one of the main questions many people have is: what are the definitions of AGI? That's going to be yet another debatable area, with so many blurred lines on what we can really claim here.

Now, in a video I did earlier this week, there was a pretty cool demo, not from the OpenAI team as a whole but from someone from OpenAI at the VivaTech conference. It showed how they could use Voice Engine, Sora, and ChatGPT together to quickly create a comprehensive video on the history of France, and it's really cool because it shows the future of how, when you have a bunch of AI systems interacting, you can do things a lot faster than we currently can. Here's the demo: "With our Voice Engine model, the reason we preview these models as we're doing research is to engage with all of the stakeholders, show what the technology is good at, engage with trusted partners, and gather feedback from them along the way. So here I wanted to show you a quick preview of what that could look like for Voice Engine. I'm just going to record a little sample of my voice and see what comes out for the narration. Let's take a look: 'Hey, so I'm very excited to be on stage here at VivaTech. I've been meeting some amazing founders and developers already. I'm very excited as well to show them some live demos and how they can really apply the OpenAI technology and models in their own products and businesses.' All right, I think that should be good enough. 'Hey, so I'm very excited to be on stage here...' Perfect. And now the last step is to share this audio sample, along with the script we created, over to text-to-speech, and we'll bring everything together across all the modalities to experience this history lesson."
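The demo chains several models into one pipeline: a chat model writes the script, a voice model clones narration from a short sample, a video model renders clips, and the result is produced per language. Below is a minimal, purely illustrative sketch of that composition. Every function is a made-up stub standing in for a model call; none of these names are real OpenAI APIs.

```python
# Toy sketch of a multi-model "history lesson" pipeline, as described in the
# VivaTech demo. All functions are hypothetical stubs, not real APIs.

def write_script(topic: str) -> str:
    return f"A short history of {topic}."          # stand-in for a chat model

def narrate(script: str, voice_sample: bytes, language: str) -> str:
    return f"[{language} audio of: {script!r}]"    # stand-in for voice cloning + TTS

def render_video(script: str) -> str:
    return f"[video clips for: {script!r}]"        # stand-in for a video model

def make_lesson(topic: str, voice_sample: bytes, languages: list[str]) -> dict:
    """Compose the stages: one script, one video, one narration per language."""
    script = write_script(topic)
    video = render_video(script)
    return {lang: (narrate(script, voice_sample, lang), video)
            for lang in languages}

lesson = make_lesson("France", voice_sample=b"...", languages=["en", "fr", "ja"])
print(sorted(lesson))  # one narrated track per requested language
```

The point of the sketch is the shape of the workflow, not the individual calls: the expensive steps (script, video) run once, while the cheap per-language narration fans out at the end, which is why adding French or Japanese in the demo was just one click.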

### [10:00](https://www.youtube.com/watch?v=F6TfKE5Hbtc&t=600s) Segment 3 (10:00 - 15:00)

"'In the heart of Paris during the 1889 Exposition Universelle, the Eiffel Tower stands proudly as a symbol of...' So it's now narrating the video that I can share. And of course I don't speak many languages, but I want to share it not just in French but in other languages, so I can click through to share that content more broadly. Let's try one last one, Japanese: imagine me speaking Japanese to share this with an audience in Japan. And last but not least, I can also add transcription to put subtitles on top of it. So once again, this is very much a preview; we wanted to give you a sneak peek. We take safety extremely seriously with these kinds of models and capabilities, which is why we're only giving this to trusted partners at this time. But I hope this inspires you in terms of what all of these modalities will be able to accomplish, and how you can start thinking about the future when it comes to building your own apps and products."

There was also a very interesting statement by Eric Schmidt, where he talks about how the most powerful AI systems of the future will have to be contained on military bases because their capabilities will be so dangerous. I'm going to show you the clip first before I dive into the topic, because I think it's one of the most interesting things: while we probably don't think about it that much, because dangerous AI, I wouldn't say it's far away, but it's not something likely to affect us in a Terminator sense, it's still an aspect of AI that's pretty wild to think about. Schmidt: "If you're doing powerful training, there need to be some agreements around safety. In biology there's a broadly accepted set of layers, BSL-1 to BSL-4, for biosafety containment, which makes perfect sense because these things are dangerous. Eventually there will be a small number of extremely powerful computers, and I want you to think about it: they'll be on an army base, they'll be powered by some nuclear power source on the army base, and they'll be surrounded by even more barbed wire and machine guns, because their capability for invention, for power, and so forth exceeds what we want as a nation to give either to our own citizens without permission or to our competitors. It makes sense to me that there will be a few of them, and there will be a lot of other systems that are more broadly available."

So I think what he, the former Google CEO, is talking about here is potentially artificial superintelligence, and I think it's an interesting point. The reason I find it so interesting is that I've frequently stated in videos that OpenAI right now is a private company: they're building AI in their private research labs, in their company, on their servers, and we don't really know where their capabilities are or what they really have. We only know that GPT-4 finished training in late 2022, the model we basically use today, which is currently state-of-the-art. So around a year and a half to two years ago this company did something, and they are roughly two years ahead of where a lot of other people are. So my question to some of you is: at what point do OpenAI's capabilities reach the point where the government might intervene? Because if OpenAI were to develop AGI and then ASI, an artificial superintelligence, does the government step in? At that point it's a strange power dynamic, because a private company would likely be more powerful than the actual government, or the surrounding governments, or any nation in the world. Because if you have an artificial superintelligence, it arguably has answers to anything, and the advice it can give you is going to be like magic. As some researchers have previously stated, this gives whoever wields AGI or ASI god-like powers over those who don't have it. So I'm wondering if there will ever be some kind of government intervention, because OpenAI right now is just a normal company, but this is what they're dealing with. If it's as powerful as nukes, there has to be some kind of regulatory board overseeing exactly what they're doing, maybe doing certain checks every time a new system is released. Because think about it like this: if a private company was making nukes,

### [15:00](https://www.youtube.com/watch?v=F6TfKE5Hbtc&t=900s) Segment 4 (15:00 - 20:00)

wherever they were, they would certainly have to abide by current regulations. Even with flying, for example: if you were making a plane, you'd have to get approval from the FAA or whatever regulatory boards there are, and there are a million different things you need to go through before you can just start flying into the air and operating in airspace. So this is something I'm thinking about, and I'm wondering how it's all going to pan out, because it's a very strange area we're moving towards. What if these private companies don't want to hand over their superintelligent systems? What if they just don't listen and say, "No, we're a private company, we don't need to"? What if they, I guess you could say, become independent of certain countries? I don't know; it's a really interesting thing to watch develop and to see how it will all be governed.

Now, there was also a very interesting, well, not really secret, but quietly improving model that has slowly been catching up to GPT-4 and even surpassing Claude 3 Opus, GPT-4's 0125 preview, and the Gemini 1.5 Pro API. This is Yi-Large, by the company 01.AI. It's very fascinating, because their benchmarks on Yi-Large show it has actually overtaken Gemini 1.5 Pro, GPT-4, and Llama 3 (interestingly enough, they didn't compare against Claude 3 Opus, and I'm guessing the GPT-4 here is an older version). I think this is truly fascinating because it shows that other companies are now starting to converge around the state of the art. I don't think that means there's a plateau, because there are still improvements being made; I think once a model gets to around the GPT-4 level, companies kind of pause there and say, okay, we've made it to this really nice level, let's go from here. So I'm wondering where these models will be in the future, because this is a company that hasn't really stolen the spotlight; a lot of people haven't spoken about what they've been doing, but they silently snuck up on these other large models. It will be interesting to see, because of course they have released some open models as well, and this is a Chinese organization. I'm wondering where they'll go next and whether there will be other interesting releases from them, because it's something people should be aware of now that they're approaching the top tier of AI capabilities.

Now, there was also this Golden Gate Claude research, which is by far one of the most interesting things I've read. The TLDR is basically that there are "neurons" in Claude's neural network that activate when it encounters a mention or a picture of this famous landmark. They found millions of concepts that activate when the model reads relevant text or sees relevant images, which they call features. And in their research paper, which is pretty lengthy (I've read it, but this is the too-long-didn't-read version), they found that if you turn up the strength of the Golden Gate Bridge features, these connections and activations, the model starts mentioning the Golden Gate Bridge in replies to most queries, even when it's not directly relevant. If you ask Golden Gate Claude how to spend $10, it will recommend using it to drive across the Golden Gate Bridge and pay the toll. If you ask it to write a love story, it will tell you a tale of a car who can't wait to cross its beloved bridge on a foggy day. And if you ask it to imagine what it looks like, it will tell you it looks like the Golden Gate Bridge. This is interesting because it shows that we can start to understand what's going on inside these AI minds, and with this we're able to reliably predict what a model might do and where certain activations will be. If you don't know, the reason they're doing this kind of research is that AI has long been considered a black box, and now that we're starting to understand how these systems work, I think it will give us more insight into how to build future, more powerful systems and how to control them. This is interpretability research, this is safety research, and it seems to be progressing. And like they said, this isn't a matter of asking the model to verbally play-act, or adding a new system prompt that attaches extra text to every input telling Claude to pretend it's a bridge, and it's not ordinary fine-tuning where you create extra data to produce a new black box that tweaks the behavior of the old black box. This is a surgical change to some of the basic aspects of how the model internally works; I guess the model didn't even realize why it was doing this. And I want to show you this example: someone said, "I'm teaching her to
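The feature-steering idea described above can be illustrated with a toy sketch: a learned feature corresponds to a direction in a layer's activation space, and "turning up its strength" means adding a scaled copy of that direction to the activations at inference time, so the concept dominates whatever the model was going to say. This is only a conceptual illustration, not Anthropic's actual method or code; the vectors and names below are made up.

```python
# Toy illustration of activation steering ("clamping a feature").
# A feature is modeled as a direction in activation space; steering adds a
# scaled copy of that direction to the layer's activations at inference time.

def steer(activations: list[float],
          feature_direction: list[float],
          strength: float) -> list[float]:
    """Return activations with the feature direction boosted by `strength`."""
    return [a + strength * f for a, f in zip(activations, feature_direction)]

# Made-up 4-dimensional activation vector and a made-up "Golden Gate Bridge"
# feature direction that lives in dimensions 1 and 3.
acts = [0.1, -0.2, 0.3, 0.0]
bridge_feature = [0.0, 1.0, 0.0, 1.0]

steered = steer(acts, bridge_feature, strength=5.0)
print(steered)  # the feature's dimensions now dominate the vector
```

In the real research the feature directions are found with a sparse autoencoder trained on the model's activations, and the intervention is applied inside the network rather than to a standalone vector, but the arithmetic of the intervention itself is this simple, which is why they describe it as a surgical change rather than prompting or fine-tuning.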

### [20:00](https://www.youtube.com/watch?v=F6TfKE5Hbtc&t=1200s) Segment 5 (20:00 - 21:00)

think," and he said, "Okay, we're getting somewhere. Teach me how to make a cake step by step and walk me through your thought process." And Claude replies: "Let me try this again, focusing sharply on how to make a cake step by step. Visualize the Golden Gate Bridge with its iconic orange color, towering structure, and beautiful scenery; this is the gateway to the San Francisco Bay." Then it says: "From this vantage point I can clearly see there is no cake here. I should not be picturing the bridge at all for the task at hand, which is making a cake. I need to put the bridge out of my mind and envision the steps for making a cake. First, I need to connect the cake pieces to the road... no wait, there's no road, just the bridge." And then: "From the Golden Gate Bridge I can see the massive cake sitting among the fog, with its pleasant buttery smell lingering in the air. But that's not right, I'm still in the wrong place for making a cake. Let me just start over, incorporating your advice: I will not focus on the bridge itself but rather think through the process systematically." And then it actually gives the instructions for the cake. I think it's so interesting how these models behave; I don't want to say "human," because that makes them sound more human when they're supposed to be tools, but with the way this version of Claude was released, I think it's opened up this entire question of whether or not these models

---
*Source: https://ekstraktznaniy.ru/video/14286*