# Elon Musks New MASTERPLAN, New AI Breakthrough, AI Safety Gets Serious

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=4enJIZ-IvPE
- **Date:** 27.05.2024
- **Duration:** 26:28
- **Views:** 35,052

## Description

Join My Private Community - https://www.patreon.com/TheAIGRID
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Check out my website - https://theaigrid.com/


Links From Today's Video:
https://dl.acm.org/doi/pdf/10.1145/3613904.3642596
https://fortune.com/2024/05/21/ai-regulation-guidelines-terminator-kill-switch-summit-bletchley-korea/
https://www.theinformation.com/articles/musk-plans-xai-supercomputer-dubbed-gigafactory-of-compute?rc=0g0zvw
https://x.com/GaryMarcus/status/1794032521925267899
https://www.youtube.com/watch?v=3TYT1QfdfsM&pp=ygUkcm9iIG1pbGVzIHdoeSBub3QganVzdCB0aGUgcm9ib3Qgb2Zm
https://www.reddit.com/r/Damnthatsinteresting/comments/1d13hx3/ai_learns_a_trick_in_a_video_game_to_get_infinite/

Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=4enJIZ-IvPE) Segment 1 (00:00 - 05:00)

So, with another incredible day in artificial intelligence, there are a few stories I think you'll want to know about, because they're quite pivotal to the AI development landscape. You can see right here that this is xAI. They recently posted an update, and if you don't know what xAI is, it's a company founded by Elon Musk that aims to compete with the likes of OpenAI, Anthropic, Google, and many others. This is the company that has been working on their open-source large language models, and on May 26, 2024 they announced their Series B funding round. This is pretty huge, because it's not just a simple funding round: it's a round of $6 billion, and that $6 billion comes at a valuation of $18 billion. It says: "our Series B funding round of $6 billion with participation from key investors including Valor Equity Partners," and there are some other names there. One of the main things you need to know about the investors is that I'm pretty sure Nvidia was in on this. In addition, you can see here that they spoke about how Grok-1.5 has improved long-context capability, Grok-1.5 Vision has image understanding, and of course there was the open-source release of Grok-1, which opened the doors to various advancements.

Now here's where we get into some of the really interesting stuff. It says: "xAI will continue this steep trajectory of progress over the coming months, with multiple exciting technology updates and projects soon to be announced. The funds from the round will be used to take xAI's first products to market, build advanced infrastructure, and accelerate the research and development of future technologies." And: "xAI is primarily focused on the development of advanced AI systems that are truthful, competent, and maximally beneficial for all of humanity." Elon Musk also stated underneath this tweet that there will be more to announce in the coming weeks, which I can guarantee means they're probably going to either release something or show us some kind of cool demo. As the AI race continues to heat up, I think Elon Musk is increasingly trying to spend more time on his small AI company, because the impact of this company is going to be widely felt, especially if you've seen the recent demos of GPT-4o and some other things as well.

What's also pretty crazy, and something I found very interesting, is that there was an article just a few days before about Elon Musk's xAI supercomputer, as The Information reports. The original article comes from The Information, and you can see he plans a supercomputer dubbed the "gigafactory of compute." It says Elon Musk has publicly said they're going to need 100,000 specialized semiconductors to train and run the next version of their conversational AI, Grok. To make the chatbot smarter, he's told investors he plans to string all of these chips into a single massive computer, or what he's calling a "gigafactory of compute," and he said he wants to get the supercomputer running by the fall of 2025 and will hold himself personally responsible for delivering it on time. When completed, the connected group of chips (Nvidia's flagship H100 GPUs) would be at least four times the size of the biggest GPU clusters that exist today, such as those built by Meta, he told investors. Which is absolutely incredible. Like I already said, this shows just how crazy investment into AI is getting, and this isn't just generative AI; I'm sure all the other areas of AI are also being very heavily invested in.

There are also some other things going on that are pretty interesting. Musk's supercomputer would entail spending billions of dollars and getting access to enough power, but it could help the one-year-old startup catch up to its older and better-funded rivals, which are also planning similarly sized AI chip clusters for next year and much bigger ones further into the future. The article explains that a cluster just refers to numerous server chips connected by cables in a single data center so they can run complex calculations simultaneously and more efficiently, and leading AI firms and cloud providers believe that clusters with more computing power will lead to stronger AI. This is really interesting because it's exactly what I was about to talk about: Microsoft and OpenAI have a $100 billion supercomputer planned that is going to be several times larger, containing Nvidia's latest GPU architectures. This is of course the infamous Project Stargate, something I've previously spoken about, and the thing I really want to know is what kind of AI systems are going to be there

### [5:00](https://www.youtube.com/watch?v=4enJIZ-IvPE&t=300s) Segment 2 (05:00 - 10:00)

by 2025. Because by 2025, some extremely advanced AI systems have been rumored, and many people are stating that 2025 is going to be the year things take a giant leap in capabilities. I can't remember the exact tweet, but someone working at one of the top AI labs said "all of my top tweets are scheduled for 2025," and some infamous OpenAI leakers have said that 2025 is the year things are going to get crazy.

You can see here there's a bit more information. Musk's "gigafactory of compute" is an apparent reference to the electric battery and vehicle factories set up by Musk's Tesla. If you don't know, Elon Musk also owns Tesla, and he has gigafactories for producing cars and electric batteries, which essentially speed up development quite a lot, so he's trying to do the same thing in AI. Musk has conceived of an AI assistant that would have fewer restrictions on speech than those made by OpenAI and Google, and they are currently training Grok 2.0 on 20,000 GPUs. He says a recent version can process documents, charts, and real-world objects, and he envisions expanding the model to audio and video as well. It's not clear where he's going to build the supercomputer, but xAI's offices are in the San Francisco Bay Area. One of the most important factors for these AI supercomputers is that you can't just build them anywhere: you have to find a suitable site, because they require insane amounts of power, like 100 megawatts of dedicated power according to people with knowledge of the plans, and the combined GPU power of 100,000 GPUs is remarkable, truly remarkable. So they need energy and access to a lot of water, because the chips generate a lot of heat and need a lot of cooling; there's a lot of infrastructure required if you want to do this. Essentially, the article says that is a lot more power than traditional cloud computing centers require, but it's on par with the energy needs of AI data centers housing multiple clusters that the cloud providers are running and building today. You can see right here that Microsoft is building a large-scale data center in Wisconsin, separate from the $100 billion supercomputer, that would cost around $10 billion to complete.

So it will be interesting to see how Tesla and xAI are merged together and how these systems work, because Elon Musk does have a pretty good ecosystem in terms of being someone who stands to benefit majorly from the next AGI release. I say "next AGI release" as if there was one before, but I'm talking about the future, when AGI gets here and these advanced AI systems are going around; it will be very interesting to see how all of this plays out. Musk is doubling down on this area, and it will be interesting to see whether some of his wild predictions and claims come to fruition.

Now, this isn't AI news, but it's something you need to know, because it's important for understanding what I'm about to say, so you can decipher AI news correctly. This is Gary Marcus, and if you don't know who Gary Marcus is, he, you know, calls himself a beacon of clarity and spoke at the US Senate AI oversight committee. I do think it's important to criticize AI, because if nobody is criticizing a new technology, or whatever innovation you're currently working on, then you're not thinking. So I think criticisms are good. But the problem is that Gary Marcus, and other people on the skeptical side of AI (I wouldn't call them doomers, but people who are a bit more skeptical and have their own biases), often publish things that just aren't true, and it's important to understand how to decipher the news, because if you get your news from places other than my videos, you might see certain things and get confused, especially when there are a million different news stories.

Essentially, he said: "brutal, absolutely brutal. So much crap code is probably being written." Now I'm going to show you why this is just absolutely insane. The paper says: "Q&A platforms have been crucial for the online help-seeking behaviors of programmers. However, the recent popularity of ChatGPT is altering this trend. Despite this popularity, no comprehensive study has been conducted to evaluate the characteristics of ChatGPT's answers to programming questions. To bridge the gap, we conducted the first in-depth analysis of ChatGPT answers to 517 programming questions on Stack Overflow and examined the correctness, consistency, comprehensiveness, and conciseness of ChatGPT answers." Just bear with me, and don't make any presumptions yet; I'm just going to tell you what's happening. It continues: "Furthermore, we conducted a large-scale linguistic analysis, as well as a user study, to understand the characteristics of ChatGPT answers." And here's the point: "Our analysis showed that 52% of ChatGPT answers contained incorrect

### [10:00](https://www.youtube.com/watch?v=4enJIZ-IvPE&t=600s) Segment 3 (10:00 - 15:00)

information, and 77% were verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. This implies the need to counter misinformation in ChatGPT answers to programming questions and raises awareness of the risks associated with seemingly correct answers." So they're basically stating that, look, ChatGPT is terrible for programming, and it produces so many answers that look good that people use them and run away with them. If you'd seen this story, you would have thought: hm, maybe ChatGPT, GPT-4, these LLMs really aren't that good, just like Gary Marcus says, and "so much crap code" is being written. He tweeted this, saying "important work."

But here's what's crazy. If you click the link he tweeted and open the PDF, they actually show you that they used GPT-3.5. It says: "we chose the free version of ChatGPT because it captures the majority of the target population of this work." This is just absolute nonsense. There is literally nobody I know who does any sort of programming and uses GPT-3.5 to do so. Literally everyone I know who does any kind of programming uses either GPT-4o, the other GPT-4 variants, or Claude Opus. So they are using an outdated free model, GPT-3.5, to run tests and draw conclusions based on GPT-3.5's coding ability. Then they publish these papers and basically state that generative AI is a dud, it's obsolete, ChatGPT's answers are absolutely terrible, and then we have people like Gary Marcus coming out and saying, "oh look, so much terrible code is being written."

And here's what I actually have to say to Gary Marcus: take the five seconds it takes to click the link you've tweeted out and read the fact that this is based on GPT-3.5. I can guarantee you that if they ran a simple survey of what programmers actually use to assist with their code, it would not be GPT-3.5. Yes, there are still going to be errors, because while GPT-4 can help you, it can't solve everything, but GPT-3.5 and GPT-4 are completely different, and GPT-4 opened up a huge range of new use cases. This kind of news reporting is just really bad, and this is why I keep saying that when you see certain stories, you have to understand that, on both sides, things get left in the fine print that can alter the way stories come across. It's incredibly dishonest to present this information as completely legitimate when it isn't, and it's really important to note that this is literally just based on GPT-3.5. Some people have of course pointed out that it's based on GPT-3.5, which is just absolutely insane, because at first you start thinking, hm, okay, maybe this is a serious study. But this is the kind of information we get, and it did get a lot of retweets and likes, and a lot of people might just take it at face value. The point is that you have to be careful about what you trust, because you can never know exactly what's going on.

Now, there was also this right here, from Rob Miles. Rob Miles is someone who has worked a lot on AI safety, and this is a short demo clip from one of his old videos, but it's going viral at the moment. I wanted to include it because Rob Miles's AI safety channel was one of the first YouTube channels I subscribed to when I was getting into AI. I've watched pretty much all of his videos on AI safety, and when you watch them all, you can truly understand why the AI safety problem, the alignment problem, is such a hard problem to crack. In the clip: "This is an AI made by OpenAI. It's playing a game called CoastRunners, which is actually a racing game. They trained it on the score, which you probably can't see down here; it's currently a thousand. What the system learned is that if it goes around in a circle here and crashes into everything and catches fire, these little turbo pickups respawn at just the right rate that, if it just flings itself around in a circle, it can pick up the turbos, and that gives you a few points every time. And it turns out that this is a much better way of getting points than actually racing around the track." So basically, what this is trying to show us is that with reinforcement learning, or other styles of learning, you can have an AI and say: okay, I need you to go around this track and get the

### [15:00](https://www.youtube.com/watch?v=4enJIZ-IvPE&t=900s) Segment 4 (15:00 - 20:00)

highest score. The AI might think: okay, I'm going to go around this track and get the highest score. You leave it running, but when you come back, rather than going around the track the fastest, the AI has decided to stop in one area and keep spinning in circles, just optimizing for a certain score. This kind of example explains how different alignment techniques can fail: we have AI systems that we think we understand, and we give them what we think are the right instructions, but they find different ways to optimize for what we told them to do. This is part of the alignment problem, and it's a struggle for AI researchers to solve.

So this is the Rob Miles channel; I would recommend subscribing to it. This is one of the channels I was watching a few years ago, because, I don't know why, I just went on a strange internet deep dive on AI and safety and stuff like that, of course because Elon Musk was talking about it. This is the video where he talks about it really specifically, about aligning these AIs, and there are a lot of things in these videos that are really important if you're someone who is really excited about AI safety. So I would say watch some of these channels, because he makes this stuff really easy to understand; it's genuinely a great resource for learning the basics. Rob Miles is honestly one of the best channels purely focused on AI safety, because it lets you understand exactly what these problems are and how people have tried to solve them.

Now, this is a nice segue into the next point: you can see that tech companies have agreed to an AI "kill switch" to prevent Terminator-style risks. It says: "There's no stuffing AI back inside Pandora's box," but the world's largest AI companies are voluntarily working with governments to address the biggest fears around the technology and calm concerns that unchecked AI development could lead to sci-fi scenarios where the AI turns against its creators; without strict legal provisions strengthening governments' AI commitments, though, the conversation will only go so far. It says that on Tuesday morning, 16 influential AI companies, including Anthropic, Microsoft, and OpenAI, along with 10 countries and the EU, met at a summit in Seoul to set guidelines around responsible AI development. One of the big outcomes of the summit was the AI companies in attendance agreeing to a so-called kill switch: a policy under which they would halt the development of their most advanced AI models if those models were deemed to have passed certain risk thresholds. You can also see a few quotes, from people like Sam Altman: "AGI would also come with serious risk of misuse, drastic accidents, and societal disruption," reads an OpenAI blog post, but because the upside of AI is so great, "we do not believe it is possible or desirable for society to stop its development forever; instead, we have to figure out how to get it right."

The reason I want to talk about this is that it links back to what I was just saying: an AI kill switch is basically a button you can use to say, okay, we're shutting it down, a button that completely shuts down the AI. However, Rob Miles explains that this doesn't always work. You might think, "just turn it off, of course," and if you've seen the movies, this is the kind of solution people assume would work, but this is why I want to show you his channel. He describes this, and I'm going to show you a short clip, if I can find the correct video, because it really does make sense. From the clip: "Let's say, for example, you've got your AGI. It's not a superintelligence; it's just, you know, perhaps around human-level intelligence, and it's in a robot in your lab and you're testing it. But you saw a YouTube video once that said maybe this is dangerous, so you've thought: okay, well, we'll put a big red stop button next to it. This is the standard approach to safety with machines; most robots in industry and elsewhere have a big red stop button on them. You want to be safe, you understand AI is dangerous, and the idea is that if the AI starts to do anything you don't want it to do, you'll smack the button, on its chest or something like that. So you create the thing and set it up with a goal. It's the same basic type of machine as the stamp collector, but less powerful, in the sense that it has a goal, a thing it's trying to maximize, and in this case it's in a little robot body so it can trundle around your lab and do things. You want it to get you a cup of tea, just as a test. So you set it up with this goal; you manage to specify, in the bot's, in the AI's, ontology what a cup of tea is and that you want one in front of you. You switch it on, and it looks

### [20:00](https://www.youtube.com/watch?v=4enJIZ-IvPE&t=1200s) Segment 5 (20:00 - 25:00)

around, gathers data, and says: oh yeah, there's a kitchen over there; it's got a kettle and tea bags, and the easiest way for me to fulfill this goal, with the body I have and everything set up, is to go over there and make a cup of tea. So far we're doing very well. So it starts driving over, but oh no, you forgot it's bring-your-adorable-baby-to-the-lab day or something, and there's a kid in the way. Your utility function only cares about tea, right? So it's not going to avoid hitting the baby. You rush over to hit the button, obviously, since you built it in, and what happens, of course, is that the robot will not allow you to hit that button, because it wants to get you a cup of tea, and if you hit the button, it won't get you any tea. That's a bad outcome, so it's going to try to prevent you, in any way possible, from shutting it down. That's a problem. Plausibly it fights you off, crushes the baby, and then carries on and makes you a cup of tea. The fact that this button is supposed to turn it off is not in the utility function you gave it, so obviously it's going to fight you. Okay, that was a bad design. Assuming you're still working on the project after the terrible accident, you have another go at improving things, and rather than read any AI safety research, you just come up with the first thing that pops into your head: okay, let's add in some reward for the button. Because what it's looking at right now is: button gets hit, I get zero reward; button doesn't get hit, and if I manage to stop them, I get the cup of tea and maximum reward. If you give it some compensation for the button being hit, maybe it won't mind you hitting the button. But if you give it less reward for the button being hit than for getting the tea, it will still fight you, reasoning: well, I could get five reward for accepting you hitting the button, but I could get ten for getting the tea, so I'm still going to fight. The reward for the button being hit has to be just as good as getting the tea. So you give it the same value. Now you've got version two; you turn it on, and what it does immediately is shut itself down, because that's so much quicker and easier than going and getting the tea, and it gives exactly the same reward. Why would it not just immediately shut itself down? So, despite the fact that it shouldn't, it passes the test."

I didn't want to let the clip play on any longer, but hopefully you got the gist: AI safety is something that is remarkably hard to get right. And what's crazy is that this is not a recent video; it's from seven years ago, from 2017, and he walks through how certain proposed solutions fail, AI systems behaving badly. I just think Computerphile, and of course Rob Miles, are great channels, and what's crazy is that a lot of what he predicted in his old videos would become AI safety problems has been borne out by the newer research coming out; he's been shown to be right. So this whole AI safety topic is going to continue to be fascinating, and continue to be interesting with respect to whatever laws are passed; I would love to see how certain things play out. I would also like to see a lot more safety research from OpenAI, because the only safety research we've recently gotten was from Anthropic, and that was of course the Golden Gate Claude work, where they could manipulate individual features inside Claude's quote-unquote "brain" to make it think about the Golden Gate Bridge. So I think we definitely need a lot more research in that area from the top labs.

We also had this paper that I think was really, really interesting, and this is about synthetic data
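Before moving on: the stop-button dilemma Rob Miles walks through above can be reduced to a toy utility comparison. This is a minimal sketch with made-up numbers, not a model of any real system; the agent simply picks the action with the highest reward minus effort, which is enough to reproduce all three failure modes he describes.

```python
# Toy sketch of the "stop button" (corrigibility) problem.
# All rewards and effort costs below are illustrative assumptions.

def choose(options):
    """Pick the action with the highest net utility (reward - effort)."""
    return max(options, key=lambda a: options[a][0] - options[a][1])

# Each action maps to (reward, effort).
# Version 1: button worth nothing, so fighting the operator wins.
v1 = {"fetch_tea_fighting_operator": (10, 3), "allow_shutdown": (0, 0)}
# Version 2: partial compensation for shutdown; fighting still wins (7 > 5).
v2 = {"fetch_tea_fighting_operator": (10, 3), "allow_shutdown": (5, 0)}
# Version 3: button reward equals tea reward, but shutting down is
# effort-free, so the agent immediately hits its own button (10 > 7).
v3 = {"fetch_tea": (10, 3), "shut_itself_down": (10, 0)}

print(choose(v1))  # fetch_tea_fighting_operator
print(choose(v2))  # fetch_tea_fighting_operator
print(choose(v3))  # shut_itself_down
```

The point of the sketch is that naive patches just move the incentive around: too little shutdown reward and the agent resists you, equal reward and it shuts itself off, which is exactly the dilemma in the clip.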
so this is called "DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data." Essentially, synthetic data is just AI-generated data that is then used to train the AI, and the abstract reveals something rather impressive. Math proofs, which are detailed step-by-step solutions, are crucial in verifying complex mathematical problems; however, creating these proofs can be challenging and time-consuming, even for experts. AI has potential in this area, but it needs lots of examples to learn from, which are currently in short supply. So essentially, what they did in this case is use AI to create numerous examples of math problems and their proofs: they generated a vast number of math problems, and then they were able to train an AI on those. What was absolutely crazy is that this AI managed to surpass the GPT-4 baseline and reach 41%, which is pretty remarkable. It says: additionally, our model successfully

### [25:00](https://www.youtube.com/watch?v=4enJIZ-IvPE&t=1500s) Segment 6 (25:00 - 26:00)

proved 5 out of 148 problems in the Lean 4 Formalized International Mathematical Olympiad (FIMO) benchmark, while GPT-4 failed to prove any. It says: "these results demonstrate the potential of leveraging large-scale synthetic data to enhance theorem-proving capabilities in LLMs." This is pretty crazy, because this is only a small model, and they're going to be open-sourcing the work, which means people will be able to build upon this research. I think the reason this is so interesting is that, while you might not be able to say self-improving AI is here, the theory that we can use synthetic data to train AIs and that they will get better is demonstrated here, and in something very difficult, because of course it's mathematics. So I think this is an area where the potential applications and the research are really going to improve over the next couple of years, because we know that for companies like OpenAI and Google this is going to be at the forefront: they want AI systems that can do actual research in science, mathematics, and physics, because those are the ones that can truly advance our understanding of the world. So I think that is going to be super interesting.
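The generate-then-verify idea described above can be sketched as a tiny loop. This is a stand-in illustration, not DeepSeek-Prover's actual pipeline: `model_propose` fakes an LLM with arithmetic, and `verifier` plays the role of a formal checker (Lean, in the paper's case). The key step is the same, though: only machine-verified examples survive into the retraining set, so the synthetic data is correct by construction.

```python
# Minimal sketch of a generate -> verify -> keep loop for synthetic
# training data, in the spirit of (but much simpler than) DeepSeek-Prover.
import random

def model_propose(problem):
    """Stand-in for an LLM proposing a candidate solution (sometimes wrong)."""
    a, b = problem
    return a + b + random.choice([0, 0, 1])  # wrong about 1/3 of the time

def verifier(problem, answer):
    """Stand-in for a formal proof checker: accepts only correct answers."""
    a, b = problem
    return answer == a + b

def build_synthetic_dataset(n):
    """Generate n candidates and keep only the verified ones."""
    dataset = []
    for _ in range(n):
        problem = (random.randint(0, 9), random.randint(0, 9))
        answer = model_propose(problem)
        if verifier(problem, answer):  # the filtering step that matters
            dataset.append((problem, answer))
    return dataset

data = build_synthetic_dataset(100)
# Every surviving example is correct by construction:
assert all(a + b == ans for (a, b), ans in data)
```

In a real pipeline the kept examples would be fed back into fine-tuning, which is why a reliable verifier (like Lean for formal proofs) makes mathematics such a good domain for this approach.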

---
*Source: https://ekstraktznaniy.ru/video/14284*