# Microsoft And OpenAI Drop "AGI BOMBSHELL" – "PROJECT STARGATE" – Superintelligence by 2028

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=Ix_R6WTX04I
- **Date:** 30.03.2024
- **Duration:** 25:08
- **Views:** 60,835

## Description

How To Not Be Replaced By AGI - https://www.youtube.com/watch?v=LSXpZmo7_Tg
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

Links From Today's Video:
https://www.theinformation.com/articles/openai-dropped-work-on-new-arrakis-ai-model-in-rare-setback?rc=0g0zvw
https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer?utm_campaign=article_email&utm_content=article-12531&utm_medium=email&utm_source=sg&rc=0g0zvw

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=Ix_R6WTX04I) Segment 1 (00:00 - 05:00)

So there was some information that was quote-unquote leaked by some people at Microsoft and OpenAI, and it discusses a plan to build a $100 billion "Stargate" AI supercomputer, which is definitely in the realm of AGI. It definitely seems like this is exactly what Microsoft and OpenAI are building, in conjunction with some of their other systems, in order to get to artificial general intelligence and, of course, to superintelligence.

We can see here that the article clearly states that Microsoft and OpenAI plan to build a $100 billion Stargate AI supercomputer. Essentially, this is a radical amount of compute for what they are trying to build, and the details are truly surprising, including some of the recent clips I've seen circulating online. The number is $100 billion, and it states right here that executives at Microsoft and OpenAI have been drawing up plans for a data center project that would contain a supercomputer with millions of specialized server chips to power OpenAI's artificial intelligence, according to three people who have been involved in the private conversations about the proposal, and the project could cost as much as $100 billion, according to a person who spoke to OpenAI CEO Sam Altman and a person who has viewed some of Microsoft's initial cost estimates.

This is a giant number, because we know that Microsoft has already committed, I think, over $13 billion to OpenAI across deals, chips, and entire servers. So this is pretty incredible: can you imagine $100 billion being invested into the next system? This actually isn't out of the realm of possibility, considering Sam Altman was recently talking about $7 trillion for future AI systems, and now we're starting to see that some of these plans are actually beginning to materialize. What's crazy is that there was so much speculation from people like myself and others in the AI community that they've pretty much achieved AGI and all they're trying to do is get the compute so they can actually make it a reality, and now we're starting to see things like $100 billion worth of compute for their next level of supercomputers. The details honestly get more and more crazy as this goes on, so you definitely should be paying attention.

Essentially, what they state is that the next six years is their entire plan. Microsoft would likely be financing the entire project, which would be 100 times more costly than some of today's biggest data centers, so this would arguably be the biggest data center on the planet. It says this demonstrates the enormous investment that may be needed to build computing capacity for AI in the coming years. Executives envisage the proposed US-based supercomputer, which they have referred to as Stargate, as the biggest in a series of installations the companies are looking to build over the next six years. So this isn't just one giant supercomputer; there are several phases, and it seems like what they're looking to do with this crazy supercomputer is to build it in several stages, which I'll talk about later. The crazy thing is that all of this should allegedly be done by 2028 or 2030. That timeline, over six years, is pretty crazy, and if this is over the next six years, it definitely aligns with many people's timelines for AGI and artificial superintelligence, and it's a key date for the Singularity, because many people have argued that the moment you get to AGI, artificial superintelligence isn't far off: you can just use the AGI to build the ASI, and it's very exponential.

Now, there are some other things as well. The article states that Microsoft's willingness to go ahead with the plan for Stargate depends in part on OpenAI's ability to meaningfully improve the capabilities of its AI, one of these people said, and OpenAI last year failed to deliver a new model it had promised to Microsoft, showing how difficult the AI frontier can be to predict. If you aren't familiar with what they're talking about: previously, OpenAI was working on a new model called Arrakis, and there have been many new pieces of information about it, with a full video coming on this, but essentially the model didn't work as OpenAI had expected. That kind of makes sense, because OpenAI are, of course, scientists; they're trying to see what works and what doesn't, and sometimes things do go wrong. This one just didn't go well, but there's a lot of information I will cover in another video in more detail. The point here is that OpenAI needs to have something to show Microsoft and say, look, we're going to be investing a certain amount in this. What's rather interesting is that if Microsoft's willingness to go ahead is the condition, that means OpenAI clearly does have something else in the works: Microsoft has looked at the pitch deck and said, okay, investing $100
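As a rough sanity check, the article's "$100 billion" and "millions of specialized server chips" figures are at least mutually consistent. The budget and the chips-are-about-half-the-cost rule come from the article itself; the per-chip price here is my own illustrative assumption, not a reported number:

```python
# Sanity check: does a $100B budget imply "millions" of AI chips?
# total_cost and chip_share are from the article; price_per_chip is
# an assumed ballpark price for one high-end AI accelerator.

total_cost = 100e9        # reported Stargate budget ($100 billion)
chip_share = 0.5          # article: chips ~half of an AI data center's initial cost
price_per_chip = 30_000   # assumption: ~$30k per accelerator

chips = total_cost * chip_share / price_per_chip
print(f"~{chips / 1e6:.1f} million chips")
```

Under those assumptions you land at roughly 1.7 million accelerators, which is the right order of magnitude for the "millions of chips" claim.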

### [5:00](https://www.youtube.com/watch?v=Ix_R6WTX04I&t=300s) Segment 2 (05:00 - 10:00)

billion here does make sense. I don't think this is just a GPT-5 scale-up; I don't think Microsoft would be willing to invest $100 billion in that, because that doesn't really give them the return on investment they need. But think about it like this: they're investing $100 billion because they believe that in the future they're going to get that money back. And of course, as it says, Microsoft's willingness to go ahead does depend in part on OpenAI's ability to meaningfully improve the capabilities of its AI, which means that whatever pitch OpenAI used, they have something Microsoft believes will bring much higher levels of intelligence, both in what the AI system can do and in what products Microsoft may be able to offer.

In addition, it also says here that OpenAI CEO Sam Altman has said publicly that the main bottleneck holding up better AI is a lack of sufficient servers to deploy it, and I think that is rather true. It's an interesting trend we've recently seen: whilst yes, the Transformer did birth this crazy era of LLMs and much of the tech we're seeing now, I do think, according to several different papers, that scale is not all you need, but it is definitely a significant part of what makes these AI systems so effective, because you can only get so much out of a smaller system. There have been many interview clips of senior AI scientists saying that you definitely need a level of scale when you're trying to develop some kind of intelligence, and it's fascinating to study.

Now, one of the clips I wanted to show you was from the Dwarkesh Patel podcast, in which people from Anthropic and Google were discussing future models. Since Stargate is going to be a hundred-billion-dollar computer powering the next generation of AI systems, this clip essentially discusses that, and they talk about how we could be getting AGI by GPT-7. I covered this in a different video, but I think it's relevant here because they're discussing the same kinds of figures:

"The framing is that, given you have to be two orders of magnitude bigger at every generation, if you don't get AGI by GPT-7, one that can help you catapult an intelligence explosion, you're kind of out of luck as far as much smarter intelligences go, and you're stuck with GPT-7-level models for a long time. GPT-4 cost, let's call it $100 million or whatever; the $1 billion run, the $10 billion run, the $100 billion run all seem very plausible by private-company standards. You can also imagine even a trillion-dollar run being part of a national consortium. I want to point out that we have a lot more jumps ahead, and even if those jumps are relatively smaller, that's still a pretty stark improvement in capability. Not only that, but if you believe claims that GPT-4 is around a one-trillion parameter count, the human brain is between 30 and 300 trillion synapses. That's obviously not a one-to-one mapping, and we can debate the numbers, but it seems pretty plausible that we're still below brain scale, even if you keep dumping more compute into models that cost a trillion dollars or something. The fact that the brain is so much more data-efficient implies that if we could train as sample-efficiently as humans do from birth, we could make the AGI. But the sample-efficiency stuff, I never know exactly how to think about, because obviously a lot of things are hardwired in certain ways, right, the co-evolution of language and brain structure, so it's hard to say. Also, there are some results showing that if you make your model bigger, it becomes more sample-efficient, and the original scaling-laws paper showed that, so maybe that also just solves it."

So you can see there is a heap of unexplored depth to scaling at these levels of compute, because we haven't gone there yet, and it's quite unprecedented to think about just how much compute that is. I think it's going to be a battle between Google and OpenAI/Microsoft to train these models first-hand and see what comes out of them: what kind of abilities, how an AGI-level system behaves if they have one, whether they manage to get to superintelligence. All these crazy things are going to be very possible in the near future, because they are going to start doing it. I find this interview really fascinating because it touches on the two things we've discussed previously on the channel: higher and higher training runs, which are going to move into more and more expensive categories, and trying to map the brain one-to-one, because the brain is essentially AGI-like in the sense that the average human can adapt to a wide range of tasks. The problem is that if GPT-4 is 1.8 trillion parameters and the brain is around 30 to 300 trillion synapses, wherever you land on that scale, we need a lot more compute to at least map it one-to-one. So it's going to be interesting to see what happens once we scale these runs up to $10 billion and $100 billion. I think there are probably going to be some smaller runs in the near future, considering how these companies are competing. I mean, Elon Musk recently said he's developing a model that's going to be better than anything out right now; I don't know if that's just him hyping his own AI system, but

### [10:00](https://www.youtube.com/watch?v=Ix_R6WTX04I&t=600s) Segment 3 (10:00 - 15:00)

it definitely goes to show that as competition heats up, people are taking the game much more seriously. Now, one of the main problems here is something a lot of people don't think about: AI systems actually require a lot of energy to run. They talk about Stargate here, stating that if Stargate moves forward, it would produce orders of magnitude more computing power than what Microsoft currently supplies OpenAI from data centers in Phoenix and elsewhere, the people said. The proposed supercomputer would also require at least several gigawatts of power, equivalent to what's needed to run at least several large data centers today, according to two of these people. Much of the project cost would lie in procuring the chips, the people said, but acquiring enough energy to run it could also be a challenge.

Essentially, this is something that was discussed in a video two days ago: powering the data centers for these next-level training runs is very difficult, because we need a lot of energy to run these systems. One answer, of course, is nuclear power. Microsoft is going nuclear to power its AI ambitions: Microsoft is looking at next-generation nuclear reactors to power its data centers and AI, according to a new job listing for someone to lead the effort. Essentially, Microsoft is trying to invest in and be at the frontier of next-level energy sources, because if it is, it could power these AI computer systems very efficiently and would essentially be in the lead by a large margin. They have also invested in nuclear fusion, and the crazy thing is that nuclear fusion is pretty far away according to several experts. You can read here, from another article, that experts' optimistic estimates for when the world might see its first nuclear fusion power plant have ranged from the end of the decade to several decades from now, and for the company, Helion, its success depends on achieving remarkable breakthroughs in an incredibly short span of time and then commercializing its technology to make it cost-competitive with other energy sources.

So this is a pretty big bet by Microsoft. I'm guessing that, like I said, what they want is to be first in line when the breakthrough is made. Some people think nuclear fusion won't happen; some think it's going to take a while. But I think what could be happening is that Microsoft is thinking: let's sign a deal with Helion, let's build the AGI system no matter how much it costs, and then let's use that to fix the fusion problem, and through that chain of reactions we get a really good solution to the energy problem. So this is obviously something that will have to be solved before, or potentially after, AGI is done. They also state that such a project is absolutely required for AGI, meaning AI that can accomplish most of the computing tasks that humans can do, and of course this amount of power is going to be required. It says that though the project's scale seems unimaginable by today's standards, by the time such a supercomputer is finished the numbers won't seem as eye-popping. And then we also have this, which is pretty crazy: the executives have discussed launching Stargate as soon as 2028 and expanding it through 2030, possibly needing as much as 5 gigawatts of power by the end, the people involved in the discussions have said. Like I said, the key dates of 2028 and 2030 are so incredible, because this is when people have predicted AGI is going to arrive, and it seems like we're moving closer and closer towards that date.

Now, if you remember, there was a prediction, not by myself, but one I wrote out based on a bunch of different articles and industry insiders, from Ray Kurzweil to Elon Musk to Sam Altman himself, and they all roughly state that AGI is pretty much going to be here by 2028 to 2030. If we look at iterative deployment, considering that OpenAI is likely to release models in this fashion, and considering they only trademarked up to GPT-7, the next system after that could be an AGI-level system, which would be rather fascinating. If you don't believe that and think it's pure speculation, there was a 40-minute video I made covering an entire document that talks about this. But there was also something else here: in the article they talk about five phases. Altman and Microsoft employees have talked about these supercomputers in terms of five phases, with phase five being Stargate, named for a science-fiction film in which scientists develop a device for traveling between galaxies. The code name originated with OpenAI but isn't the official project code name Microsoft is using. So there are five different phases, and phase four is actually 2026. The phase prior to Stargate would cost far less: Microsoft is working on a smaller phase-four supercomputer for OpenAI that aims to launch around 2026, according to two of the people. Executives have planned to build it in Mount Pleasant, Wisconsin, where the Wisconsin Economic Development Corporation recently said Microsoft broke ground on a $1 billion data center expansion, and apparently the supercomputer and data center could eventually cost as much as $10 billion
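To put the quoted power figures in perspective, here's a rough sketch. The 5 GW number is the article's figure; the assumption of continuous operation and the ~1.2 kW average US household draw are my own illustrative rules of thumb, not from the article:

```python
# Rough energy math for the quoted Stargate power requirement.
# 5 GW is the article's figure; continuous operation and the ~1.2 kW
# average US household draw are illustrative assumptions.

stargate_power_w = 5e9        # 5 gigawatts
hours_per_year = 24 * 365     # 8,760 hours

# Energy consumed per year if run continuously, in terawatt-hours:
energy_twh = stargate_power_w * hours_per_year / 1e12
print(f"~{energy_twh:.1f} TWh/year")

# Equivalent number of average US households (~1.2 kW average draw):
households = stargate_power_w / 1.2e3
print(f"~{households / 1e6:.1f} million households")
```

That works out to roughly 44 TWh per year, about the continuous draw of four million homes, which is why the article treats acquiring energy as a challenge on par with procuring the chips.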

### [15:00](https://www.youtube.com/watch?v=Ix_R6WTX04I&t=900s) Segment 4 (15:00 - 20:00)

to complete, one of the people said, and that's many times more than the cost of existing data centers. Microsoft has also discussed using Nvidia-made AI chips for the entire project. So the amount of investment going into compute is definitely pretty crazy, and it seems like it's only going to go up. Phase four is going to be in 2026, and now that they're stating, look, we're doing phase four in 2026 and phase five, which is Project Stargate, in 2028, it kind of seems like they already know what they have planned, to an extent, based on how much compute they're ordering and how they're planning everything. It doesn't seem like they're confused at all about what kind of systems they're going to be building in the future.

Even more interesting, it says that today Microsoft and OpenAI are in the middle of phase three of the five-phase plan, and much of the cost of the next two phases will involve procuring AI chips. So right now we're at phase three. It says two data center practitioners who aren't involved in the project said it's common for AI server chips to make up around half of the initial cost of the AI-focused data centers other companies are currently building. By the time this five-phase plan is built and we have Project Stargate at phase five, I truly wonder what level of AI systems will exist, because that is a ridiculous level of compute, and you really do need it if you're going to try to build some truly incredible systems.

Now, like I said before, we've discussed how Sam Altman has previously said he doesn't have as many AI server chips as he'd like, and how he has been seeking trillions of dollars to reshape the business of chips and AI: the OpenAI chief is pursuing investors, including the UAE, for projects possibly requiring up to $7 trillion. So this is where the $7 trillion number came from, and of course $7 trillion is an incredibly high number, but things are clearly moving in that direction: $10 billion, then $100 billion. I think maybe there are going to be some crazy demos, because if Microsoft is committing $100 billion and they're in talks and OpenAI can prove what it has, it's going to be pretty crazy.

Something else interesting is that this is where they talk about Altman's plans. One reason they've been pitching the idea of a new server-chip company that would develop a chip rivaling Nvidia's GPUs is that Nvidia pretty much has market dominance. The GPU boom has put Nvidia in the position of kingmaker, as it decides which customers can have the most chips, and it has aided small cloud providers that compete with Microsoft. Microsoft doesn't want other cloud providers competing with it, and essentially Nvidia is in a position where they pretty much decide who gets how much compute. And like I said before, something really fascinating is that Altman himself said that developing superintelligence will likely require a significant energy breakthrough, and of course that is what they're trying for. So it seems like maybe you get AGI to make the energy breakthrough, rather than simply focusing all your compute on that kind of breakthrough directly, and once that happens, everything becomes rapidly accelerating. What's crazy is that Amazon recently purchased a Pennsylvania data center site with access to nuclear power, and Microsoft had also discussed bidding on that same site, so these companies are really fighting for the rights to the energy needed to power these future data centers. What's also interesting is that, as the article touches upon, Altman, at an Intel event last month, said that AI models get predictably better when researchers throw more computing power at them, and OpenAI has published research on this topic, which it refers to as the scaling laws of conversational AI; that's something we also saw in the earlier interview clip. So it says

### [20:00](https://www.youtube.com/watch?v=Ix_R6WTX04I&t=1200s) Segment 5 (20:00 - 25:00)

that OpenAI throwing ever more compute at AI risks leading to a trough of disillusionment among customers as they realize the limits of the technology, said Ali Ghodsi, CEO of Databricks, which helps companies use AI: "We should really focus on making this technology useful for humans and enterprises, and that takes time. I believe it would be amazing, but this doesn't happen overnight." So we have some counter-viewpoints there, but if models really do get predictably better under the scaling laws, I truly wonder how good they're going to get.

Something else fascinating is that we also got some details on Q*. Not anything crazy, but it talks about how, with more servers available, some OpenAI leaders believe the company can use its existing AI and recent technical breakthroughs, such as Q*, a model that can reason about math problems it hasn't previously been trained to solve, to create the right synthetic (non-human-generated) data for training better models after running out of human-generated data. This is really crazy, because if we look at the highlighted sections, they're considering Q* to be self-improving. It says these models may be able to figure out the flaws in existing models like GPT-4 and suggest technical improvements; in other words, self-improving AI. So essentially what they're probably going to use this for is creating more synthetic data for training better models and systems in the future, because, if you don't know, a lot of these AI systems have pretty much been trained on all the data available on the internet: all the books, all the textbooks, all the transcripts, pretty much everything there is. When you run out of that data, you're going to need more in order to scale up even further, so that is going to be quite fascinating to see, and self-improving AI, synthetic data, and Q* are all key parts of this.

Some people have also made this point, and I thought this Reddit post, which was going viral on Twitter, is rather fascinating to add, because it makes a really good argument. It says it's clear now that OpenAI has much better tech internally and is genuinely scared of releasing it to the public. The Voice Engine blog post stated that the tech is roughly a year and a half old and they are still not releasing it, and that tech is state-of-the-art: from 15 seconds of a voice plus text input, the model can sound like anybody, in just about every language, and it sounds natural. As for Microsoft committing $100 billion to a giant data center: for that amount of capital, you need to have seen it, meaning AGI, with your own eyes. And like I said, Microsoft is not going to put $100 billion on a punt, a blunder, an essentially speculative bet. $100 billion is a lot of money, and to put that into any kind of investment you need some hard proof that what you're doing is going to work.

The post also mentions Sam commenting that "GPT-4 sucks," which is a very interesting point. In a recent interview, Sam Altman did say that GPT-4 sucks, while so many people think that systems close to, or even slightly better than, GPT-4 are really amazing. That goes to show that if he thinks these systems suck, he has clearly seen some level of advanced reasoning and advanced systems that are ridiculously better than what we have today. The post also says: Sam told us that he expects AGI by 2029, but they already have it internally, and the five years is for them to talk to government and figure out a solution; we are in the endgame now, just don't die. I saw a quote-tweet by someone else making the same point: just don't die, because if they get AGI and artificial superintelligence, they're going to solve the aging problem completely. That's why many people are saying, right now: just don't die, we're in the endgame.

I do think it's going to be really fascinating, because $100 billion being committed by Microsoft, if they do fully commit and they're in talks for this, definitely means there is something they've seen that made them say: you know what, $100 billion, we can go ahead with this. And Sam Altman stating that he needs $7 trillion does suggest that the last bottleneck is probably just compute, which makes sense considering that 1.8 trillion parameters is potentially too small to map the human brain one-to-one.

In other news, some people have other opinions. Some say this is further indication that Microsoft and OpenAI are chasing the wrong architecture, considering they need $100 billion for millions of GPUs. The reason they say that, and I've said this before in a video, which is pretty funny, is that the human brain can basically run on a pack of Doritos. Obviously that's not literally what it runs on, it runs on actual human food, but the point is that it's so much more efficient than the amount of energy current AI systems use to achieve a lot less. So some people speculate that they're chasing the wrong architecture, but I would argue against this: you first build it, and once you've figured it out, then you can figure out how to make it smaller, and technical breakthroughs will come along the way. Either way, I think the future is going to be interesting, because we will settle this AGI debate: whether some people are wrong, whether there's going to be a new architecture, and maybe, just maybe, it's OpenAI that already has that new architecture and is already using it on certain advanced systems that they just want to scale up so they can get to AGI or ASI. It will be interesting to see how this works. Either way, if you have enjoyed the video, let me know what you think of Project Stargate, the insane plan to build a

### [25:00](https://www.youtube.com/watch?v=Ix_R6WTX04I&t=1500s) Segment 6 (25:00 - 25:00)

supercomputer to power artificial general intelligence and, most likely, artificial superintelligence. So with that being said, let me know what you think.

---
*Source: https://ekstraktznaniy.ru/video/14422*