# AI News - More People LEAVE OpenAI, Companies Try To Build God, We Need AGI, Vidu AI

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=oq0kWuYWX6I
- **Date:** 15.11.2024
- **Duration:** 25:27
- **Views:** 18,601

## Description

https://www.skool.com/postagiprepardness/about

0:00 Resignation News
1:25 Mission Concerns
2:20 Team Departures
3:11 AGI Development
3:58 Scaling Limitations
5:28 Model Challenges
6:22 Performance Issues
7:58 Industry Direction
9:27 Progress Trajectory
11:13 AGI Debate
12:05 Intelligence Ceiling
13:15 Tool AI
14:33 Religious Aspects
15:53 Max Tegmark
17:26 Tool Benefits
19:23 Data Centers
20:34 Future Infrastructure
21:38 Developer Integration
23:19 Video AI
24:52 Final Updates


Links From Today's Video:
https://www.theinformation.com/briefings/openai-discusses-ai-data-center-that-could-cost-100-billion-with-u-s-government?rc=0g0zvw 
https://x.com/RichardMCNgo/status/1856843040427839804 
https://x.com/hungrydonke/status/1856745983453139414 
https://x.com/modestproposal1/status/1856677219893928297/photo/1 
https://x.com/OpenAIDevs/status/1857129790312272179
https://x.com/tsarnick/status/1856087275299582153
https://www.reuters.com/technology/artificial-intelligence/openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11/ 
https://x.com/tsarnick/status/1856076482772381970 
https://x.com/tsarnick/status/1855170602384183442/video/1 
https://x.com/tsarnick/status/1856809539879981521 
https://www.youtube.com/watch?v=aL-faajCRH4

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

Music Used

LEMMiNO - Cipher
https://www.youtube.com/watch?v=b0q5PR1xpA0
CC BY-SA 4.0
LEMMiNO - Encounters
https://www.youtube.com/watch?v=xdwWCl_5x2s

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=oq0kWuYWX6I) Resignation News

So one of the most surprising stories in AI is that another individual from the OpenAI governance team has left. You can see Richard Ngo says: "After 3 years working on AI forecasting and governance at OpenAI, I just posted this resignation message to the Slack" (essentially their internal message board), adding, "Nothing surprising in it, but you should read it more literally than most messages. I've tried to say things I only straightforwardly believe." This brings up one of the consistent trends we've seen throughout 2024: people leaving OpenAI. It's rather fascinating because this person actually worked under Miles Brundage, and if you remember Miles Brundage, I did a video on him a few weeks ago, where he said, in short, that neither OpenAI nor any other frontier lab is ready, and the world is also not ready. So that's the guy who was stating that three weeks ago, and now someone else who was working with him has decided to leave as well. I think this is quite fascinating because it's much like what happened earlier, when we saw members of the Superalignment team disband one by one, and now we're starting to see the AGI Readiness team disband one by one. In this you can see

### [1:25](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=85s) Mission Concerns

that he says Miles's departure feels like a natural time to move on, and apparently there was no single main driver of the decision. What's striking is that this statement raises a lot of questions about what's going on at OpenAI. He says: "I still have a lot of unanswered questions about the events of the last 12 months, which made it harder for me to trust that my work here would benefit the world long term." On the surface that doesn't seem like much, but he's basically saying he has unanswered questions, and that he doesn't know if he can trust that his work on ensuring the world is ready for AGI will benefit the world long term, because maybe OpenAI simply isn't paying attention to what the team states in its advice, or OpenAI is just rushing ahead.

### [2:20](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=140s) Team Departures

Now, it's a pretty striking statement. It's not the first time we've seen this, but it is something to pay attention to. He says: "But I've also generally felt pulled to iterate more publicly with a wider range of collaborators on a variety of research directions. I plan to do mostly independent research." There's a lot being discussed here, but let me bring you one of the most important sentences. He says it's hard to convey how insanely ambitious OpenAI was in setting the mission of making AGI go well, but that while the "making AGI" part of the mission seems well on track, it feels like he and others have gradually realized how much harder it is to contribute in a robustly positive way to the "go well" part of the mission, especially when it comes to preventing existential risks to humanity. And that's partly because of

### [3:11](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=191s) AGI Development

the inherent difficulty of strategizing about the future, and also because the sheer scale and prospect of AI can easily amplify people's biases, rationalizations, and tribalism (himself included). He continues: "For better or worse, though, I expect the stakes to continue getting higher, so I hope that all of you will be able to navigate your and OpenAI's part of those stakes with integrity, thoughtfulness, and clarity about when and how decisions actually serve the mission." Overall, he's stating that the "making AGI" part of the mission seems well on track, basically saying that AGI is almost certainly going to be here soon. So he's essentially stating

### [3:58](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=238s) Scaling Limitations

here that trying to contribute in a positive way to the "go well" part of the mission is pretty hard, because of course there's existential risk to humanity. And the thing is, as AGI gets closer and closer to reality, I think people are going to change, and I think he's struggling with the fact that maybe he can't do his job effectively because he sees there isn't a clear path to that kind of solution, which is why he's leaving alongside his boss. That kind of makes sense, because I'm guessing the stakes are going to continue getting higher, and with higher stakes come higher risks, and with those risks come a lot of consequences. If you don't get this AGI thing right, a lot of people are going to get hurt and there's going to be a lot of fallout; it's a huge deal. Right now it does seem like we're in a small AI bubble, but trust me, this is going to be pretty insane once AGI is realized. So that marks another individual who has left OpenAI over concerns about making AGI go well. It will be interesting to see how this materializes, and whether the stories we're looking at right now get played back to us in the future when we look at how AGI was deployed, used, and realized. Now, in more OpenAI news, you probably already know about this, but OpenAI and others are seeking new paths to smarter AI, as the current methods of scaling AI are hitting limitations. The main takeaway from

### [5:28](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=328s) Model Challenges

this article, which I'm not going to spend too long on, is of course Ilya Sutskever's statement. Sutskever said: "The 2010s were the age of scaling; now we're back in the age of wonder and discovery once again. Everyone is looking for the next thing. Scaling the right thing matters more now than ever." Sutskever declined to share more details on how his team is addressing the issue, and it seems superintelligence is currently being worked on with a new approach. Sutskever basically stating that we're back in the age of wonder and discovery, and that everyone is looking for the next big thing, shows us there's a paradigm shift: the GPT-series models seem to have reached the limit of what we can get out of them simply by adding more data. So that approach,

### [6:22](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=382s) Performance Issues

where you just add more data and more compute to the models, seems to be petering out. But the other area, test-time compute, looks a lot more promising. We also got some information that apparently Opus 3.5 isn't performing as expected. You can see it says: "Similar to its competitors, Anthropic has been facing challenges behind the scenes to develop 3.5 Opus, according to two people familiar with the matter," and that after training it, Anthropic found Opus 3.5 performed better on evaluations than the older version, "but not by as much as it should, given the size of the model" and how costly it was to build and run, one of the people said. The thing most people don't understand is that every time these models are pre-trained, the labs are spending tens of millions of dollars on the training runs and the compute, and they need huge jumps in performance and capability to justify those costs to investors. That's one reason a lot of people are asking whether this AI bubble is going to burst: if these companies keep spending millions and millions of dollars but stop getting more out of the models, and it seems we may have hit that point of diminishing returns with the GPT series. I can see here that it says an

### [7:58](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=478s) Industry Direction

Anthropic spokesperson said the language about Opus was removed from the website as part of a marketing decision, to show only available and benchmarked models. On a podcast, which I'll reference in a moment, they actually asked whether Opus 3.5 would still be coming out this year, and the spokesperson pointed to Amodei's podcast remarks; in the interview, the CEO said Anthropic still plans to release the model but repeatedly declined to comment on the timetable. So basically, what's going on now is that they're trying to figure out ways to increase what you get out of these models, because, like I said, the GPT series seems to have hit a point of diminishing returns. Yes, these models are going to get better, but it seems they're only incrementally better. And one thing I've said is that while it's great that these models might perform better on certain benchmarks, why don't we look at what kinds of products we can create around these AI tools? I sometimes see models get a really nice increase on the benchmarks, but it often isn't clear what that translates into in real-world value. But for those of you who, like me, don't think AI is slowing down, take a look at what Dario Amodei actually says about the trajectory of human-level reasoning in AI models, and how, if you extrapolate the curve, in a few years these models are going to be above the highest professional level of humans.
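The extrapolation Amodei describes in the next clip (roughly 3% on SWE-bench at the start of the year, roughly 50% ten months later) can be sanity-checked with a few lines of arithmetic. This is purely illustrative, a naive straight-line fit of the quoted figures and not anyone's actual forecasting method, and it also shows why a linear trend cannot literally continue: it overshoots 100% within a year.

```python
def linear_extrapolation(p0: float, p1: float,
                         months_elapsed: int, months_ahead: int) -> float:
    """Project a benchmark score forward from two observations,
    assuming a constant rate of improvement."""
    slope = (p1 - p0) / months_elapsed  # points gained per month
    return p1 + slope * months_ahead

# Figures quoted in the clip: ~3% at the start of the year,
# ~50% on SWE-bench ten months later.
projected = linear_extrapolation(3.0, 50.0, months_elapsed=10, months_ahead=12)
print(round(projected, 1))  # 106.4: a straight line overshoots 100%
```

Because the naive line overshoots, the curve has to flatten as the benchmark saturates, which is presumably why Amodei hedges the 90% figure rather than extrapolating literally.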

### [9:27](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=567s) Progress Trajectory

"One of the reasons I'm bullish about powerful AI happening so fast is just that if you extrapolate the next few points on the curve, we're very quickly getting towards human-level ability. Some of the new models that we developed, some reasoning models that have come from other companies, are starting to get to what I would call the PhD or professional level. If you look at their coding ability, the latest model we released, Sonnet 3.5, the new or updated version, gets something like 50% on SWE-bench, and SWE-bench is an example of a bunch of professional, real-world software engineering tasks. At the beginning of the year, I think the state of the art was 3 or 4%. So in 10 months we've gone from 3% to 50% on this task, and I think in another year we'll probably be at 90%. I mean, I don't know, it might even be less than that. We've seen similar things in graduate-level math, physics, and biology from models like OpenAI o1. So if we just continue to extrapolate this in terms of skill, I think if we extrapolate the straight curve, within a few years we will get to these models being above the highest professional level in terms of humans. Now, will that curve continue? You've pointed to, and I've pointed to, a lot of possible reasons why that might not happen, but if the extrapolation continues, that is the trajectory we're on." So this next portion of the video is going to be rather fascinating, because there are three clips I want to string together that bring to your attention one of the most interesting points about building AGI. We all know that AGI is roughly human-level reasoning, and after that you get to superintelligence, which is some level above that. But some

### [11:13](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=673s) AGI Debate

are beginning to argue that we don't need artificial superintelligence, or even AGI, to achieve the rewards of AGI; we just need very specific or narrow AI, in a sense that limits the risks from artificial superintelligence. And I somewhat believe what they're saying. I'm going to preface it with this clip, where Dario Amodei basically talks about how human intelligence is not the ceiling of intelligence; there's actually a lot of room at the top for AIs to get smarter. I think this makes sense, because if we thought we were the smartest things that will ever exist and nothing could ever be smarter than us, that would be pretty arrogant, considering it was only a few hundred years ago that we didn't know there were mini worlds inside of us that we need microscopes to see, and we thought we were the center of the universe. There are a billion different things we continue to find out that constantly surprise us. My instinct would be

### [12:05](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=725s) Intelligence Ceiling

that there's no ceiling below the level of humans, right? We humans are able to understand these various patterns, and so that makes me think that if we continue to scale up these models and develop new methods for training and scaling them, that will at least get us to the level we've gotten to with humans. There's then a question of how much more it is possible to understand than humans do, how much it is possible to be smarter and more perceptive than humans. I would guess the answer has got to be domain-dependent. If I look at an area like biology (and I wrote this essay, Machines of Loving Grace), it seems to me that humans are struggling to understand the complexity of biology. If you go to Stanford or Harvard or Berkeley, you have whole departments of folks trying to study, say, the immune system or metabolic pathways, and each person understands only a tiny part of it, specializes, and struggles to combine their knowledge with that of other humans. And so I have an instinct that there's a lot of room at the top for AI to get smarter. And so

### [13:15](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=795s) Tool AI

this is where we get the fundamental debate from Max Tegmark, who basically argues that building AGI is unnecessary right now, and that all the benefits of AGI can be achieved by tool AI, which can be controlled safely. I personally agree to an extent. What is the need to build an autonomous AI system that has memory, is able to improve itself, and is able to do a ridiculous range of different things? I'm not sure there's an actual benefit there. But building, say, an AI for medicine, an AI for autonomous driving, an AI for accounting, AIs that are really specific to certain use cases: I think that makes sense, considering it lowers the risk of certain catastrophic events. And you have to remember, all it takes is one catastrophic event for an entire industry to change. Remember the travel industry before the terrible thing that happened on 9/11: it was a lot more relaxed, with far fewer regulations, but now it's incredible how strict the rules are for air travel. It's quite likely that if something terrible happens in AI, it will only take one example of it happening to result in massive worldwide change. And this is essentially

### [14:33](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=873s) Religious Aspects

what Max Tegmark is saying: we're building something that is fundamentally unnecessary. And I do remember there was one clip, which I'll show you right now, where someone is basically asking why humans are so obsessed with building God. "Do you think there's a bit of a Messiah complex as well?" "Absolutely, yeah, absolutely. I think you see it a lot in the San Francisco Bay Area. There are people who have latched onto this idea of building AGI and are using it to picture themselves as messiahs, as you say. Personally, I see creating AGI as a scientific problem, not a religious quest. And this is often merging together with the idea of eternal life, by the way, which is of course very natural, because the story in most religions is always about this combination: that if you create AGI, it will make you live forever, pretty much. So it's this very religious idea, and it has become this religious quest to get there first, and whoever gets there first will become as gods." And I included this clip

### [15:53](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=953s) Max Tegmark

because I've actually seen an increasing number of comments in the comment section talking about how these people are trying to build God, that there's some kind of religious complex, and I'd love to know your thoughts on this, because it's something that is widely under-discussed when it comes to AGI, ASI, and the implications of that kind of future. Then Tegmark: the question is whether we're trying to build a bunch of increasingly awesome tools. By definition, a tool is something that the user can control. You want your car to be as powerful as possible, but you don't want to lose control over it, and in the same way, I feel strongly that we humans want to build AI that is as powerful as possible without losing control over it; that's what's going to benefit us. And if you listen to someone like Geoffrey Hinton warning that we're going to lose control if we build AGI, he's not thinking of this as just another little technology like the internet; he's thinking of it as building a new species that's way smarter than us. Robots building robot factories: it's trivial to see how you could lose control over that. Now, I think rushing in that direction before we've figured out how to control it is a really bad idea, incredibly dumb, and it's also, interestingly, unnecessary, because I've been going around here talking to people about all the things they're excited about with AI, and it's all tool AI. Someone wants to cure cancer, someone wants to turn CO2 into jet fuel, et cetera. This is tool AI. We don't need AGI to have all

### [17:26](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=1046s) Tool Benefits

these great benefits. The most basic safety standard, the red line, is obviously that the tool should be a tool. If the company can't convince the experts that they're going to keep control over their AGI, well, come back when you can, buddy. And in the meantime, we can innovate and do all this other great stuff without this specter of loss of control hanging over us. Tool AI can save up to a million lives per year on the roads of the world by preventing accidents, without AGI. Tool AI can save even more lives in hospitals, without AGI. Tool AI can give us almost free, amazing diagnosis of prostate cancer, lung cancer, eye diseases, you name it, without AGI. Tool AI can help us fold proteins and develop amazing new medicines, and even win you the Nobel Prize, without AGI. Tool AI can give us great improvements in pandemic prevention, in reducing power consumption, in improving and democratizing education, and in transforming basically every other sector of the economy, without AGI. And tool AI can help us accomplish the United Nations Sustainable Development Goals much faster, without AGI. So no, AGI is not necessary. And for those of you wondering whether they are likely to go ahead with this anyway, it's quite clear these companies are going ahead with building these mega AI systems. You can see that OpenAI recently discussed an AI data center that could cost $100 billion, and it said Wednesday that it shared information with US government officials about how to build a data center for artificial intelligence that is five times larger than any data center currently being developed. And the AI firm's top

### [19:23](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=1163s) Data Centers

policy executive, Chris Lehane, said that this data center is going to require 5 GW of energy, or enough power to power five cities the size of UT, which is absolutely insane. And if you aren't familiar with why this is a story, this is essentially something that resembles Project Stargate. If you aren't familiar with Project Stargate, I believe it's a $125 billion supercomputer cluster, basically something that's going to be built for AGI/ASI: essentially the infrastructure we're going to need to run the economy on AGI/ASI, which is going to result in unparalleled economic development for the United States. That's pretty crazy. Now, you might be thinking this isn't going to happen at all, that Project Stargate is something that just won't happen, but I'm not going to lie, I do think it will when we look out into the future. I remember I recently reported on an article where Trump said he would take down some of the things stopping or hindering extreme AI development, so all of Joe Biden's AI policies that were meant to regulate AI are going to be

### [20:34](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=1234s) Future Infrastructure

scrapped, and we're going to see a lot more innovation. So I do wonder whether the US government is essentially going to build this, and how they're going to start building it. If we look at 2023 alone, we can see that the United States spent $916 billion on the military, and I guess you could argue that the Stargate project is essentially a military project, because when we think about what AGI/ASI is going to be able to do, it will of course produce ridiculous strategies and be able to defend the country. And if that's the case, we all know the United States is not going to slow down when it comes to spending a ridiculous amount of money on this project. So it does seem quite likely that the data center buildout is going to start. Now, for those of you who are developers, we also got something really fascinating: ChatGPT integrated into VS Code, Xcode, Terminal, and iTerm 2. This is really cool, because it essentially gives you more fluidity in the projects you're working on. Today I

### [21:38](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=1298s) Developer Integration

want to show you a quick look at an early feature in the ChatGPT app for macOS: the ability to work directly with the applications on your computer. So imagine I'm building an iPhone app and I have my Xcode project open here. Previously I would have had to copy my code back and forth between Xcode and ChatGPT. Well, now, with this new integration that I enabled, I can click the Xcode button here and ChatGPT can immediately see the Swift code I'm working on. Check out this example: OpenAI o1 first created this entire app from scratch to track the ISS and astronauts in space in real time. But say I want to add a new feature to this app. I can simply write: "Add a new screen in the center with the live stream." Now ChatGPT has the context of my existing Swift code and starts suggesting the changes. That's done, so I can just jump back in and update the code. Let's build the app: Command-R. Great, it's pretty cool: the live stream is right there, with even a fancy icon for it, and I could keep going and adding new features, but for now let's ship this update. I'm going to switch to the terminal and now ask ChatGPT to work with my terminal. I'm going to tell ChatGPT to help me push this to GitHub. Now, given that ChatGPT has the context of the two apps, Xcode and the terminal, ChatGPT can help write commit messages based on the work we've just done. It can also help troubleshoot errors if you have any, or install missing dependencies, based on anything it sees in your terminal output. And that's it: ChatGPT helped me iterate on this app. It's like having a pair programmer by your side, and you're ready to ship the update. We're always working on additional ways to make ChatGPT more useful for developers. Personally, I would love, for instance, for ChatGPT to go even further: showing me the diffs, writing the files, or potentially letting me talk out loud about the features I'd like to add. These are things we'll continue to explore. We hope you find this update useful, and stay tuned for

### [23:19](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=1399s) Video AI

more. Now, we also got something pretty interesting that flew under the radar. Since there are a variety of different AI video tools, of course you're going to miss some of the advancements, but we did get Vidu, a multimodal AI company that launched Vidu 1.5. What was interesting wasn't the coherence or the quality; it was the fact that Vidu has multi-entity consistency, which means that if you have certain elements you want in a scene, you can include them with a remarkable level of consistency. Let me show you exactly what I mean. So, for example, if we go onto Vidu's multi-entity consistency page, you can see that this user has input these things: they've put in this person along with this bike, and then you can see that the AI-generated video uses exactly those entities to create something. I think this is really effective for those of you who want extra control over certain scenes. I know there are quite a lot of different things you'd want to control when making your AI videos, and this is something that changes the game in a way I haven't yet seen in other video models. Considering this was overshadowed by the slew of articles talking about the AI slowdown, I thought it was worthwhile to mention in this video. Now, this gives me time to plug my Skool community, in which we have over 200 members using AI to generate an income. It's actually really cool; we're

### [24:52](https://www.youtube.com/watch?v=oq0kWuYWX6I&t=1492s) Final Updates

basically using AI to automate various solutions that are integrated into businesses. I'm going to have a lot of tutorials coming on my second channel, and some of the members are using AI to create things like Star Guardians, the world's first AI space epic. There's a lot in that community that allows you to automate these kinds of processes completely, with tools like make.com and other AI agent solutions that I'm going to be teaching. I'll leave a link in the video, and you'll see that you can now start with a simple free trial, so if you don't like it, you can just cancel your free trial and go about your day.
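Circling back to the developer-integration demo earlier: the terminal side of it boils down to reading context (for example, a staged git diff), asking a model to draft a commit message, and then running the git commands. Here's a minimal Python sketch of that loop; `draft_commit_message` is a hypothetical stub that just lists changed files, standing in for the model call, and none of this reflects OpenAI's actual implementation.

```python
import subprocess

def staged_diff() -> str:
    """Read the staged diff from git; return '' if git is unavailable
    or we are not inside a repository."""
    try:
        out = subprocess.run(["git", "diff", "--staged"],
                             capture_output=True, text=True, check=True)
        return out.stdout
    except (OSError, subprocess.CalledProcessError):
        return ""

def draft_commit_message(diff: str) -> str:
    """Stub summarizer: list the files touched in the diff.
    A real integration would send the diff to an LLM instead."""
    files = sorted({line.split(" b/", 1)[1]
                    for line in diff.splitlines()
                    if line.startswith("diff --git ")})
    return "update " + ", ".join(files) if files else "chore: no staged changes"

if __name__ == "__main__":
    print(draft_commit_message(staged_diff()))
```

Run inside a repository with staged changes, this prints a one-line message naming the touched files; swapping the stub for a real model call is the part the ChatGPT desktop integration automates.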

---
*Source: https://ekstraktznaniy.ru/video/13741*