# HUGE AI NEWS: AGI Benchmark BROKEN, OpenAI's Agents Leaked, Automated AI Research And More

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=08qjn26EwZw
- **Date:** 20.08.2024
- **Duration:** 26:30
- **Views:** 64,099

## Description

Prepare for AGI with me - https://www.skool.com/postagiprepardness 
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Check out my website - https://theaigrid.com/


00:00 - Breaking AI news you might have missed
00:23 - The rise of AI scientists: Game-changer or hype?
01:45 - AI flood incoming: Are we ready?
02:53 - OpenAI's secret project leaked?
06:05 - Text-to-video revolution on the horizon
07:51 - Benchmark bombshell: AGI closer than we thought?
12:19 - AGI Day drama: Experts clash on AI's future
15:41 - Is AI progress slowing down? Controversial claim
17:29 - Debunking the AI skeptics: What they're missing
19:38 - The real state of AI: Surprising insights
21:58 - DeepMind CEO drops major hints about next-gen AI
23:54 - OpenAI's roadmap decoded: What it means for you
26:12 - Mind-blowing conclusion on AI's true potential

Links 
https://www.kaggle.com/competitions/arc-prize-2024/leaderboard 
https://sakana.ai/ai-scientist/ 
https://x.com/tsarnick/status/1824608537475158519 
https://x.com/fchollet/status/1809439709363597547 
https://x.com/tsarnick/status/1823828996540555554 
https://lumalabs.ai/dream-machine/creations 
https://bigthink.com/the-future/arc-prize-agi/ 


Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=08qjn26EwZw) Breaking AI news you might have missed

So there was some really interesting news published only a few hours ago, which makes this an interesting one, because this is news you might not have heard. Let's get into all the details for today's summary of the last few days in AI. One thing I actually did cover before was, of course, Sakana AI's "Towards Fully Automated Open-Ended Scientific Discovery." This was

### [0:23](https://www.youtube.com/watch?v=08qjn26EwZw&t=23s) The rise of AI scientists: Game-changer or hype?

something pretty insane, because what it means is that we're moving towards recursive self-improvement territory: systems that can literally do AI research with current models. Basically, an AI does research, the AI improves, and the improved AI does better research, which can improve itself even faster; that's the entire cycle. I know I made a video on this already, and there's a reason I'm bringing it up again, because I usually don't cover stories twice, but I'll explain why. The announcement says: "We're excited to introduce The AI Scientist, the first comprehensive system for fully automatic scientific discovery, enabling foundation models such as LLMs to perform research independently." This is pretty crazy, because if you can have LLMs performing research independently, the theory is that you can spin up many instances of these LLMs, all generating papers: paper after paper after paper. Even now there's a flood of papers hitting arXiv; the number of papers published to arXiv has gone essentially straight up on a graph, which is already pretty crazy. But when we have

### [1:45](https://www.youtube.com/watch?v=08qjn26EwZw&t=105s) AI flood incoming: Are we ready?

actual AI scientists automating that, the graph is going to go straight up into the sky, because of the sheer amount of research that will get done; it could be a new paper every hour, or every second. How crazy is that going to be once we have systems that are efficient, fast, and scalable? Really crazy. You can see their entire methodology here: generate a plan, then a novelty check, which is basically where they verify the idea hasn't been done before, then they score the idea, run the experiments, all of that, and at the end they have LLM paper reviewing. So they've got a comprehensive process for working through it from start to finish. I think what we're going to see is people building on top of this, because, like I said in the previous video, it's open source, so if you want to experiment with it, you can. I'm wondering whether other people and organizations will build on top of it. They also spoke about the bloopers and the future implications.
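The stages described above (plan, novelty check, scoring, experiments, LLM review) can be sketched as a loop. To be clear, every function name below is a hypothetical stand-in, not Sakana's actual API; this is just the shape of the pipeline with placeholder implementations.

```python
# Minimal sketch of an AI-Scientist-style loop. All function names are
# illustrative stand-ins; in the real system each stage is backed by an LLM.

def generate_idea(history):
    # The real system has an LLM propose a research idea; we return a
    # numbered placeholder with a fake novelty score.
    return {"title": f"idea-{len(history)}", "novelty_score": 0.5 + 0.1 * len(history)}

def is_novel(idea, history):
    # Novelty check: reject ideas whose title was already explored.
    return idea["title"] not in {h["title"] for h in history}

def run_experiments(idea):
    # Stand-in for writing code, running it, and collecting metrics.
    return {"idea": idea["title"], "metrics": {"loss": 0.1}}

def review_paper(idea, results):
    # Stand-in for the LLM reviewer scoring the written-up paper.
    return {"paper": idea["title"], "review_score": 7}

def ai_scientist_loop(n_ideas=3, min_novelty=0.5):
    history, accepted_papers = [], []
    for _ in range(n_ideas):
        idea = generate_idea(history)
        if not (is_novel(idea, history) and idea["novelty_score"] >= min_novelty):
            continue  # skip duplicates and low-novelty ideas
        history.append(idea)
        results = run_experiments(idea)
        accepted_papers.append(review_paper(idea, results))
    return accepted_papers

print(len(ai_scientist_loop()))  # 3 placeholder "papers"
```

The interesting design point, and why people might build on top of the open-source release, is that each stage is swappable: a better novelty checker or a stricter reviewer slots in without changing the loop.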

### [2:53](https://www.youtube.com/watch?v=08qjn26EwZw&t=173s) OpenAI's secret project leaked?

The crazy thing, and this is the juicy part, is that while the research itself wasn't exactly groundbreaking breaking news, since these LLMs can really only conduct preliminary research, there was also a "leak." I say leak, but it's not really a leak: there were some screenshots of some OpenAI subdomains. I wanted to talk about this earlier but didn't want to just drop it randomly into a video, and I've found the time now. What we can see is a tweet from Jimmy Apples on August 13th, 2024 (this is not the very recent news; that comes later in the video). He says: "In a way this trolling is rather funny. It puts OpenAI in an awkward spot with all these expectations and happens to balance between patience and hype." Essentially these are some internal OpenAI links, and you can see one that reads something like "scientist internal" on an OpenAI domain.

Now, at first glance these dates look quite old: some appear to be from March, some from January. I think Jimmy Apples tweeted this again because maybe they're working on an internal scientist, or an internal assistant, or a leaderboard to measure how good these scientists are. Is this something being worked on continuously? I'm not entirely sure, but I do think it shows this kind of thing is in the works. OpenAI is extremely secretive, and what we do know is that with current models we're essentially always behind: they could have a finished product right now that we won't see for something like 18 months, because of the delay introduced by safety testing, red teaming, post-training, and everything else involved in shipping an AI system. On top of that, OpenAI turns these models into products. OpenAI is a product-driven company, not just a research organization, although research is a large part of what they do; a lot of people forget that, and it's why ChatGPT is so nice to use and the UI is so polished.

Now, I have to correct myself on the dates: Americans swap the date order around, and I actually did get this wrong, apologies, I usually don't get things wrong. These are in fact current dates, so this was updated recently. What I meant to say is that it looks like OpenAI is probably working on this now. Of course, links alone don't really mean anything, and you don't want to do too much speculation, but "scientist internal," "health scientist internal," an "assistant API scientist": these could be real things. On the health scientist front, we do know that

### [6:05](https://www.youtube.com/watch?v=08qjn26EwZw&t=365s) Text-to-video revolution on the horizon

LLMs are really effective at diagnosing health conditions and providing that kind of support, and recently we've seen amazing research there. If you want an idea of what a health scientist could look like, Google's Gemini work has been doing amazing things, scoring around 90% on a range of health benchmarks, which is pretty crazy. Considering OpenAI has usually been ahead of everyone, and considering these dates are fairly recent, have been updated, and in some cases look like future dates (I'm not saying they're publication dates, but some are a couple of months out; one is October 29th), I'm not entirely sure what this is. If I had to deduce it, I'd guess a range of new agents OpenAI is working on, but again, that's pure speculation. I do think this stuff is coming regardless, if not from OpenAI then definitely from Google, because they've been working on it too; OpenAI is just a bit more secretive. I apologize for dwelling on this so long, but I think it's definitely coming, and it's probably what OpenAI has been working on; honestly, they're probably working on a lot of things.

Something most people missed as well: Luma Dream Machine 1.5 is a new text-to-video model launching next week. There have been some examples; people have been showing what it can do. The reason people like Luma Dream Machine is that the model is free for the most part, and a lot cheaper than other models. What it lets you do that most others don't is specify a start image and an end image, so you get a lot more controllability, and Luma managed to roll this out before a lot of other companies rolled out theirs. So this is

### [7:51](https://www.youtube.com/watch?v=08qjn26EwZw&t=471s) Benchmark bombshell: AGI closer than we thought?

going to be something that, once again, changes the space, because when you have models that are really cheap, near free in a sense, we'll start to see an explosion of this kind of content. Whether you think that's good or bad, I do think it's coming, and it's something most people will have to deal with; there's no point rejecting technology that's here to stay.

Now for some pretty big news, and this is really what I wanted to talk about: four hours ago there was a new ARC-AGI high score of 46%. Remember, this is the benchmark to track if you want to know where frontier models actually are in terms of reasoning, because ARC-AGI doesn't measure intelligence the way traditional benchmarks do. The 85% score is the human baseline, and what the benchmark tries to measure is how well you can reason about problems you haven't seen before. That might sound a little strange, so let me explain. This is an article Big Think did with François Chollet, the person who invented the benchmark, and in it he talks about how LLMs are trained on massive troves of text, mostly pulled from the internet, so it's likely that the same questions used to evaluate a model were included in its training data. This is what we call contamination: if you train a model on all the text on the internet, coming up with a completely new question is going to be pretty difficult given the range of questions that already exist in current benchmarks. As the article puts it, at best this is tipping the scales, and at worst it's letting models simply regurgitate answers rather than perform any sort of human-like reasoning, which is reasoning about things you genuinely haven't seen before. Imagine taking a test where you've already seen the answers: that's not taking a test, it's pure memorization. And because AI developers typically don't release details of their training data to people outside their companies, those trying to prepare for the supposedly imminent arrival of AGI can't tell whether data contamination is affecting results. There are research results showing that benchmark scores fall dramatically when questions are slightly reworded or come entirely after the training data's cutoff date, so this is a real issue in some cases.

Chollet's current belief is that all current AI benchmarks can be solved purely via memorization; memorization is useful, but intelligence is something else. In the words of Jean Piaget (apologies if I butcher the pronunciation), intelligence is what you use when you don't know what to do: it's how you learn in the face of new circumstances, how you improve and adapt, and how you pick up new skills. That's the kind of reasoning we want to see. In 2019 Chollet published a paper describing a deceptively simple benchmark for evaluating AIs: the Abstraction and Reasoning Corpus, which is the ARC benchmark. And here's where we get to the juicy bit. Initially the best AIs could only solve about 20% of ARC tasks, and by June 2024, only a month and a half or so ago, that had increased to 34%, still well short of the 85% human baseline. But this is what I'm saying: when they wrote this article a few days ago, the score was 34%, and today we got a jump to 46%. The reason I'm talking about this is that once this benchmark gets to around 85%, whatever method gets it there is most likely going to be scaled up on top of existing LLMs and used for reasoning. What this does is shift current models away from pure scale and compute, which is how we've previously gained ground on benchmarks, and towards reasoning techniques: things like chain of thought, and neurosymbolic AI such as tool use. You can see right here that Chollet basically says OpenAI set progress towards AGI back by five to ten years, and the reason he says that is that in his view focusing on LLMs is a dead end. The purpose of the ARC Prize, as he says, is to redirect more AI research focus towards architectures that might lead us towards AGI, which is essentially what we want to do
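To make the "reasoning about unseen problems" point concrete, here is a toy illustration of the ARC task shape. This is not a real ARC task, and the three-rule hypothesis space is my own simplification; real ARC solvers search a vastly larger space of grid-transformation programs. But it shows why memorization doesn't help: the solver has to infer the rule from a couple of examples and apply it to an input it has never seen.

```python
# Toy ARC-style task: input/output grid pairs demonstrate a hidden
# transformation; the solver infers it and applies it to a fresh input.

train_pairs = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),  # each row mirrored
    ([[4, 5, 6]],      [[6, 5, 4]]),
]
test_input = [[7, 8], [9, 0]]

# A tiny hand-written hypothesis space (real solvers search programs
# over many grid operations, colors, symmetries, object groupings...).
CANDIDATE_RULES = {
    "identity":      lambda g: [row[:] for row in g],
    "flip_vertical": lambda g: [row[:] for row in reversed(g)],
    "mirror_rows":   lambda g: [row[::-1] for row in g],
}

def infer_rule(pairs):
    # Return the first candidate rule consistent with every train pair.
    for name, rule in CANDIDATE_RULES.items():
        if all(rule(x) == y for x, y in pairs):
            return name, rule
    raise ValueError("no candidate rule fits the training pairs")

name, rule = infer_rule(train_pairs)
print(name, rule(test_input))  # mirror_rows [[8, 7], [0, 9]]
```

The human baseline of 85% reflects that people do this kind of inference naturally; a model that has merely memorized benchmark answers has nothing to retrieve here.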

### [12:19](https://www.youtube.com/watch?v=08qjn26EwZw&t=739s) AGI Day drama: Experts clash on AI's future

And he says right here that LLMs have essentially sucked the oxygen out of the room: everyone is doing LLMs, and LLMs are not going to lead to AGI. Now, I do think LLMs are still an important part of AGI, an important piece of the whole thing, but this matters because a jump like this is insane, especially on this benchmark, and it's a competition anyone can enter. I'll be intrigued to see whether public companies like OpenAI and Google try to crush it. You can also see it says that leading AGI research lab DeepMind is implementing nearly identical techniques to what we're seeing at the top of the ARC Prize leaderboard: test-time fine-tuning plus inference-time search. AlphaProof, which was closed source, is the mathematics Olympiad system where Google managed a silver medal; as they put it, "in contrast, natural-language-based approaches can hallucinate plausible but incorrect intermediate reasoning steps and solutions, despite having access to orders of magnitude more data. We established a bridge between these." That silver medal at the mathematical Olympiad was remarkably impressive, but the point is that DeepMind is using methods similar to those on the ARC Prize leaderboard, not because they copied the leaderboard, but because they're taking similar approaches rather than just scaling up LLMs and throwing money at the problem. The point I'm trying to make, guys, is that people are actually looking in this direction: it might not be test-time fine-tuning specifically, and it might not just be search, but those are things we've already seen lead to superintelligence in other areas, like AlphaGo. This is a different level of reasoning, not just prompting an LLM with a chain of thought; it's a different way of thinking about the problem, and it's really what's going to drive the vast majority of progress in the space. That's really important news, and it didn't even get much coverage, but I think it's the right direction for actually getting to AGI. Chollet said some things about this earlier this year: solving ARC isn't equivalent to solving AGI, and the first ARC solver won't be an AGI, but until we solve ARC we won't definitively have AGI, since current AIs cannot simply adapt to tasks they haven't seen before. Solving this benchmark means figuring out how to make systems adapt on the fly to novel tasks, and that will be a major milestone on the way to AGI. That's why this is so important.

Now, with all this talk about AGI, this brings me to something else that happened this week: AGI Day, an event where loads of AI speakers talked about the future of AI and what they thought. I wanted to address it because there's a lot of debate going on right now about where the future of AI is headed. In a recent video, Gary Marcus spoke, presenting what he calls evidence that the scaling of AI capability has slowed and that we haven't seen any significant improvement in AI models since GPT-4 was trained in August of 2022. I disagree with this, and I'm going to explain why, and why the future is about to get really

### [15:41](https://www.youtube.com/watch?v=08qjn26EwZw&t=941s) Is AI progress slowing down? Controversial claim

crazy. But let's take a look at the clip. "They're all saying there's room left for great improvements. Talk is cheap, but we haven't actually seen anything significantly better than GPT-4. When was GPT-4 actually trained, as opposed to released? Turns out it was August of 2022; this has been well documented. They showed it to Bill Gates, and it kind of changed the world. But in two years we haven't really gotten much better. So I'm going to show you a graph. I took the liberty of trying to extrapolate a curve, because every day some AI influencer, a.k.a. grifter (I didn't say that out loud, I just thought it), shows you an exponential, or rather doesn't show you an exponential, they're all innumerate, but says: wow, I can't believe what came out this week, there have been exponential improvements, it's amazing. So I thought I'd actually plot a curve. I just did it by eye, I didn't do the math, but looking at PaLM and Chinchilla, which were the state of the art in April of 2022 (this is relative to release dates), up to when GPT-4 came out, you could genuinely make the argument that we saw exponential progress over that period. It was pretty amazing what happened; this is all since I spoke here three years ago, and there was a lot of progress. You could argue GPT-4 was kind of an outlier, but there was a lot of progress in the period leading up to GPT-4. Is it continuing, though? This is where the curve should go, roughly speaking (there's a little glitch we can talk about), but in reality scaling has slowed. Here's the full curve of all the data. If you're a scientist, if you know Bayes' theorem, if you know how to aggregate data, whatever statistical technique you want, you can look at this; each data point is a test

### [17:29](https://www.youtube.com/watch?v=08qjn26EwZw&t=1049s) Debunking the AI skeptics: What they're missing

of the hypothesis that scaling would continue at the pace it did from PaLM and Chinchilla to GPT-4, and it is obvious, you don't even need to run a statistic, though you could if you like, that scaling has slowed, that we are not in fact still on that exponential regime." Now, I'm not trying to be a doomer or to stir up any kind of debate, but there's simply one thing wrong with this entire graph: as you can see on the right-hand side, it's based entirely on MMLU five-shot scores. The thing is, as scores improve, each percentage point gets harder: you can't gain exponentially from 87% to 90%, no matter what you do, and even if you got to 100% you could still argue there's no exponential growth there. So I don't understand why Gary Marcus, as intelligent and accomplished as he is, still has this perspective where, with almost any AI update, he focuses only on the bad things. A lot of people who follow AI updates do this: oh, text-to-video is amazing, but it doesn't get the fingers right; oh, it gets the fingers right, but it gets this other thing wrong. You're simply negating everything else it also does. What I also want to point out is that yes, there was a huge jump from GPT-3.5 Turbo to GPT-4, but the plotting here is wrong, because when we look at actual release dates (and I don't know why MMLU is even being used here), ChatGPT with GPT-3.5 was actually released in late 2022, so it should sit here, and GPT-4 was released in March 2023, which is here, so you could still say there's an exponential. Now, I'm not discounting other companies, but as for all these other models (the chart does correctly label some as open-weight models, and I'll show you another video in a moment, a new interview with Demis Hassabis about the future), what we can see is that models like Gemini Ultra, Gemini 1.5 Pro, Llama 3, and Claude 3 are a cycle behind. That's what you have to remember.
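The saturation point above, that a jump from 87% to 90% can't look "exponential" on a capped benchmark, can be made concrete: as scores approach the ceiling, absolute gains shrink by construction, even when the relative reduction in remaining error stays healthy. The scores below are illustrative numbers, not actual MMLU results.

```python
# Near a benchmark's ceiling, raw score deltas shrink by construction,
# so a plot of scores over time will always flatten. Measuring the
# *relative* reduction in remaining error tells a different story.
# Scores here are illustrative, not real MMLU data.

def relative_error_reduction(old_score, new_score, ceiling=100.0):
    old_err = ceiling - old_score
    new_err = ceiling - new_score
    return 1 - new_err / old_err  # fraction of remaining error removed

# 70 -> 87 and 87 -> 90 differ hugely in absolute points (17 vs 3),
# but far less so as fractions of the remaining error:
print(relative_error_reduction(70, 87))  # ~0.567: 57% of errors removed
print(relative_error_reduction(87, 90))  # ~0.231: still 23% of errors removed
```

Note also that if the benchmark itself contains mislabeled questions, the effective ceiling is below 100, which squeezes the visible headroom even further.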

### [19:38](https://www.youtube.com/watch?v=08qjn26EwZw&t=1178s) The real state of AI: Surprising insights

Before this entire cycle, the only models that were truly competitive were the ChatGPT models; they were levels ahead, and you have to understand that this makes sense. You could say scaling has slowed if we got the next iteration of frontier models, something like Claude 3.5 Opus or the next Gemini, looked at them, and found no new capabilities: the reasoning still off, the hallucinations still there. Then, yes, you could say it. But right now we still haven't gotten that next wave of AI, because the Claude 2 systems were still around GPT-3 level, and the Claude 3 models were just on par with GPT-4. So I wouldn't say scaling has slowed; I would say that if it's mid-2025 and the benchmarks still haven't gone up, then maybe. And I don't think MMLU is a good benchmark anyway, because if you know anything about it, there are errors in the benchmark itself, which means 100% is technically impossible while those question errors remain; I think this benchmark is on its way out regardless. Right now Claude 3.5 Sonnet is the leader anyway, so once Claude 3.5 Opus and other new iterations arrive, saying things like "scaling has slowed" just doesn't make sense. If anything, these companies are slowing down their releases: we got GPT-3.5, then six months later GPT-4, and considering we're now scaling these systems up in terms of hardware and GPUs, you have to expect a longer time frame between the next generation of models. What we've actually seen is that yes, models are converging, but they're converging around the current point at their current size. We've also seen models get a lot more efficient, faster, and cheaper; we've seen context windows expand; we've seen reasoning improve; we've seen so many different things. Negating all of that and saying scaling has slowed, that it's an AI winter, that the AI hype is dying down... I would just say always pay attention to the data, because this stuff is absolutely incredible. If you look at the period from GPT-3.5 in late 2022 to where we are now, that's not even two complete years, and people are saying scaling has slowed. Look back at when ChatGPT was released versus the state of the AI space now; it's absolutely incredible, so "scaling has slowed" is quite the statement. Anyway, there's something else I wanted to show you, which is really important, and it

### [21:58](https://www.youtube.com/watch?v=08qjn26EwZw&t=1318s) DeepMind CEO drops major hints about next-gen AI

refers to what we discussed just a moment ago. Take a listen to Demis: "I think that's the next era: these more agent-based systems, we would call them, or agentic systems, that have agent-like behavior. Of course, that's what we're expert in; that's what we used to build with all our game agents, AlphaGo and all the other things we've talked about in the past. So a lot of what we're doing is marrying that work, which we're, I guess, famous for, with the new large multimodal models, and I think that's going to be the next generation of systems. You can think of it as combining AlphaGo with Gemini." "Because I guess AlphaGo was very good at planning?" "Yes, though of course only in the domain of games, so we need to generalize that into the general domain of everyday workloads and language. In two, three, four years' time, especially when you start getting agent-like systems or agentic behaviors, then I think if something's misused by someone, or perhaps even a rogue nation state, that could be serious harm." AlphaGo and those kinds of systems are going to be some of the most important areas, and what we've already seen, which is why I said this is really important news, is how Google is changing its approach, now focusing on those other systems that are getting them to really incredible results, like that silver medal at the Olympiad, something many people didn't think would happen this soon; predictions were that a bronze medal might come in 2026 at the earliest, and instead it's a silver medal in 2024, which is crazy. So Google stating they're moving to these methods, the same methods moving the ARC-AGI benchmark, leads me to believe that if this is the publicly available information, and considering OpenAI has historically been one to two years ahead of the general consensus on AI capabilities, AGI is probably just around the corner. And

### [23:54](https://www.youtube.com/watch?v=08qjn26EwZw&t=1434s) OpenAI's roadmap decoded: What it means for you

there's one last thing I'm going to leave you with, which is why I think we're still within a relatively short time frame of getting to weak AGI. By weak AGI I don't mean a complete AGI that can then develop superintelligence; I mean an AGI that's going to be rather effective, probably within the next two years. The reason I say that is OpenAI's published levels. When I first looked at this chart I didn't really take it in, because it's framed as stages of artificial intelligence rather than as an AGI chart: level one is chatbots, level two is reasoners, level three is agents, level four is innovators, and level five is organizations. When you really think about what's going on here, you could argue that level five is AGI-plus, near ASI level: an AI system that can literally do the work of an entire organization. Think about what Apple and companies like it have done; they've created trillions of dollars in value. An AI system that can do the work of an organization as big as Microsoft is a huge level. Level four, innovators, AI that can aid invention, is also AGI bordering on ASI. Now plot where we are: OpenAI arguably just recently more or less confirmed they're at the reasoners level, and given we've discussed that getting to agent systems that can take actions may be solvable with scale, I don't think it will take two whole years to reach that level. So I don't know about you guys, but when we talk about AI that can aid invention, that's strong AGI bordering on ASI; organizations-level AI that can invent is really strong AGI or ASI; and agents that can take actions is basically AGI with really good long-term planning. Level three is probably going to be weak AGI, and that's not far from where we are. It's going to be interesting to see what happens, but I do not think this space is slowing down by any means; if anything, things are speeding up. The only problem is that a lot of companies are now private about what they're doing, so I think that you

### [26:12](https://www.youtube.com/watch?v=08qjn26EwZw&t=1572s) Mind-blowing conclusion on AI's true potential

know, with these levels and everything, I do wonder whether we're going to get advance notice and that kind of thing, because we're dealing with really powerful technology here. But let me know what you guys think: what do you think the most important piece of AI news is? I'm going to be doing another video later, so subscribe for that, and if you did enjoy this one, I'll see you in the next video.

---
*Source: https://ekstraktznaniy.ru/video/14116*