# AGI Within 7 MONTHS, New Q-STAR Paper, Strict AI Regulations and More

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=6H9l8cKh7qU
- **Date:** 15.03.2024
- **Duration:** 27:36
- **Views:** 39,698

## Description

✉️ Join My Weekly Newsletter - https://mailchi.mp/6cff54ad7e2e/theaigrid
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

00:16 AGI In 7 Months
04:08 GPT-5 Easily
04:41 AGI Doubters
08:50 AI ACT
11:19 AGI Risk
11:38 Open Source AGI
14:21 Gladstone AI
22:21 Qstar Paper

Links From Today's Video:
https://arxiv.org/pdf/2403.09629.pdf
https://www.reddit.com/r/singularity/comments/1bf9vrq/interesting_theory_on_why_yann_lecun_is_so/
https://www.reddit.com/r/singularity/comments/1bf3kqf/alans_conservative_countdown_plot_predicts_agi_by/
https://www.gladstone.ai/action-plan#action-plan-overview

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries) contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:16](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=16s) AGI In 7 Months

That isn't far away at all; we've only got a couple of months until November. This is Alan's conservative countdown plot, which predicts AGI by November 2024. If you aren't familiar with Alan's conservative countdown to AGI, it's essentially a tracking method developed by Dr Alan D. Thompson which aims to estimate progress towards the development of artificial general intelligence. AGI is defined here, of course, as a machine that performs at the level of the average median human across a wide range of cognitive tasks, and the countdown uses a percentage scale to represent progress towards achieving AGI, with 100% representing AGI. Now, you can see right here that this is his conservative countdown to AGI, and numerous announcements have made this thing tick forward. I'm not surprised by this at all, because it is very fascinating to see the recent developments that we've had.

The countdown does include milestones such as the elimination of hallucinations in large language models, the physical embodiment of AI in robots, and of course the ability to pass Steve Wozniak's test of AGI. One of the things we've seen here is pretty crazy: he says that around a score of 80% is where an AI in a robot passes Steve Wozniak's test of AGI, the test where a robot can walk into a strange house, navigate the available tools, and make a cup of coffee from scratch. You can see right here that 100% lands in November, and he's stating that we are very close to this being done. I think this is a very interesting benchmark, because for a robot to be able to go inside a house, look around, find the door, open the door, find the coffee materials, get those, and make a cup of coffee from scratch, that is something an advanced AGI system is going to be able to do, because a lot of the time there are adversarial disturbances, and there are a lot of things in the environment that the robot is going to struggle with. Now, Figure's recent demo, if you haven't seen it, is pretty crazy; you can watch it, it's literally insane. They've combined LLMs with the robots, so now there is a physical embodiment of an AI system, and that is of course one step closer to AGI.

People are stating that for this to hold up, it needs to be at least 80% by the beginning of June and 90% by September, meaning that for this graph's prediction to hold up, we would need to see an AI robot walking into an unseen environment and making something like a cup of coffee. Is that possible, considering we only have, I think, around 3 to 4 months left until June? I don't think it's impossible, considering this company is moving at lightning speed, and with a few breakthroughs from the echo chambers that I've heard, it seems that robots are about to get a huge jump up. The embodiment of these AI systems was, I think, one of the integral parts these AI systems were missing, so within the next 3 to 4 months we're going to see how quickly things move up. And someone made a very valid point; they said: to be honest, it seems like a 50/50 chance to me; Sora, Figure 01, and Claude Opus all happened within a mere 3 weeks of each other, and the pace doesn't seem to be slowing down. So this seems like it could very much be a reality.

So, do you think Alan's countdown predicting AGI within, you know, 7 months is rather conservative, or do you think it's, I guess you'd say, too eager? Either way, it's going to be interesting to see. And by the way, this is somewhat extrapolating the data; he doesn't say that AGI is going to be achieved at this date. This is someone extrapolating the data, like a polynomial projection based on all the previous data points; there's a small sketch of what that kind of curve fit looks like at the end of this section. So yeah, it looks like we're on that S-curve. If that date does happen... I personally don't think it's going to happen, but then again, I do remember looking at this too, and this is something I missed, something I forgot to include in the video: at the start of the year, Sam Altman did actually suggest that people who are working on their entrepreneurship startups build with the mindset that GPT-5 and AGI will be achieved relatively soon, and that most GPT-4 limitations will get fixed in GPT-5. So that is crazy.
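As an aside on what that kind of extrapolation actually looks like, here is a minimal sketch in Python. To be clear about assumptions: the countdown readings below are invented placeholders, not Alan Thompson's published figures, and the logistic function is just one plausible choice of S-curve.

```python
# Minimal sketch: fit an S-curve (logistic) to countdown readings and
# estimate when it would cross a target percentage. The data points are
# hypothetical, purely to illustrate the extrapolation idea.
import numpy as np
from scipy.optimize import curve_fit

months  = np.array([0, 3, 6, 9, 12, 14], dtype=float)      # months since a chosen start
percent = np.array([40, 48, 55, 61, 66, 72], dtype=float)  # hypothetical countdown %

def logistic(t, L, k, t0):
    """S-curve saturating at L percent, with steepness k and midpoint t0."""
    return L / (1.0 + np.exp(-k * (t - t0)))

(L, k, t0), _ = curve_fit(logistic, months, percent,
                          p0=[100.0, 0.2, 10.0], maxfev=10000)

target = 95.0
if L > target:
    # Invert the fitted curve: solve logistic(t) = target for t.
    t_hit = t0 - np.log(L / target - 1.0) / k
    print(f"Fitted curve reaches {target}% around month {t_hit:.1f}")
else:
    print(f"Fitted ceiling is only {L:.1f}%, so the curve never reaches {target}%")
```

The point of the sketch is that a projection like this is entirely a function of the chosen curve and the past data points; as noted above, it's an extrapolation, not a claim that AGI arrives on a particular date.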

### [4:08](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=248s) GPT-5 Easily

So, with Sam Altman stating that 2024 and 2025 are going to be big years for AI, that does mean that potentially this kind of chart doesn't look as crazy as it does now, and that AGI being at 70% doesn't seem that crazy. Now, of course, with AGI there are some doubters. Christopher Manning, someone at the Stanford AI Lab, says: I do not believe that human-level artificial intelligence, or the commonest sense of AGI, is close at hand. AI has made breakthroughs, but the claim of AGI

### [4:41](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=281s) AGI Doubters

by 2030 is as laughable as the claims of AGI by 1980 were in retrospect. So what he's doing here is looking back on when we used to claim that AGI was around the corner; this is from 1980, and technology has rapidly evolved since then. So let's take a look at the statement he references: "In from 3 to 8 years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that its powers will be incalculable." In retrospect this is a pretty crazy statement; however, looking into the future, things are on an exponential, meaning the future is much harder to predict than if we look back into the past.

Now, of course, there are some more doubters, but take a look at this first, from Elon Musk. He says: AI will probably be smarter than any single human by next year; by 2029 it is probably smarter than all humans combined. So, extrapolating this, I'm guessing he's stating that maybe we achieve some level of virtuoso AGI by 2029, and that by next year we're probably at 98 to 99% across all of the top reasoning benchmarks, meaning AI is going to be smarter than any single human next year. Now, this claim was in response to Ray Kurzweil stating that we will achieve human-level intelligence by 2029; he states that we're not there yet, but when we are, it will be 2029, and it's going to be matching any person. Remember, that isn't far away: it's 2024, and in 5 years we're going to see some major AI developments. One thing you have to understand is that whilst 2029 could be the singularity, before then we're still going to see some massive shakeups in the economy and pretty much everything else, because it's not like, boom, we get AI and then everything just falls off a cliff. We're going to see LLMs increase their capabilities, vision systems go crazy, video systems go crazy; remember, we're going to have four years of AI development before an AGI system is, conservatively, released. So that is going to be another prediction there; but then again, some people are predicting AGI in 7 months, though that's just an extrapolation of the data.

Now, some people on the website have predicted the GPT-5 date, and Alan expects this date to be August 2024, which does kind of line up with the OpenAI Dev Day that is scheduled for November, so that could be a funny, interesting time to see what they release. Now, like I said, back to the doubters of Elon Musk: this is of course Yann LeCun, and I tweeted about this earlier. I tweeted that the fact that the top AI researchers cannot agree on some of the really big issues shows us that the future is going to be near impossible to predict, because on one side you have people like Yann LeCun, who says: no, we're not going to have AI smarter than us by 2029; if that were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17-year-old, but we still don't have fully autonomous, reliable self-driving, even though we have millions of hours of labeled training data. So on one side we have Yann LeCun, a very respected AI scientist at the top of his field, saying that we are not going to have this by 2029, and we also have this guy right here, in retrospect, showing that we were pretty crazy to think we would have something back then. And then, on the other side, we have people like Elon Musk, Ray Kurzweil, and Sam Altman stating, not that GPT-5 will achieve AGI, but that GPT-5 and AGI will be achieved very, very soon.

So what do you guys think about that? Where are you on the spectrum? Do you think that Sam Altman and Musk are wrong, too hopeful, or do you think that Yann LeCun is just delusional or naive? Either way, I think the only way we're going to know is to be in the future, and then, whenever the AGI system is developed, we can look back and say, okay, this person was right and that person was wrong. Because either way, these people are some of the most respected people in their fields, and them disagreeing is kind of a concern, because it means the future is really uncertain. Now, in addition to uncertainty, we have the laws surrounding AI. Essentially what we have here is the EU's AI Act, and the EU AI Act has a few things that I wouldn't say are cause for concern, but the vagueness of

### [8:50](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=530s) AI ACT

them gives me a bit of a headache, because one of the things they said here was that some applications will be banned outright because they threaten citizens' rights, including emotion recognition systems in schools and the workplace, amongst others. Now, I'm sure this is covered in more detail in the article, but the point here is that I think the rate at which AI is developing poses a really hard problem for legislators, because it means they're going to have to continually update these laws, as there are going to be newer and newer problems that they simply haven't predicted. There might even be emergent behaviors or properties that make a system behave in a way we didn't predict, which means certain laws are going to be a bit vague on how we govern those AI systems if they have certain abilities or capabilities. So that's why I think this kind of stuff is important to pay attention to.

A crazy thing as well, as you can see here, is that it says a complete set of regulations, including rules governing chatbots, will be in effect by mid-2026, according to the European Parliament, which noted that each EU country will establish its own AI watchdog agency. So that's the thing: they're developing these laws and stating that new rules governing chatbots are going to be there by mid-2026. How much do you think chatbots are going to change in the next 2 years? In the next two years we could literally have artificial general intelligence systems, or systems with such advanced reasoning capabilities that trying to implement rules on them is going to be very difficult. It's going to be so hard to do, because it seems like whatever rules they implement are going to be pretty much outdated by 2026; literally last week we saw Claude 3 Opus, Figure 01, all of these insane updates, and I think it's only been around 450 days since ChatGPT was initially released, and from GPT-3.5 to where we are now is absolutely incredible. With that being said, we've already seen how the economy has responded, and with them talking about another two years, we really don't even know what the AI space is going to look like by then, so good luck to them on that. But then again, as long as we pay attention, I think it's going to be good, because I do think we need some laws surrounding this.

Now, the EU wasn't the only place implementing laws; the US was doing some things too. Essentially, they published a document warning about the dangers of AGI, saying that it would introduce catastrophic risks. And I don't want to gloss over the catastrophic risks; whilst I do think those are real, I think the main thing here is that they said

### [11:19](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=679s) AGI Risk

that AGI is the primary driver of catastrophic risk, and that OpenAI, Google DeepMind, Anthropic, and Nvidia have all publicly stated that it could be reached by 2028. Now, like I said, 2028 is pretty much a good guess; 2028, 2029, 2030, in all of those three years I think we would be likely to see an AGI system. We have to understand that this thing is developing super rapidly. And something

### [11:38](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=698s) Open Source AGI

as well that they also stated, which most people didn't pick up on (I think this might be about Llama 3, I'm not too sure), is that one individual at a well-known AI lab expressed the view that if a specific next-generation AI model were ever released as open access, this would be horribly bad. So I have a controversial viewpoint here, okay, and hear me out, because I'm going to reference this video in the future if I'm right: I personally don't believe that artificial general intelligence is ever going to be open source, and I really am wondering if they're ever going to allow the standard person to access an AGI system at all. I know that might sound a bit insane, but there are two points. Number one: if they open source this, there are some second-order consequences that could be quite horrific, because AGI is going to cause wide-scale problems if it's open source; people could use it for nefarious means, which is of course quite bad.

Now, think about it like this. I'm not going to say that OpenAI is going to be the only company to achieve AGI, but it might be harder than we think, because remember, it took all these other companies quite a while to catch up to just where GPT-4 is, and the problem is that AGI seems like it's going to be a huge step above that. If it is, how are these other companies going to catch up if OpenAI has an AGI working for them inside those systems? So I don't think that once they do get it they're going to open source it, because it just wouldn't make sense. As for deploying it, I also think there are going to be compute issues around that, because it's going to be a capable model; I mean, it could be highly efficient, we don't know, but yeah, it's going to be fascinating to see how that AGI thing goes down.

Some people are thinking that open source AGI is going to be awful, but then again, you do have to consider that open source AI kind of levels the playing field in terms of everyone getting access to it. But, I mean, Geoffrey Hinton also said something to the effect of: if AI is more dangerous than nukes, why aren't we just giving everyone nukes? It doesn't make sense to open source AGI if it's that capable a system, and if everyone adopts that mentality, I don't think we ever see open source AGI, because it might be seen as reckless to do so. I think in the future, once we actually see what AGI is capable of, the conversation changes, because right now people say AI this, AI that, but once you see an AI system and you really understand what it's truly capable of, then you're going to be like, okay, now we can make a decision.

With that being said, there's also the framework that they talked about, and in that framework they spoke about three strategies that I'm going to include here. I'm not sure why they haven't publicized it more, because I had to go through so many links to find it, but I'm going to show you guys that and then come back to another research paper. In 2022, before ChatGPT came out, the US government commissioned an assessment of the risks to national security from advanced AI, on the path to human-level AI, or artificial general intelligence as it's sometimes called. That assessment was also to include an action plan to address those risks.

### [14:21](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=861s) Gladstone AI

Now, Gladstone has just finished performing that assessment. Our team worked for over a year and talked to more than 200 people, from top executives at frontier AI labs to cybersecurity researchers, WMD experts, and some of the most senior national security officials in the United States and around the world. We also met informally with some of the most knowledgeable individuals inside the world's most advanced AI labs, some of whom spoke to us on condition of anonymity. This is the first assessment of its kind, and it's informed by an unprecedented level of access, not just to frontier labs, their researchers, and even their CEOs and other executives, but also to an unparalleled range of US national security officials, from the White House to working-level experts at the top of their fields. And along the way we learned some sobering things. Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon. Frontier AI labs are publicly saying that they think they might be able to develop artificial general intelligence, or AGI, within the next 5 years, and privately many researchers are telling us they see much shorter timelines as plausible. You can check out our explainer on the new era of AI for more context, but the upshot here is that these claims are rooted in hard data and are surprisingly plausible. And that's great in so many ways, right? I mean, it could allow us to make amazing economic, scientific, and medical breakthroughs that would have been the stuff of science fiction. But on the other hand, the AI systems we might have in the next few years could have capabilities that could be weaponized to cause devastating damage. At the same time, frontier researchers are also privately worried that beyond a certain level of capability, they may not be able to maintain control of the AI systems they're creating, and quite a bit of early evidence is pointing in that direction. You can check out our explainers on AI weaponization and loss of control, but the upshot is that, although we can't know for sure, there's compelling data to suggest that AI may introduce weapon-of-mass-destruction-like, or WMD-like, risks sometime in the relatively near future. That data seems to be on solid enough scientific ground that these risks ought to be taken seriously. But this assessment isn't just about the risks; it's also a blueprint for what we can do about it. It culminated in an action plan: a set of coordinated, whole-of-government policy proposals designed to mitigate the catastrophic risks that evidence suggests could come with future AI progress, all while positioning us to reap the incredible present and future benefits of AI as much as possible.

The plan starts with LOE 1, which lays out possible strategies the executive branch could use to buy down catastrophic AI risk in the near term while setting the conditions for successful long-term AI safeguards. Any safeguards we put in place for AI need to be carefully scoped so we can keep realizing the incredible benefits of the technology, but at the same time, AI systems can already be weaponized in concerning ways, and AI capabilities are accelerating almost inconceivably fast; some could plausibly have WMD-like or WMD-enabling capabilities in the next 1 to 3 years, and to understand why, you can check out our explainer on weaponization and alignment risks. So if we want to buy down that risk in the short term while positioning ourselves for the long haul, we can do three things. First, the US government needs better situational awareness of AI threats. It could achieve that in part by creating an AI observatory: a hub for AI threat evaluation, analysis, and information sharing, with horizon-scanning and emergency-preparedness functions. Second, as frontier AI models are being scaled and improved, they're developing new weaponizable capabilities at an unprecedented rate, so there's an urgent need to establish rules for responsible development and adoption of advanced AI systems. The US government could set up an interagency AI safety task force to establish and coordinate the implementation of those rules at our most advanced AI labs. And finally, advanced AI safeguards in the US won't achieve much if other countries don't take similar measures. Fortunately, the US can use its unparalleled influence over the AI supply chain to do things like establishing a licensing framework for developers who want to use US cloud services, and taking steps to put in place monitoring systems on US-designed AI hardware.

Now, LOE 2 outlines actions that the US government could take to increase its preparedness for rapidly responding to incidents related to advanced AI and AGI development and deployment. The accelerating power of AI is amazingly good in so many ways, but it also increases the destructive footprint of malicious actors who use it. Over time, that could raise the prospect of new kinds of cyber warfare, make it easier to plan and execute biological and chemical weapons attacks, and even increase the risk that we might lose control of the systems we're developing. So if we want the ability to prepare for and respond to AI incidents quickly and in a technically informed way, we can do four things. First, the US government can keep coordinating on national security risks from advanced AI and centralize its efforts in that area by setting up working groups and overall ownership of the problem set at the executive branch level. Second, if you want informed planning and informed responses, you need an informed workforce, and that means setting up education and training programs in AI across the US government to increase our preparedness and response capacity. Third, future AI incidents could have WMD-scale impacts and unfold very quickly, so the US government could consider developing a framework of indicators and warnings for advanced AI and AGI threats to national security; that would help build out capacity to anticipate and detect emerging catastrophic threats from weaponization or loss of control, both domestically and abroad. And finally, while a warning system is crucial, what do we actually do if we detect signs that a serious threat is about to materialize? Well, here's where contingency planning comes in: the US government could consider directing a contingency-planning process to develop response options for various threat scenarios.

LOE 3 outlines actions the US government can take to strengthen domestic technical capacity in advanced AI safety and security, AGI alignment, and other technical AI safeguards. Frontier AI labs are locked in a race to build more and more powerful AI systems, and that race is driving them to invest more and more in AI capabilities and less in technical safeguards and procedures against catastrophic risks. It's already an open secret that many labs are worried they'll be able to develop extremely powerful AI systems before they can figure out how to reliably control those systems, and you can check out our explainer on alignment risks, but long story short, controlling very capable AI systems is an unsolved technical problem. Now, if we want to close that technical gap as quickly as possible, we can do two things. First, the US government can directly fund fundamental research in AI safety and security, including in technical alignment approaches that are intended to scale up to AGI-level systems. It could establish AI centers for both open and classified research, and establish protocols and infrastructure for secure information sharing. And second, the US government can continue to support the development of standards for responsible AI development and adoption, to give frontier labs benchmarks for safe and secure practices. But there's an important caveat here: these standards shouldn't rely too heavily on our attempts at evaluating what AI models can and can't do using today's approaches. See, the best AI evaluation techniques today can't tell us for sure if a model does or doesn't have a dangerous capability that we care about, or even which kinds of dangerous capabilities we should be looking for in the first place. So safety and security standards should encourage the development of new techniques built on a fundamental scientific understanding of the behavior of AI systems.

So now that you've seen that, there was also another research paper, called Quiet-STaR, and this one is really fascinating; essentially, it's about having an internal monologue.

### [22:21](https://www.youtube.com/watch?v=6H9l8cKh7qU&t=1341s) Qstar Paper

Essentially, I think it's that 50% of people do and 50% of people don't. It's essentially an internal monologue. Some people have an internal monologue that helps with things; it's not like another person talking to you, it's just yourself speaking to yourself internally. It's very hard to describe, but it is a concept that some people have, and it's kind of like theory of mind, in the sense that you can think about something and reason with yourself; it's kind of like an extra reasoning engine, I guess you could say. And it's kind of interesting, because they used this idea in LLMs, and the LLMs were able to double their performance. It's something I did a video on over on the second channel, and I will include that; the reason I'm going to include it is because it was widely discussed on Twitter, since some people don't have internal monologues, and Yann LeCun was also debating internal monologues, stating that he doesn't have one. So some people were suggesting that maybe that's why he doesn't believe AI is there, because he doesn't realize how internal monologues work. It's an entire debate, but I will include that.

When humans write or speak, we often pause to think and reason about what we want to say next. This reasoning is usually implicit and unstated, happening between the lines of the text or conversation. Recent AI research has shown that large language models (LLMs), AI systems trained on vast amounts of text data, can be prompted to show their work and output explicit chains of reasoning to answer questions or complete tasks. This allows them to solve more challenging problems; however, it has required carefully curated datasets of question-answer pairs to train the models. A new approach called Quiet-STaR aims to allow language models to learn implicit reasoning from arbitrary text, without the need for specialized datasets. The key idea is to train the model to generate useful thoughts between tokens of text that help it predict the next parts of the text more accurately. In this way, it learns general reasoning capabilities that are embedded throughout language.

How Quiet-STaR works: it operates in three main steps that form a learning loop. One, think: as the model processes each token of text, it generates some thoughts, or reasoning statements, relevant to predicting what comes next; multiple possible thoughts are sampled at each point. Two, talk: the model then makes two next-token predictions, one based solely on the original text, and one that incorporates the thoughts it generated; these predictions are combined based on a learned weighting. Three, learn: the model is updated based on which thoughts led to better predictions, receiving a reward signal that encourages it to generate more useful thoughts in the future; thoughts that improve prediction accuracy are reinforced. This process allows the model to incrementally learn to generate reasoning that improves its language understanding and generation, without explicit supervision. The researchers used some key techniques to make this computationally efficient and stable, such as parallelizing the thought generation and using special tokens to control the reasoning process.
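To make that think / talk / learn loop concrete, here is a toy sketch in PyTorch. It's a heavily simplified illustration under stand-in assumptions (a tiny GRU instead of a real LLM, random tokens as "text", a fixed thought length, and no start-of-thought/end-of-thought special tokens), not the paper's actual implementation.

```python
# Toy sketch of the Quiet-STaR loop: THINK (sample thoughts), TALK (mix the
# text-only and thought-conditioned predictions with a learned weight), and
# LEARN (REINFORCE thoughts that improved next-token prediction).
import torch
import torch.nn.functional as F

vocab, d, n_thoughts, thought_len = 50, 32, 4, 3
embed = torch.nn.Embedding(vocab, d)
lm    = torch.nn.GRU(d, d, batch_first=True)   # stand-in for the LLM
head  = torch.nn.Linear(d, vocab)              # next-token logits
mix   = torch.nn.Linear(2 * d, 1)              # learned mixing weight
opt = torch.optim.Adam([*embed.parameters(), *lm.parameters(),
                        *head.parameters(), *mix.parameters()], lr=1e-3)

def last_hidden(seq):
    """Hidden state after reading a (1, length) token sequence."""
    out, _ = lm(embed(seq))
    return out[:, -1]                          # shape (1, d)

text = torch.randint(0, vocab, (1, 10))        # stand-in for web text
for t in range(3, 9):                          # predict each next token
    ctx, nxt = text[:, :t], text[0, t]
    h_base = last_hidden(ctx)
    base_logp = F.log_softmax(head(h_base), -1)[0, nxt]

    losses = []
    for _ in range(n_thoughts):
        # THINK: sample a short thought continuation after the context.
        seq, logp = ctx, torch.zeros(())
        for _ in range(thought_len):
            dist = torch.distributions.Categorical(logits=head(last_hidden(seq)))
            tok = dist.sample()
            logp = logp + dist.log_prob(tok).sum()
            seq = torch.cat([seq, tok.view(1, 1)], dim=1)

        # TALK: combine base and thought-conditioned predictions with a
        # learned weight w in [0, 1].
        h_think = last_hidden(seq)
        w = torch.sigmoid(mix(torch.cat([h_base, h_think], dim=-1)))
        p_mix = (1 - w) * F.softmax(head(h_base), -1) \
              + w * F.softmax(head(h_think), -1)
        mixed_logp = torch.log(p_mix[0, nxt] + 1e-9)

        # LEARN: reward thoughts by how much they beat the base prediction,
        # and reinforce the sampled thought tokens accordingly.
        reward = (mixed_logp - base_logp).detach()
        losses.append(-mixed_logp - reward * logp)

    loss = torch.stack(losses).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"position {t}: loss {loss.item():.3f}")
```

In the paper itself, thoughts are generated in parallel at every token position with custom attention masking, and the REINFORCE reward compares each thought against the average of the sampled thoughts rather than against the base prediction alone; the sketch above only captures the shape of the loop.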
Results and significance: the researchers tested Quiet-STaR on two challenging question-answering benchmarks, CommonsenseQA and GSM8K math word problems. They found that pre-training a model with Quiet-STaR on a large web corpus led to strong improvements on both tasks compared to a regular LM, without needing any task-specific fine-tuning. On CommonsenseQA, accuracy improved from 36.3% to 47.2%, and on GSM8K it improved from 5.9% to 10.9%. Importantly, performance scales with the length of the reasoning chains generated during pre-training, confirming that the models were learning useful multi-step reasoning. Qualitatively, the thoughts generated by Quiet-STaR during training often related to retrieving relevant facts and performing logical steps pertinent to the text, even if those facts and reasoning steps were not explicitly stated in the original text. This work demonstrates an important new paradigm for imbuing language models with reasoning capabilities through self-supervised learning on broad language data. Rather than relying on narrow datasets and explicit chain-of-thought prompting, models can learn to reason quietly as they process text to improve their predictions. The fact that this leads to strong transfer performance on reasoning tasks is a promising sign that the models are learning generalizable reasoning abilities, and it opens up exciting avenues for further scaling up language model reasoning in more open-ended ways. Conclusion: the Quiet-STaR technique shows that language models can learn general reasoning capabilities through self-supervised learning on text, without explicit reasoning supervision. By learning to generate chains of thought that improve its predictions, a model develops reasoning skills that transfer to challenging question-answering tasks. This is an important step towards imbuing language models with more human-like reasoning that happens between the lines as we process language. With further scaling and refinement, this approach could lead to language models that more closely reflect the flexible reasoning and generation capabilities of human intelligence.

Now that you've seen the entire video, let me know what you guys think about AGI in seven months. Do you think this is a ridiculous claim, or is it going to happen due to the exponential nature of AI? Do you think Sam Altman stating that GPT-5 and AGI are going to be here soon makes building for the future much more difficult, or do you think it makes it much easier? And do you think superintelligence by 2029, like Elon Musk said, with all humans being outsmarted by next year, is going to come true? Let me know what you think about that, and for those of you who are super interested in AI, don't forget to check out the video coming in around 2 to 3 hours, because it is absolutely insane, as long as the information isn't fabricated; there was a recent announcement that isn't being covered by everyone, but it's pretty incredible. So, with that being said, I'll see you guys in the next one.

---
*Source: https://ekstraktznaniy.ru/video/14465*