# The Governments SECRET AGI Interview Reveals ALL! (Secret Darpa Interview Discussing AGI)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=6p2DS8wRGhk
- **Date:** 01.04.2024
- **Duration:** 15:56
- **Views:** 28,526

## Description

How To Not Be Replaced By AGI - https://www.youtube.com/watch?v=LSXpZmo7_Tg
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

Links From Today's Video:
https://twitter.com/apples_jimmy/status/1774588732253012172

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=6p2DS8wRGhk) Segment 1 (00:00 - 05:00)

Jimmy Apples recently tweeted something that leads back to some key advancements in AGI. The document in question is from November 2023, but there are some key things in it that show things are moving at a very rapid pace. It also shows that even some of the very brightest minds aren't able to predict just how fast AGI and AI are moving. So, without further ado, I'm going to give you nine key pieces of information you need to know about this document, along with any other insights that might be in it.

The first thing you need to know is that this document is from DARPA, the Defense Advanced Research Projects Agency, which has been a driving force in AI since its inception. In the 1960s they funded groundbreaking research on neural networks, a core concept in modern AI that allows machines to train and improve without explicit programming. DARPA also played a crucial role in the development of machine learning, which enables computers to learn from data and make predictions without being explicitly programmed for each task. Additionally, DARPA's support for research in natural language processing has helped computers understand and respond to human language, paving the way for advancements in areas like voice assistants and machine translation. I'm giving you this background so you understand what DARPA has done before you decide this document isn't relevant.

The second thing to know is that DARPA has been at the forefront of breakthrough innovations. They have a long history of world-changing projects: they played a critical role in the development of ARPANET, the precursor to the modern internet, which revolutionized global communication and laid the foundation for the digital age. DARPA funding also helped propel advancements in self-driving cars, a technology with the potential to transform the transportation industry and safety, and their ongoing support continues to fuel cutting-edge research in areas like mind-controlled prosthetics, brain-computer interfaces (BCIs), and next-generation materials with unique properties. So DARPA is a very established agency.

The third thing you need to know is that the date is important. This was November 4th, just before Gemini was released, and when you see some of the dates and some of the key things being talked about, that context will make sense.

Point number four is that DARPA was talking about cybersecurity and how things were going to affect them. Essentially, DARPA was asked, "How will the President's executive order on safe, secure and trustworthy AI affect you?" If you didn't know, this was a presidential order looking to regulate AI, because the proliferation of these AI technologies has been so profound. The answer was, roughly: "I'm not sure yet. There are restrictions on what it takes to work with a frontier model in the new document. They are particularly worried about who can use a frontier model to generate bioweapons and carry out cyber attacks, and about demonstrating that we're being appropriately careful with that. I've not yet had a chance to grasp what the implications would be. We intend to be able to work with frontier models, and we will get back to you." So they're essentially concerned about cybersecurity and bioweapons.

If you haven't taken a look at the executive order, here are some of the relevant excerpts. One of the points is limiting biological risk: the order treats this as one of the most consequential areas of biotechnology, and it requires the US government to undertake assessments specifically focused on the potential misuse of AI to develop biological weapons. They also talk about DNA synthesis screening regulation: the executive order creates a new regulation for DNA synthesis screening in federally funded biological research, aimed at preventing AI from being used to help pathogens or toxins evade screening and detection measures. There's a new framework for nucleic acid synthesis screening, and there are biosecurity risk impact assessments. In summary, the executive order takes significant steps to address the intersection of AI and biosecurity, particularly the potential for AI to be used in the development of bioweapons, and establishes new regulations, reporting requirements, and risk-assessment frameworks to ensure that AI is developed in a safe and secure manner.

Now, this is where things start to get fascinating: this is where they talk about an AI slowdown. They mention GPT-5 and a vast number of frontier models that, at the time, had very little public information about them. They state that as capability advances, so too will the performers using these models be able to leverage that advanced capability at the same time. Essentially, what they're saying is that if we get advanced systems like GPT-5, then

### [5:00](https://www.youtube.com/watch?v=6p2DS8wRGhk&t=300s) Segment 2 (05:00 - 10:00)

other people are going to be able to somehow jailbreak these models and leverage that advanced capability at the same time. And that's just one model. They also said: "Another piece is that we will be keeping an eye on what is happening. If the capability that we are working on in the program becomes outmatched, we will stop the program and regenerate or do something else." So if whatever they're working on gets completely outclassed, I'm guessing they'll stop that program and work on something else.

They also said that not all frontiers are advancing at the same pace: reinforcement learning is not moving as fast as transformer models. And we know that so many people are building on top of the transformer architecture; with the GPT series it's moving fundamentally faster than a lot of other AI developments, because AI isn't just ChatGPT, it's a lot of different subfields.

They also say, and this is why I highlighted it, that the pace of frontier models is slowing down a little bit: "A lot of the results that we are seeing right now include understanding what they are doing and what they are not doing. They haven't released GPT-5." And here's the important part: they say OpenAI hadn't even started training GPT-5, due to the slowdown in the release of the H100s caused by production problems at the Taiwan Semiconductor Manufacturing Company (TSMC). TSMC, if you don't know, produces the chips that Nvidia uses; they basically supply Nvidia, and they're probably the most important company in the world right now. They did have some issues; I can't remember exactly what they were, but I think there were some political issues in the region which led to the slowdown, and they were saying that's why GPT-5 hadn't started training.

Now, point number five, and this is pretty crazy: remember how I said the dates are important? This document is from around November 4th, and about two weeks later OpenAI basically announced that they were doing a training run. So someone at DARPA, an agency with a lot of collaborations (they partner with Anthropic, Google, OpenAI, Microsoft), had all this information and was basically saying, "we have some breathing room; they haven't even started training GPT-5 due to the slowdown of the H100s," and then literally two weeks later GPT-5 went into training. I remember making that video, saying GPT-5 is now in training, because there was a bunch of information showing they had started to train the model. It goes to show that AI advancements move quicker than some teams think, and that within two weeks a statement can become, I wouldn't say completely out of date, but proof that predictions are really hard to make in the AI space.

In addition, they also discussed the halting problem and Gemini. They said: "The Gemini model getting the planning piece integrated in the LLM, we are not so sure; we lack full transparency." Basically, they're saying, "we don't know for certain, but we think Gemini is going to be getting planning in the LLM." Remember, this was November 4th, and Gemini was released on December 6th, 2023, about a month later, which is why I said things are moving rapidly. They weren't sure whether it was getting planning or not, and I think that capability is being saved for a future model.

Along with Gemini, they discussed the halting problem. They said there are still significant problems to be solved, and this is where they actually discuss AGI. They state that hearing people say we're just a little bit away from full artificial general intelligence is "a bit more optimistic than reality"; basically, they're saying AGI is further away than you think. While I believe what they're saying to an extent, the pace of AI development we've just seen shows that it's really hard to make predictions. They point to things like the halting problem, the exponentials, and the resources we still need, by which they basically mean compute, which is very true.

The halting problem is a concept from computer science that asks whether it is possible to create a program that can look at any other program and determine whether that program will eventually stop running or will run forever. This links to AI because, at its core, AI is about designing algorithms that can process information, learn from data, and make decisions and predictions. The halting problem underscores a fundamental limit of what algorithms can achieve, and it reminds AI researchers that not every problem can be solved by computation alone, setting a boundary on the kinds of tasks we can expect algorithms to handle. In AI development, especially in areas involving autonomous systems or decision-making algorithms, understanding the limits of predictability is crucial.
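The undecidability result mentioned here can be made concrete with Turing's classic diagonal argument. Below is a minimal Python sketch of that argument (my own illustration, not material from the DARPA document): given any candidate `halts` oracle, we can build a program that does the opposite of whatever the oracle predicts, so no oracle can be right about every program.

```python
# Sketch of Turing's diagonal argument: no function `halts(f)` can
# correctly decide, for every program f, whether f() eventually stops.

def make_paradox(halts):
    """Build a program that does the opposite of whatever `halts` predicts."""
    def paradox():
        if halts(paradox):        # oracle claims "paradox halts"...
            while True:           # ...so loop forever, proving it wrong
                pass
        return "halted"           # oracle claimed "runs forever" -> halt instead
    return paradox

# Two naive "oracles"; the diagonal program defeats each of them.
def always_yes(f):
    return True

def always_no(f):
    return False

p_no = make_paradox(always_no)
assert p_no() == "halted"         # always_no predicted "runs forever": wrong

p_yes = make_paradox(always_yes)  # always_yes predicts "halts", but calling
# p_yes() would loop forever, so we deliberately never call it.
```

`always_yes` and `always_no` are just stand-ins here; the same construction defeats any decider you substitute, which is the core of the proof.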

### [10:00](https://www.youtube.com/watch?v=6p2DS8wRGhk&t=600s) Segment 3 (10:00 - 15:00)

The halting problem also illustrates that predicting the outcomes of complex algorithms or programs can be fundamentally undecidable, which has serious implications for AI safety: it highlights the importance of designing AI systems that can handle uncertainty and operate safely even when outcomes cannot be predicted with certainty. That was a lot of information, but the long story short is that they think AGI is not coming that fast. I wouldn't say I disagree; I would just say it's very hard to predict. Considering what people are saying, I think it's going to be here by 2028 or 2029 at the latest. And as for Gemini getting planning, it's clear that wasn't the case.

Still on AGI, they said: "My question is, you might not have AGI, but you might have a system that helps humans and everyone in the room advance so quickly that before AGI becomes this apex... we're not on an apex, this is asymptotic growth, and we are dealing with that constantly." Basically, DARPA tries to avoid directly competing with industry on near-term developments in frontier models, such as incorporating new information into LLMs or developing multimodal models, because they know OpenAI and all these other companies are going after that stack. Instead they focus on longer-term challenges that industry might not prioritize, such as multi-level security. As they put it: "What are they going to do, and in what time frame, and what should we do? That is something we talk about all the time. Do we have perfect answers? No, but we ask that question all the time." So they know these companies' main focus is frontier models and LLMs, and they're going to be working on other things instead. It's going to be fascinating, though I'm sure they're probably not going to release the details of what they're working on; but they do partner with companies, and that is something they talk about.

Skipping around a bit, point number 11 asks: looking over the past year, how do DARPA and the DARPA programs stay relevant amid the fast pace of advancements in AI? That's a very good question, and they said one way is program structure: "The AI Cyber Challenge is a competition where we partner with LLM companies, Anthropic, Google, Microsoft, OpenAI, to provide compute access to those in the competition." They hold these competitions and essentially see what's capable, so it's pretty important that they stay up to date. The reason this is all interesting is that the fact these companies are partnered together brings more credence to the Q* letter, the one that talked about how they spoke to DARPA during something. It kind of gives me the vibes that maybe the Q* letter was true; I don't really know, but the fact that DARPA is so closely integrated with these companies leads me to believe there's a slight chance it could be. If you don't remember, that's the Q* letter that sparked a huge debate and was one of the biggest leaks that went all over the internet, the big thing everyone was talking about.

Now, one more thing they talked about that was rather fascinating, and again this is why it's very hard to make AI predictions: they were asked how seriously DARPA considers the possibility of software being developed by AI. The answer was: "DARPA has a position on the topic. My opinion is that it will be a tool that will help people write software faster, particularly boring boilerplate software, but it will not automate the process. I don't think that people who write good code will be out of a job anytime in the foreseeable future; maybe I am overly pessimistic, but that seems inconceivable. I think a lot of the boilerplate software, like code in frameworks, the code everyone hates to write, AI will write in the near future." So here is a prediction about the future: whoever was being interviewed is basically saying augmentation, not replacement, predicting that AI is not going to replace coders. But remember, this was before Devin was released, and crazily enough there was a recent research paper showing improvements on Devin, with open-source versions of Devin being built as well. If you don't know what I'm talking about, Devin is an AI system that can essentially write code on its own: an autonomous software agent with planning and reasoning capabilities built on top of GPT-4, and it's state-of-the-art right now, just the very best. People are wondering how capable Devin will be when GPT-5 gets released; there was a recent research paper, but no demos. The point is that with GPT-5 we might see something like a 5x jump on Devin, which would be around 80% of software-engineering tasks, which would be absolutely incredible, and that would definitely impact the industry. So him saying no one's going to be out of a job anytime soon? Maybe not, but right now software engineering is a pretty hard industry to get into, so it's rough out there. When people like this make predictions, I wouldn't say they're struggling, but certain AI developments come along and you have to ask whether the prediction held up. It just goes to show that one of the things you should take away from this video is that nothing is

### [15:00](https://www.youtube.com/watch?v=6p2DS8wRGhk&t=900s) Segment 4 (15:00 - 15:56)

impossible for AI. I wouldn't rule anything out, because if we say something is impossible and it then comes true, you're just going to look like a fool. And I'm not saying this person does; I'm just saying that with the rate at which these companies are increasing their capabilities, anything is genuinely possible.

The last point is that they discuss fixing open source. DARPA is particularly interested in using frontier models to automatically find and suggest repairs to open-source software at scale, which could be critical in scenarios like rapidly responding to a widespread cyber attack. Essentially, if there's a cyber attack, your software is going to go down and there are going to be loads of vulnerabilities; they're trying to work out how they could get an LLM system to quickly and efficiently fix all of the problems in that software, because that's something they might need in the future. If you enjoyed this video, let me know.
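The find-and-repair idea in that last point can be sketched roughly as follows. This is purely my illustration, not DARPA's system: `model_suggest_fix` is a hypothetical stand-in for a frontier-model API call (the document describes no concrete interface), and the hard-coded `eval` repair is a toy example of the kind of patch such a model might propose.

```python
# Toy sketch of LLM-driven repair of open-source code at scale.
# `model_suggest_fix` stands in for a hypothetical frontier-model call.

def model_suggest_fix(source: str) -> str:
    # A real system would send `source` to a frontier model and receive a
    # patch back. Here we hard-code one classic repair: replacing unsafe
    # eval() of untrusted input with ast.literal_eval().
    return source.replace("eval(", "ast.literal_eval(")

def repair_at_scale(files: dict[str, str]) -> dict[str, str]:
    """Scan every file; keep only the ones the model actually changed."""
    patches = {}
    for path, source in files.items():
        fixed = model_suggest_fix(source)
        if fixed != source:
            patches[path] = fixed
    return patches

repo = {
    "parser.py": "data = eval(user_input)",   # vulnerable pattern
    "util.py":   "x = 1 + 1",                 # fine as-is
}
print(repair_at_scale(repo))
# {'parser.py': 'data = ast.literal_eval(user_input)'}
```

A real pipeline would add validation, for example running the project's test suite before accepting a patch, since model-suggested fixes can themselves be wrong.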

---
*Source: https://ekstraktznaniy.ru/video/14420*