# 20 Surprising Things You MISSED From SAM Altman's New Interview (Q-Star,GPT-5,AGI)

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=DumPnDqBg5A
- **Date:** 19.03.2024
- **Duration:** 36:02
- **Views:** 16,467

## Description

✉️ Join My Weekly Newsletter - https://mailchi.mp/6cff54ad7e2e/theaigrid
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Checkout My website - https://theaigrid.com/

Links From Today's Video:
https://www.youtube.com/watch?v=jvqFAi7vkBc&pp=ygULTEVYIEZSSURNQU4%3D

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Contents

### [0:00](https://www.youtube.com/watch?v=DumPnDqBg5A) Segment 1 (00:00 - 05:00)

So here are 20 revelations from Sam Altman's interview with Lex Fridman. I thought it was a very insightful interview, and I've collected the 20 most interesting things, so let's not waste any time and get into it.

One of the first things he said was that things are very intense at OpenAI, and I thought that was rather fascinating because it goes to show the scale OpenAI are operating on in terms of the products they're trying to develop.

Altman: Something that was in the past was really unpleasant and really difficult and painful, but we're back to work, and things are so busy and so intense that I don't spend a lot of time thinking about it. There was this fugue state for kind of like the month after, maybe 45 days after, where I was just sort of drifting through the days. I was so out of it. I was feeling so down, just on a personal, psychological level. Really painful, and hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and kind of recover for a while. But now it's like we're just back to working on the mission.

The next quick point Sam made was about why they changed the board. He said the new board is going to have a lot more experience, and he also acknowledged that it's hard to say OpenAI's governance worked, because it essentially failed: they fired him, and Sam was swiftly brought back. He said that is something they will work on in the future.

Altman: I think we didn't have a lot of experienced board members, and a lot of the new board members at OpenAI just have more experience as board members. I think that'll help.

Now here's the point a lot of people wanted to see. Point number three is that they spoke
about Ilya Sutskever, one of the most renowned scientists in artificial intelligence. Of course, there has been much speculation about what Ilya Sutskever saw at OpenAI that prompted him to move against Sam Altman from the board, and here they discuss what Ilya quote-unquote saw. Sam Altman reveals that Ilya didn't see anything.

Altman: He's a little bit younger than me; maybe he works a little bit longer.

Fridman: You know, there's a meme that he saw something, like he maybe saw AGI, and that gave him a lot of worry internally. What did Ilya see?

Altman: Ilya has not seen AGI. None of us have seen AGI. We've not built AGI. I do think one of the many things that I really love about Ilya is that he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. As we continue to make significant progress, Ilya is one of the people I've spent the most time with over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.

Then of course we have another point: why is Ilya Sutskever remaining silent? I thought it was rather fascinating. I guess Ilya just wants to be out of the spotlight for some time. Maybe, with all of the board drama (and Sam Altman previously talked about the mental health impacts of that), he just wants to stay out of the spotlight for a while, focus mainly on the work, and think about the future of development. Maybe that's just what Ilya Sutskever is doing, preferring to be a bit quiet.

Altman: I kind of think Ilya is always on a soul search, in a really good way.

Fridman: Yes. Also, he appreciates the power of silence.

Then
we had a rather fascinating point: Sam Altman talking about OpenAI and what the "open" stands for. This is where he refers to Elon Musk's lawsuit, which claims that OpenAI is no longer open, and he pretty much says that the meaning of the "open" in OpenAI has transformed: now it essentially means that OpenAI's tools are going to be open and available to the public to use, but not open in terms of the actual research. I think this is an interesting shift, because people are going to look at this statement and see that OpenAI has of course changed in its nature, and I wonder what people's opinions of this "open" nature now are. Fridman asks what the word "open" in

### [5:00](https://www.youtube.com/watch?v=DumPnDqBg5A&t=300s) Segment 2 (05:00 - 10:00)

OpenAI meant to Elon at the time, noting that Ilya has talked about this in the email exchanges, and what it means to Altman now.

Altman: Speaking of going back with an oracle, I'd definitely pick a different name. One of the things that I think OpenAI is doing that is the most important of everything we're doing is putting powerful technology in the hands of people for free, as a public good. We don't run ads on our free version; we don't monetize it in other ways. We just say it's part of our mission. We want to put increasingly powerful tools in the hands of people for free and get them to use them. I think that kind of open is really important to our mission. If you give people great tools and teach them to use them, or don't even teach them, they'll figure it out, and let them go build an incredible future for each other; that's a big deal. So if we can keep putting free or low-cost powerful AI tools out in the world, it's a huge deal for how we fulfill the mission. Open source or not? I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer.

Now, here's where Sam Altman talked about Sora needing to be better before its release, mainly due to compute restrictions. One of the constant things we're going to keep seeing with these advanced AI systems is a real need, in fact a dire need, for more compute, because there just isn't enough compute to go around. We know that GPT-4 is still very limited in terms of what a regular user can access, and as they build more and more capable systems, those systems are going to need more and more compute. This doesn't
seem to be a trend that is going away anytime soon, and that is of course the case with Sora. I think the estimated times for Sora video generation are between 5 and 10 minutes; that is pure speculation, based on the fact that they said you could go and make a cup of coffee while you wait for the video to be generated. But essentially he's stating that they still need to make it more efficient and have more compute available before it is widely released to the public.

Fridman: Can the same magic of LLMs now start moving towards visual data, and what does it take to do that?

Altman: It looks to me like yes, but we have more work to do.

Fridman: What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?

Altman: Frankly speaking, one thing we have to do before releasing the system is get it to work at a level of efficiency that will deliver the scale people are going to want from this, so I don't want to downplay that; there's still a ton of work to do there. But you can imagine issues with deepfakes, misinformation. We try to be a thoughtful company about what we put out into the world, and it doesn't take much thought to think about the ways this can go badly.

Now, here's something I found rather fascinating: this is where he talked about job market changes. It's a concerning point, because as technology develops it becomes more and more capable, which means it's capable of more and more tasks, which means it's more and more capable of replacing your job. That's not something most people, in fact anyone, is going to be excited about, especially if they've been in their field for a while. So, on AI-generated content, Fridman asks whether, in the next five years, people
will talk about how many jobs AI is going to do in five years.

Altman: The framework that people have is: what percentage of current jobs is going to be totally replaced by some AI doing the job? The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do, and over what time horizon. So if you think of all the five-second tasks in the economy, the five-minute tasks, the five-hour tasks, maybe even the five-day tasks: how many of those can AI do? I think that's a way more interesting, impactful, important question than how many jobs AI can do, because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks, and let people operate at a higher level of abstraction. So maybe people are way more efficient at the jobs they do, and at some point that's not just a quantitative change but a qualitative one too, about the kinds of problems you can keep in your head. I think that for videos on YouTube it'll be the same: many videos, maybe most of them, will use AI tools in the production, but they'll still be fundamentally driven by a person thinking about it,

### [10:00](https://www.youtube.com/watch?v=DumPnDqBg5A&t=600s) Segment 3 (10:00 - 15:00)

putting it together, doing parts of it, sort of directing it and running it.

Fridman: Yeah, it's so interesting. It's scary, but it's interesting to think about. I tend to believe that humans like to watch other humans, or human-like humans. Humans really care about other humans a lot. If there's a cooler thing that's better than a human, humans care about that for like two days, and then they go back to humans. That seems very deeply wired. It's the whole chess thing.

Altman: Oh yeah, but let's everybody keep playing chess, and let's ignore the elephant in the room: that humans are really bad at chess relative to AI systems.

Fridman: We still run races, and cars are much faster. There are a lot of examples.

Altman: Yeah, and maybe it'll just be tooling.

Now, this was one of the craziest statements, and I think it was quite eye-opening, because Sam Altman said that GPT-4 sucks. For those of you who use GPT-4 on a day-to-day basis, I don't think it sucks; I think it's something really cool. But if Sam Altman says that GPT-4 sucks, it potentially means that, from his point of view, based on the technology he's seeing and what he knows is coming in future systems, he believes GPT-4 and the systems before it are so terrible in comparison to the new systems they are working on. I'm guessing they are so far ahead that they know the newer systems are going to shatter our expectations, to the point where he can say that GPT-4 sucks. That definitely did shock me.

Fridman: For me, looking back, GPT-4, ChatGPT, is pretty damn impressive, like historically impressive. So allow me to ask: what have been the most impressive capabilities of GPT-4 and GPT-4 Turbo?

Altman: I think it kind of sucks.

Fridman: Typical human, also gotten used to an awesome thing.

Altman: No, I think it is an amazing thing, but relative to where we need to get to, and where I believe we will get
to... At the time of GPT-3, people were like, oh, this is amazing, this is a marvel of technology. And it is, it was. But now we have GPT-4, and you look at GPT-3 and you're like, that's unimaginably horrible. I expect that the delta between five and four will be the same as between four and three, and I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them, and that's how we make sure the future is better.

Another point Sam Altman made was about the future of long context windows. He said that, while perhaps not unlimited, really long context windows are probably coming somewhere in the distant future. He also mentioned that back when Bill Gates was working on memory and computers, Gates couldn't really conceptualize the need for gigabytes of storage, yet today terabytes and terabytes of storage are easy to come by. Long context windows are something they're thinking about in the same way.

Altman: From GPT-4 to GPT-4 Turbo, people like long context. Most people don't need all the way to 128K most of the time, although if we dream into the distant future, we'll have context lengths of several billion. You will feed in all of your information, all of your history over time, and it'll just get to know you better, and that'll be great. For now, the way people use these models, they're not doing that. People sometimes post in a paper or a significant fraction of a code repository, whatever, but most usage of the models is not using the long context most of the time.

Fridman: I like that this is your "I Have a Dream" speech. One day you'll be judged by the full
context of your character, or of your whole lifetime. That's interesting. So that's part of the expansion you're hoping for: a greater and greater context.

Altman: I saw this internet clip once. I'm going to get the numbers wrong, but it was Bill Gates talking about the amount of memory on some early computer, maybe 64K, maybe 640K, something like that, and most of it was used for the screen buffer. And he, it seemed genuine, just couldn't imagine that the world would eventually need gigabytes of memory in a computer, or terabytes. You always do just need to follow the exponential of technology; we will find out how to use better technology. So I can't really imagine right now what it's like for context lengths to go out to a billion someday. They might not literally go there, but effectively it'll feel like that,

### [15:00](https://www.youtube.com/watch?v=DumPnDqBg5A&t=900s) Segment 4 (15:00 - 20:00)

and I know we'll use it.

Now, here's an interesting point: memory. Remember how a lot of people were talking about memory? Memory is something OpenAI have said they are working on, but interestingly enough, it seems like they have shifted their focus away from it. I think that's because maybe they've reached, I wouldn't say a dead end, but maybe they've realized that reasoning and agentic capabilities can be applied to more things. Personalization and memory are things they've said they've worked on, but from this interview there didn't seem to be much more to it.

Fridman: You've given ChatGPT the ability to have memories of previous conversations, and also the ability to turn off memory, which I wish I could do sometimes, just turn on and off depending. I guess sometimes alcohol can do that, but not optimally, I suppose. What have you seen through playing around with that idea of remembering conversations and not?

Altman: We're very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there are a lot of other things to do, but that's where you'd like to head. You'd like to use a model, or a system of many models, and over the course of your life it gets better and better.

Fridman: How hard is that problem? Because right now it's more like remembering little factoids and preferences and so on. What about remembering, like, don't you want GPT to remember all you went through in November and all the drama? Because right now you're clearly blocking it out a little bit.

Altman: It's not just that I want it to remember; I want it to integrate the lessons of that, and
remind me in the future what to do differently, or what to watch out for. We all gain from experience over the course of our lives, to varying degrees, and I'd like my AI agent to gain from that experience too. So if we go back and let ourselves imagine trillions and trillions of tokens of context length: if I can put every conversation I've ever had with anybody in my life in there, if I can have all of my emails, all of my inputs and outputs, in the context window every time I ask a question, that'd be pretty cool, I think.

Now, this is something I also found rather fascinating: he talked about different levels of compute. One of the things he made clear is that in the future it will be more effective to route queries appropriately. Let's say, for example, I want to boil an egg in a specific way and I ask an LLM or an AI system how. I could easily use a very basic system like ChatGPT 3.
5 to answer this kind of question, but if we have a super advanced AI that requires a lot of compute, we wouldn't route this question to that kind of AI. So there are going to be different levels of compute for different levels of AI systems, to tackle easier and harder problems, and I think that's rather important.

Fridman: So it's allocating approximately the same amount of compute for each token it generates. Is there room in this kind of approach for slower, sequential thinking?

Altman: I think there will be a new paradigm for that kind of thinking.

Fridman: Will it be similar architecturally to what we're seeing now with LLMs? Is it a layer on top of the LLMs?

Altman: I can imagine many ways to implement that. I think that's less important than the question you were getting at, which is: do we need a way to do a slower kind of thinking? I guess spiritually you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem, and I think that will be important.

Fridman: It seems to me like you want to be able to allocate more compute to harder problems. If you ask a system to prove Fermat's Last Theorem versus what's today's date, then unless it already knew and had memorized the answer to the proof, and it's got to go figure that out, it seems like that would take more compute.

Now, this is some very juicy information. Point number 12 is the Q* leaks. This is very juicy stuff, because Q* was something many people didn't think was real. There were many different articles floating about, but Q* sat in the realm of speculation for quite some time, and now Sam Altman has confirmed that, you

### [20:00](https://www.youtube.com/watch?v=DumPnDqBg5A&t=1200s) Segment 5 (20:00 - 25:00)

know, these were some unfortunate leaks, as he has stated in the past. The fact that he's openly saying they can't talk about it means we are looking at something that probably is coming in the future, and maybe with GPT-5 and future AI systems they really have discovered something amazing.

Fridman: One can dream. OpenAI is not a good company at keeping secrets.

Altman: It would be nice. We've been plagued by a lot of leaks, and it would be nice if we weren't.

Fridman: Can you speak to what Q* is?

Altman: We are not ready to talk about that.

Fridman: See, but an answer like that means there's something to talk about. It's very mysterious, Sam.

Altman: I mean, we work on all kinds of research. We have said for a while that we think better reasoning in these systems is an important direction we'd like to pursue. We haven't cracked the code yet. We're very interested in it.

Point number 13 is, as you all know, Sam Altman's iterative deployment. They do not want AI systems to be shocking people, because that just isn't something the public will take very well; it can cause problems like mass hysteria and other issues in society, which they talk about later in the interview. The point is that they are trying to be even more iterative with their launches, because their goal is not to shock people with their updates. Lex Fridman says he actually did find the releases quite surprising, and Altman responds that they may try to release updates even more iteratively.

Fridman: It's interesting to me. It all feels pretty continuous. This is kind of a theme you're describing: you're basically gradually going up an exponential slope. But from an outsider perspective, just watching it, it does feel
like there are leaps, but to you there aren't?

Altman: I do wonder if we should... You know, part of the reason that we deploy the way we do, we call it iterative deployment, is that rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. Part of the reason is that I think AI and surprise don't go together, and also the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. I think one of the best things OpenAI has done is this strategy: we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we're under the gun and have to make a rushed decision. I think that's really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releases even more iteratively. I don't know what that would mean; I don't have an answer ready to go. But our goal is not to have shock updates to the world; the opposite.

Fridman: Yeah, for sure. More iterative would be amazing. I think that's just beautiful for everybody.

Altman: But that's what we're trying to do; that's our stated strategy, and I think we're somehow missing the mark. So maybe we should think about releasing GPT-5 in a different way, or something like that.

Fridman: Yeah, 4.71, 4.
72. But people tend to like to celebrate. People celebrate birthdays. I don't know if you know humans, but they kind of have these milestones.

Altman: I do know some humans. People do like milestones. I totally get that.

Right here he talks about the GPT-5 release date.

Fridman: So when is GPT-5 coming out again?

Altman: I don't know. That's the honest answer.

Fridman: Oh, that's the honest answer. Blink twice if it's this year.

Altman: We will release an amazing new model this year. I don't know what we'll call it.

Fridman: So that goes to the question of what's the way we release this thing.

Altman: We'll release, over the coming months, many different things. I think they'll be very cool. I think before we talk about a GPT-5-like model, called that, or not called that, or a little bit worse or better than what you'd expect from a GPT-5, we have a lot of other important things to release first.

And right here, as I've stated previously, more compute is of course needed for the future, because they're trying to build AGI. I think most people missed the mark on this, because Sam Altman stating that a few different models are coming out this year means that, looking back on some of the stuff discussed earlier, well, he didn't state it himself, but there was talk of models reportedly code-named Arrakis and Gobi, which means we potentially have other models coming before GPT-5.

### [25:00](https://www.youtube.com/watch?v=DumPnDqBg5A&t=1500s) Segment 6 (25:00 - 30:00)

Some people have speculated that one of the models coming before GPT-5 is an audio model, which we know is something penciled in for GPT-6, but I think it's going to be interesting to see what other models they drop. Whilst they have of course been working on GPT-5, and we know that's the main focus, because everyone thinks GPT-4, then GPT-5, we didn't really predict that Sora was going to arrive, so I wouldn't be surprised if they're working on other things too. For the next couple of months I don't think we're going to be getting GPT-5. It seems we're getting a couple of different models, and potentially, like I said before, a couple of new models we didn't even predict, where OpenAI have somehow cracked the code and done some really good stuff with it. That should be really interesting, because I think it changes the concept and scope of what people think is going on this year, especially combined with the fact that they said this year is going to move really quickly.

Altman: I think the world is going to want a tremendous amount of compute, and there are a lot of parts of that that are hard. Energy is the hardest part. Building data centers is also hard. The supply chain is hard, and of course fabricating enough chips is hard. But this seems to be where things are going: we're going to want an amount of compute that's just hard to reason about right now.

Point number 16 is one of the most interesting things I saw, because this is where Sam Altman talked about the risks of artificial intelligence in the mainstream, and the shift in societal opinions about how AI is viewed by consumers. I think this is one of
the main issues, and I didn't see anyone really talk about this, but Sam Altman expressed concerns about his own safety: he said the percent chance of him getting shot is not zero. I've been saying for quite some time that in the future, people who work at OpenAI are going to need some kind of security personnel at all times, maybe at the facility or wherever, because as tensions mount among people losing their jobs and livelihoods, and as these capabilities grow, people are potentially going to have a lot of anger and rage, and we don't want innocent people who work at the company to get harmed. So I think him expressing that is an important point.

Fridman: This transition from an economy based on labor to one that isn't is going to be fascinating, because with millions of people displaced, it's a real mystery how we're going to make that transition smooth and effective for everyone. I don't know if you know humans, but that's one of the dangers. One of the security threats with nuclear fission is that humans seem to be really afraid of it, and that's something we have to incorporate into the calculus. We have to win people over and show how safe it is. I worry about that for AI.

Altman: I think some things are going to go theatrically wrong with AI. I don't know what the percent chance is that I eventually get shot, but it's not zero.

Fridman: Oh, like we want to stop this. How do you decrease this?

Altman: What I meant more about theatrical risks is: AI is going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones, and there'll be some bad ones that are bad but not theatrical. A lot more people have died of air pollution than of nuclear reactors, for example, but most people worry more about living next to a nuclear reactor than a coal plant. Something about the
way we're wired is that, although there are many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time, on a slow burn.

Fridman: Well, that's why truth matters, and hopefully AI can help us see the truth of things, to have balance, to understand what the actual risks and dangers in the world are.

Now, in point 17, Sam Altman talks about the AI arms race and why it is a bit of an issue.

Altman: The con is that I think, if we're not careful, it could lead to an increase in sort of an arms race that I'm nervous about.

And this is probably one of the most interesting points: Sam Altman talks about the leap from GPT-4 to GPT-5, which, as we previously discussed, is going to be rather stunning.

Fridman: So that was one as well. What aspects of the leap, and sorry to linger on this even though you can't quite say details yet, what aspects of the leap from GPT-4 to GPT-5 are you excited about?

Altman: I'm excited about it being smarter. I know that sounds like a glib answer, but I think the really special thing happening is that it's not that it gets better in this one area and worse

### [30:00](https://www.youtube.com/watch?v=DumPnDqBg5A&t=1800s) Segment 7 (30:00 - 35:00)

at others; it's getting better across the board. That's, I think, super cool.

Fridman: Yeah, there's this magic.

Now, here was something I think most people from yesterday's video would want to see: the future of software engineering. As I previously discussed, the future of work is changing at a rapid pace, and we don't know how it's going to look. But Altman quells some fears by stating that the future of software engineering won't be entirely automated, but rather augmented.

Fridman: Looking out into the future, how much programming do you think humans will be doing 5, 10 years from now?

Altman: I mean, a lot, but I think it'll be in a very different shape. Maybe some people program entirely in natural language.

Fridman: Entirely natural language?

Altman: I mean, no one programs by writing bytecode. No one programs on punch cards anymore. I'm sure you can find someone who does, but you know what I mean.

Fridman: Yeah, you're going to get a lot of angry comments. No, yeah, there are very few. I've been looking for people who program in Fortran. It's hard to find, even Fortran. I hear you. But that changes the nature of what the skill set, or the predisposition, is for the kind of people we call programmers.

Altman: It changes the skill set. How much it changes the predisposition, I'm not sure.

Fridman: Oh, same kind of puzzle solving, all that kind of stuff. Programming is hard. It's like, how do you get that last 1% to close the gap? How hard is that?

Altman: Yeah, I think as with most other cases, the best practitioners of the craft will use multiple tools. They'll do some work in natural language, and when they need to go write C for something, they'll do that.

Point number 20 is Sam Altman touching on humanoid robots. Right here he discusses how OpenAI, if you didn't know, previously worked on robotics but stopped because it was too difficult at the time, and they
discussed that in the future they will return to robotics at some point so I'm not sure if it's just going to be through a collaboration with figure one or 1X robotics or if open AI are going to build the human o robotics themselves with their amazing team how important is embodied AI to you I think it's like sort of depressing if we have AGI and the only way to like get things done in the physical world is like to make a human go do it so I really hope that as part of this transition as this phase change we also get uh we also get humanoid robots are some sort of physical world robots I mean open a has some history quite a bit of History working in robotics yeah but it hasn't quite like done like a small company we have to really focus and also robots were hard for the wrong reason at the time but like we will return to robots at in some way at some point now here's where I think this is one of the most important points because this is where samman talks about the AGI threshold now the threshold for AGI for many different people has been different but I think this is the important point that you take away from the video at Point 21 is because if samman is essentially declaring that the AGI threshold uh is essentially the point at which a AI systems are effectively going to be you know producing scientific research you know like significantly impacting the economy I think that means his definition of AGI is one that is truly Advanced like an AI that can you know Advance scientific research in terms of being able to you know publish research papers and discover new things I think that is going to be something that is truly far on terms of the sliding scale in terms of abilities and I think that samman stating that is where his definition of AGI is means that we are truly far ahead in terms of you know standard AI systems because most people think of AGI as a system that can just do standard things on a computer at the level of any human but I think him stating that 
it's not just that it's the cognitive ability to be able to not only reason things together and get simple tasks done but to be able to actually have an impact on the world in terms of research and not just automating work and discover new knowledge so I think that is really important you and we as Humanity will build AGI I used to love to speculate on that question I have realized since that I think it's like very poorly formed and that people use extremely definition different definitions for what AGI is uh and so I think it makes more sense to talk about when we'll build systems that can do capability X or Y or Z rather than you know when we kind of like fuzzily cross this one mile marker it's not like AGI is also not an ending it's much more of a it's closer to a beginning but it's much more of a mile marker than either of those things and but what I would say in the interest of not trying to dodge a question is I expect that by the end of this decade and possibly somewhat sooner than that we will have quite capable systems that look at it and say wow that's really remarkable if we could look at it now

### [35:00](https://www.youtube.com/watch?v=DumPnDqBg5A&t=2100s) Segment 8 (35:00 - 36:00)

you know, maybe we've adjusted by the time we get there. Yeah, but if you look at ChatGPT, even 3.5, and you showed that to Alan Turing, or not even Alan Turing, people in the 90s, they would say this is definitely AGI. Well, not definitely, but there are a lot of experts that would say this is AGI. Yeah, but I don't think 3.5 changed the world. It maybe changed the world's expectations for the future, and that's actually really important, and it did kind of get more people to take this seriously and put us on this new trajectory, and that's really important too.

Now, what did you all think about this Lex Fridman and Sam Altman interview? I thought it was really fascinating, and I hope Sam Altman does more interviews. I'm glad he didn't avoid a lot of the questions, because at times we do get vague answers, so I'm glad he was able to state a lot more than he usually does; he has acknowledged being deliberately vague at times, which is completely understandable. But yeah, it was definitely a very entertaining interview, so let me know which point was your favorite from this video.

---
*Source: https://ekstraktznaniy.ru/video/14454*