Sam Altman Reveals AGI PREDICTION DATE In NEW INTERVIEW (Sam Altman New Interview)
Duration: 27:19


TheAIGRID · 11.04.2024 · 27,030 views · 662 likes


Video description
How To Not Be Replaced By AGI https://youtu.be/AiDR2aMye5M Stay Up To Date With AI Job Market - https://www.youtube.com/@UCSPkiRjFYpz-8DY-aF_1wRg AI Tutorials - https://www.youtube.com/@TheAIGRIDAcademy/ 🐤 Follow Me on Twitter https://twitter.com/TheAiGrid 🌐 Checkout My website - https://theaigrid.com/ Links From Today's Video: https://www.youtube.com/watch?v=RIp1TdYeutU&pp=ygURc2FtIGFsdG1hbiBob3dhcmQ%3D Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For Business Enquiries) contact@theaigrid.com #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of contents (9 segments)

Intro

so Sam Altman recently had an interview at Howard University where he spoke about a variety of interesting topics, and there was a lot that he discussed. It gives us an insight into things like education, the role of AI in the future, and of course artificial general intelligence, which kind of gives us a gauge on where he's at in terms of what he's thinking now. The initial interview was actually done back in January but was only just released, which means this interview is from literally four months ago, so that's something to keep in mind. Nevertheless, let's take a look at the first section, where Sam Altman gives his opinion on what will be the most important skill for people to learn in the future. Strong agree with that, uh,

Most Important Skill

I think critical thinking, creativity, the ability to figure out what other people want, the ability to have new ideas: in some sense, that'll be the most valuable skill of the future. If you think of a world where every one of us has a whole company's worth of AI assistants that are doing tasks for us, to help us express our vision and make things for other people and make these new things in the world, the most important thing then will be the quality of the ideas, the curation of the ideas, because AI can generate lots of great ideas, but you still need a human there to say: this is the thing other people want. And also, humans I think really care about the human behind something. So when I read a book that I really love, the first thing I want to do is go read about that author, and if an AI wrote that book, I think I'll somehow connect to it much less. Same when I look at a great piece of art, or if I am using some company's product, I want to know about the people that created it. So I think in both directions, of humans knowing what other humans want, and also humans caring about the humans behind something, that'll be a super important skill. And so I think learning that ability to create, to come up with new ideas, to choose ideas from among the many options presented by an AI, that'll be very valuable. I agree with you, the tools will change, but I also think familiarity with the tools of today and this new way of using computers is really important, and that'll be important for everyone, not just the tool builders but everybody, in the same way that if you can't use a mobile phone you're kind of at a huge disadvantage. But they're not that hard to use, and people learn, but the earlier in your career, or in life, you got familiar with them the better. You know, everybody in this room was familiar with it probably as long as you can remember, but I remember watching older people struggle with getting comfortable with a phone for the first time, as intuitive
as I thought they were. I think human adaptability is remarkable, and so I'm very happy that people no longer think it's weird or impressive that we can talk to a computer like we talk to a human, and it understands us, and it talks back to us, and it does things for us. But two years ago almost no one believed that was going to be possible anytime soon. You know, two years ago, what happens now with using ChatGPT was the stuff of sci-fi at best, and if you told the world this was going to be part of people's daily lives two years later, I think they would have said: of course not. So at the end of that we actually got Sam Altman to say that, you know, this is going to happen, and it's not going to happen in the far future, which does mean that AI is growing as quickly as some people expect it to, and I think this means that the future systems that OpenAI have in mind are clearly going to be some of the most capable systems that we've ever seen, and might even exceed some people's wildest predictions of what they're able to do. Now, this is something that we've widely speculated on for quite some time, but hearing Sam Altman say it himself is pretty reassuring. Something that he also said, in terms of the most important skills you can have for the future, is of course, I think, the two most important skills. Now, I do discuss this on the second channel, where I talk about post-AGI economics, which is just how the economy is going to move forward in terms of the job market, and in terms of where people are going to gain their economic agency and their economic value from. One of the key things that he does talk about is, number one, thinking about what other humans want: being able to provide that is going to be an essential skill, because humans are going to need a lot of different things than they need now. And number two, focusing on the
humanness aspect, which is something that I've been trying to do, because in the framework that I've adopted and that I talk about for the post-AGI world we're moving towards, humanness is something that I know people enjoy. For example, even on this channel, something that I try my hardest not to do, unless I'm completely ill, is use AI for any of the content creation, and this is because I truly believe that people value what other humans create. This is something that I've seen time and time again: usually when you see AI-generated artwork, it's frowned upon, and usually if there's an AI voiceover, people don't like listening to it as much. Of course there are some exceptions, but I think humanness in the workplace of the future is most certainly going to be valued quite a lot, because if we're moving to a future where AI is going to be, I guess you could say, outperforming humans one to one, I think humans are going to value other humans quite a lot. It's going to be something that I think we're going to see, and by focusing on that, it's definitely going to allow you to have a lot more value, rather than automating completely everything; that's a Hollywood thing. And this is a significant change the world has just gone through. I think this is probably, well, certainly, the most significant change to how we use computers since the touch screens on mobile phones, but I think it'll probably be much bigger than that. You will be able to just tell a computer, like you would tell a friend or an employee: I need this thing to happen, or what do you think about this, or can you help me out with this, or how do you think about this, and it'll just do it, for increasingly complex definitions of "it". You know, right now it can maybe write some code for you, edit a paper for you, help you analyze things, but someday it'll write a whole program for you, do a whole research project
for you, help you come up with new ideas, someday not in the far future. So I think this is a very big deal. Something that Sam Altman did talk about, the last bit that I want to reference once again, is that computers are basically going to just be able to do whatever you want. This is something that we've previously spoken about before, where we know that OpenAI are of course focusing on agents in the future. OpenAI are pretty much the market leader in terms of what they've been able to do with AI systems across the board, and it seems that in the future, when we have systems where we're able to say, okay, I need you to go ahead and do this and that, and it's not just a back-and-forth interface, where the AI actually takes ten steps to complete several different tasks across a range of subfields, it's going to be really fascinating to see how far that potential is pushed, to see what an AI system can really do versus what it can do today. I think, for the future, that's going to be rather fascinating. That happens with every technological revolution, and even though I'm confident we're going to gain much more than we lose, that doesn't mean we're not losing something, and we mourn losses for good reason. I'll tell you what I think we're not going to lose. I think we are not going to lose two things: the value and depth of human relationships, how much we care about other humans. I think people get excited to talk to AI friends for a while, and that'll be part of the future for sure, but you hear people who do that a lot say: man, there's really something about knowing this is another human. And this is deeply biologically wired in us, and I don't think it's going anywhere. We are so deeply wired to care about other humans, what other humans think, what other humans do, the connection we have with other humans. We're
not going to lose that. So, one of the things that Sam of

Humanness

course just talked about there, something that we previously spoke about, was the humanness. Now, I think the reason that a lot of people are talking about humanness in terms of one-to-one human relationships is because there's been a slight trend that most people haven't been paying attention to, unless you're actively engaging in it yourself, and that is of course this company here. You can see that this is Character AI, which is basically a company that is nearly catching up to ChatGPT in terms of users, and the reason that most people are talking about Character AI, in terms of the app and how active the users are, is because we have so many people engaging with things that aren't human. A lot of people are talking about how people are engaging with AI girlfriends, and how it's becoming an increasingly worrying trend, because people are engaging with things that, right now, aren't that good at impersonating what a human can do, but in terms of the future we know that these systems are going to get more and more lifelike; they're going to have better voices, probably real time. So people are wondering whether that is going to get even worse in the future. I think it probably is, considering the fact that there are already societal issues that exist between men and women in society. I'm not going to really get into those issues here, but there are always going to be issues, and the point I'm trying to make here, guys, is that if you now have a system where you don't need to engage with another human in order to get some emotional connection, the problem is that now you never need to ever talk to another human again, and that leads us to a situation where some humans might just
stop playing the game altogether, in terms of talking to another human, and that's an entirely separate issue. But like I said before, I think overall humanness will still be one of the most important things, because, as I've discussed from running this channel and seen in many other industries, when things are created by AI it is kind of frowned upon, and I think humans do want to engage and interact with other

Cognitive Service

humans. We are going from a world in which intelligence is limited and expensive to one in which it is abundant and cheap, and if you think about how much any of you could do if you had a massive amount of cognitive labor at your disposal, to build the ideas you want to see happen, to be useful to other people, to provide services and advice: you know, right now you can hire people and you can coordinate them, and it's kind of difficult and very expensive, and most people in the world cannot afford nearly as much, let's call it, cognitive service as they'd like. Not many people can afford great lawyers, for example; that's a very specialized, very expensive kind of cognitive service. If the cost of that, the availability of that, comes down by a factor of a hundred, or a factor of 10,000, and not just for legal advice, because I don't think anyone needs lots more legal back and forth, but for all the stuff we do want, great entertainment, great products and services, great education, great medical care, that is a profound shift to the world, so we're super excited about that, and I think that everyone can feel what the magnitude of that transformation looks like. Your second point is actually not a question that I've been asked many times, and I think it's a great one, so I appreciate it. One of the things that I learned at YC, at Y Combinator, and also what I learned as a kid studying the history of technology, is that you can never go too far in making a technology easy to use and accessible. Every 10% easier to use you can make a technology, maybe twice as many people use it, or they use it twice as much; there's this huge effect. And so we had this technology that we knew was pretty cool; we didn't know quite how much people were going to like it, but we had a sense they would. And we put it out first in an API, and some nerds had a good time with it, but not very many, and it was kind of unknown in the world. We put GPT-3 out in the API, I think it was in, like,
June, maybe it was July, of 2020, something like that. And, you know, people built stuff, but we started thinking then about what is the best, simplest, most natural user interface that we can build on this. And I'd had this observation that computers had trended over time to be as close as possible to the way we interact with other humans, or the way we interact with our physical world. So you started out with punch cards to program computers. I don't know how those people did it; sounds amazing to me, like what an unnatural way to use a computer, and they're literally sorting these things out on the floor. Wild, but they did it. And then you had command lines, and that was a little better; there's somewhat of a framework I can see for that, but I'm grateful I never really had to use those computers. And then you have the graphical user interface, and now, finally, we're getting towards something more like the way we interact with the world, and a lot of people started to use it. We knew how to point at things, and the mouse was a reasonable analog for that; the keyboard was kind of fake, but it was good enough, and this idea that we had these windows and graphical information displayed to us, like we look at the world, we look at a screen, there were images: it all kind of worked. The smartphone was then a huge revolution; we got to get rid of that keyboard and that mouse and just use our hands, again much closer to how we use the world. And so we were thinking about what was next in that, and sci-fi had predicted this, so it shouldn't have taken us as long to figure it out as it did, but you really just want a computer you can talk to like you talk to a human. We are so finely tuned to use language, and the nuance and sophistication of language: imprecise though it is, with all the problems that it has, we can communicate enormously complex ideas at a very high bandwidth with language. And so we said,
well, what if we just go back to this idea of chatbots? People tried it earlier; the problem was the chatbot didn't really understand you. Maybe now it can; let's try to build that. And then, building the chatbot itself, the chat interface itself, is obviously trivial, but the question was how do we tune the underlying model to be really helpful to you and really good at conversation. So yeah, right there you can see Sam Altman talks about the evolution of the kinds of systems that we've interacted with and how they've gotten better over time, and this might be the final, well, I wouldn't even say final, actually, if you're discussing Neuralink, but I think this is the step before the final way that humans are going to interact with technology, because I would say now we're moving to that stage where nearly everything is probably going to involve interacting with humans in a natural-language setting. I mean, some people are saying that in the future you won't need to code; you're going to be able to just say, code me this program, or fix this code, and so on, or go build me this program, which is very simple, interacting in a natural-language way. And in the future, of course, I'm talking about the final stage, where you're probably going to have something like a Neuralink device that is just going to be able to interpret your thoughts immediately and go ahead and do the work. So I think the movement from how we interact with systems now to the future is very telling for how OpenAI build their next systems, because he actually did say that he wants it to be very easy for everyone to use, and the easiness and the ease of use in terms of the UI design that they're going to be doing, and just how the entire system works, I think it's going to be really simple to engage with and interact with, even with regards to, you know,
many of the ways that, if you're trying to build an agent now, there's a lot of code involved, a lot of complexity involved. So I think OpenAI are going to solve that in the future, and it will be interesting to see how they tackle that problem. But like I said, one of the key things that I think most people miss is that OpenAI are not just a fantastic company that do amazing AI research; they're actually a company that is focused on building really good products, which means the services they provide are far superior to any other company's, in the sense that even if another company has a better AI system, OpenAI can usually present theirs in a better way, and I think that's really underrated and something most people don't take into account. Because whilst, yes, some other systems have beaten GPT-4 on the benchmarks, I think in the future just the ease of use of the systems is going to be a point where OpenAI do manage to stand out. ...in feedback, where we take the base model and get it to behave in a certain way, and that requires both deciding how it should behave, and then getting people to say: this is a good response, this is not a good response, or this fits the specification and this doesn't. And having diverse representation at all of those steps is very important, and also figuring out and agreeing as a society on what the behavior should be. I know I've mentioned this a few times, but it's such a big challenge; getting it right requires such a diverse input of voices. I think that'll be critical to the field going forward. Okay, thank you. You know, there have

Model Collapse

been two sides of the spectrum. On one side there has been Google's woke AI, where it's so progressive that it unfortunately portrays history inaccurately, which is of course a problem, because inaccuracy is just not what we're aiming for with AI systems. And on the other side, you don't want AI systems to be so regressive that they go in the other direction. So we do want a fair balance, and this is something that I think is going to be pretty hard to fix, because AI systems, I'm not sure they inherently understand what humans always want. Now, this also brings me to another problem that I was reading about in a research paper, which is the idea of model collapse. Essentially, this paper isn't really talking about woke AI and that kind of stuff, but it does talk about something pretty similar, in kind of the same area, and basically it talks about model collapse. Essentially, the widespread use of AI systems like LLMs, which we talk about all the time, could lead to a narrowing of human knowledge over time, as these models tend to generalize and focus on common and popular information rather than the rare and specialized knowledge that we are sometimes exposed to, which furthers discoveries and gives a broader picture. This knowledge collapse could actually harm innovation and lead to a less rich understanding of the world, and the diversity of ideas gets lost, because essentially what we have is AI systems that are cheaper, a lot of people produce content with AI, and because the AI is so centralized in terms of its beliefs and always kind of generalizes with the same kinds of answers, we can have this kind of problem in the future, and I'm not
sure how they're going to solve this problem. I think it's probably going to involve some synthetic data set, and probably some testing on trying to understand where the distribution lies, in terms of the variations in the content AI is generating. So I think it's going to be interesting to see how this goes in the future, because these AI systems are going to become more and more widespread, and the less diverse the content is in terms of ideas and innovation, in certain industries and certain topics, those really niche pieces of information, I think AI systems won't talk about them that much. And this is something that I've noticed myself when talking to AI systems: I will know a lot about a topic, and sometimes I'll ask an AI just to make sure that it knows, and sometimes it won't even bring up a certain point, and I'm like, what about this, why didn't you bring this up, and it's like, oh yeah, I do remember that. Just because it was on the niche end of the spectrum, on the tail end of the distribution, the AI systems sometimes forget to include it. And that's also something you guys should know: when you're using ChatGPT, it doesn't always give you the entire picture; it just gives you the centralized distribution of information that it has. So it's always important to still do research on top of ChatGPT, because a lot of the time it does miss some of the key points, and it's only really centralized. So, hi, thank you, my
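The tail-narrowing dynamic described above can be sketched in a few lines. This is a toy illustration, not the paper's actual experiment: it assumes a model that simply never reproduces knowledge below some popularity cutoff and is then retrained on its own output, so everything below the cutoff is gone after one generation. All names and numbers are made up for the demo.

```python
def next_generation(dist, cutoff=0.02):
    """One round of 'train on your own output': items whose probability
    falls below the cutoff are never regenerated, and the surviving
    mass is renormalized back to a probability distribution."""
    kept = {item: p for item, p in dist.items() if p >= cutoff}
    z = sum(kept.values())
    return {item: p / z for item, p in kept.items()}

# A long-tailed "knowledge" distribution: a few popular facts
# plus fifty niche ones at 0.005 probability each.
dist = {"popular_0": 0.30, "popular_1": 0.25, "popular_2": 0.20}
dist.update({f"niche_{i}": 0.25 / 50 for i in range(50)})

print(len(dist))                    # 53 distinct facts before training
print(len(next_generation(dist)))   # 3: the entire niche tail is lost
```

The point of the sketch is that the loss is one-way: once the tail is dropped and the output becomes the next training set, no later generation can recover it.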

AGI

name is Kiton, I'm a sophomore here, and my question is actually oriented towards AGI: the future after AI in tech becomes AGI, and how artificial intelligence has emotions, or is able to learn from itself, and that's where the potential risks come in, with AI actually having emotions. So I'm going to ask where OpenAI is at in terms of AGI, and how you plan on balancing out the risks and benefits. Thanks for the question. That is probably the thing we think the most about. I think AGI is now such a fuzzy term, and people use it in so many different ways. What you're asking about, I think, is closer to what I would call superintelligence: not something that can do the jobs a human can do, but, say, something that can do research, do AI research itself, maybe as well as all of OpenAI's researchers, and use that to self-improve, and how we think about what the world will look like when we get to that level, and how we make sure we confront the risks of such a system, which are very hard to deal with. We have new teams that help us think about being prepared for that world, and also technical safety work to think about how we can make sure humans stay in control of systems that are more capable than we are. I think it's somehow both going to be stranger than it seems, and also, in some other ways, much more continuous and much more like the world of today. Humans will still be in control, but what any human can do, and certainly what any group of humans or nation can do, will be vastly improved. And part of the reason that we try to talk about this, even though it scares people, or they think we're crazy, or both, is that if we're right, this is a huge deal and really important; it's going to impact all of us in a huge way, and we want the world to have this conversation now. We know that ChatGPT isn't that powerful; we know if it was just going to be ChatGPT, none of these things would really
matter, but given the steepness of the curve that we're on, of the exponential, we want the world to have this conversation so we jointly decide how to balance those risks and benefits. Thank you. Just to ask, how close do you think you are to achieving all that? It's super hard to say. I hesitate to give, well, I'm always happy to make predictions about what will happen, but "when," in research in particular, is super hard. But I would say that in this decade we get to very powerful systems. I personally don't believe we get all the way to that thing that can do AI research as well as OpenAI, but I've been wrong before. But I would say very powerful systems, where a lot of people will say: okay, for what I want to call AGI, this is a version of it, an early version, by the end of this decade. That would be my guess, but it could be much longer. Thank you.

Timeline

so here we have Sam Altman's deadline, well, not really a deadline, just a vague definition, and I think Sam Altman, rightly so, is very vague in his predictions, because he doesn't want to be someone known to continually blast out dates and then, when they don't happen, take a knock to his reputation for not being able to deliver on said dates. But he does say that pretty much by the end of this decade we should be able to get an AGI system, which is not far away. Six years goes by really quickly; we've seen how quickly the past years have gone, and I remember COVID like it was yesterday, so six years is not that long in terms of getting to an AGI system. And of course, the kinds of systems that we're going to be getting along the way will also be pretty fascinating. That is something he did say; he did of course leave a caveat in there by stating that it could be much longer, and pretty much anything could happen with the way the world is at the moment. But the point is that he also discusses superintelligence, which is an AI system that is able to do research at the level of OpenAI's engineers and then improve itself, so we did get a kind of definition in that respect. It is going to be interesting to see what the developments leading up to AGI will be, considering we're progressively increasing the capabilities of these systems. ...implications of AI, one that comes to my

Deep Fakes

mind is deep fakes and online impersonation of people, so how do you think the industry as a whole can sort of mitigate that problem? There's two different directions I can imagine that coming from. One is, when people say something themselves, or when they endorse a particular image, there's a cryptographic signature other people can verify, and you say: this really is a picture I took, or a quote I said. And we as a society decide that, you know, we're just flooded with generated media, and, back to that point about humans caring about other humans, we're going to have these networks of trust, and we'll say: all right, you know what, if you didn't sign that photo, I'm going to assume it's not real; and if it wasn't signed by someone that I trust, and I don't have that chain, I'm going to assume it's not real. So that could happen. The other thing that could happen is that we have enough rules in place on the powerful AI systems that exist that there's a watermarking process that everybody kind of enforces. But with either of those paths, there will be a huge amount of generated content on the internet, and I think society is just very quickly going to evolve to understand not to take it too seriously. Sam Altman also discusses here
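The first direction Altman describes, signing media so others can verify it, can be sketched with Python's standard library. Real provenance proposals, such as C2PA content credentials, use public-key signatures so anyone can verify without holding a secret; the HMAC below is a simplified symmetric stand-in, and the key and byte strings are invented for the demo.

```python
import hashlib
import hmac

# Stand-in secret; a real scheme would use a public/private key pair
# so that anyone can verify without holding the signing key.
CREATOR_KEY = b"creator-signing-key"

def sign(media: bytes) -> str:
    """Produce a tag tied to the exact bytes of the media."""
    return hmac.new(CREATOR_KEY, media, hashlib.sha256).hexdigest()

def verify(media: bytes, tag: str) -> bool:
    """Check that the media has not been altered since it was signed."""
    return hmac.compare_digest(sign(media), tag)

photo = b"raw image bytes"
tag = sign(photo)
print(verify(photo, tag))              # True: untouched media verifies
print(verify(photo + b"edit", tag))    # False: any alteration breaks the tag
```

This is exactly the "network of trust" behavior he sketches: a photo without a valid tag from a signer you trust is treated as unverified by default.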

Digital ID

something that is quite interesting, and I think there definitely needs to be some kind of regulation here, because currently the problem with AI-generated images is that we don't have a one-stop solution for verifying whether an image is real or completely fake, and there's this big discussion on how we're going to get to that solution. Sam talks about applying some kind of digital ID to an image in order to verify whether or not that image was generated by an AI system, and I think this is really important, because these systems are getting increasingly more comprehensive and easier to use. I mean, right now, Midjourney is pretty easy to use: you log on to Discord and so on. But as we get to systems where you can literally say, hey, create me 1,500 images of XYZ, and it's able to blast out high-quality images within seconds, I think we're going to need some kind of system like Google's SynthID, where any major AI image system is able to convincingly apply some kind of digital watermark that you can then run through some other system that notices whether it has been digitally altered or not. So I think whichever company manages to come up with that is probably going to be a winner, or it will come through some kind of legislation in the future, which I think we will definitely see, because it only takes one picture to have a devastating impact on what people believe.
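The watermarking direction can be illustrated with a toy least-significant-bit scheme. Production systems like Google's SynthID are far more robust, designed to survive cropping, resizing, and compression; this sketch, with made-up toy data, only shows the basic embed-and-detect idea of hiding a verifiable mark inside pixel data.

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide the watermark, one bit per pixel byte, in each byte's
    least significant bit (visually near-invisible in a real image)."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` watermark bytes back out of the pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

image = bytes(range(64))                # toy 64-byte "image"
marked = embed_watermark(image, b"AI")
print(extract_watermark(marked, 2))     # b'AI'
```

A detector built this way fails the moment someone re-encodes the image, which is precisely why real schemes embed the mark redundantly in a way that survives editing.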
