# Sam Altman Just REVEALED The Future Of AI..

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=fBz2bOhcYQw
- **Date:** 01.06.2024
- **Duration:** 17:25
- **Views:** 40,172
- **Source:** https://ekstraktznaniy.ru/video/14276

## Description

Join My Private Community - https://www.patreon.com/TheAIGRID
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid
🌐 Check out my website - https://theaigrid.com/


Links From Today's Video:
https://www.youtube.com/watch?v=oNP6W8bl0XI (8:52:00)

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries)  contact@theaigrid.com

#LLM #Largelanguagemodel #chatgpt
#AI
#ArtificialIntelligence
#MachineLearning
#DeepLearning
#NeuralNetworks
#Robotics
#DataScience

## Transcript

### Segment 1 (00:00 - 05:00)

So after a pretty rough week at OpenAI, Sam Altman did an interview where he responded to numerous comments and interview questions that were pretty interesting. I've got to be honest, this is probably one of the most insightful interviews we've gotten, because it actually reveals his opinions on some of the recent controversies surrounding OpenAI, plus a decent amount on the future direction of the frontier models we'll be getting. Now, one of the first things Sam Altman talks about is a subtle hint at a new architecture they are potentially working on, and I do say potentially because that is a very, very big "potentially." So we have synthetic data, and potentially a new architecture or a new method of making these AI systems more data-efficient in their training. I'm referencing this article here from The Information, released, I think, sometime in the past few months, around the time Sam Altman was being fired from OpenAI, so November of last year. We can see that it clearly states that Sutskever's breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models. So I'm going to show you the clip, because once I play this clip from the interview you'll understand exactly what he's saying. The interviewer asked: how are you training these new models, and are you just simply generating large amounts of synthetic data? Sam Altman reveals exactly what they are up to:

"I think what you need is high-quality data. There is low-quality synthetic data, there's low-quality human data. As long as we can find enough quality data to train our models, or ways to get better at data efficiency and learn more from smaller amounts of data, or any number of other techniques, I think that's okay. And I'd say we feel like we have what we need for this next model."

You can see here that he literally says that they have what they need for this next model. So whatever it is they are currently using to train the next model in terms of data efficiency, whether it be synthetic data or, I guess you could say, figuring out how to get a lot more high-quality data from whatever sources they use. One interesting trend we've seen recently is that data is of course very important, and with the Phi-3 series we've seen that smaller models are getting increasingly better, which is a direct byproduct of realizing how much high-quality data actually matters. The clip continues:

"We have what we need for this next model." "Have you created massive amounts of synthetic data to train your model on? Have you self-generated data for training?" "We have of course done all sorts of experiments, including generating lots of synthetic data. My hope is that, you know, it would be something very strange if the best way to train a model was to just generate, like, a quadrillion tokens of synthetic data and feed that back in. You'd say that somehow seems inefficient, and there ought to be something where you can just learn more from the data as you're training. I think we still have a lot to figure out, but yeah, of course we've generated lots of synthetic data to experiment with training on it. But again, I think the core of what you're asking is: how can you learn more from less data?" "That's interesting, I didn't know that."

So yeah, I think what OpenAI is trying to not really tell us, but what we can infer from this very short segment, is that OpenAI, Sam Altman, whatever you want to call it, they are focusing on getting these models to learn from whatever data they do have, whether it be synthetic data, human-generated data, whatever data it is. But I did want to include this, because I know a lot of people would forget this article, where Sutskever's breakthrough allowed OpenAI to overcome limitations on obtaining enough high-quality data to train new models. This was, I think, around the time GPT-5 started training, forgive me if that's wrong, but apparently this was the major obstacle to developing next-generation models, and of course the research involved using computer-generated rather than real-world data, like text or images pulled from the internet, to train new models. So it's clear there was some kind of breakthrough before Altman's firing, and whatever kind of breakthrough it was, Altman gives a small hint about it here, and I'm guessing they're using it to train these new models. This really does make me excited to see the benchmarks for GPT-5, and like I said before, I think with the next frontier models, probably with GPT-5, there might even be capabilities that, I guess you could say, aren't just emergent but something we can't even measure on other systems, that just completely

### Segment 2 (05:00 - 10:00)

blow things out of the water. Now, here is where Sam Altman talks about something that is probably most concerning to those of us who will be living in society in, I guess you could say, the normal tier, because this is where he discusses post-AGI economics. Currently our economic model is labor-based: you exchange your labor for, I guess you could say, money. Some people say it's value, or time, whatever it is; the majority of people just exchange their labor for money. And if we are moving into a society where, depending on whatever kind of work you do, whether physical or cognitive, white collar or blue collar, that changes over a long period of time, you have to wonder how the society we all live in is going to change. Because if the social contract changes such that human value is no longer based on exchanging your labor for an income, how does the social contract change? This is what Sam Altman talks about:

"I still expect, although I don't know what, and this is over a long period of time, this is not a next-year or the-year-after kind of thing, but over a long period of time, I still expect there will be some change required to the social contract, given how powerful we expect this technology to be. I'm not a believer that there won't be any jobs; I think we always find new things to do. But I do think the whole structure of society itself will, you know, be up for some degree of debate and reconfiguration." "And that reconfiguration will be led by the large language model companies?" "No, no, just the way the whole economy works, and what society decides we want to do. And this has been happening for a long time as the world gets richer; social safety nets are a great example of this. I expect we will decide we want to do more there."

So yeah, it's going to be really interesting, because this is where you have the concept of UBI. In some videos coming soon I'm going to really be talking about how, I guess you could say, you can prepare for what's coming, because I think there are some hints as to what the social contract is going to be. Of course there's this idea of universal basic compute, which Sam Altman has basically tinkered with before: if there is an ASI-level system, an artificial superintelligence that can do a lot, I guess you could say everyone gets a certain allocation, and that resource is probably a lot more valuable than money because it can generate a lot of different things for every individual. Maybe it's going to be up to you to allocate how you want to use that resource; if AGI is here, everyone gets, like, an hour a day. I know it's pretty hard to explain how that works, but instead of exchanging money there's going to be something else, and it's pretty hard to envision. That's the whole reason this is very hard to talk about: we're moving into a society that, I guess you could say, hasn't been done before. So it's going to be very interesting to see if we actually get it right, and probably the AI will even help us design how that society functions and how it's organized.

"As you train this next iteration, let's stick with the next iteration of the model, as you train it, what level of improvement do you think we're likely to see? Are we likely to see a kind of linear improvement, or are we likely to see asymptotic, any kind of exponential, very surprising improvement?" "Great question. We don't expect that we're near an asymptote. But you know, this is like a debate in the world, and I think the best thing for us to do is just show, not tell. There's a lot of people making a lot of predictions, and I think what we'll try to do is just do the best research we can, and then figure out how to responsibly release whatever we're able to create. I expect that it'll be hugely better in some areas and surprisingly not as much better in others, which has been the case with every previous model. But this feels like the conversation we've had at every other model release: when we were going from 3 to 3.5 and 3.5 to 4, there was a lot of debate about whether it was really going to be that much better, and if so, in what ways. And the answer is there still seems to be a lot of headroom, and I expect that we will make progress on some things that people didn't expect to be possible."

So yeah, that was one of the kind of surprising things: there are actually areas of improvement that we didn't even think were possible, which is why I'm saying these next iterations of models probably aren't just going to blow things out by our current standards; we're probably going to have to rethink how we even use these systems. And I think that might be the biggest, I wouldn't say uproar, but the biggest shock to us, because currently people just look at the standard benchmarks like MMLU and so on, and the other standard math benchmarks like GSM8K,

### Segment 3 (10:00 - 15:00)

benchmarks which, whilst yes, they're very useful for determining on a quantitative basis how the models are actually performing, I think the Elo arena, those areas that are more qualitative, where you can actually talk to the model and see how it works with an actual user, matter too, rather than benchmarks the model may have been trained on, with potential contamination, some of which even actually have errors. But the fact that he's clearly stating here that there's a lot of headroom for a lot of growth, and the fact that in previous interviews, if you haven't seen them, he's basically said that GPT-4 is very dumb; I've said before that that is a very bold statement, considering many people have meaningfully used GPT-4 in their lives to improve their workflows. I'm definitely someone who's been able to benefit from that, because it's a very useful tool for brainstorming, but the fact that the tool is regarded as dumb is kind of an eye-opener as to what is possibly to come in the next iteration of models. I'm really intrigued as to what areas we're not even thinking about that are going to be improved upon, ones most people think really can't be, so that will be something I'm going to be looking out for. And as for the kind of growth, of course there doesn't seem to be any kind of plateau, so it seems we're going to be getting a huge new increase in terms of reliability and in terms of overall reasoning capabilities.

Now, there was also something Sam Altman responded to, and that was of course Helen Toner. When your name is in the public and people are making pretty big statements about you, especially when you're a public figure at one of the hottest AI companies, of course it's important to address them. Now, if you're not familiar with this, essentially Helen Toner said that Sam Altman was fired for outright lying, and she was on the board that made the decision to fire him in November. So Sam Altman responds to Helen Toner and gives his side of the story:

"Look, I respectfully but very significantly disagree with her recollection of events. But I will say that I think Ms. Toner genuinely cares about a good AGI outcome, and I appreciate that about her. I wish her well. I probably don't want to get into a line-by-line refutation here. When we released ChatGPT, it was, you know, at the time called a low-key research preview. We did not expect what happened to happen, but we had of course talked a lot with our board about a kind of research and release plan that we were moving towards. We had at this point had 3.5, which ChatGPT was based on, available for I think about eight months or something like that; we had long since finished training GPT-4, and we were figuring out a sort of gradual release plan for that. But yeah, I disagree with her recollection of events."

So yeah, I think it was important for Sam Altman to at least give his side of the story, because I think it was probably one of the most incredible weeks, and that's very incredible to say considering there was a week with all of that OpenAI board drama. I mean, as they say, what would OpenAI be without its drama? And continuing on from the drama, there was of course the Scarlett Johansson fiasco, in which there was a lot of drama with regard to the voices. I'm not going to cover the entire fiasco again, because I'm sure everyone pretty much knows about it by now, but he does respond to it:

"Let's talk about the Scarlett Johansson episode, because there's something about it I don't understand. So you demonstrate these voices; she then puts out a statement which gets a lot of attention, everybody here probably saw it, saying: they asked me if they could use my voice, I said no; they came back two days before the product was released, I said no again; they released it anyway. OpenAI then put out a statement saying: not quite right, we had a whole bunch of actors come in and audition, we selected five voices; after that we asked her whether she would be part of it, she would have been the sixth voice. What I don't get about that is that one of the five voices sounds just like Scarlett Johansson. So it sounds almost like you were asking for six voices, two of which sound just like her, and I'm curious if you can explain that to me." "Yeah, it's not her voice. It's not supposed to be. I'm sorry for the confusion; clearly you think it is. People are going to have different opinions about how much voices sound alike, but it's not her voice, and we're not sure what else to say."

All right, then of course we had something interesting, and I know this might not be interesting to everyone, but I think this is probably one of the most important things, because this is interpretability

### Segment 4 (15:00 - 17:00)

research, and if you don't know what this is, it's basically where they try to figure out what on Earth is going on inside the AI's mind when it makes certain decisions. He also kind of hints that OpenAI have, I guess, made some kind of breakthrough, or some kind of finding, that maybe has pointed them in the right direction. The reason I think this is that, if you take a look at how OpenAI have acted, they don't seem particularly concerned that their superalignment team has disbanded. So maybe before the superalignment team disbanded there were, you know, meaningful research efforts going into, I guess you could say, interpretability, and I'm guessing they figured out that they've got the majority of the bases covered. You can listen to Sam Altman say that in this clip, where he's basically saying they've got most of the stuff covered, so maybe they're further along than we thought in that respect:

"I think that safety is going to require a whole-package approach, but this question of interpretability does seem like a useful thing to understand, and there are many levels at which that could work. We certainly have not solved interpretability. There are a number of things going on that I'm very excited about, but nothing close to where I would say, yeah, you know, everybody can go home, we've got this figured out. It does seem to me that the more we can understand what's happening in these models, the better, and I think that can be part of this cohesive package of how we can make and verify safety claims." "But if you don't understand what's happening, isn't that an argument to not keep releasing new, more powerful models?" "Well, we don't understand what's happening in your brain at a neuron-by-neuron level, and yet we know you can follow some rules, and we can ask you to explain why you think something. There are other ways to understand a system."

And of course, if you've been paying attention in the community, there have actually been a lot of, I guess you could say, research efforts, one of those being Golden Gate Claude. So let me know what you guys thought about this interview, if you thought it was insightful, if you're excited about some of the new things coming with the models, whether you can hardly wait or are just too excited for GPT-5. With that being said, if there was anything I did miss, leave a comment down below, and I will see you in the next one.
