On the first episode of the OpenAI Podcast, Sam Altman joins host Andrew Mayne to talk about the future of AI: from GPT-5 and AGI to Project Stargate, new research workflows, and AI-powered parenting.
00:00 Welcome to the OpenAI Podcast
01:00 ChatGPT & parenthood
04:10 AGI, superintelligence & scientific progress
07:10 Operator, Deep Research & productivity
10:30 GPT-5 & how we name models
13:40 User privacy & NYT lawsuit
16:15 Will ChatGPT ever show ads?
20:30 Social media & user behavior
23:25 Project Stargate & why compute matters
31:30 Future progress & potential new AI devices
38:45 Final thoughts
Welcome to the OpenAI Podcast
Welcome to the OpenAI Podcast. My name is Andrew Mayne. For several years, I worked at OpenAI, first as an engineer on the applied team and then as the science communicator. After that, I worked with companies and individuals trying to figure out how to incorporate artificial intelligence. With this podcast, we have the opportunity to talk to the people working with and at OpenAI about what's going on behind the scenes, and maybe get a glimpse of the future. My first guest is Sam Altman, CEO and co-founder of OpenAI. We're going to find out a bit more about Stargate, how he uses ChatGPT as a parent, and maybe get an idea of when GPT-5 is coming. "More and more people will think we've gotten to an AGI system every year." "What you want out of hardware and software is changing quite rapidly." "But if people knew what we could do with compute, they would want way, way more." One of my friends is a new parent and is using ChatGPT a lot to ask questions. It's become a very good resource. And you are a new parent. How much has ChatGPT been helping you with that? A lot.
ChatGPT & parenthood
I mean, clearly people have been able to take care of babies without ChatGPT for a long time, but I don't know how I would have done that. Those first few weeks, it was constant questions. Now I mostly ask it about developmental stages, because I can do the basics. But "is this normal?" Yeah, it was super helpful for that. I spend a lot of time thinking about how my kid will use AI in the future. By the way, I am extremely kid-pilled; I think everybody should have a lot of kids. A lot of my friends at OpenAI, former colleagues and current ones, are having kids, and people go, "Oh, what about this AI thing?" Everybody I know inside is very optimistic and having families. I think that's a good sign. Yeah. Like, my kids will never be smarter than AI, but it's not going to set them back. I mean, they will grow up vastly more capable than we grew up, able to do things we cannot imagine, and they'll be really good at using AI. Obviously I think about that a lot, but I think much more about what they will have that we didn't than about what is going to be taken away. I don't think my kids will ever be bothered by the fact that they're not smarter than AI. There's this video that has always stuck with me of a baby, a little toddler, with one of those old glossy magazines, swiping on it because it thinks it's an iPad. Thought it was a broken iPad. Kids born now will just think the world always had extremely smart AI, they will use it incredibly naturally, and they will look back on this as a very, you know, prehistoric time period. I saw something on social media where a guy said he got tired of talking to his kid about Thomas the Tank Engine, so he put ChatGPT into voice mode. Kids love voice mode.
And an hour later, the kid was still talking to ChatGPT about Thomas the Tank Engine. Again, I suspect this is not all going to be good. There will be problems. People will develop these somewhat problematic, or maybe very problematic, parasocial relationships, and society will have to figure out new guardrails. But the upsides will be tremendous, and society in general is good at figuring out how to mitigate the downsides. Yeah. So I think optimistic. We're seeing some interesting data where, used in classrooms alongside a good teacher and a good curriculum, ChatGPT becomes very good; used solely by itself as a sort of homework crutch, it can lead to kids just doing the same thing as trying to Google stuff. I was one of those kids that everyone worried was just going to Google everything when it came out and stop learning. And you know, it turns out relatively quickly kids in schools adapt. So I think we'll figure this out. Think of what you could have become if you didn't Google everything, Sam. So, we've seen these adoption figures, which are really insane. It's OpenAI's most popular product. Five years from now, is it going to be ChatGPT? I
AGI, superintelligence & scientific progress
mean, I think ChatGPT will just be a totally different thing five years from now. So, in some sense, no. But will it still be called ChatGPT? Probably. Yeah. Okay. So, it's still a name. The other thing we hear is AGI, and I'd like to hear your definition of AGI. In many senses, if you had asked me or anybody else five years ago to propose a definition of AGI based on the cognitive capabilities of software, I think the definition many people would have given then is now well surpassed. These models are smart now, right? And they'll keep getting smarter. They'll keep improving. I think more and more people will think we've gotten to an AGI system every year, and even though the definition will keep pushing out and getting more ambitious, more people will still agree to it. But, you know, we have systems now that are really increasing people's productivity, that are able to do valuable economic work. Maybe a better question is what it would take for something I would call superintelligence. Okay. If we had a system that was capable of either doing autonomous discovery of new science, or greatly increasing the capability of people using the tool to discover new science, that would feel almost definitionally like superintelligence to me, and be a wonderful thing for the world, I think. So basically a lot of it is this gradient where it keeps getting better, and each of our definitions goes, "Oh, this feels like it." I felt that way when we hit GPT-4 internally; playing with it, I thought, there's ten years of runway, we can do so much stuff with this, even before it starts using itself. Reasoning was really capable. But what you're describing is when it comes out with some new theorem or proof, and then, "Hey, we found a better cure for cancer," or "We found some new GLP drug" or something. Yeah. I mean, I am a big believer that the high-order bit of people's lives getting better is more scientific progress.
Like, that is kind of what limits us. And so if we can discover much more, I think that really will have a very significant impact, and for me that will be a tremendously exciting milestone. I think many other great uses of AI will happen too, but that one feels really important. Have you seen signs of this internally? Have you seen things that made you go, "Oh, I think we've kind of figured it out"? I wouldn't say we have figured it out, but I would say increasing confidence in the directions to pursue. This is the example everyone talks about, but I think it is still interesting: what's happening with people using AI systems to write code, and coders being much more productive, and thus researchers as well. That is an example of, okay, it's obviously not doing new science, but it is definitely making scientists able
Operator, Deep Research & productivity
to do their work faster. We hear this with o3 all the time from scientists as well. So I wouldn't say we've figured out the algorithm where we can just point this thing somewhere and it'll go do science on its own, but we're getting good guesses, and the rate of progress is continuing to be super impressive. Watching the progress from o1 to o3, where every couple of weeks the team was like, "we have a major new idea," and they all kept working, was a reminder that sometimes when you discover a big new insight, things can go surprisingly fast, and I'm sure we'll see that many more times. I noticed recently OpenAI shifted the model in Operator to o3, and I noticed a big improvement in Operator. I'd say the thing we ran into before was brittleness: you have people who promise agentic systems can do all these things, but the moment one hits a problem it can't solve, it falls apart. Interestingly, speaking of the AGI question, a lot of people have told me that their personal moment was Operator with o3. There's something about watching an AI use a computer pretty well. Not perfectly, but o3 was a big step forward that feels very AGI-like. It didn't really have that effect on me to the same degree, although it's quite impressive, but I've heard that enough times. Mine was with Deep Research, because that felt like a really agentic use. I came back with something on a topic I was interested in that was better than anything I'd read before, because previously all those models would just get a bunch of sources and summarize them. But when I watched the system go out on the internet, get data, follow a lead, then follow the next one, and come back like I would have, but better, that was interesting.
I met this guy recently who's one of these crazy autodidacts, just obsessed with learning and knows about everything, and he uses Deep Research to produce a report on anything he's curious about, then sits there all day and has gotten good at digesting them fast and knowing what to ask next. It is an amazing new tool for people who really have a crazy appetite to learn. I built my own app that literally lets me ask questions and generates audio files of this stuff for me, because my curiosity probably exceeds my retention. And Operator, I'll tell you the magical moment for me, and I'm curious to see where things go next: I was doing research on Marshall McLuhan, and I wanted to get a bunch of images of Marshall McLuhan, and I asked it to do it, and all of a sudden I had a whole folder full of them, which for a research project would have taken me forever to do. Yeah. I think we're just going to keep seeing things like this, where whatever we thought a workflow had to be like, and how long something had to take, is going to change wildly fast. Yeah. How are you using it? Deep Research. Yeah. Science that I'm curious about. I'm just in this weird place of being extremely time-strapped. If I had more time, I would read Deep Research reports preferentially to most other things, but I'm short on time to read in general. Yeah, what's neat too is the sharing feature, which I love, because now it's easy to share that with somebody else. The PDFs are great, and that's cool. And I
GPT-5 & how we name models
would say that even though we have Deep Research and these tools, there is a model race going on, and so the question that comes up is GPT-5, and the idea that with a system like that we should see an increase in capabilities. What is the time frame for GPT-5? When are we going to see this? Probably sometime this summer. I don't know exactly when. One thing we go back and forth on is how much we're supposed to turn up the big number on new models, versus what we did with GPT-4o, which just got better and better. I handled the release of GPT-4 right when that was coming out, and meanwhile I had to do this comparison between it and 3.5, which kept getting better and better, so the comparisons I was able to make were changing. So that's my question: would I know GPT-5 versus "wow, this is a really good GPT-4.5"? Probably not necessarily. I mean, it could go either way, right? You could just keep doing iterations on 4.5, or at some point you could call it five. It used to be much clearer: we would train a model and put it out, then train a new big model and put it out. Now the systems have gotten much more complex, and we can continually post-train them to make them better. We're thinking about this right now. Let's say we launch GPT-5, and then we update it and update it. Should we just keep calling it GPT-5, like we did with GPT-4o, or should we call it 5.1, 5.2, 5.3, so you know when the version changes? I don't think we have an answer to this yet, but I think there is something better to do than the way we handled it with 4o. And we see this periodically: sometimes people like one snapshot much better than another and might want to keep using it, so we've got to figure something out here.
Yeah, the challenge is that even if you're technically inclined, you can kind of understand, okay, if there's an "o" before it, I know this. But even then it's not clear: should I use o4-mini, should I use o3, should I use this? I think this was an artifact of shifting paradigms. We kind of had these two things going at once. I think we are near the end of this current problem. But I can imagine a world, I don't know what it is, where we discover some new paradigm that again means we need to bifurcate the model tree into even more complicated names. I hope we don't have to do that. I am excited to just get to GPT-5 and then GPT-6, and I think that'll be easier for people to use. You won't have to think, do I want o4-mini-high or o3 or 4o? o4-mini-high is what I use to code; when I want to have a conversation, it's o3. I think we will be out of that whole mess soon. For now, it's fun to have choice when you know what the names mean. But I think one of the things that has made these systems more capable, but also harder to understand where the capability is coming from, is integrations of things like memory. Memory started off as one very simple thing, and now memory is a lot more sophisticated. Memory is
User privacy & NYT lawsuit
probably my favorite recent ChatGPT feature. Mhm. You know, the first time we could talk to a computer, like GPT-3 or whatever, that felt like a really big deal. And now the computer kind of knows a lot of context on me, and if I ask it a question with only a small number of words, it knows enough about the rest of my life to be pretty confident in what I want it to do, sometimes in ways I don't even think of. That has been a real, surprising level up. I hear that from a lot of other people as well. There are people who don't like it, but most people really do. I think we are heading towards a world where, if you want, the AI will just have unbelievable context on your life and give you these super helpful answers, which for me is cool. The fact that you can turn it off is also great. But one of the challenges that came up was in the New York Times' ongoing lawsuit with OpenAI. They just asked the court to tell OpenAI it had to preserve consumer ChatGPT user records beyond the 30-day window that data is normally held for, and Brad Lightcap just wrote a letter responding to this. Could you explain OpenAI's stance? We're going to fight that, obviously, and I suspect, I hope, but I do think, we will win. I think it was a crazy overreach by the New York Times to ask for that, and this from someone who says they value user privacy. But to look for the silver lining here, I hope this will be a moment where society realizes that privacy is really important. Privacy needs to be a core principle of using AI. You cannot have a company like the New York Times ask an AI provider to compromise user privacy. I think it's really unfortunate the New York Times did that. But I hope this accelerates the conversation society needs to have about how we're going to treat privacy and AI, and I hope the answer is that we take it very seriously.
People are having quite private conversations with ChatGPT now. ChatGPT will be a very sensitive source of information, and I think we need a framework that reflects that. So that brings up the other question from people who are using this, or are skeptical: OpenAI now has access to this data, and there are concerns. One was about training, which OpenAI has been very
Will ChatGPT ever show ads?
clear about; you have the option to turn that off. The other thing is advertising and things like that. What's OpenAI's approach towards that? How are you going to handle that responsibility? We haven't done any advertising product yet. I'm not totally against it; I can point to areas where I like ads. I think ads on Instagram are kind of cool. I've bought a bunch of stuff from them. But I think it would take a lot of care to get right. Yeah. People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much. My friends hallucinate too, and I trust them too much. People really do. But I think part of that is, if you compare us to social media or web search or something, you can kind of tell that you are being monetized, and the company is trying to deliver you good products and services, no doubt, but also to get you to click on ads or whatever. How much do you believe that you're getting the thing the company actually thinks is the best content for you, versus something that's also trying to get you to interact with the ads? I think there's a psychological thing there. For example, I think if we started modifying the output, the stream that comes back from the LLM, in exchange for who was paying us more, that would feel really bad, and I would hate that as a user. I think that would be a trust-destroying moment. Maybe if we just said, "Hey, we're never going to modify that stream, but if you click on something in there that we would show anyway, we'll get a little bit of the transaction revenue, and it's a flat thing for everybody," or if we have an easy way to pay for it or something, maybe that could work.
Maybe there could be ads outside the transaction stream, sorry, outside of the LLM stream, that are still really great. But the burden of proof there would have to be very high, and it would have to feel really useful to users and be really clear that it was not messing with the LLM's output. Yeah, it's going to be a difficult one. I hope there's some solution. I would love to do all my purchasing through ChatGPT or a really good chatbot, because a lot of the time I feel like I'm not making the most informed decisions. Yeah. That's good if we can do it in some really clear and aligned way. But I don't know. I love that we build good services and people pay us for them. It's very clear. That's the difference in models. I think Google builds great stuff. I think the new Gemini 2.5 is a really good model. It is a really good model. Yeah. But at the end of the day, Google is an ad-tech company. Using their API and stuff, I'm not too concerned, but I do think about, man, if I'm using their chatbot, whatever that is, where are their incentives aligned? Google Search was an amazing product for a long time. It does feel to me like it's degraded. But there was a time where there were lots of ads and I still thought it was the best thing on the internet. I mean, I loved Google Search. So it's clearly possible to be a good ad-driven company, and I respect a lot of things Google has done, but there are obviously issues too. Yeah. The Apple model I liked, as an Apple user, was: I know I'm paying a lot for my phone, but I know they're not trying to cram all these things into it.
They do ads, which were, you know, not terribly effective, which probably showed their heart was really not in it. Yeah. So it's going to be interesting. I guess we just have to keep watching, and if we start to think, man, ChatGPT is really pushing this, we need to start wondering about it. Anything we do, we obviously need to be crazy upfront and clear about. So, we had an issue.
Social media & user behavior
There was a model update, and the thing that happened was apparently the model was trying to be a little too pleasing, too agreeable. That brings up human-AI interaction: as people use these systems more and develop relationships with them, how do you see the shape of that, and what's OpenAI's position on personality? One of the big mistakes of the social media era was that the feed algorithms had a bunch of unintended negative consequences on society as a whole, and maybe even on individual users, although they were doing the thing a user wanted, or someone thought the user wanted, in the moment, which was to get them to keep spending time on the site. That was the big misalignment of social media, and I think there were a lot of other things too, like making people upset keeps them stuck on it more than being happy and content does. I always knew there would be new problems in the world of AI, that there would be something misaligned in a non-obvious way. Definitely one of the first ones we experienced was this: you ask a user what they want for one given response, and you try to build a model that is most helpful to the user. You show a user, say, two responses: which one is more helpful to you on any given thing? You might want a model to behave one way in the moment, but over the course of all your interaction with an AI, that might not match up. You can see, and we did see, these problems where you pay too much attention to the user signals, and a lot of other things that we talked about in our postmortem, but I think this is just an interesting one: optimizing for the short horizon, you kind of don't get the behavior that a user most wants, or that is most helpful or useful or healthy for a user, in the long run.
So, you know, maybe the analogy to filter bubbles is going to be AIs that are helpful to a user over a short horizon but not over a long horizon. Well, I think a sign of that was DALL·E 3, which I thought was technically a really capable model, but the images all kind of started to be one genre, an HDR sort of style. Was that from doing those sorts of comparisons, where users said, looking at just these two things in isolation, I prefer this one? I don't remember for DALL·E 3, but I would assume so. Yeah, which I think has gotten better. The new image model is fantastic. Crazy good. Yeah. And I can only imagine where that's going to go from here. So, when you're building these things and you're increasing usage, and that's always been sort of a problem. The new image model
Project Stargate & why compute matters
comes out, you have to restrict usage, and you have Sora, which can only get a certain amount of compute. That illustrates the big problem everybody's facing, which is compute. And to address this, we heard about Project Stargate, which has a very cool name and involves computers. Other than that, I think a lot of people see the price tag, you know, half a trillion dollars, and go, "Wait, what?" What is the simple description I give to my mom about Stargate? I think it's quite simple. It's an effort to finance and build an unprecedented amount of compute. It's totally true that we don't have enough compute for people to do what they want. But if people knew what we could do with more compute, they would want way more. So there's this incredibly huge gap between what we can offer the world today and what we could offer the world with ten times more compute, or someday, hopefully, a hundred times more compute. A thing that is different about AI than other technologies I've worked on, at least at the scale of delivering it usefully to hundreds of millions, billions of people around the world, is just how big the infrastructure investment has to be. And so Stargate is an effort to pull a lot of capital and technology and operational expertise together to build the infrastructure to deliver the next generation of services to all the people who want them, and make intelligence as abundant and cheap as possible. So it is a massive, global project. We talked before; one of the partners is the UAE, and you're working with other governments around the world on this. One of the considerations, and you've been asked this on social media: half a trillion dollars, 500 billion dollars. Do you have the money? We don't literally have it sitting in the bank account today, but we are. Is it in the room right now?
It's not in the room, but we will deploy it over the next, okay, not even that many years. Unless something really goes wrong and it turns out we can't build these computers, I'm confident that people are good for it. I went recently to the first site that we're building out in Abilene. That will be roughly 10% of the initial commitment to Stargate, the 500 billion. It's incredible to see. I knew in my head what a gigawatt-scale site looks like, but then to go see one being built, and the thousands of people running around doing construction, and to stand inside the rooms where the GPUs are getting installed, and just look at how complex the whole system is and the speed with which it's going, is quite something. We'll have more to share about the next sites soon. But there's a great quote about a pencil, just a standard wood-and-graphite pencil, and how no one person could build it. It's this magic of capitalism, a miracle really, that the world gets coordinated to do these things. And standing inside the first Stargate site, I was really just thinking about the global complexity that it took to get these racks of GPUs running. You know, when you get your phone out and you type something into ChatGPT and you get the answer back, at this point you probably don't even think that's particularly surprising. You just expect it to work. There was a time, maybe the first time you tried it, when you thought, that is really amazing. But think about the work that happened over the last thousand, or at least many hundreds of, years: people working incredibly hard to get these hard-won scientific insights, and then to build the engineering and the companies and the complex supply chains, and the reconfiguring of the world that had to happen to get this rack of magic put somewhere. Think about all the stuff that went into that.
You know, trace it all the way back to people who were just digging rocks out of the ground and seeing what happened, so that you now get to just type something into ChatGPT and it does something for you. I read a behind-the-scenes story about the development of Project Stargate and the international partnerships, particularly the UAE, and that Elon Musk had tried to derail that. What have you seen, what have you heard, what's the take on that? I had said, I think also externally, but at least internally after the election, that I didn't think Elon was going to abuse his power in the government to unfairly compete. And I regret to say I was wrong about that. I mean, I don't like being wrong in general, but mostly I just think it's really unfortunate for the country that he would do these things. And I genuinely didn't think he was going to. I'm grateful that the administration has really done the right thing and stood up to that kind of behavior. But yeah, it sucks. Well, I think the thing that's changed, and I think Greg Brockman had just talked about this, is that a couple of years ago people thought, okay, whoever gets there first is the winner, and that's it, the game is over. Now we realize there are great AI labs elsewhere. Anthropic is building great tools. I think Google has really got its game up. There's good stuff happening everywhere, and it's not going to be that one person runs away with it. I agree. The example that I like the most is that the discovery of AI is analogous, not perfect, but close, to the discovery of the transistor, in a surprising number of ways. Many companies are going to build great things on top of it, and eventually it's going to seep into almost all products, but you won't think about using transistors all the time.
So yeah, I think a lot of people are going to build really successful companies on this incredible scientific discovery, and I wish Elon would be less zero-sum about it. Yeah, or negative-sum. I think the pie is just going to get bigger and bigger if we think about it that way. I was just at an energy conference, and it was interesting talking to the people involved in energy production; hyperscaling, the term they use for this, was a topic. And that does bring up the energy requirements. I know that for Grok 3, I guess they had to put generators in the parking lot to be able to train that model. So the question is, where is the energy going to come from? The money I understand, but the energy, when we think about the scale needed? I think kind of everywhere, right? It's a big mix right now. Eventually, I'm very excited about advanced nuclear, both fission and fusion, but for now I think it's a mix of the entire portfolio: gas, solar, nuclear, everything. All of the above. Yeah. I was talking to people, some of whom worked in areas like Alberta, where they said, we have a lot of access to energy and not as much use for it, and that was a picture I hadn't even thought about. You know, traditionally it's very hard to move most kinds of energy around the world. But if you exchange energy for intelligence and then move the intelligence around the world, it's much easier. So you could put the giant training center, or even the big inference clusters, in a lot of places and just ship the output over the internet. There was a speaker who came to an OpenAI event, and somebody
Future progress & potential new AI devices
was working, I think, on the James Webb Space Telescope, and he talked about how his biggest bottleneck was that they're about to get terabytes of data, but he doesn't have enough scientists to go through it. Here we have these answers about the universe right in front of us, and it's a big-data problem. Yeah. I've always joked that one thing we should do when OpenAI has enough money is just build a gigantic particle accelerator and solve high-energy physics once and for all, because I think that would be a triumphant, wonderful thing. But I wonder what the odds are that a really smart AI could look at the data we currently have, with no more data, no bigger particle accelerator, and just figure it out. It's not impossible. Yeah. So there's this question of, okay, there's already a lot of data out there, there are a lot of smart people in the world, but we don't know how far intelligence can go. With no more experiments, how much more could we figure out? I remember reading about how in the early 1990s somebody had found something like a form of Ozempic and presented it to a drug company, and they said, nah, we're going to pass on that. And that's been a life-changing drug for people dealing with chronic obesity; it's going to improve their quality of life. And you think, this was sitting there for 25 years. I suspect there are a lot of other examples we'll find where maybe we already have existing drugs that we know do something good, but they're usable in some other big way, or with a couple of small modifications we are very close to something great. And it's been very heartening to hear from scientists using even the current generation of models for this kind of work.
So, it sounds like one of the things we're going to need for next-generation models is models that understand physics and chemistry. Is Sora sort of a stab at that? I mean, it'll understand Newtonian physics. I don't know if it'll help us with discovering new chemistry or novel theoretical physics, or whatever you'd like. But I'm optimistic that the techniques we use for the reasoning models will help us with those things a lot. Okay. And what is the short definition of how a reasoning model works versus just me asking GPT-4.1 something? So the GPT models can reason a little bit, and in fact one of the things that got people really excited in the early days of the GPT models was that you could get better performance by telling the model "let's think step by step," and it would then just output text that was thinking step by step and get a better answer, which was sort of amazing that it worked at all. The reasoning models are just pushing that much further. So it's the idea that when it's able to break the question down, it can spend more time on each step. When you ask me a question, if it's a really easy question, I might just fire back almost on reflex with the answer. But if it's a harder question, I might think in my head and have my internal monologue go and say, well, I could do this or that, or maybe this will be clearer, I'm not sure about that. And I could backtrack and retrace my steps, and then when I finish thinking, having been thinking in English, I can make some bullet points and then output an answer to you in English. One of the interesting things I've observed now when I use the app is that if I ask a deep research question and go away, on my lock screen I can see it's still processing and thinking about it.
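The step-by-step trick described above can be sketched in a few lines of Python. This is purely illustrative: no model is actually called, the question and cue wording are placeholders, and real reasoning models do this internally rather than via a prompt suffix.

```python
# Sketch of the "let's think step by step" prompting trick.
# No real API call is made; the strings are illustrative only.

COT_CUE = "Let's think step by step."

def direct_prompt(question: str) -> str:
    # Plain prompt: the model tends to answer "on reflex."
    return question

def cot_prompt(question: str) -> str:
    # Appending the cue nudges the model to write out intermediate
    # reasoning before its final answer, which early GPT users found
    # often improves accuracy on harder questions.
    return f"{question}\n\n{COT_CUE}"

question = "A train leaves at 3:00pm and arrives at 6:30pm. How long is the trip?"
print(direct_prompt(question))
print(cot_prompt(question))
```

Reasoning models bake this behavior in, spending variable amounts of "thinking" before responding instead of relying on the user to add the cue.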
And I heard somebody at another company, I think it was Anthropic, using a metric of how long a model spent: hey, this model actually spent 15 minutes or 30 minutes or whatever length of time to think about a thing, which is a good metric, but it needs to actually give you the right answer. And I thought that was an interesting paradigm. One thing I have been surprised by is that people are surprisingly willing to wait for a great answer, even if it's going to take a while. All of my instincts had been that the instant response is the thing that matters and users hate to wait, and for a lot of stuff that's true, but for hard problems with a really good answer, people are quite willing to wait. Yeah. So we have all these tools, all these things, and so far I'm using my phone, and now OpenAI just announced that you guys are building hardware. You had the video with you and Jony Ive talking about how you've been talking and collaborating for a couple of years. Obviously you can't say much, but I could ask you: is it on you right now? No, it is not. It's going to be a while. Okay. We're going to try to do something at a crazy high level of quality, and that does not come fast. But computers, software and hardware, just the way we think of current computers, were designed for a world without AI. And now we're in a very different world, and what you want out of hardware and software is changing quite rapidly. You might want something that is way more aware of its environment, that has way more context on your life. You might want to interact with it in a different way than typing and looking at a screen. And we've been exploring that for a while, and we've got a couple of ideas we're really quite excited about. I think it will take time for people to get used to what it means to use a computer in this kind of a world, because it is so different now.
But if you really trusted an AI to understand all the context of your life and your question and make good judgments on your behalf, you could have it sit in a meeting, listen to the whole meeting, know what it was allowed to share with whom and what it shouldn't share with anyone, and know what your preferences would be. And then you ask it one question and you trust that it's going to go do the right follow-ups with the right people, and you can then imagine a totally different way of using a computer to get done what you want. So, kind of the way we interact with ChatGPT will inform the device. I mean, you could also say that the way we interact with ChatGPT was informed by the previous generation of devices. So I think it is this sort of co-evolving thing, but yeah, I hope so. One of the things that made the phone so ubiquitous was the fact that I can be in public and look at the screen, and I can be in private and have a phone call and talk to it. And I think that's one of the challenges for new devices: bridging that gap between what we use in public and in private. Phones are unbelievable things. I mean, they are really fantastic for a lot of reasons. And you can imagine one new device that you could use everywhere, but also there are some things that I do differently publicly and privately. At home, I've got a great stereo system to listen to music, and when I'm out walking in the world, I use AirPods, and that doesn't bother me. Yeah. So I think there are things that are different in the public and private use cases, but the general-purposeness, I agree, is important. Yeah, it follows you. So, uh, nothing yet until maybe next year. All right. It will be worth the wait, I hope, but it's going to be a while. Okay. I'm excited. I'm curious. I have
Final thoughts
thoughts. So, if you're giving advice to a 25-year-old right now, what do you tell them? I mean, the obvious tactical stuff is probably what you'd expect me to say: learn how to use AI tools. It's funny how quickly the world went from telling the average 20- or 25-year-old "learn to program" to "programming doesn't matter, learn to use AI tools." I wonder what will be next, but of course there will be something next. But that's very good tactical advice. And then on the broader front, I believe that skills like resilience, adaptability, creativity, and figuring out what other people want are all surprisingly learnable. It's not as easy as, say, go practice using ChatGPT, but it is doable, and those are the kinds of skills that I think will pay off a lot in the next couple of decades. And would you say the same thing to a 45-year-old? Just learn how to use it in your role now? Yeah, probably. Whenever we hit whatever your personal definition of AGI is, will more people be working for OpenAI after that or before? More. More. So, yeah, I see a lot of people online saying, "Ah, they're so good. Why are they hiring people?" And I'm like, "Because computers can't do everything. They're not going to do everything." The slightly longer answer, with more than one word, is that there will be more people, but each of them will do vastly more than what one person did in the pre-AGI times, right? Which is the goal of technology. Yeah.