# State of the AI industry — the OpenAI Podcast Ep. 12

## Metadata

- **Channel:** OpenAI
- **YouTube:** https://www.youtube.com/watch?v=Z3D2UmAesN4
- **Date:** 19.01.2026
- **Duration:** 49:42
- **Views:** 51,022
- **Source:** https://ekstraktznaniy.ru/video/11146

## Description

OpenAI CFO Sarah Friar and Khosla Ventures founder Vinod Khosla argue the greatest challenges in AI right now are keeping up with demand and making sure more people get the benefit. They unpack what's driving big investments in compute and why this moment is different from other technology cycles — with meaningful advances in health, agents, and robotics still ahead. 

Chapters

00:00:00 — What’s the AI story of 2026?
00:07:28 — AI in healthcare
00:12:01 — Scaling compute to match revenue
00:18:05 — Difference between now and dot-com bubble
00:27:41 — Ads in ChatGPT
00:30:05 — Will consumers have more than one AI subscription?
00:36:41 — Winning in enterprise
00:39:44 — How can startups succeed?
00:44:05 — Robotics and beyond

## Transcript

### What’s the AI story of 2026? [0:00]

Hello, I'm Andrew Mayne and this is the OpenAI Podcast. Today our guests are Sarah Friar, CFO of OpenAI, and legendary investor Vinod Khosla of Khosla Ventures. In this discussion, we're going to talk about the state of the AI ecosystem, whether or not we're in a bubble, and how startups and investors can succeed as AI progresses. — Unlike something like Netflix, where there are only so many hours in the day, I think of it much more like infrastructure, like electricity. Demand is limited by nothing other than the availability of compute. — Today, I think the conversation we need to have is: what will people do? 2025 was about agents and vibe coding. Now it's 2026. What's the story of 2026? — I think we matured in vibe coding in 2025. I don't think we've matured in agents. So agents, especially multi-agent systems, will mature to the point of having real, visible impact. On the enterprise side, you might have multi-agent systems doing a full task like running an ERP system for you: doing all the reconciliation every day, accruals every day, tracking contracts every day. On the consumer side, it's still a hassle to plan a trip. That's a multi-agentic thing that looks across a lot of different things, from your food preferences to the restaurant reservation to airline schedules to your personal calendar. Those will start to mature, I think, a year from now. So I'm pretty excited about that. I think models in robotics, and real-world models that go well beyond robotics, like general intuition, will all start to happen in the next year. So those are areas to look for. Then there are the usual functions: memory in LLMs, continual learning in LLMs, reduction of the impact of hallucinations. Those are all areas — I could go on. There's half a dozen areas in which AI doesn't do well today that will start to be addressed. — Yeah.
And I think at its baseline, what Vinod is saying is that '26 is the beginning of closing this capability gap. What we know is we've handed people massive intelligence, right? We've handed them the keys to the Ferrari, but they are only learning how to take it out on the road for the first time. We need to give consumers easier and easier ways to go beyond ChatGPT as just a chatbot, call and response. Most people use it today just to ask questions. But how do we take it towards being a true task worker that books that trip for them, or helps them get a second opinion on what they just heard from their doctor, or enables them to create a menu for their diabetic child? How do we help them move from simple questions into actual outcomes that make their life better? And then on the enterprise side, it's that same continuum. How do we close the capability gap? One of the things we know from the AI in the enterprise report that our chief economist put out at the end of last year is that between the companies on the frontier and even the median corporation, the difference in the median number of messages is about 6x — that's 6x the usage from a company that's already on the frontier. And we know that frontier isn't even pushed to its max. So for us, it's this focus on how we help consumers move along that continuum to true agentic task working. And then for enterprises, how do we create a much more sophisticated, vertically specialized outcome that allows them to go from maybe a very simple ChatGPT implementation the whole way to something that's transforming the most important part of their business? For a healthcare provider, it might be their drug discovery process. For a hospital, it might be the time to admit a patient and get that patient back into the community. For a really large retailer, it might be just larger basket sizes, higher conversion rates, and much happier customers.
So, it's the basics of closing that capability gap. — I might add one other perspective. We've talked about the number of areas in which the technology and its capability will advance. I would venture to guess that today, of the people using AI, whether it's personal or enterprise, some single-digit percentage are even using 30% of the capability of the AI. — So this percentage of people using 30% or 50%, let alone 80%, of the AI's capabilities will keep increasing. I think that's a 10-year journey before people learn to use AI. — I've seen some pundits confuse adoption curves for capability curves, and that's come up where you've seen — So that's the point I'm making. — And it's a force multiplier, because today we have over 800 million consumers using ChatGPT weekly, but that number should be in the billions. And then what percentage of its capability are they using? It's like we've just turned electricity on in the home. We've wired up the home and they've turned on the lights, but they have no idea that they could now heat their home, they could cook, they could curl their hair, right? There are so many things you now can do. — An analogy I've used is that email didn't really get much better between 1990 and the year 2000. Neither did mobile. But usage went way up. And the problem wasn't that we needed better email or better mobile; it's that people needed to learn all the things they could use it for, — right? Yeah. And in a more sophisticated way. Mobile is always one that's interesting to me, because when mobile took off, people just took their desktop websites and turned them into mobile sites. They were really hard to scroll, but I guess you at least had them in your pocket. But then you realized you had a GPS, so now you could have Uber and do things with location. Or you had a camera at your fingertips.
Okay, so now, yeah, I can take photographs of all my friends, but I can also snap a check and deposit it into my bank account. Although we should fix the whole paper check thing, but that's an aside. — It still seems like magic that I can just take a photo of this and now I get money in my bank account. — Yeah, but all of that existed the minute mobile was available to us; it just took human ingenuity coming to work on it. So I think you're right. I don't even know if we need more intelligence than we have today to vastly increase outcomes, but of course the models are going to keep getting more intelligent as well. — You mentioned health, and that's one of the really high-stakes things we think about — probably the most important thing. It's fascinating to think that just a few years ago we got ChatGPT and were using it for very simple applications, and now we're trusting it with HIPAA-compliant data. Do you look at that as a marker of how fast or how well things have been accelerating? Are there other markers like that you think about, to say, okay, now we know we're at some new level? — Health is clearly one of those areas. I've long

### AI in healthcare [7:28]

believed it'll revolutionize health by making expertise a commodity in all areas of health. The problem with health is regulatory. First, there are constraints on what AI can do. An AI can't legally write a prescription, even if it's better than human beings at writing a prescription. That is not only the FDA; beyond the FDA, the American Medical Association institutionally controls that function. So there will be incumbent resistance in a lot of areas — we can talk about that if you like. And diagnosing is still a constraint, because the FDA controls that, and there's no AI approved as a medical device yet. Fortunately, this administration is doing a very good job of moving quickly and taking the appropriate level of risk, so I'm pretty pleased to see what's happening there. — On the health front, we see in our data that 230 million people every week ask ChatGPT a health question. — Yeah. — 66% of US physicians say they use ChatGPT in their daily work. I'll tell you at a personal level: my brother is an HDU doctor in the UK. His job is, you hit the ER, they don't know how to triage you, so they send you to him. You kind of don't want to show up to him. He's very good at what he does, but it means you're not in good shape. — But he's expected to have an almost encyclopedic knowledge of every disease that ever existed. So I always give the example: he works in Aberdeen, in Scotland. If you showed up with malaria, he will not think of that. That is not in his pattern recognition. And yet that could have happened. You went on vacation this summer, you got bitten by a mosquito, boom, you're showing up in an ER in Aberdeen.
What ChatGPT can do, or what the model can do, is really act as a great augmentation to the doctor, which is why I think 66% of them are using it — and that number is only growing; it's probably already much higher. I think it's just a great example of where, in something like health, we're getting the benefit of our doctors always being able to have the latest research in front of them, always the latest known interactions, say, between someone's drug regime and what they're living through and experiencing as individuals. But it also puts some independence back into consumers' hands. So now I get the opportunity to do some research ahead of time on what my symptoms might be saying, so I can have a much more educated conversation with my doctor. It allows me to maybe get a second opinion, or know that I want to go ask for a second opinion. And we tend to go very fast to these extreme places, but there are even simple things like: hey, I've got 20 minutes a day to exercise, I know I'm suffering from type 1 diabetes, what could I do in 20 minutes? Or, my daughter has an interesting issue with the food she eats. It used to be a super frustrating thing even to go to a restaurant, because we'd have to ask the server so many questions. Now we can photograph a menu, ChatGPT suggests what are likely the best dishes for her to order, and then we can have a terser but more productive conversation about what's going to work. It has just changed how we think about eating — it takes it away from being all about the food, back to why we're going out for dinner together. So I think these are all examples of how, in something like health, it's already happening and it's going to keep getting better and better. And then, to Vinod's point, the regulatory environment is going to have to catch up.
— No matter what kind of system you're under, the cost of medical care is growing faster than the GDP of every country. It seems like we needed AI, and we needed it now. And as you pointed out, it can be helpful — it's the first time the cost of medical intelligence has dropped year-over-year. But that comes with a lot of demand for compute, and we have a lot more questions that we want answered. Certainly people can see the need for more compute, but the scale and scope at

### Scaling compute to match revenue [12:01]

which OpenAI is investing in compute is incredibly huge — we're talking numbers that are really hard to fathom. How does OpenAI determine that need? What are the metrics you're looking at to decide, yes, we need to spend this much? — So, first of all, we are trying to make sure we keep investing in compute to match the pace of our revenue. And we've seen a really strong correlation between in-period compute and in-period revenue. I'll give you an example. If you just go back over '23, '24, and '25, our compute was 200 megawatts, then 600 megawatts, and we ended last year at 2 gigawatts. Against that — and it's really easy, because the numbers match up — we exited '23 at 2 billion in ARR: 200 megawatts, 2 billion. We exited '24 at 6 billion: 600 megawatts, 6 billion. And we exited last year at a little over 20 billion: 2 gigawatts, 20 billion. Actually, it's been accelerating. So even if you just look at the slope of the line, it says more compute, more revenue. Now, there is definitely a timing mismatch, because I have to make decisions today about making sure we have compute not even in '26 or '27, but in '28, '29, and '30. If I don't put in orders today and don't give the signal to create data centers, it won't be there. Today we feel absolutely constrained on compute. There are many more products we could launch, many more models we would train, many more multimodal things we would explore if we had more compute today. For example, even in the last year, overall hardware investment globally has gone up by something like $220 billion. That's just how much actual spending has gone up. If you look at chips, chip forecasts have gone up similarly, by about $334 billion. So it's not just OpenAI. The signal from the whole environment is: AI is real. We are in a paradigm shift. We need to invest to give people the intelligence they need to do all the things we just talked about.
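The correlation Sarah describes is easy to check with the round numbers she quotes: each year works out to roughly $10M of exit ARR per megawatt of year-end compute. A back-of-the-envelope sketch (illustrative only, using the figures as quoted, not OpenAI's internal model):

```python
# Figures as quoted in the conversation: year-end compute vs. exit ARR.
compute_mw = {2023: 200, 2024: 600, 2025: 2000}  # megawatts
exit_arr = {2023: 2e9, 2024: 6e9, 2025: 20e9}    # US dollars

for year in compute_mw:
    per_mw = exit_arr[year] / compute_mw[year]
    print(f"{year}: ${per_mw / 1e6:.0f}M ARR per megawatt")
# Each year works out to $10M per megawatt: the slope of the line holds.
```

The constant ratio is what makes "more compute, more revenue" legible as a planning rule, even with the timing mismatch she notes.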
Back inside of OpenAI, we do spend a lot of time going very deep on what our demand signal is in consumer, in enterprise, in developers. We think about the mosaic first at the base, at the infrastructure layer: how do we create maximum optionality? We want to be multi-cloud, multi-chip. That gives us an interesting layer at the infrastructure level. One tick up, at the product layer, we also want to become more multi-dimensional. We used to be just one product, ChatGPT. Today we are ChatGPT for consumer, with all of the blades inside it — healthcare and so on — and ChatGPT for work, but we also have Sora as a new platform, and some of our transformational research projects. One tick up again, we have a business model ecosystem that's becoming much more multi-dimensional. It began with a single subscription, because we'd launched ChatGPT and we needed a way to pay for the compute. We now have multiple — I'm a paying ChatGPT subscriber, by the way. — I love you for that. — Multiple subscriptions. We went to the enterprise and had SaaS-based pricing. We now have credit-based pricing for places where high value is being found, so people who want to pay more to get more can. We're beginning to think about things like commerce and ads. And then, longer term, I like models like licensing, which really align us with the customer: say in drug discovery, if we licensed our technology and you have a breakthrough, that drug takes off and we get a licensed portion of all its sales. That's great alignment for us with our customer. So if you think about those three tiers, I actually think of it like a Rubik's cube. We went from a single block — one CSP, Microsoft; one chip; one product; one business model — to now a whole three-dimensional cube.
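As an aside on the cube analogy: the famous state count of an actual 3×3×3 Rubik's cube — about 43 quintillion — is a standard combinatorics result and can be computed directly (nothing here is specific to OpenAI):

```python
from math import factorial

# Corner cubies: 8! permutations x 3^7 orientations (last orientation forced).
# Edge cubies: 12! permutations x 2^11 orientations (last orientation forced).
# Divide by 2: corner and edge permutation parities must match.
states = (factorial(8) * 3**7 * factorial(12) * 2**11) // 2
print(f"{states:,}")  # 43,252,003,274,489,856,000 — about 4.3 x 10^19
```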
And one of the things I love about a Rubik's cube — I'm probably not getting the number exactly right, but I think it has 43 quintillion different states it can be in. It always blew my mind when I was in university. So now just think about that cube spinning. We pick a low-latency chip and put it alongside something like coding that's 5x the pace people expect; we can charge a high-end subscription for that. It's almost like you line up the cube and get three colors on one side. We could spin the cube again and say: low-latency chip, faster image gen, more free users come in, and that creates more inventory for, ultimately, perhaps an ads platform. So you can start to see how the goal in the last 12 months has been creating more and more strategic options that allow me to keep paying for the compute we need to really achieve our mission: AGI for the benefit of humanity. — The way to simplify that is that demand is limited — not by — anything other than the availability of compute today, whether it's Sora or more broadly. — And then there's price elasticity — where demand for compute is effectively infinite. So I think that's the way to think about it. We haven't even started to exercise the price elasticity lever; we just can't fulfill demand, — right? — and it's limited by compute. So all the people talking about bubbles and things are, I think, on the wrong track. They have no sense of how large this change is and how much more demand there is for API calls. — As one of OpenAI's earliest investors

### Difference between now and dot-com bubble [18:05]

you made a bet early on. You saw where this was headed, but you also saw the dot-com bubble. You watched what happened there, and you've seen other cycles — the mobile revolution, other areas. You mentioned how broad this is. Is that where your conviction comes from — just how many different areas it touches? — Yeah, look, when we invested, we had one simple metric. There were no projections to look at, no product plans, no ChatGPT to look at. It was very simply the idea that if we develop anything close to human intelligence, let alone supersede human intelligence, its impact is going to be huge. The consequences of success were going to be consequential. So why not try? — There's also this funny notion of a bubble. People equate bubbles with stock prices, which have nothing to do with anything other than fear and greed among investors. So I've always said bubbles should be measured by the number of API calls. — Mhm. — Or in the dot-com bubble, which people refer to, it should be the amount of internet traffic. — Mhm. — Not by what happened to stock prices because somebody got over-excited or under-excited, and in one day they can go from loving Nvidia to hating Nvidia because it's overvalued. Those gyrations aren't reality. The reality is the underlying number of API calls. — If you look at internet traffic during the dot-com bubble, prices may have gone up violently and come down violently, but there's no bubble detected in internet traffic. I would almost guarantee you won't see a bubble in the number of API calls. And if that's your fundamental metric of the real use of AI, the usefulness of AI, the demand for AI, you're not going to see a bubble in API calls. What Wall Street tends to do with it, I don't really care. I think it's mostly irrelevant. Great for press articles, because the press has to fill their column inches, but it's not reality.
So prices of things aren't reality — stock prices, private company valuations. The reality is the actual demand for AI, which is the number of API calls. — Right. And if I hark back to that moment in 1999, the value people were getting from the internet at the time was so young, so nascent, that you couldn't really see how it was changing their lives. I do think that with AI it's happened so fast that the change is very real. As a CFO — forget about being the CFO of OpenAI, just as a CFO — what I see happening in my organization is truly taking over tasks where previously I would have had to keep adding more and more people doing fairly mundane things. Let's take something like revenue management. In a team that does revenue management, one of the things they do every day is download all the contracts that were signed the day before, or through the week, and read all of those contracts to make sure there are no unexpected terms sitting in them — effectively non-standard terms — because a non-standard term means there could be a revenue recognition change that has to happen, and that's a very big deal for a finance team. That's the number one thing your auditors usually come in to audit you on. At the pace we are growing, the number of contracts every day is going up in multiples. So my only choice in a pre-AI world would have been to hire more people. And imagine what those people's jobs are like: you come to work every day and you read a contract, and then you read the next one, and the next one. It is so mundane, such drudgery. That's not why people went to school and learned the accounting field or thought about being a finance professional, but that's the job we hand them as an entry-level job. Today, using our own tools here at OpenAI, all of those contracts are pulled out of a system overnight.
They are put into a tabular database — a Databricks database, in our case. The agent, the intelligence, is able to go through them. It shows me exactly what is non-standard and why. It suggests what the rev rec therefore is, but it also suggests the insight, which is: should this term even be here? Did the salesperson just give away something they shouldn't have? In which case I go and coach them. Or is it actually telling me something about my business that's starting to shift? In which case this non-standard term should actually become a standard term, and what I'm experiencing is a shift in my business model — which might actually be a good thing — or perhaps I want to find a different way to help the salesperson get the customer what they're looking for while maintaining my revenue recognition, my current business model. So now my more junior, entry-level people are over on the right side of that discussion, doing the kind of job they love. — That, to me, is why it's not a bubble: the value is real and tangible. It also means I can probably have a smaller team, a much higher-performing team, much higher morale, better retention rates. All of these I can put into numbers to say my business is now healthier. And I think that's the piece the press misses when they try to lead with the bubble conversation: we are investing with demand — if anything, behind demand at the moment. A bubble, to me, suggests you're investing ahead of demand and there's going to be a gap. — And if you look at productivity numbers, they're going up in the companies that are adopting AI, especially the newer set of tech-oriented companies. The numbers are just absolutely amazing. One of my favorites is a little company called Slash — about 150 million ARR. They have one person in accounting, only a controller, because they adopted an AI-oriented ERP system.
They replaced NetSuite with it, and it's just amazing what they can do. The CEO was apologizing to me that he might have to hire a second person. And they're moving really rapidly. I just saw a story where somebody replaced 10 SDRs with one SDR and AI — essentially, the one remaining SDR supervises. — Yeah. — I've been hearing these stories where, instead of hiring somebody in an area that doesn't create growth, they can now hire people who create a lot more growth for the company. That's why you're seeing a lot of these tech companies build so fast. You know that old phrase: the future is here now, but it's not evenly distributed. — Yes. — I see all these single points of huge productivity gains and efficiency gains, or agility gains — the ability to move faster. But a very small percentage of the people in the US or worldwide have adopted these, or even know they exist, — right? And so, back to demand: I think these ideas, some of these examples, will spread to everybody over time, and you'll see exponential growth in the adoption of these technologies. That's why I don't think demand is the question. — Yeah, Vinod is absolutely spot-on. I think McKinsey did a study that showed that for companies in the top quartile, productivity, as measured by any kind of financial metric you would pull, is up in the 27 to 33% range. That's a really meaningful jump. And I think where you were going is that it doesn't just mean fewer employees overall. There's definitely a place to shift people over into more growth-oriented jobs. I was hiking this weekend with someone who runs a very large consulting company that you all would know of, and he was talking about how, in what he thinks of as his back-end systems, the leader there is now talking about her organization as people plus agents — and she has a 1:5 ratio, one person to five agents.
— But on the front end, they're actually back out rehiring to grow, because clients need more help now to think about deploying AI. So it's actually shifting back, I would say, to the jobs people want to do, not the jobs that maybe were just open to them — because more and more of the world had become so much information that people were parsing, and now we're finally back to machine and agent intelligence parsing it. — I want to touch back on the consumer side. You mentioned ads — and certainly the argument can be made
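The overnight contract-review flow Sarah describes — pull the day's contracts, flag any non-standard terms, and route only those to a human for the revenue recognition call — can be sketched in a few lines. Everything here (the term list, the function name) is invented for illustration; a real system would use a model to read free-text clauses, and this is not OpenAI's actual pipeline:

```python
# Hypothetical sketch of an overnight contract-review pass.
# A simple set difference stands in for "is this term standard?";
# in practice an LLM would classify each clause of the contract.
STANDARD_TERMS = {"net-30 payment", "12-month term", "standard SLA"}

def flag_nonstandard(contract_terms):
    """Return the terms a reviewer must check for rev rec impact."""
    return sorted(set(contract_terms) - STANDARD_TERMS)

flags = flag_nonstandard(["net-30 payment", "custom refund clause"])
print(flags)  # ['custom refund clause'] — only this needs human review
```

The point of the design is triage: the mundane bulk is cleared automatically, and people only see the exceptions that carry judgment.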

### Ads in ChatGPT [27:41]

that with ads you can increase the benefits to people: you can provide more services, more AI, ads can help pay for the compute, and people get more out of those tiers. But that brings up the question of trust. When people first think about AI, even just asking questions, they worry about what ChatGPT does with their information. Once you have ads in play, people worry about that even more, because it's a big question how it affects the rest of the product. — Yeah, so I think you started in the right place, which is that today 95% of our users use our platform for free on the consumer side, and that's absolutely where our mission is: AGI for the benefit of humanity, not just those who can pay. So access is very important. From an ads perspective, number one, we have to make sure everyone understands you're always going to get the best answer the model can provide you, not the paid-for answer. I think other platforms have fallen into that, where you're not sure: is this a sponsored link, or is this truly the best outcome? We have a north star, which is that the model will always give you the best answer. The second thing to understand is that there can be a lot of utility in ads. We want to make sure people know when they're working with an ad. But, for example, if I do a search for a weekend getaway to — pick your favorite city, I don't know, San Diego — an ad for Airbnb might actually be very helpful. And you might even want to have a discussion with the ad, or with the advertiser in that case, in a ChatGPT setting that's very rich, but where you're clear that it's an advertising setting. This is where there has to be more innovation on what feels endemic to the platform, not just the old world of sticking banner ads on things. And the third and final thing, for me, is that there always has to be a tier where advertising doesn't exist.
So we give the user some choice and some control. And we're very mindful of your data. When we released health, we were very clear that your data is off to one side; it's not being used to train on, and so on. I think we just need to keep giving users that. Trust is everything for OpenAI, and we're going to stand by those principles even when it comes to things like ads. — On the consumer side, is it going to be a world where you're going to have a lot

### Will consumers have more than one AI subscription? [30:05]

of subscriptions to different AI services? — I think you'll see every model. Most people will have more than one subscription. Media is a good example: most people have more than one media subscription, so that's a good proxy for consumer behavior. Different people will pick different choices, including free choices — which is ad-supported media, too. So even the same services you can get for pay or for free. I think you'll see a wide range of diversity. — How do you think about, though, the expense of going to a different platform? I like ChatGPT memory. I'm finding it more and more helpful, because as I ask about one thing, it remembers something we talked about maybe weeks ago, months ago. Pulse, which today is not widely distributed, is the way I wake up in the morning now. It's so amazing. And when you start connecting it to things like your calendar, it's not just saying, you know, you seem very interested in AI data centers — which clearly must make it think I'm the most boring person on earth, because this is what I see a lot of — but it also says, hey, on your calendar you're going to be sitting down with Vinod today; remember a couple of these things. It's so helpful. But if I am multihoming, I'm losing that benefit, which is not the same as if I subscribe to the Wall Street Journal, The Economist, and the New York Times — they're not really losing out if I go read in other places, and neither am I. — Yeah. So I do think memory is an important question — whether there'll be one provider or more than one provider of the models — and on each model there'll be multiple services that may offer different tradeoffs. — Yeah. — So whether you're talking health or media, even on the OpenAI models there are multiple people providing services. — So that's what I was thinking of with multihoming. But obviously I don't think OpenAI will be 100% of the market. — I hope so.
— I was going to say I hope so too, but — I'm okay with that. — But it's an interesting business model. I think it's hard for people to wrap their heads around, because Netflix is a great company, but there are only so many hours on the planet that people can watch Netflix, right? And mobile's great, but I'd only need so many minutes of mobile per week. With AI and intelligence, you can have more intelligence: I can buy more and get better answers. I'm still trying to wrap my head around where that goes. Is the idea that you start at one level for free, then you move to a small paid tier, and then as it becomes more useful you keep increasing that? Where does it go? — So I think, unlike something like Netflix, where there are only so many hours in the day, I think of it much more like infrastructure, like electricity. How much electricity do you use in a day? I don't know. I walked into a room today and there was a fan blowing; it was really nice, it cooled the room down. There are lights on around us right now. I charged my phone overnight and it worked for me all day. The state we live in today is much more: I call on ChatGPT, I invoke it — as opposed to intelligence just being baked in. I think this will be the big change over the next couple of years. You'll look back and it'll feel a little toy-like that we used to do this thing, when instead it just is everywhere around us. So it's not really quite answering the question you're asking, but it's that I don't get so caught up in there being only so many hours for people to do things, because almost everything I do in life requires intelligence — I'm walking around, hopefully, with some intelligence up here. And if I can get that augmented, I think it's going to surprise us.
Like as we were talking before we got started, you mentioned the moment on your phone when you suddenly discovered you had a flashlight and a camera. You say that and it's so obvious, and yet with ChatGPT, every time I discover what feels like almost a slightly cute use case, I'm blown away by it. Yesterday morning (I do love The Economist) I wanted to read the editorial, but I didn't have a ton of time because I was running upstairs to get ready. So I took a photograph of the editorial (they're very good, they put it on one page), asked ChatGPT to read it to me, and it did. I was like, oh my god, this is awesome. So I just think there are all these moments where we're just getting started, and multimodal, I think, is probably the biggest, because phones taught us to talk with our thumbs. In this new world we're moving into, there's going to be new hardware that really helps us understand that we can talk, we can listen, we can see, we can write, we can do all of these things in a very human way that we're just scratching the surface of. — So let me give you a different frame on that. I agree with all of that. — If you look at what we talked about the internet earlier — and the bubble associated with it. But what the internet did is give you access to a lot more stuff, whether it was media, YouTube videos or TikTok or you name it, information of any sort. But it expanded it to the point where no human can actually use the internet fully. I think of AI this way: given you're limited to 8,000-some hours a year, some of which are meant for sleeping, it'll make your time much more efficient. The internet exploded the information available to you to the point where you couldn't use it, and what AI will do is filter it to make every hour your most effective hour, if you know how to use it. So intelligence will reduce the world to what is most relevant to you personally, and I may have a different set of priorities than Sarah. 
So I think of intelligence as summarizing the world to the most relevant things for me, and for her, which are different. So there's almost unlimited capacity for intelligence to be used to reduce information, where the internet exploded information. — Yeah. — We've talked a lot about the consumer side, and it feels like OpenAI is very much winning the consumer side. The question comes up about enterprise: how is

### Winning in enterprise [36:41]

OpenAI going to compete and win in that area? — So I think we're already winning in this area. What I see is that 90% of corporations say they either are using OpenAI or intend to over the next 12 months. I think the second is Microsoft, and Microsoft is using our technology. So this is where the consumer is a really potent part of the enterprise flywheel. As I said earlier, back in the day when you first started bringing your iPhone to work and corporates didn't want you to do that, they discovered you can't say no to the tidal wave that is consumer preference. For something I'm already using, that I've already got in my pocket, my expectation is that work is at least as good, if not better. And that's what's helped drive our actual enterprise business: the fastest company ever to get to 1 million businesses on a platform, and we did that in about a year and a half. But where to from here, because clearly we're just scratching the surface? Some of it is certainly meeting customers in terms of their vertical, so that we talk to them in their language, and learning this art of enterprise selling, which is: let me not tell you all about my products, but let me understand your problem. What is your board forcing on you, Mr. or Mrs. CEO? What is the thing your customers most want that you can't deliver? Okay, let's start putting intelligence against that. We can then drop that down into some light vertical specialization, to quite heavy vertical specialization, 
things like RL-ing models that are very pertinent to a use case. In an energy company, it might be really understanding that particular oil well, or all the seismic data they have, to say what's the recovery we're going to get out of this gas field. That is deep specialization. And then I think it goes the whole way to some of these big transformational research projects we have begun, where we're almost taking over someone's whole business and helping them rethink it in a smarter, faster, better way that ultimately drives their key business metrics. So it's a journey. I think most corporates have started with wall-to-wall ChatGPT; that's an easy starting point. They've done some coding, and in many cases a lot of coding. When I talk to corporates now, CEOs are starting to say things like "60% of all my production code was built by an agent." And I'm like, you didn't even know what production code meant 12 months ago, but now you're saying that. That's good, because it means you're tracking it. But on agents, it's just starting. When you go out and survey corporates, we only see about 14% of customers using something agentic today. 14%, when I just explained what's happening in my finance organization. So I think we are just getting going. But I couldn't be

### How can startups succeed? [39:44]

more excited about the opportunity. It's huge. — Okay. But if I'm a startup — and I look at everything OpenAI is doing, I might be asking, is there room for me? What do I have to do? — The models will keep getting better and do more and more. But I do believe there's lots of room to build on top. No one company can do everything on the planet. There are billions of people working whose jobs AI can help with; I don't think OpenAI will specialize in every one of those. So the careful thing to do is be clear where the models will go, OpenAI or others, what they will be able to do, and how you use that best to then specialize — some sort of specialization where you add something additional to the base models. And frankly, intelligence isn't the only thing needed to provide a solution; there's lots of other stuff that goes around a solution beyond intelligence. So I think there's lots of opportunity to build on top of these models, and the more powerful they get, the more the number of opportunities to add to them dramatically increases. — How do you think about that? I think a lot about use cases where there's already a lot of data being aggregated, perhaps by that startup, by that company. Today, I think 95% of the world's information actually sits behind corporate firewalls, university firewalls, and so on. So even though we talk about the vast training that's occurred, again, we're just getting going. But I think there are companies that have already built businesses, that have aggregated that data and have access to it, and that on top of that have managed complex workflows. So I often give the example of our procurement system. A procurement system per se is not that complicated, but what it does very well is understand things like delegation of authority. It knows what the board has approved in terms of approval limits. 
So it knows that when this software contract comes in, if it's over X amount, only I can approve it, and if it's beneath that, a VP can approve it. It doesn't know that Andrew's a VP, but it knows to touch the HR system and check his level, so the whole procurement flow can happen in a way where I have compliance and governance, and it hopefully makes the whole company run faster. Those are the places I get interested for startups: where have you got access to unique data with a complex workflow? It feels like there's more of a moat around that. We want to work alongside you, but the general-purpose model is not going to do all of that itself. — Yeah. No, I completely buy that. I think there's lots of opportunity. — I've seen quite a few startups around just permissioning around data. — Yeah. — Like who can access what information. — For example, — I've seen a whole bunch of startups around customizing the models to each company, for their history and their priorities, and the whole identity side of agents. I think we're just starting to understand both the risk that can happen when you have agents talking to agents, but also how you're going to permission that, and then how to start to think about agentic commerce. The complexity that's coming is also quite big. So to suggest there's no more opportunity as a startup — I think it's never been more interesting or fun to be a startup. — Yeah, I think there are more opportunities than there have ever been. — What are you looking for now? What gets you excited when you talk to a company? — Well, the hardest thing is great people, always. But the other thing that has been in short supply is agency, where people have the agency to make things happen. That again comes down to people. But there's so much opportunity. I think traditional things like knowing a space, or experience in a space, are much less relevant now. It's more agency. 
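The delegation-of-authority check described above can be sketched in a few lines of code. Everything here — the level names, the dollar thresholds, and the lookup tables standing in for board policy and the HR system — is a hypothetical illustration, not OpenAI's actual procurement system:

```python
# Minimal sketch of a delegation-of-authority check, as described in the
# transcript. All levels, limits, and data are illustrative inventions.

# Board-approved spending limits per job level (hypothetical numbers).
APPROVAL_LIMITS = {
    "VP": 100_000,
    "CFO": 1_000_000,
}

# Stand-in for a query against the HR system: employee -> job level.
HR_DIRECTORY = {
    "andrew": "VP",
    "sarah": "CFO",
}

def can_approve(employee: str, contract_amount: int) -> bool:
    """True if the employee's level carries a high enough approval limit."""
    level = HR_DIRECTORY.get(employee)  # "touch the HR system" for the level
    if level is None:
        return False
    return contract_amount <= APPROVAL_LIMITS.get(level, 0)

# A $250k contract exceeds the hypothetical VP limit, so it escalates.
print(can_approve("andrew", 250_000))  # False: needs CFO approval
print(can_approve("sarah", 250_000))   # True
```

The point of the example is the routing logic: the system stores policy (limits), consults an external source of truth (the HR directory) rather than hard-coding who holds which role, and the approval decision falls out of the two.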
We've not talked about the whole new

### Robotics and beyond [44:05]

world of robotics and real-world models and all that. That's a whole space by itself that we probably don't have time for. — Well, do we? We've got time. — I've got plenty of time. I'd love to go there. — Yeah, because we talked about where we're headed, and you famously talked about the world of 2050. Things are moving fast, models are getting faster and more capable. Where do you see things like robotics headed? — Well, two years ago, when I gave a talk at TED, I said the robotics business, both bipedal and other robots, will be a larger business in 15 years than the auto industry is today. We think of the auto industry as one of the larger businesses on the planet — and this other thing will be larger. — I don't think there are very many automotive companies thinking of the world that way. — They're thinking about how to use a robot in their assembly line, not that this business is larger than their current business. All driven by the intelligence of robots. So massive opportunities for startups there, and we're seeing a lot of activity. — Yeah. And I think sometimes we underestimate. When you think about robots in the home, it's a very fertile area, but no one's really had a breakthrough, and there are so many different issues around the complexity. Actually, sometimes the more time I spend in AI, the more respect I have for the human condition, in a way, because of our ability to move around the world and do things. If you watch the people in robotics getting so excited about a robot folding clothes — perhaps I'd be just as excited about my 18-year-old doing that. But for the average human, I assume they can fold clothes. — But I think — the hello world of robotics now is folding clothes. — But you do get a little stuck in your head that they have to somehow be a human, when it turns out there may just be these breakthrough moments. Like, for example, companionship in the home, right? 
We have an aging population. What's one of the biggest epidemics we talk about in the world? Loneliness, probably one of the biggest. What does someone living alone, who maybe has just lost a spouse, value most? Just someone to converse with in a way that feels intuitive and human. We see people using ChatGPT more and more for this kind of conversation, but is there a humanoid-esque breakthrough where it turns out you don't need it to make coffee or fold clothes or do the dishes (although that would be good too)? It might just be something a little bit simpler that still adds a lot of value, and that is just the first crawl of the crawl-walk-run of this future Vinod is talking about, where that whole space is many times more valuable than what we saw in automotive. — I think it's interesting, because we can take our present and put robots into it, in places doing things like that. It's really hard to think about what happens when you really have extremely low-cost labor, manufacturing, etc., and then the world you can build from there. We can look at what's a good solution for now, but then the cost of building a wonderful state-of-the-art assisted-living facility, where you can put a bunch of people together, drops. The hardest problem for me is to really think about what it means when you lower the cost. We've lowered the cost of intelligence. What does it mean when we really lower the cost of labor? — My personal view: sometime, probably toward the end of the next decade, you'll see a massively deflationary economy, because labor will be near free and expertise will be near free. Most functions will be almost zero cost. How it exactly plays out, how purchasing power versus production of goods and services plays out, is a little hard to tell, but I expect we'll see a hugely deflationary economy at a level people aren't planning on. 
So there are social aspects of the adoption of AI that haven't been handled yet. I think the conversation we need to have is: what will people do? I get asked that a lot. How will people make a living? I think the minimum standard of living governments can assure people is going to be much higher, without needing to earn an income. I can't imagine that much better primary care, like 10x more primary care than today, doesn't happen for a dollar a month; I have a hard time imagining how that doesn't happen. It will be true: it costs almost nothing to have free primary care, free education, AI tutors, personal tutors, for every child. That's already happening. So there's a set of services that'll be free. There are some hard nuts to crack. Housing is the hard one: people in the bottom half of the US population spend 40-some percent of their income on housing and food. So there are some hard nuts, but I do think both are addressable by robotics and better approaches. — Well, this has been a very interesting conversation. I'm excited to see where things are headed. Thank you both for joining us here on the podcast. — Thank you.
