We are one month into 2026, and AI is evolving faster than anyone predicted. But a few of the warnings AI experts gave us feel even more alarmingly relevant now than when they first said them.
From conversations across the past couple of years, this episode brings together five of the sharpest minds we've had on, saying the things that usually get cleaned up before they go public. What they actually think is happening. What scares them. And where this is all going.
If you only listen to one episode about AI this year, make it this one.
5 statements that stayed with us:
- "I'm not worried about being replaced by AI. I'm worried about being replaced by somebody who's really good at using AI." - Mike Cannon-Brookes
- "$600 billion is the amount of money that all the AI startups combined need to generate to pay back one year of investment." - David Cahn
- "Training not on language alone but on observational data, on video data, allows the model to grasp reality in a much more consistent way." - Cristóbal Valenzuela
- "I think OpenAI will go public. I think it'll be a trillion-dollar market cap." - Martin Shkreli
- "Some of the tasks we do involve spending days or even weeks solving these problems. A very far cry from five seconds labeling a task." - Edwin Chen
Connect with us here:
Mike Cannon-Brookes: https://www.linkedin.com/in/mcannonbrookes/
David Cahn: https://www.linkedin.com/in/david-cahn-60150793/
Cristóbal Valenzuela: https://www.linkedin.com/in/cvalenzuelab/
Martin Shkreli: https://www.linkedin.com/in/martin-shkreli-4a858721/
Edwin Chen: https://www.linkedin.com/in/edwinzchen/
Lukas Biewald: https://www.linkedin.com/in/lbiewald/
00:00 Trailer
00:48 Introduction
08:00 AI Meets Trading News
15:54 Cloud Prisoner's Dilemma
22:47 Beyond Language to Reality
30:15 Open vs Closed and Big Bubble
Trailer
AI is not going to replace human beings. It is a force multiplier for human creativity. It is an accelerant. It's not like some robot will show up and just start sending out emails or whatever. — One of the bottlenecks of language models is that language is always constrained by what language actually is, which is a human abstraction of reality. We've created this mechanism for us to communicate with each other and describe the world, but it's not an accurate representation of the real world. — I think OpenAI will go public. I think it'll be a trillion-dollar market cap. The bubble will get bigger. — At least with the current state of things, closed-source models will continue winning. If you ever try to build open-source models, eventually you're going to be forced to close-source them. — Where's all the revenue? You know, we're spending hundreds of billions of dollars on AI. $600 billion is the amount of money that all of the companies using AI combined need to generate to pay back one year of investment.
Introduction
— You're listening to Gradient Descent, a show about making machine learning work in the real world. I'm your host, Lukas Biewald. — I'm not worried about being replaced by AI in my job. I'm worried about being replaced by somebody who's really good at using AI in my job. That's the fear I have: oh man, someone will come along who's just better at using AI to get better results. It's not like some robot will show up and just start sending out emails or whatever. — The technical version of that is you require this human-AI loop repeatedly. So in that example, we'd written some examples of how it worked. Then, using the Teamwork Graph and enterprise search, the agents can go off and find other examples and come back and say, here's a whole bunch of examples that we think are better and worse. Now, there are lots of ways you could just say, go write the code, and it'll give you back 500 pull requests. — Now some human goes through them all, and you're like, oh, let's apply AI to that problem. I worry about the AI code reviewer of the AI-written code with no human in the middle who's kind of orienting it, right? Because if it's off by a few percent and you multiply it through, it's going to be pretty tricky. We've had our fair share of AI-worm, 1980s equivalents, where it kind of feeds on itself and goes off, and someone's like, we were 1% off at the start, but after a thousand loops, we're in trouble. So I think about those types of tasks. We talk about it as code maintenance, which is a pretty naff term; the analogy to gardening is maybe the easiest one. Most of the code creation is where you're saying, look, I want a waterfall over there, I want to put down some rocks, plant a big tree. That's landscape architecture. — A lot of the day-to-day work is mowing the lawn, pulling out the weeds, putting the fertilizer on the fruit trees, whatever it is you need to do.
That's what AI can help us do: get back to some of that landscape architecture. All of these things are going to be done in a loop with the AI. — And I think there are going to be lots of coding models to do that. So yes, DX is going to really help organizations understand their productivity, both qualitatively and quantitatively, but more importantly come back with actions on where to improve, right? Where, to your point, it feels productive and it's not, and where it isn't, and also where you hopefully get examples of that productivity. We've done a lot of work on things like accessibility and translation, any of these large-scale tasks where you need to do them across massive code bases. — That is where it's really, really helpful. — It's hard for me to imagine a world where you have a one-click solution for everything. That feels boring, to be honest. You want to have that control, and so I think language interfaces are a huge step towards accelerating the speed at which you can execute. Are they the final answer for everything? I'm not sure, but they do make you move faster on your ideas. — Well, you know, this is an ML podcast, so I think people would probably be interested in the flashiest ML features. — Okay, so in short, Runway is a full video-creation platform. It allows you to do things that you might be able to do in more traditional video-editing software. The main difference is everything that runs behind the scenes: most of the core components of Runway are ML-driven. And there are two main things that are unique about making everything ML-based. One is it helps editors, content creators, and video makers automate and simplify really time-consuming and expensive processes when making video or content.
There's a lot of stuff you're doing in traditional software that is very repetitive in nature, very time-consuming, and expensive. So Runway aims basically to simplify and reduce the time of doing that stuff, right? If you have a video you want to edit, an idea you want to execute, spending the minutes and the hours and sometimes days on this very boring stuff is not the thing you really want to do. And so we build algorithms and systems that help you do that in a very easy way. And then there's another aspect of Runway that's not only about automation but about generation. We build models and algorithms and systems that allow our users and customers to create content on demand. And a baseline for us is that everything happens in the browser. It's web-based and cloud-native, which means you don't rely on your own hardware anymore; you get access to our GPU cluster on demand. You can render videos in 4K or 6K pretty much in real time. — Plus you can do all of this AI stuff in real time as well. But capability alone doesn't really tell you much, because what you actually feed these systems is what shapes everything about what they end up doing. Like when I first started working at Twitter, this was the old days when it was a purely chronological timeline. One of the things we wanted to do was make it easier for users to discover tweets they really cared about. So the question was, how do we train our recommendation algorithms? One of the first things I wanted to do was build a sentiment classifier. You know, sentiment analysis is a super simple problem, and all we needed was 10,000 tweets to train our models. But the problem is we tried doing these things, and it turned out to be this incredibly negative feedback loop. Once you optimize for clicks, you get the most clickbaity content in the world rising to the top.
You get lots of racy content, bikinis. You get lots of listicles about ten horrifying skin diseases and so on. So we wanted to train our models on all these deeper principles instead, where we'd ask human raters to label tweets and recommendations according to our product principles. But if we couldn't even get simple sentiment analysis right, we definitely couldn't get this more complex data at the quality and scale we needed. So yeah, this was a problem that happened over and over again at Google and Facebook too. And eventually I realized it was something I needed to go out and build myself. So we started Surge in 2020, right in the middle of the pandemic. Our take on the space was that all of the solutions out there were basically focused on this idea of very commodity, very low-skill labeling. The example I often give is the problem of drawing a bounding box around a car. Yeah, you and I can all draw bounding boxes on cars; almost a three-year-old can draw a bounding box on a car. The bounding box that I draw isn't going to be any different from the bounding box that Terence Tao draws, or Einstein would. There's a very low ceiling on the complexity of the data that is required. In contrast, think about all the things we want to do today: we want models that can write poems, solve relativistic equations. There's almost an unlimited amount of intelligence that we want to feed our models. And so all the other solutions at the time were designed for a very low-skill, commodity flavor of work. They were not focused on quality at all; they were focused on scale. I think AI trading is going to
AI Meets Trading News
make quant trading even better and even harder for humans to do well in. I mean, I think humans have a small edge when it comes to judging markets, the emotional states of other traders, things like that, integrating lots of disparate pieces of information. Like if you see a little clue about inflation from one company and then another little clue from another company, and you start to connect the narrative: okay, maybe inflation's going up, I'm going to do something with bonds. That's the kind of thing that maybe AI can't see right away. But in terms of super-high-frequency trading, the computers have dominated that. Trading the news is something I'm really interested in, that I know a lot of other LLM-type folks are interested in, whether they're coming from the quant side toward LLMs or from the LLM side toward quant. The question is, can you have a computer that just reads breaking news really quickly and reacts to it really quickly? That's one of the products we're working on that I think is going to be really helpful, because you'd be surprised at how much breaking news there is that hangs in limbo while the market's still trying to figure out if the information is real or not. The LLM can make that call in a second and then make the trade. And I feel like that's going to be a big game changer. But as with all things in finance, everyone will have it within six months or a year, and then anyone can react to things very quickly. Just today, Hims announced that they were going to carry a GLP-1 product, which was going to be good news for Hims, but it took the market a few minutes to really digest it. Whereas an LLM would immediately be like, yeah, buy that. And I feel like that is a product that certain big firms are making, but nobody's making it for the little guy.
And I'd like to make those types of products for firms, too, but also for the little guy. And there are all kinds of other little nuances in finance that could really benefit from this. Traders themselves, I mean, they've mostly been wiped out by computers. I feel like there are still hedge funds out there with humans, but they usually pale in comparison to the quant hedge funds. We're also considering, you know, how do we bring quant tools to people, too? — Are you trading? It looks like on your live stream you're trading. So you yourself have not been wiped out by — No, I think there are still people that could do it, but I think it's a fool's errand. It's sort of like Kasparov versus Deep Blue: you could see the writing on the wall even years ago, and today there are still good chess players, but it's really hard to beat Deep Blue. And I feel like you're going to have a time and place where, say, private equity or venture capital, these are places AI is not going to compete in for another 10, 20, 30, 40, 50 years maybe. You know, somebody's going to come up with a startup that competes with yours, and you'll say, who are these guys? Is it that guy I met at MIT, or that guy I know from Anthropic? It's like, no, it's a [ __ ] machine. It's an LLM. And that would be, you know, really ironic. — This first article that you wrote was called AI's $200 billion question, and then I think it got updated to AI's $600 billion question. I actually imagine a lot of folks listening to this have read the article, and of course we'll post it in the show notes, but let's not assume that someone's read it. Maybe you could lay out the case. It feels like your article might still be a little bit confusing to someone without your finance background.
So try to dumb it down for the AI audience here, if you don't mind. — Yeah, happy to. Just to start, the basic question I was trying to answer, and this was from my own curiosity at the time, was: where's all the revenue? You know, we're spending hundreds of billions of dollars on AI. The hyperscalers are each building out these massive data centers. I just didn't have a great intuition for the scale of this investment, and the scale the AI ecosystem needs to get to in order for these investments to make sense. And so AI's $200 billion question, which became AI's $600 billion question, was basically some napkin math that I had come to, or intuited, over time, thinking through how we go from data-center investment to the amount of revenue required to pay back these investments. — So this number is like the total spend on data centers in some period, right? — So let me walk through the basic math. In 2024, when I published the $600 billion piece over the summer, we thought Nvidia was going to do about $150 billion of run-rate revenue in 2024. And for every dollar you spend on GPUs, you have to spend a dollar on the data center: energy, generators, power, all this stuff. So assuming the people buying these GPUs are using them to create AI systems, you can apply this 2:1 multiplier and say, okay, for $150 billion being spent on GPUs, we're going to spend $300 billion on data centers. And then the question was: okay, fine, we have this GPU capacity, we have these data centers, but now some startup somewhere, some enterprise somewhere, is going to use that data-center capacity to deliver a service. So you're a startup, you're using OpenAI's API, which is calling a model running in Microsoft's data center somewhere.
You want to earn a margin, and so I just assumed, for the sake of argument, that you, Mr. Startup, want to earn a 50% gross margin on your product. That means for every dollar you spend on AI, you need to generate $2 of revenue. So you basically get this second multiplier. You take the $150 billion, you double it to $300 billion to get the amount of investment in data centers, and then you double it again to $600 billion, and you say: okay, $600 billion is the amount of money that all of the AI startups, enterprises, all of the companies using AI combined, need to generate to pay back one year of investment. And there's maybe one final important point, which is that $600 billion isn't this total number into infinity. Actually, if in 2024 we spend $150 billion, that's $600 billion of required revenue. If in 2025 we spend another $150 billion, now it's $1.2 trillion, and it goes up and up. It's almost a debt we've taken on that we now have to go pay back over time in terms of AI revenue. Microsoft is now spending roughly $20 billion a quarter on new data centers, and that started to stabilize in Q3 and Q4 of the last calendar year. Google is spending about $13 billion per quarter on data centers, and that's also starting to stabilize. And then Meta and Amazon, while they haven't fully stabilized, I expect will stabilize around the same levels. So you'll see Amazon stabilize in the low 20s, similar to Microsoft, and you'll see Meta stabilize in the low teens, similar to Google. And so what that means is AI's $600 billion question is not going to become AI's trillion-dollar question. We reach this state at which it's no longer growing, no longer exploding. Now, the second piece of that is the revenue piece. When I first published the piece, I basically said, "Hey, we need to generate $600 billion of revenue. Are we doing that? Are we succeeding at that objective?"
And I tried to count as generously as I could. I said, "Okay, some billions of dollars of revenue from OpenAI. You have a bunch of startups that are each generating tens or hundreds of millions of revenue. Let's assume the big tech companies are generating five billion dollars of revenue, which I think was a very generous assumption." And in the summer of last year, I basically said, "Hey, there's this $500 billion hole, this huge gap between where we need to be and where we are." Now, if you fast-forward the clock to today, it's pretty similar. Sure, OpenAI is bigger, Anthropic is bigger, but OpenAI is still the lion's share of the revenue that's been generated in the AI ecosystem. The big tech companies have not fully unlocked AI revenue in their businesses. We saw this week Google announce that you have to buy their AI product through Gmail, right? They're trying to figure out how to distribute this AI product. But we're actually in a very similar place to where we were in July of 2024.
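Cahn's napkin math is simple enough to sketch. Here is a minimal Python version, assuming the figures stated in the conversation ($150B of GPU spend, a 2:1 GPU-to-data-center multiplier, and a 50% gross margin for the companies selling AI products); the function name and defaults are illustrative, not from the article:

```python
def required_ai_revenue(gpu_spend_bn: float,
                        datacenter_multiplier: float = 2.0,
                        gross_margin: float = 0.5) -> float:
    """Napkin math: GPU spend -> total data-center investment ->
    revenue the AI ecosystem needs to pay it back in one year."""
    total_investment_bn = gpu_spend_bn * datacenter_multiplier
    # At a 50% gross margin, every $1 of cost requires $2 of revenue.
    return total_investment_bn / (1.0 - gross_margin)

# 2024 alone: $150B of GPUs -> $600B of required revenue.
print(required_ai_revenue(150))      # 600.0
# Spend another $150B in 2025 and the cumulative bill doubles.
print(required_ai_revenue(150) * 2)  # 1200.0
```

Note how the answer is just two doublings, which is why the "$600 billion question" scales linearly with each additional year of GPU spend.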
Cloud Prisoner's Dilemma
— So where, then, is the money coming from to buy all these data centers, if the revenue is not being generated? I mean, there's certainly not enough VC investment, I think, to fund hundreds of billions of dollars of spend. So doesn't it have to be inside the enterprises where this is really happening? — This is a great question, and all of these pieces that I'm writing, it's just me thinking out loud, right? I asked the same question. I didn't know the answer, and then I went and tried to figure it out, and I published this piece about what I call the prisoner's dilemma of AI. I basically explained that the cloud business itself is about a $500 billion business, and the cloud business is the golden goose for Amazon, Google, and Microsoft. It's a very profitable business. And we now live in a world, and I think this is remarkable even in the history of business, where seven companies represent 33% of the S&P 500, right? The Magnificent 7. So what's basically happened is you have these seven oligopolists, monopolists, whatever you want to call them, and they all have this golden goose, their cloud business, and they're all trying to protect it. I think what's happening in AI is so fascinating in part because we're living in this unique time where we have this very powerful cloud oligopoly competing with itself. If you're sitting in the board meeting of Microsoft, you're telling yourself, "Hey, we don't want to fall behind Google. If we do, we're going to lose our position in this amazingly lucrative oligopoly." And so you have this sense that everybody has to spend in order to compete. So, to answer your question very directly, the dollars that are going to fund these data centers are coming out of existing profits from what is a very lucrative business that's been built over the last decade. — The money has already been committed.
The infrastructure is literally going up right now. So the real question is, who's actually building something that's worth all of that investment? — We always said it was one of our goals to be a multi-decade technology company, from the very moment we started, because we figured that was hard. Every 10 years, like right now, you get this major technical shift: half the companies disappear, some new ones arrive, and some survive and thrive in the new era. So we really wanted to be a multi-decade technology company. And someone asked me the other day in an interview, "You've succeeded in that," and I'm like, no, we haven't. Well, I guess we kind of have, but we're not done by any means. I like to say that since the start, we solve people problems; we don't solve technical problems, right? We don't really have anything that knows what languages you're writing in or anything else. And developers are a relatively tiny portion of our audience; technical teams are less than half of our total audience of active users. But that's important, because it goes to the nature of your question, which is: what we do is connect technology teams with business teams to run, let's call them business processes, workflows, whatever you want to call it at a broad level. It turns out, as most businesses nowadays are technology-driven businesses, that is incredibly important, right? A bank, an automaker, a drug company: these are really using huge amounts of technology, software, hardware, whatever else, to do their fundamental business. And so how their technical teams work, what they build, how they make products, and how they connect with business teams is really important. That's the core of what Jira and Confluence have done for 20 years.
And in fact, as we've grown, there are far more business people in most companies than there are technical people in the broadest possible sense: product managers, developers, all of them put together. We started maybe six years ago, I think in 2019, building what we call the Teamwork Graph. That is because our applications have many links in them, to each other and to many other apps, right across our 20-some apps, collected in maybe five or 15 collections; they all share a central cloud platform. We track billions of links to other applications: be it a Google Doc, a Figma mockup, a pull request on GitHub, a Salesforce customer record. These links we started to assemble into a big graph. It's now a massive graph: it's north of 100 billion objects and connections, and I think it's growing north of 50% quarter on quarter at the moment. The reason for that, the most essential part, is: how are all these objects floating around in your world connected? We've put all of those into the Teamwork Graph. That seems like a simple challenge at the start. It turns out it's very complicated, because you have to understand the type of the link. In a technical scenario, if I have an issue in Jira connected to a pull request, I can infer something about that. That pull request is part of a repository. That repository is part of a project. That project is part of a business initiative, etc.
That initiative has a spreadsheet, and so on. So the types of content are really important in the Teamwork Graph, and obviously all of that in an enterprise has permissions, so I need to be able to navigate it, surf it, query it, and get things back. Then we add semantic search over the top. That's the core of what we've built into Rovo. We often call it, for customers, an enterprise search engine, probably one of the biggest, if not the biggest, enterprise search engines in the world at the moment. And I would also argue it's the best, because we've spent a lot of time and have a lot of knowledge to give you great ranking and context on that information. Now, in 2019, when we started on the Teamwork Graph, it was because we needed it to connect our workflow applications with all the other SaaS things you were doing. You're starting to use Figma diagrams over here in product management, you start to see IT teams connecting their Datadog or BigPanda or Splunk reports, and you start to see marketing teams connecting into Sales Cloud. So we had all these links, and we started to put them into a bigger and bigger graph, and it got more and more complicated. Then, call it three years ago, along come LLMs, and we have a superpower moment, because we have a graph of all of the objects in your company. Your organizational knowledge has now become your organizational memory. Why does that change things? Because now we can understand the content of the documents, right? We can use the LLM to pull out concepts, definitions. We have a feature that's super popular where you can select any word in Confluence or Jira and just say, define this word. Now, people don't usually define, I don't know, the word "progress" or the word "disruption." They define the word "fairy dust" or "alchemist," because they're like, what the [ __ ] does this internal code word mean? And we can give you a full history, and then you can start chatting with it in Rovo Chat.
So obviously the graph and the organizational memory are a large-scale, permissioned knowledge base of your organization across hundreds of SaaS apps. That's at the core of what Rovo is as a piece of technology, let's call it, for now.
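The typed-link inference described above (issue to pull request to repository to project to initiative) can be sketched as a tiny graph walk. This is an illustrative toy, not Atlassian's implementation; all node and edge names here are hypothetical:

```python
# Toy sketch of a "teamwork graph": typed nodes, typed edges,
# and inference by walking "part_of" links upward.
from collections import defaultdict

edges = defaultdict(list)  # node -> list of (edge_type, target)

def link(src, edge_type, dst):
    edges[src].append((edge_type, dst))

# Hypothetical objects mirroring the example in the transcript.
link("jira:ISSUE-42", "implemented_by", "github:pr/901")
link("github:pr/901", "part_of", "github:repo/payments")
link("github:repo/payments", "part_of", "project:checkout")
link("project:checkout", "part_of", "initiative:q3-growth")
link("initiative:q3-growth", "tracked_in", "sheets:budget-2026")

def containing_chain(node):
    """Follow 'part_of' edges to recover an object's business context."""
    chain = []
    while True:
        parents = [dst for t, dst in edges[node] if t == "part_of"]
        if not parents:
            return chain
        node = parents[0]
        chain.append(node)

print(containing_chain("github:pr/901"))
# ['github:repo/payments', 'project:checkout', 'initiative:q3-growth']
```

The point of typing the edges is visible here: a "tracked_in" or "implemented_by" link is not a containment link, so the walk ignores it, which is the kind of distinction a permissioned enterprise graph has to make at scale.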
Beyond Language to Reality
— I don't think there's one single element to why models are getting better over time; I think it's a combination of elements. I was listening to Ilya's podcast from a couple of days ago, and I really like the idea of phrasing this era as the era of research, but with bigger computers. So we're back to fundamental research: you need to understand, and spend a lot of time doing science and trying experiments. And if you're really good at trying experiments, I would say that's one of the things that works really well here. — But specifically, I guess, where's the frontier here? Where do models still kind of underperform? What are some things that your latest model can do that a previous generation couldn't, or where on the leaderboard do you think you're getting an advantage, and why? — I think there are definitely a lot of things that models can do today that they couldn't do before, but there's still other stuff they're not able to do today that I'm sure we'll be able to solve, I hope, in the future. I think overall this idea of world understanding has become more evident: the models are really good reasoning systems that understand temporal and spatial consistency, that can understand cause and effect, just the world. And the implications of that are pretty broad. There are ways of customizing or fine-tuning these models so they can also be useful in other domains.
And that becomes extremely interesting from a general-intelligence perspective, where I think one of the bottlenecks of language models is that language is always constrained by what language actually is, which is a human abstraction of reality, right? We've created this mechanism for us to communicate with each other and describe the world, but it's not an accurate representation of the real world; it's an abstraction of the world. And so being able to train not on language alone but on observational data, on real data, on video data, allows the model to grasp reality, and how the world works, in a much more consistent way. A lot of the work we're doing now is heading towards what that means, and how you scale and extrapolate those abilities to hopefully do much more than just video generation. — Understanding the world is one thing, but actually building the data that teaches a model to understand it is a totally different problem. — We're trying to generate data that will help AGI. One of the things we often do, when new companies come to us and ask if they can work with us, is that if their goal is kind of unaligned with AGI, then we actually just say no to them, because we don't want the revenue; we want to focus on the AGI companies. So I think, again, going back to not raising: the fact that we don't have an external board, the fact that we don't have VCs who are dying to make as much money as possible, that gives us a sort of freedom that allows us to focus on the most important problems, which has allowed us to maintain that research focus. — Totally. So you only work with companies focused on building AGI? — Yeah.
So, for example, if a company came to us and said, let's say I'm a newspaper and I just want to train a category classifier. Yeah, we just say no. — What if I want to make, like, an automatic video generator? Would that be in your realm, or is that too — Yeah, I mean, we'd do that, just because building such video generators is part of building AGI. So yeah, in that sense. — Look, I imagine five years ago the data being collected was pretty different than the data collected now. I would think, well, tell me if it's true, but I would think some of the tasks that you'd be doing five years ago would be easily automated by LLMs today. It's just been such an astonishing pace of improvement. Can you talk about how the types of tasks have changed over the last few years? — Yeah, so many of the types of work that we do are very different. When we first started, a lot of our work was in tasks like search evaluation or content moderation, whereas today it's almost purely LLM work. That's one big difference. And then even within LLMs, there's been this obvious trend towards higher complexity, higher sophistication, higher expertise. I can give a couple of examples. For one, there's been a big increase in multimodal complexity. A few years ago it was all text data, just conversational text assistants, but now we do a lot of work with images and audio and video. And I think the interesting thing is you actually want the models to understand all of these modalities simultaneously. One of the things you might want to do is: okay, I'm taking a video of something on my phone, and now I'm asking the model to create a program, based off the video on my phone, that simulates this in real life.
So yes, there's been a lot of increase in multimodal complexity. There's also been a big expansion in languages. At first people were naturally focused on English-only work, but we actually work in over 50 languages now, and what's also interesting is that it's very hyper-specialized: we support coding in Argentinian Spanish, and we support legal and financial expertise in Bolivia. Even today, a lot of the models are surprisingly just not that good at the nuances of different languages or different dialects or different cultures, so there's actually still a lot more progress to be made there. And then probably the biggest shift is just the depth of expertise a lot of the work requires. You see the models winning IMO gold medals now, you see them doing all these incredibly advanced tasks, so you actually need serious thinking power behind them. Even today, some of the tasks we do involve spending days or even weeks solving these really interesting problems, so it's a very far cry from tasks five or ten years ago, where you might spend five seconds labeling a task. — What is a topic you don't get to work on that you wish you had more time for, or what's something that's underrated for you in machine learning right now? And I realize it's a funny question to ask an obsessed ML founder, but I'll ask it anyway. — I think audio generation. It's catching up now, but no one has really been paying a lot of attention to it. There are some really interesting open-source models, from Tacotron 2 to a few other things out there.
I think that's going to be really transformative for a bunch of applications, and we're already stepping into some of that, but it's hard for an industry or a research community to focus on a lot of things at the same time. Now that image understanding has, in a way, kind of been solved, people are moving to other specific fields, and I think one of the ones we're going to start seeing very soon is audio generation. So yeah, excited for that.
Open vs Closed and Big Bubble
For sure. — Do you have an opinion on whether open-source or closed-source models are going to win in the long term? — My guess is that, at least with the current state of things, closed-source models will continue winning. Part of that is because LLMs are just so valuable that if you ever try to build open-source models, the way incentives currently work, eventually you're going to be forced to close-source them. So if we want to build truly open-source models that are really good, we almost need a different kind of incentive structure to make sure that happens or remains in place. Otherwise, if you just look at the history of other types of open-source models, they have kind of gotten more closed over time. — What do you mean? What are you referring to? — I mean, if you even think about Meta, which is thinking about making Llama closed source: models are just so expensive to train, and people want to fully capture the value, so if you ever build a truly good open-source model, I think it won't remain open source for very long, again, unless you can change the incentive structure in some way that I haven't figured out yet. — One of the spicy takes of yours that I found from, I don't know, six months ago was saying AI is going to enter a bubble and the bubble's going to pop, which you could argue was extremely prescient. I'm not sure where you feel we are in the hype cycle, or whether you're expecting a bigger bubble, but it does feel like we maybe had a peak two months ago and are maybe coming down from there. What's your take? — Yeah, it's hard to say. I think in the '90s this happened with dotcoms too.
It wasn't just dotcoms; it was a mix of dotcoms and other tech, including communications technology. So you had this bubble, and it felt like maybe you were getting to a sigmoid, but then it sort of gapped up again. I feel like that's probably going to happen here. I think this is maybe one of the bubbles to end all bubbles. And one of the funny things about it is that in every bubble, sometimes you actually sit there and think maybe it's real, you know? I think the smartest people, after a while, are like, I don't know, maybe I should do internet stocks. And I feel like that's exactly what's going to happen here. I think OpenAI will go public. I think it'll be a trillion-dollar market cap. I think the bubble will get bigger. I feel like it's possible it all comes tumbling down tomorrow. But I also feel like, almost more than the internet, AI has immediate impact for a lot of organizations. And the other key thing is that the biggest beneficiaries are going to be companies like Verizon and Procter & Gamble, where they're going to be able to save a billion dollars here and there, which is a huge amount for the S&P 500: they can save money deploying AI tools, and maybe they can spend money using products like yours to ask, what can we do with the data we do have internally? Really smart companies will pass those savings on to efforts like that. The internet did that for so many different S&P 500 companies; it's what fueled global growth. — We've seen with coding agents that this is a really powerful solution set for existing, running businesses. We have tons of examples internally where we use it quite a lot. As an example, we had an internal service that changed its shape and location. I'm trying to think what the service did.
I forget now, but let's just assume it was a REST endpoint and someone was changing the URL. And not only changing the URL, which is basically find-and-replace: they were actually changing the shape of the request that went to it and what came back. The API changed in some way. — Not meaningfully, but a little. — We had to change more than 500 repositories internally. Sometimes it was configuration, sometimes it was actual code, depending on how they used the thing. That is a great task to put AI coding agents on, which is very different from what we traditionally think about. How we did that was using Rovo Dev and agents in Jira and so on. The way we mostly used Rovo Dev was to write a few examples of how this worked, maybe one in JavaScript, maybe one in Java, and to commit those examples; then you get the examples into the graph. Now you can say, go find me any other place you think I should do this, and it comes back. But it requires, and I think this is the other learning, a model where we think about human-AI collaboration as the way the world is going to evolve. AI is not going to replace human beings. It is a force multiplier for human creativity. It is an accelerant, not a replacement. — It's a force multiplier. That's the practical, boring answer. But underneath that there's this much older question, one that every religion, every philosopher, has been trying to figure out for centuries, which is just: what does it actually mean to be human? I wonder whether your religion influences your thoughts on AI. I ask this because you're one of the few people who, I think, talk about religion more than most, and I do feel like AI has almost religion-scale implications when you think about AGI and things like that. So maybe this is a bit of an unusual question, but I wonder if there's any connection there for you.
— I'm deeply fascinated by religion, and we've had some conversations about religion in the past. I find all religions interesting because I think religion is trying to answer this question: what does it mean to be human? And that question, as you say, has a lot of relevance in AI. I've read the major books of Christianity, Judaism, Islam, the Mahabharata, the Tao: the books of all the major religions, in my own personal quest, my own curiosity, to understand what it means to be human, what this experience is. And I think of institutional religion as the wrapping paper. Once you tear away the wrapping paper, what's inside is similar, I think, across different cultures and contexts. And what's inside is this moment of confrontation: confrontation with consciousness, confrontation with God, confrontation with truth, confrontation with things that are really essential and important. So bringing it back to AI: you probably don't remember that this influenced me so much, but you sent me GEB one year for Christmas as a gift. This is Gödel, Escher, Bach, a book by an AI researcher about AI. He offers his own hypothesis, a non-religious hypothesis, which says that the unique element of human consciousness is this idea of self-reflection, of recursiveness, and he runs that through math in the work of Gödel, through art in the work of Escher, and through music in Bach. And he says that this is essentially the uniqueness of the human condition. — Thanks so much for listening to this episode of Gradient Descent. Please stay tuned for future episodes.