# Stanford CS221 | Autumn 2025 | Lecture 19: AI Supply Chains

## Metadata

- **Channel:** Stanford Online
- **YouTube:** https://www.youtube.com/watch?v=lPx5PF1ttkc
- **Source:** https://ekstraktznaniy.ru/video/20902

## Transcript

### Segment 1 (00:00 - 05:00)

In the last lecture we started thinking about the societal impacts of AI. The picture, I think, is that most of us, when we think about AI, think about what's in the box: how to make AI, how to design objective functions and algorithms, and so on. But since last time we've been thinking a lot more about what's outside the box and how it connects to what's inside. So upstream we talk about the resources that are used to make AI, and then downstream, where AI is used; that's the mental picture. I think it's really important for us technologists to appreciate the larger ecosystem, and this is something I've tried to emphasize a lot. Today we're going to continue on our journey to understand more deeply how to think about societal impact, so we brought in Rishi to help us out. Rishi finished his PhD at Stanford last year, and now he is a senior research scholar at HAI. Rishi is known for leading the paper that coined the term "foundation models," and he's involved in high-level discussions on AI policy in both the US and the EU. Most recently he's been thinking a lot about understanding the impact of AI through economics, which is obviously a whole field in itself, so we're excited to have him tell us more. — Thanks, Percy, for the introduction. I'm glad that you're thinking about AI and society in this class, because there is plenty to think about. I was excited when Percy asked me to give this lecture because several years ago I was in the position you are all in, taking AI classes as an undergrad at Cornell, and I learned a lot there about how to build AI systems. But something I didn't learn at Cornell, and what I spent most of my PhD working on, is trying to think about things beyond how to build and study AI systems.
Instead, I think about how they affect society and, in particular, as is the subject of this talk, the economy. I'll try to give both some concrete aspects of this puzzle of the economic impacts of AI as well as some abstract conceptual frameworks over the course of this talk. When we talk about the economics of AI, we could mean a variety of things, so to start I'll cover a few of the important questions in the field. One is that the relationship between AI, as a sector or a class of technologies, and the economy is evolving, and it's already quite clear that AI is quite important to the economy. One sense in which you can gather that: if you look at the top seven AI companies in terms of valuation, or in terms of market cap, they form over a third of the entire S&P 500. A more general question is how the overall economy will be contingent on advances in AI, and how important these companies are to that overall future. So that's one kind of question: trying to understand how AI as a technology, and the companies that build it, operate within the entire global economy. A second question is, within the share of that global economy that is related to AI, which specific companies will do particularly well and extract most of the value? I'll talk about this in a bit when we get to the topic of supply chains in particular. So that's one very large class of questions. A second class of questions is more at the individual level. At the macro level there are a lot of things going on; at the individual level, especially as many of you will soon enter the labor market, there's the question of how AI is affecting jobs and the future of work. Here is a plot from some colleagues here at Stanford who were looking at payroll data.
One of the key insights that will come up throughout this talk is that payments are a key mechanism by which we can study the economy. So what they looked at is data from ADP. This is a

### Segment 2 (05:00 - 10:00)

payroll company, in fact the largest payroll company in the world. What ADP has data on is how different firms in the economy hire and spend money on workers. So you can look at, for example, when firms are spending less money to hire workers or paying them lower wages. What they find, and the relevant line here is the blue one falling toward the bottom right, is that after the advent of ChatGPT in 2022 there was a pretty rapid and steep drop-off in software development hiring at the junior level. More generally, there's a collection of work on the economics of AI trying to understand how jobs are changing, how the rate at which we hire in different jobs is changing, and how the collection of tasks we bundle within the construct of a job is changing. All of these questions are in scope. Another interesting category of questions concerns the relationship between workers, or between different demographic groups, because of AI. In this plot, also from Erik Brynjolfsson here at Stanford, what they're looking at is the context of a call center. They do a case study of a call center where some of the workers adopt a generative-AI-based tool in 2023, and one of the interesting findings is that the impact of this tool is most pronounced for the most junior workers. In a call center you have workers that have spent different amounts of time there. For workers that have just joined the call center, at the left of this plot, you see that the impact of AI is the largest, whereas for people that have been at the call center for a while, the benefits of using AI are much smaller. So there are a lot of questions to think about at these varying levels of scale in the economy when we think about AI.
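To make the kind of analysis behind the payroll plot concrete, here is a minimal sketch of indexing a hiring series to a base month just before ChatGPT's release, so that values below 100 mean a relative decline. All the headcount numbers here are invented for illustration; the real study uses ADP's firm-level payroll data.

```python
# Sketch of the indexing behind the hiring plot: normalize each month's
# junior-developer headcount to a pre-ChatGPT base month.
# All headcounts are hypothetical.
headcount = {
    "2022-10": 1000,  # base month (pre-ChatGPT)
    "2023-10": 930,
    "2024-10": 820,
}

base = headcount["2022-10"]
indexed = {month: round(100 * n / base, 1) for month, n in headcount.items()}
print(indexed)  # {'2022-10': 100.0, '2023-10': 93.0, '2024-10': 82.0}
```

A falling indexed line, like the blue one in the plot, is exactly this quantity declining over time.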
How can we make progress on those different types of questions? One of the key things, in line with what Percy mentioned, is that we need to think about two things in parallel: AI as a class of technologies, and the set of organizations that are developing and using AI and affecting society through it. Given that dual lens of simultaneously thinking about the organizations and the technology, I'll walk us through some material on supply chains and how they're developing at the moment, and then I'll shift gears to the economics of growth and how we might think about the future and AI's impact on the economy. To give you these two different perspectives: on one hand we have the technology, as we study in this course, and on the other you have all of these organizations out in the world, operating in complex ways and relationships with each other. Why do we actually need this second piece? Why does it not suffice to reason about the economy with the former alone? Why can't we just focus on the technology, understand its trajectory, which is already quite complicated in itself, and use that to determine where the economy will go? Well, there are a few reasons why you need to think about organizations. It may seem straightforward: of course the economy is built from organizations, and so you need to think about them. But I want to give you precise reasons why it's useful to think about these organizations. The first is that when we think about AI as a general-purpose technology that cuts across the different economic sectors, most economic work, and most of the economy, is not just the tech sector.
And so the overall cumulative impacts of AI will come about because of the behavior of all these other organizations, most of whom will either choose to adopt AI in some particular way or not. We need to think about these other areas of the economy if we're going to understand overall economic change. The second is more of a distributional reason: understanding the trajectory of the technology might tell you something about AI being important economically, but it doesn't in itself tell you which specific companies will benefit most or be worst off because of AI. If you want to understand those types of questions, you need to pay attention to the specific organizations in play. And finally, at a more micro level, there are a lot of decisions to think about that go beyond the technology itself, which I'll talk about on the next slide. So for all of these reasons, organizations are going to be an important part of how we think about AI and its economic impact. Let me ground that in a more concrete example. Here are some benchmark results; it doesn't actually matter which benchmark it is for this

### Segment 3 (10:00 - 15:00)

talk. What I want you to notice is the three models, and thereby companies, on the left, at the top end of this benchmark: Google, Anthropic, and OpenAI. They get roughly the same score. The high-level point I want you to take away is that, in terms of capabilities, this might lead us to conclude that these companies, or at least their models, are at roughly the same level of capability. And that might lead you to the broader conclusion that we can think of these companies as substituting for each other in the economy. The problem is that even if that were true, even if we gloss over all the ways in which these different models might differ and say they're all similarly capable, there are a lot of other choices these three companies make that differentiate the roles they play in the economy. What decisions can these companies make beyond the level of capability of their models? Well, they decide when to even release those models and place them on the market. They decide how to price those models, which ultimately affects how consumers and enterprises choose to use them. They decide what further products to build and how to vertically integrate their models into markets downstream of them. And they decide how to work with other actors in the ecosystem and in the economy.
So for all of these reasons beyond the core technology, which might be similar, these companies can have very different impacts on the global economy. This is just one vignette, but there are many such examples where a lot of things are happening in parallel in the economy, and the role of individual firms in changing the economy can be quite complicated and differs from what the technology alone tells us. That's the inroad I want to give for why we want to study supply chains: we understand something about the technology and the different elements in play, but there's a lot of other structure in the ecosystem shaping the economy that we also want to think about. To do that, I'll use an abstraction fairly similar to the one Percy gave you: a graph where we think about the resources that go into model development and then, ultimately, the applications that come out as the final products of these models, which affect users in a variety of ways. The two principal inputs are data and compute; I'll talk about their supply chains in a moment.
Then you have these foundation models, distributed by a variety of sources, and ultimately a bunch of AI systems built on top of them that people or enterprises use for some particular purpose. When we talk about the supply chain, there's one way of talking about it which is technological: there are these specific datasets, there are this many GPUs, there are these models, there are these coding tools, APIs, and so forth. And then there's a parallel way of discussing the same supply chain: there are these news companies, these cloud service companies, these AI companies, these coding-tool companies. When we're thinking about the supply chain, what I'm encouraging you to do is think about both of those in parallel: there are companies doing things and having relations with each other, and there are specific technical assets mediating those relations. I'm going to talk about three particular regions of the supply chain. The supply chain is obviously very complicated, so I'm going to pick out three regions that seem pretty important, where we can understand things on both sides at the same time. The first is compute. I think for most of us in CS, we often have a simplified understanding of compute: there are some NVIDIA GPUs that sit in some data centers, and that's pretty important. Maybe that's roughly true, but the compute supply chain is actually considerably more complicated than just NVIDIA and data centers, and I want us to think about some of the elements of that supply chain that are worth attending to. Semiconductors and chips are pretty complicated. Here is an actually somewhat simplified map of that space. A lot is going on; this is not a talk on semiconductors, so I'm not going to try to explain all of it.
There are two things I want you to remember from here. First, many companies play multiple roles in the supply chain: NVIDIA, for example, might be doing something in chip design while also doing something in the development of CUDA and other aspects of chips. And the other is that there are a

### Segment 4 (15:00 - 20:00)

lot of companies. I'm going to pick out three that are very important in the supply chain on the next slide. There is a lot of complexity here that I'm going to skip, again because it's not the main focus of this talk, and really try to pinpoint three specific companies that are very important to our understanding of the supply chain. The first (I'll actually stay on this slide and advance it later) is ASML. ASML is a company in the Netherlands that develops lithography, an optical technology you need to fabricate chips; in particular, you need fairly sophisticated lithography tools. The point I want to make here is not about how lithography works, but about the role ASML plays in the supply chain: as this picture shows you, ASML is pretty much the only game in town. It holds a global monopoly on this level of lithography technology, and as a result ASML is a very important player in the supply chain, because of that monopoly and because it's a dependency for everything that follows in the development of chips. The second company I want to talk about is TSMC, based in Taiwan. TSMC actually manufactures the chips and as a result is a critical dependency for NVIDIA, which is the third company I'll talk about. The point I want to make is that when we study supply chains, one thing we're often very interested in is monopolies, or their approximations, and these three companies have a very outsized role at their three levels of the stack.
We'll see this in that these companies are valued very highly as a result, to no surprise, given how important semiconductors and chips are to the overall economy; in particular, these are the three most valuable companies in their respective parts of the world. So when we think about supply chains and compute, these three companies, and the fact that each of them holds very large market share in its respective market, turn out to be a very important first analysis of what's going on in the compute part of the supply chain. And why should we care about that? When we talk about data and compute, the reasons to care about the supply chain are somewhat different, and I'll try to distinguish them. Even though these are the two principal inputs to building modern AI, one of the reasons we're interested in the structure of the compute supply chain is to understand things about resilience, in particular because we have these bottlenecks where individual players are fully responsible for the functioning of a layer of the supply chain, and are therefore the essential dependency for the next layer. And we see at the compute level that a lot of the value accrues to these specific companies, in part because they're so essential to the functioning of the supply chain. The final point I want to make in passing, though I'm happy to talk about it more, is that this also adds complexity, because each of these companies is so important that it features in the entire geopolitical conversation about AI. For example, TSMC, being based in Taiwan, is very central to the conversation about US-China competition if you go to DC and talk to people about AI, both because of its location and because of how essential TSMC is.
You see this similarly at the geopolitical level with NVIDIA, in the discussion of export controls on which NVIDIA chips can be distributed from the US to China. So when we're thinking about these supply chain conversations, this maybe adds a third layer of abstraction: first I told you about the technology and how chips are built; then there are these individual companies that are very important to the supply chain; and then there are these broader government-level or larger societal conversations that are all connected to how the underlying technology is built and the amount of concentration in how chips are developed. So that's something about chips. Now let's move one level downstream, to clouds. When we're thinking about clouds in the AI context, we're thinking about compute for two purposes: model training and model inference. We have three major clouds that have for quite a while been very large players: Amazon, Microsoft, and Google. This has a clear role in shaping how model training is done; for example, many of the frontier labs, the companies developing AI, have partnerships with or dependencies on these three cloud providers. On the inference side, we see a more heterogeneous market, with a lot more

### Segment 5 (20:00 - 25:00)

fledgling companies doing inference at scale, and we're seeing that specialized market evolve as well. The reason I bring this up is that there's an additional set of elements of the supply chain that I don't have time to cover but that are also important: to get compute to work, you don't just need chips. It's not sufficient to throw a bunch of chips in a data center; you need the actual data center and its infrastructure as well, which implicitly means you need land, electricity, water, and all these other resources. So there's a lot of work happening in parallel on defining that part of the supply chain, especially as it becomes more of a bottleneck in the development of AI. So that's something about the nature of the current compute supply chain. I'm going to shift gears now to talk about the data supply chain as the other important input to AI development. On the data side, we're going to see a very different picture, in the sense that we're going to have a much more heterogeneous set of entities in play, and we're not going to see the level of concentration that we saw on the compute side, in part because data is less capital-intensive to produce than compute. When we take the supply chain perspective toward thinking about data in the context of model development, I think it's useful to distinguish data within AI companies from data outside AI companies that needs to somehow get into them for them to use it. Thinking about it that way, we can parcel out the different ways in which data comes to be acquired by an AI company building models. First, there's data that the company itself may produce by some method; this is, say, synthetic data used in post-training models.
So this is data produced using, say, reinforcement learning or other such methods; it will largely not be of interest in this talk because it's data the firm itself is able to produce. Then there's data that the firm acquires through a set of pre-existing methods. This is data that users provide: these can be users of something like ChatGPT in the AI context, but also users of existing products and services the firm builds, something like Gmail or Facebook. This is all data that, by a set of existing mechanisms, comes to be within the firm before we think about training any specific AI model. Then there is data that is outside the firm but is nominally public and can be acquired in some way. There are datasets, like SQuAD, which we built here at Stanford, or others like The Pile, that are openly available and freely accessible. There's also the entirety of the web, which can be crawled; we'll come back to web crawling in a moment as a very important means of data acquisition. What's important here is that, in some cases, there might be a specific owner of these datasets, but generally the firms trying to acquire this data are not negotiating with a specific entity; they're using some other means to acquire it, maybe just downloading it from some website. And that's distinct from data that is also outside firms, but where they specifically interact with a party to acquire it. This is, for example, data generated to create models, such as new annotations or new types of data from companies like Scale, Mechanical Turk, or Mercor, that is maybe used in post-training or other stages at this point. Or data that was generated for some other purpose that some firm owns.
For example, Reddit owns the data on the Reddit platform, and the New York Times owns its news articles, and these firms can license or sell that data to AI companies. So you have all of these different mechanisms responsible for providing some fraction of the data used to train modern models, and often that data is used for different purposes: some of it is maybe used for post-training for safety, other data is used for general pre-training, which maybe relates to the volume of that data and how easily it can be acquired. Let me dive into one very specific example: web crawling. A large fraction of the data used to pre-train models comes from crawling the internet. One of the interesting things to understand is that, just like on the compute side, the data ecosystem is complex and is changing in parallel with AI development. One of the things we can look at is how individual websites are changing their policies on how their data can be acquired for the purposes of model

### Segment 6 (25:00 - 30:00)

training. What you can see in the first two plots is that, as a function of time, especially in the last few years, more restrictions are being placed on the ability of web crawlers to crawl the internet. For example, as you can see in the second plot, the share of websites that disallow crawling entirely has increased over time, while the share with no restrictions at all has decreased. Another interesting thing is the asymmetries this imposes on different companies using that data. Here you see the special case where a website, in its robots.txt file, names a specific crawler that is not allowed to crawl the website; at what rate is that true for different crawlers? You see some asymmetries: for example, OpenAI's crawlers are more often targeted than Anthropic's. This is going to be important as we think about how the data ecosystem evolves, given that data might be a way we differentiate between different models, or at least the capability level of the models trained on that data. Another thing that's interesting when we think about data through the lens of supply chains and how it's acquired is that it reveals an interesting distinction in how data is priced: firms, or AI companies, pay very different amounts and use entirely different pricing schemes for data based on how they acquired it. So think back to the six categories of data I showed two slides prior.
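The crawler-specific restrictions described above live in each site's robots.txt file. Here is a minimal sketch of how one would check them using Python's standard-library parser; the robots.txt content is a made-up example of a site that singles out one crawler (GPTBot and ClaudeBot are the user-agent names announced by OpenAI's and Anthropic's crawlers).

```python
# Sketch: checking crawler-specific robots.txt rules with the stdlib
# parser. The robots.txt content below is hypothetical.
import urllib.robotparser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# The named crawler is blocked site-wide ...
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
# ... while all other crawlers are still allowed.
print(rp.can_fetch("ClaudeBot", "https://example.com/article"))  # True
```

Measuring, across many sites, how often each crawler name ends up in a `Disallow` group is essentially how the asymmetries in the plot are computed.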
With synthetic data, what you as a company are paying is the cost of compute in exchange for the synthetic data you produce. With usage data, you're providing some service, and in exchange, as part of how your terms of service are written, you acquire the user's data. That's quite different from buying data from companies that do data annotation, where what you're paying for instead is individual worker hours, labor; or you have some large transaction where you're trying to acquire all of the New York Times data or all of Reddit's data. I think this is interesting because, as we think about how the data ecosystem will evolve, one thing that differs across the acquisition methods is the cost to acquire data of specific kinds, and therefore the asymmetries that imposes on how firms downstream choose to acquire data and build models. Another interesting thing to think about on the data side is how it entangles with the law. There are a number of relevant laws that govern certain aspects of data: laws on copyright, piracy, and the like. One of the interesting things is how that intermingles with the topic of pricing. For example, we saw fairly recently Anthropic settle one of its copyright lawsuits, Bartz v. Anthropic, in the largest copyright settlement in history. What's interesting is that this actually reveals a price: the rate of the settlement. They settled for $1.5 billion for about 500,000 works, which implies a rate of about $3,000 paid per work, if all of those works are claimed in the settlement. So you start seeing that these aspects of the law actually imply relationships about prices.
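The implied per-work rate quoted here is simple arithmetic on the reported round numbers:

```python
# Back-of-the-envelope: per-work price implied by the settlement,
# using the round figures quoted in the talk.
settlement_total = 1_500_000_000   # $1.5 billion settlement
claimed_works = 500_000            # ~500,000 works covered

price_per_work = settlement_total / claimed_works
print(f"${price_per_work:,.0f} per work")  # $3,000 per work
```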
On the data side, why again should we care about the supply chain structure? What important questions does it give us a handle on? One is a fundamental question that comes up on the technical side: what is the availability of data, will we run out of data, and what types of data remain for us to train on? Here there are a bunch of fundamental quantities that we as technologists often don't know the answers to, but could try to estimate. The simplest is: when we train models currently, what mixture of data do we use, at what rates? But then, given that, how much do we know about the underlying rates at which data is being produced across these different sources of data in the world, and ultimately, for each of those sources, what is the current level of abundance of data? I think a lot of the debates on the technological side amount to us guessing at where we think data is going to come from, where understanding the current ecosystem, what is available, and at what rate it gets generated is quite useful. The other point here is to be in

### Segment 7 (30:00 - 35:00)

compliance with the law, as I mentioned on the previous slide. There are a bunch of laws to think about here. There are copyright-related laws and piracy laws, both relevant in the Anthropic lawsuit. There are other laws as well, which might prohibit the use of certain types of data, or even retaining that data in the first place: laws on child sexual abuse material, for instance, since in many jurisdictions it's prohibited even to possess that material, let alone train models on it; or laws in Europe, especially on privacy, like the GDPR. So you need to understand the supply chain, not just the data itself as the end product, to understand whether you're in compliance with these laws. And finally, and this will come up again when I talk about growth, there's the relationship between data as a primitive and the effect it has on competition. One of the interesting things to understand there is what happens when there are restrictions on the ability of some developers, but not others, to acquire data: when, for example, data is exclusive to one entity, does that give them an advantage in terms of the quality of models they can produce? The final point I'm going to make on supply chains, before pausing, is on distribution. I told you something about the two resources that go into building models; now let me tell you something about how those models get distributed and thereby used in the economy. In short, I'm going to walk us through this figure, which shows the different ways in which model developers can distribute their models, or at least do today. At the top of this figure you have the extreme where the developer doesn't distribute their models externally by any means.
They build the models, maybe they integrate them into some products that they distribute, but they don't provide external actors access to the models at all. This is actually not particularly atypical in other parts of software and technology, where specific algorithms or systems are built and fully siloed within the companies that build them, and only downstream products and services are exposed more broadly. Then you have a bunch of intermediary options, which are employed today: maybe there is an API by which you can query the model, maybe there isn't even that. These still fall in the regime of more restricted or closed options, which have their analogs in traditional software, and each of them, as we'll see on the next slide, opens the door for some kinds of applications but keeps it closed for others. And then you have the more open end of the ecosystem, which I'm going to define here by the boundary of whether or not you distribute your model's weights. On the more open side of the ecosystem, there are also choices about what else you distribute in addition to the model weights: for example, does the license for your model restrict how the weights can be used for certain purposes?
Do you also release the data for the model and the code used to train it, and the like? If you go to the full extreme there, you get to things that are more reminiscent of traditional open-source software. So you have this spectrum of options, and we see model developers currently employing different strategies across the spectrum based on who they are. Why should we care about this choice firms make about how they distribute their models? The reason is that the choice affects a bunch of things in the structure of the downstream supply chain, and broadly how their models come to affect the economy. The first and perhaps clearest point is that when you operate at the more closed end of the supply chain, what that affords you is more exclusive, or just greater, vertical control over how your models are going to be used. If you, for example, don't distribute your model at all, and exclusively retain it and build products and services on top, that allows you as the firm to have greater control, greater vertical integration into downstream markets. On the other hand, it might cede certain opportunities for those models to be used for other purposes that you aren't specifically investing in. Correlated with that is the topic of pricing. If we look empirically in the market at the rates charged by API providers like OpenAI or Google, who own their own models and sell access to them via an API, those rates tend to be higher on average than the rates you pay to use open models distributed by

### Segment 8 (35:00 - 40:00) [35:00]

a variety of parties. And this is because the open side creates some amount of competition: if you're Qwen, or Alibaba, and you release the model weights, now a bunch of companies can serve those models and host inference, and therefore you have a more competitive market on the pricing of those models compared to the amount of competition we see on the more closed side of the spectrum. A third thing that comes out, again contingent on this open-versus-closed-weights distinction, is what specific types of applications we see built on top of these models. Two aspects of that: first, when you distribute the model weights and other parties use them locally, or through certain cloud service providers, you might have stronger protections around, or just greater understanding of, where data goes when that model is being used. Thereby you may in some cases have better privacy and security, or at least a better understanding of your privacy and security risks. And this is important especially when we think about the application of AI in a bunch of highly regulated sectors where those kinds of assurances are needed. In addition, when you distribute the model weights you can of course do a lot more with the model, in terms of how you fine-tune it or distill it or do other such things, which opens up the door to a lot of applications that might be hard to achieve on the more closed side of this ecosystem. So the point is that if we look empirically at the market, at all the different ways in which models are being used, there is a lot of variation in how open models are being used that is quite different from the more closed side of the ecosystem. This decision at the more upstream level of how to release the models shapes the nature of those downstream uses.
The second half of this talk is going to be less descriptive of what the supply chain looks like today and who the major players are, and will instead take up the question of where we are going, and how we can reason about that in some kind of grounded fashion. Before I tell you how I'm thinking about these topics, I'm going to point to a bunch of other works that have come out in the last two to three months that I think are very useful to read, from a variety of folks. The first one I'd recommend is a pretty short piece by Bharat Chandar about the current state of knowledge on AI and labor markets. This is great for empirically grounding your understanding of what is happening today, and of what we do and don't know. Then there are a number of pieces about where things are headed. Of course it's hard to predict where things are headed, and people have very different views, but there's an interesting blending we're seeing between the ways AI people think about things and the ways economists do, and there's a lot to learn by looking at the intersection of those types of views. For folks interested in this topic, there are a bunch of people who are leaders in the field, including the four named at the bottom of this slide. Before I talk about growth, I'm going to give advance notice that this may feel unfamiliar to folks in CS, because we're going to take a very macroscopic lens toward thinking about economic growth.
As Percy mentioned, in CS I think we are used to things that economists would describe as more micro in nature, in that we think about very specific decisions and how to make them better: how can we build a better model architecture, a better algorithm, a better system? We're thinking about a bunch of specific decisions, like what the specific alignment method is, or the specific dataset, and we have all these individual decisions we're trying to improve upon collectively. In contrast, the very macro view of economics thinks about this huge economy and tries to reason about it as an aggregate. So what I'm going to tell you will in some ways lack the specificity you can have when you think about very concrete things, but we'll still try to reason about an important topic: the economy as a whole. The second caveat is that when we seriously talk about AI and the future of the economy, there are a bunch of other aspects of AI, the economy, and society that we should remember are important, but that are just not the subject of this talk. There's a bunch of AI that is not what I'm referring to here: I'm really just talking about LLMs, frontier AI, this class of models, and not things like autonomous driving, for example. Nor am I talking about

### Segment 9 (40:00 - 45:00) [40:00]

things on the economic side that are very important, like how the benefits of technology get distributed to particular people or particular firms; I'm just talking about aggregate trends. Nor am I talking about things that are outside the scope of growth or GDP. For example, how students use AI is an important topic, but it doesn't show up in workplace statistics. Before telling you how I'm thinking about AI and growth, let me tell you something about how other people are thinking about it. First I'll talk about this in a less formal, less economic parlance: just how we think in CS about the impact of technologies. The important distinction there is that impact is unsigned, right? It can include things that are very bad and very good, and they all contribute to a technology having larger impact, whereas growth has a more positive connotation. There's a nice recent example from Nate Silver's book of a technological Richter scale, which tries to assign technologies importance on an exponential scale. The idea is that there are roughly ten years in a decade, ten decades in a century, and so forth, so you have this exponential progression, and you can ask: what is the most important technology of a particular decade, or century? Can we use that as an intuitive characterization of how important a technology is relative to other technologies over the course of time? He does that and gives assignments for specific technologies. For example, credit cards are the most important technology of some decade, electricity is more a technology of the century, et cetera. This is very informal, but I still think it's a useful high-level take on how to think about things.
One of the fun things, which I'm not going to talk about but encourage you to look at because it just came out very recently, is that the folks at the Forecasting Research Institute had a bunch of experts (AI professors and economists, including Tatsu here, among others) and superforecasters make their own forecasts of how important they thought AI was going to be. I guess the modal view is that most experts and superforecasters think it's going to be the technology of the century. Anyway, you can go read about why they think that. So a lot of people have different views of what they think AI is going to look like. Now let me give you some more specific forecasts before getting to mine. Since this is a talk about economics, let's start with an economist, and a very famous one at that: Daron Acemoglu, last year's Nobel Prize winner. Acemoglu's forecast, in his paper "The Simple Macroeconomics of AI," says a bunch of things; the headline is the numbers shown here, and we'll come back to what the terms TFP and GDP mean in a moment. The point I think most people took from this is that it's a fairly modest estimate (exactly how to interpret the two numbers of 0.5 and 0.9 I'll also come back to in a moment), but maybe that's one specific belief, which actually captures the views of a number of economists: that AI is going to be important, it is going to show up in GDP as an important thing, but it's important rather than game-changing. Some computer scientists had a different prediction, which I would say predicts AI to be slightly more impactful on the scale. This is Arvind Narayanan and Sayash Kapoor from Princeton, in their piece "AI as Normal Technology." I'll come back to that in a moment.
Some folks here in Silicon Valley predicted AI to be even more consequential. For example, Dario Amodei, the CEO of Anthropic, gave this description of AI as a "country of geniuses in a datacenter," something that is going to dramatically transform society. And then you can go even further: you can start thinking about the sign and not just the magnitude of the impact. Is AI going to have a very good impact over time, or a very bad one? You get maybe two very different views: Sam Altman telling us something about how AI might cure cancer with 10 gigawatts of energy, and Eliezer Yudkowsky telling us we might all die because of AGI. The point I want to give you as I walk you through these different forecasts is that it's hard to reason about a technology and its impact while it is all happening in real time, especially when we have such different understandings of how big and how consequential that technology is. So I'm going to try to get a handle on that kind of question. To do that, I'm going to go back to this

### Segment 10 (45:00 - 50:00) [45:00]

term I mentioned a couple of slides back, from Arvind and Sayash, of these "normal" technologies, as they call them. When they say normal, they don't mean normal in the sense that a toaster oven is normal, or pedestrian, or inconsequential, but normal in the sense that there are shared properties of a certain class of technologies, and we can actually do quite a bit to understand them economically. Maybe a better word is not normal but generic. If you look up "generic technologies," you'll actually find this paper from Timothy Bresnahan, who is a Stanford professor, and Manuel Trajtenberg, two economists, where they talk about generic technologies, or the more conventional name that economists give them: general purpose technologies. So this is what I'm going to talk about: the study of general purpose technologies, and what we can use from it to tell us about the economics of AI. The first question is whether AI is a general purpose technology in a formal sense, and then, if so, whether that implies AI is governed by the principles that economists have unearthed for studying general purpose technologies in the past. To answer the first of those questions, let's try to understand what a general purpose technology is. Again going to Bresnahan and Trajtenberg, they give us three conditions for a general purpose technology, which I'm going to describe. Percy and I, as he mentioned, wrote this paper with a lot of other folks at Stanford on foundation models, and in that paper we claimed we thought foundation models were a general purpose technology, in particular because they satisfy these three principles: pervasiveness, improvement over time, and the ability to spawn complementary innovation. So what do those three criteria mean, and why do we think they hold? The first is pervasiveness.
When we say a technology is pervasive in a formal, economic sense, what we mean is that the technology is adopted cross-sectorally, across many sectors. In the lay sense, when we say technologies are pervasive we just mean something like they're used a lot, and ChatGPT certainly is. But what I mean more specifically here, and why I think this first condition holds, is that these models are not just used a lot by some people for some purposes; they are used specifically across many different economic sectors. That's where the "general purpose" comes into play. If you look at Anthropic's Economic Index, you do see non-trivial usage in a variety of different downstream market sectors. Okay, so that's why I think the first condition, in broad strokes, holds. The second is improvement over time. In Bresnahan and Trajtenberg's definition, two conditions fall broadly under this category: you want the technology's capability level to be improving as a function of time, and you want its price to be falling as a function of time. We see both of those for modern AI. If you look at this plot of the complexity of tasks, measured by the time it takes humans on average to do them, you see that models are becoming more capable as a function of time, at least according to this; of course we know that from many other benchmarks as well, which all directionally tell the same story. The second part, and I think the thing that sometimes gets forgotten but is very important, is that the price is falling as a function of time. This is the price in dollars to run inference on models at a specific level of capability, and you see that it's falling over time. Okay, so the second condition is met. And then the final condition, and this is really the heart of the matter, is complementary innovation.
We have this quote from Erik Brynjolfsson, whose name I've mentioned a few times and will continue to mention throughout this talk. His point is really that the reason electricity, for example, was a big deal is the complementary innovations. We built all these other technologies on top of electricity, like light bulbs, and we redesigned the entire structure of how organizations operate, and doing all of that together is where we see the large gains to the economy from a general purpose technology. You can ask ChatGPT this, and it actually gives a pretty nice synthesis: if electricity itself just existed with nothing on top of it, you'd get maybe a little bit of a productivity boost, but you probably wouldn't see much happen to GDP or other macro indicators. But if you do all of these things, redesign factories, create night shifts, build motors and appliances, then you actually see the broad economic transformation come about because of these complementary innovations. Economists have formal conditions for when this is met, that is, for when we actually quantitatively believe there are complementary

### Segment 11 (50:00 - 55:00) [50:00]

innovations. I think right now we lack the evidence to say that those quantitative conditions are met, at least as far as I'm aware, but you can informally say there are things happening that suggest we're on that trajectory. First, we are building things on top of models: all these different applications, like coding tools such as Cursor. We're also changing the way organizations structure workflows and assign tasks to workers. For example, there's an entire new class of tasks people are doing: verifying the correctness of the outputs of existing AI systems. So these are maybe the signal that both of the things we think are necessary for these innovation complementarities are happening, though again the evidence is more uncertain, even if it may be true. What's the point of knowing that AI is a general purpose technology? One of the points is that we know something about the relationship between general purpose technologies and productivity, and more generally GDP, where you see this J-curve. Basically, there's initially a trough, a period in which organizations learn how to productively use the technology and extract its benefits, and only after that do you see the growth. So this is one piece of it: right now there's sometimes this conflicting view, where people in the Bay Area usually think models are really capable and should be able to do extraordinary things, yet the actual observed empirical effects today are more muted relative to that belief. Part of that might be explained by something we've seen throughout the history of general purpose technologies: there is this period of delay before large productivity gains.
The final part I want to talk about is how we can actually make forecasts: how we can reason about where things are going, and in particular, how specific beliefs about AI translate to specific projections of what happens economically. This plot is maybe one of the most famous plots in growth economics, and the zoom-in on the next slide is perhaps even more famous: the long-run growth of GDP in the world. The point is that it's been going up and to the right, and in recent history it's been going up very rapidly. What is more interesting is the next plot, which shows that growth in GDP per capita is very predictable: on this log plot it grows at a rate of about 2% per year, so exponential growth. This is interesting because it suggests that even though the economy is super complicated, history is very complicated, and lots of things are happening, we have had this extraordinarily predictable relationship between how the economy has changed over the last 150 years and all the things that happened underneath that.
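To build intuition for that 2% figure, here is a quick back-of-envelope check. The 2% rate and the 150-year window are from the talk; the doubling-time and compounding math is just the standard compound-growth identity.

```python
import math

# Long-run GDP-per-capita growth rate cited in the talk.
rate = 0.02

# Doubling time under compound growth: solve (1 + r)^t = 2 for t.
doubling_time = math.log(2) / math.log(1 + rate)
print(round(doubling_time, 1))  # ≈ 35.0 years

# Compounded over the ~150 years shown on the plot.
print(round((1 + rate) ** 150, 1))  # ≈ 19.5x
```

So steady 2% growth doubles income per person roughly every 35 years, and multiplies it nearly twentyfold over 150 years, which is why the log plot looks like a straight line.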
Given this, what do we expect will happen next with AI? That's the heart of the question: do we think AI will disrupt this curve or sustain it? If it disrupts it, in which direction will the curve go? Will it break from this exponential trajectory? To start, there's a famous quote from the Nobel laureate Robert Solow, the economist, who observed that you can see the computer age everywhere but in the productivity statistics. In our experience of life these technologies are super important, yet from the viewpoint of growth economics, nothing really special has happened over the last 20 or 30 years because of computing. So why does it not show up? If anything, if you zoom in to the top right, it seems like growth is not even sustaining at that 2% rate; in the last maybe 20 years we might actually be below 2%. So how do we reconcile those two observations? Well, part of the issue is that many internet-era technologies are free, or very cheap, or heavily subsidized, and they might be monetized by other mechanisms like ads. As a result, when we talk about GDP, they're not going to show up: if there are no payments

### Segment 12 (55:00 - 60:00) [55:00]

required when you use Google Search, then there is nothing that's going to appear at the end of the day on the ledger in the computation of GDP, and so it'll appear as if nothing has really happened if you look through the lens of GDP, even though something quite consequential has happened economically or societally. So this is an issue: we think these technologies are very important, and we observe and experience that as reality, but how do we make progress? One thing I'll talk about in the last few slides, before actually getting to forecasts about GDP, is alternatives to GDP. One alternative, from Erik Brynjolfsson and his colleagues here at Stanford, is GDP-B. The key idea is to try to measure what economists call the consumer surplus: when you buy some good, you do it because you think you'll extract more value than the price you have to pay for it. Can we measure this as a direct measure of how valuable technologies are? Can you measure how much value you extract from using Google Search, say? To do that, you run choice experiments where people express how much they would be willing to give up, or how much they would be willing to pay for a good, and you can measure this notion of willingness to pay, or willingness to accept. Some folks here ran a survey earlier this year, and roughly what they found is that about 40% of people use generative AI tools very frequently in their day-to-day lives, and if you had to pay them to stop using those tools, the rate they would be willing to accept on average is $98 per month.
So if you multiply those two numbers and scale by the number of people in the US, you get that on an annual basis there's a consumer surplus of about a hundred billion dollars. The point is that AI and generative AI might be very valuable, and consumers might already know this, but it might at the same time not show up in traditional GDP accounting, maybe because generative AI services are very cheap or subsidized; by these other means we can reveal the value anyway. Okay, that was a brief caveat to say that when I talk about GDP, there are reasons GDP itself might not be the metric to think about. I'm going to set that aside for a second, assume that GDP is the metric to think about, and ask: if AI affects the economy in specific ways, what does that imply in terms of GDP? I'll give three high-level takes, which I think capture different views that different people have. The first is that AI is in theory this general purpose technology that could be useful for all kinds of things, but the majority of the benefit of AI is concentrated in particular sectors, like the development of better software: one particular sector is going to become much more productive because of AI, and that is going to explain a large fraction of AI's impact on society. I'll say something about what that means economically. Another is a more crosscutting view: you can think of AI in broad strokes as a cheaper form of labor that is applicable at different rates across different domains. What are the effects of injecting a new source of labor supply into the economy? This is similar to how Acemoglu derives his estimates.
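The earlier back-of-envelope consumer-surplus estimate can be reproduced directly. The 40% usage rate and the $98/month willingness-to-accept figure are from the survey cited in the talk; the US adult population number is my assumption, not stated in the talk.

```python
# Rough reproduction of the "about $100B/year" consumer-surplus estimate.
US_ADULTS = 258e6      # assumed US adult population; not from the talk
usage_rate = 0.40      # share using generative AI tools frequently (survey)
wta_per_month = 98.0   # $/month people would accept to stop using them (survey)

annual_surplus = US_ADULTS * usage_rate * wta_per_month * 12
print(f"${annual_surplus / 1e9:.0f}B per year")  # ≈ $121B, i.e. "about $100B"
```

The exact total depends on which population base you use, but any reasonable choice lands in the low hundreds of billions, consistent with the speaker's round figure.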
And then the third is that AI is an interesting technology specifically because it allows us to generate new ideas, and if you go down that route, what comes out of it? The point is that none of these are full forecasts of what I think is going to happen; I'm not telling you which of the three I believe, and I'm not certain about that. But my point is: if you believe one of these three things is the main thing happening with AI, can we complete the analysis and say something about what's going to happen to GDP? The first one: if we make one sector way more productive, surely that is a good thing, so what falls out of it? Well, what we know from economics is actually maybe less than we would hope for, which is that if you make one sector, say one final-goods sector, way more productive, you will affect GDP, since that sector contributes to overall GDP, but the growth in GDP will be

### Segment 13 (60:00 - 65:00) [1:00:00]

comparatively much smaller than the increase in productivity of that sector. Why is that? Well, even if that sector becomes super productive, prices will fall in that sector, and even if there's latent demand and people want more goods of that particular kind, the overall effect is probably that prices in the sector go down even as consumption increases. A nice example from David Autor, the MIT economist, is illumination. A hundred years ago we had candles; then we had electricity and light bulbs, and the productivity of the illumination sector went up tremendously. The price of lighting fell dramatically, and the number of jobs in the lighting sector fell with it. Ultimately, the share that lighting has in the overall economy is now much smaller than it was back when we had candles and other technologies of that kind. There's a second-order effect which is also interesting to understand, which economists call Baumol's cost disease: if one sector becomes way more productive, what else happens in the economy in reaction? To compete for workers, the other sectors need to provide wages comparable to those in the very productive sector, so you have this wage-matching effect in the other sectors. You could summarize this as: if you think the sector that's going to become super productive is software, and the other sectors are not going to change that much, then you'd get some really great software, but healthcare might become more expensive. The sectors that are not becoming more productive because of AI will just become more expensive, and ultimately, when you compute GDP shares, it's those sectors that will dominate the share computation.
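The price-and-share effect described above can be made concrete with a stylized two-sector example. All numbers here are made up for illustration; prices are modeled as unit labor costs (wage divided by productivity) under a common wage of 1, and the "demand triples" assumption is mine, chosen to be smaller than the productivity gain.

```python
# Stylized two-sector illustration: a sector's productivity boom can
# *shrink* its share of nominal GDP when demand grows slower than supply.
def nominal_share(price_a, qty_a, price_b, qty_b):
    """Sector A's share of total nominal output."""
    a, b = price_a * qty_a, price_b * qty_b
    return a / (a + b)

# Before AI: software (A) and healthcare (B) equally productive,
# equal prices and equal consumption.
before = nominal_share(price_a=1.0, qty_a=100, price_b=1.0, qty_b=100)

# After AI: software productivity rises 10x, so its competitive price
# falls to 0.1; demand for software only triples.
after = nominal_share(price_a=0.1, qty_a=300, price_b=1.0, qty_b=100)

print(round(before, 2))  # 0.50
print(round(after, 2))   # 0.23: the booming sector's GDP share shrinks
```

This mirrors the illumination story: lighting got vastly cheaper and more abundant, yet its slice of the economy got smaller, while the stagnant sector's slice grew.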
And then ultimately, the increase in GDP that you're hoping for because of AI is much more muted as a result of that kind of analysis. So that's one view: if you mainly think about one sector becoming more productive, that's what happens on net. Now let's analyze the question more generally, where you have more optimistic views, in a certain sense, of what AI is doing: what happens to GDP? You don't really need to remember this slide; I just want to use the notation for a moment because it'll give me access to some terms. What you have at the top is a production function, as economists would call it. The main point is that GDP, which you can think of as Y, is the output of a function of three inputs: A, K, and L. K and L are the abstractly familiar concepts of capital and labor. Labor is hours of people doing stuff, and maybe also AI hours of AI doing stuff; capital is things like data centers. And then there is the efficiency with which K and L get converted into Y: at what rate do capital and labor combined get translated into output? That's A, total factor productivity (TFP). One final point: economists don't all agree that you can describe the economy in this specific functional form, but if you take it for a second, what you see is that if you could only increase K or L, you basically want to grow them at equal rates to grow Y as fast as you can. This is related to the assumption of constant returns to scale. Okay, and sorry, I just mislabeled the slide; it should say "if AI is a new source of labor, then what happens?" So if AI contributes labor cheaply, which was the second hypothesis, then what happens?
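The production function just described is commonly written in Cobb-Douglas form. This is an assumption on my part; the slide's exact functional form isn't given in the transcript, but the notation below matches the Y, A, K, L terms the speaker uses.

```latex
% Y = output (GDP), A = total factor productivity,
% K = capital, L = labor, \alpha = capital share.
Y = A\,F(K, L), \qquad \text{e.g.} \quad Y = A\,K^{\alpha} L^{1-\alpha}, \quad 0 < \alpha < 1

% Constant returns to scale: scaling both inputs by c scales output by c.
A\,(cK)^{\alpha} (cL)^{1-\alpha} = c \cdot A\,K^{\alpha} L^{1-\alpha}
```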
Well, what happens is that GDP will grow, and it might grow a lot if capital grows at the same rate. If you think of AI as a new form of labor, the point is that the L term becomes bigger by some amount; if the capital term K also grows roughly in line with it, then you'll see broad GDP growth. If L rapidly outpaces K, you might not be able to translate this new supply of labor into anything useful, and so GDP might not grow that fast, or keep pace with L. A reason to be optimistic about this picture is the baseline of what we know about labor and how it is evolving over time: in developed economies, at the very least, population growth has basically stagnated, and the labor force participation rate, the fraction of people who work in the economy, given those
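The "L outpacing K" point can be illustrated numerically with the Cobb-Douglas form. The numbers and the capital share of 0.3 are illustrative assumptions, not figures from the talk.

```python
# Stylized Cobb-Douglas experiment: what happens to output Y when AI
# doubles the effective labor supply L, with and without matching
# capital growth? Y = A * K^alpha * L^(1-alpha).
def output(A, K, L, alpha=0.3):
    return A * K**alpha * L ** (1 - alpha)

A, K, L = 1.0, 100.0, 100.0
base = output(A, K, L)

labor_only = output(A, K, 2 * L)      # L doubles, K fixed
balanced = output(A, 2 * K, 2 * L)    # both double (constant returns)

print(labor_only / base)  # 2^0.7 ≈ 1.62: diminishing returns to labor alone
print(balanced / base)    # exactly 2.0: balanced growth scales output fully
```

So a flood of cheap AI labor raises output, but under this functional form the full gain only materializes if capital (data centers, complementary infrastructure) keeps pace.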

### Segment 14 (65:00 - 70:00) [1:05:00]

who are alive, has also plateaued, or may even be falling. So under this hypothesis, what matters are really specific things: how effective is AI as a source of labor? Which specific tasks can AI do, how much more efficiently can it do them, and ultimately how much of this type of labor can we create because of AI? This view is maybe more about AI being used to write reports, prepare documents, do medical diagnosis, or other such tasks in the economy. The other view is that AI's function is to produce ideas. This actually gives you a fairly different picture of what happens in the long run economically. If AI's main purpose is to produce ideas, or if an important part of AI is that it produces new ideas, then you get a more aggressive prediction: increasing the growth rate, or maybe even making output grow faster than exponentially. This comes to us from a couple of Nobel laureates at this point, but maybe most influentially from Paul Romer. What he shows in his work is that ideas are the foundation of growth, in a certain technical economic sense. And this comes from a very simple observation: ideas are special, and the sense in which they're special is that they're non-rival. Once you come up with an idea, you can reuse it arbitrarily. Once we came up with the concepts of linear algebra, many people could reuse them. This is different from traditional goods: if I make one computer, that might be helpful for me, but for you to benefit I need to make a second computer so that you can use it. So there's a difference. Of course there are nuances there, like the fact that you need to learn how to use the idea, and so on.
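The "faster than exponential" claim can be made a bit more precise with a simplified idea-production function in the spirit of Romer's work. The notation below is the standard textbook form, not taken from the slides.

```latex
% A_t = stock of ideas, L_{A,t} = research input,
% \delta, \phi = parameters of idea production.
\dot{A}_t = \delta\, L_{A,t}\, A_t^{\phi}

% With \phi = 1, the growth rate of ideas is
%   g_A = \dot{A}_t / A_t = \delta\, L_{A,t}.
% If AI raises the effective research input L_{A,t} over time, the growth
% rate itself rises, which is one route to faster-than-exponential output.
```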
But in broad strokes, ideas are very special. And so if you really want to make this kind of more aggressive claim about the long run, then actually it's not that important how much AI affects existing tasks. It's maybe much more important that AI gives us new ways to become much more efficient at doing those tasks, or that we create new ideas and altogether new tasks. And so in this view, the main thing to focus on is not how AI affects the existing broad labor market; it's really about how AI affects the production of ideas, so it affects specific parts of the economy, which are R&D and science and the like. Right? And so I'll pause there and just say that I think people have very different views about what is going to happen at the intersection of AI and the economy, and different reasons to believe those views. But given any particular view, I think it is increasingly possible that we can write out what will happen under that view economically, or at least have some first-principles way of thinking about it. And if you're interested in this last category, there's a nice piece by the economist Ben Jones studying specifically AI and R&D, and how AI will affect R&D theoretically. So with that, I'll end the talk and take any final questions. — I'm curious what your personal views are on where AI falls in that spectrum of being a once-in-a-decade, once-in-a-century, once-in-a-millennium technology, and what your personal forecasts would be for how it might affect GDP. Yeah, so I think I'm probably in the boat of somewhere around century. Like if you give me those two options, I might err on century instead of decade. I think I place most of my probability mass on one of those two options. On the second question, or maybe the third question, on how it'll affect GDP, I think that's not clear to me.
I think I'm maybe more confident in the second category, that the main effect of AI will be this kind of crosscutting effect on labor and substitution. And there are subtleties there, like whether the main effect of AI in some jobs and some sectors is that it augments workers or automates workers; that maybe is actually not that important if you want to understand overall what happens to GDP, but it is very important if you actually want to understand what happens to workers. But yeah, I think I would buy that. I

### Segment 15 (70:00 - 74:00) [1:10:00]

do think we're going to need new methods, and I'm working on some, that are more bespoke than GDP. As an AI person, the thing I'm interested in is AI, and my belief is that GDP is a very coarse way to see the effects of AI in particular. I'd prefer that we had more bespoke tools for studying AI, even if they're not the best tools for studying the economy as a whole. So I do think we're going to need some new measurement methods that will better emphasize AI's impacts in a way that GDP masks. — Natural resources will — the question was, will natural resources limit the growth of AI, or something like this, right? I think we're seeing, in the current conversations, AI testing parts of the supply chain that for a while we got to assume would just be there and weren't rate-limiting, when we talk about this large capital expenditure and data center buildout and the like. So I think those factors will matter. I'm guessing right now that if we're talking about the US context, an important rate-limiting factor is our ability to transmit power rather than our ability to produce power, on the energy side. Yeah, I wouldn't say that I understand those things well enough to say anything more than that, but I think this is maybe why part of the conversation in the US has been about those kinds of things. Yeah. — I'm interested in what you were talking about with the J curves, and other comparable normal technologies, like electricity or the internet: what were some of the exacerbating factors that led to the magnitude of those dips, and what are the corollaries in the current technological moment? — Yeah, so the question is, for past normal technologies, what do we know about their J curves and why? And how does that relate to what we can say about AI?
It's a good question. I mean, the J-curve paper from Erik is in a sort of theoretical setting: under some assumptions, you see this productivity trough in the near term before productivity growth. I think it aligns with people's intuition and some observations, but compared to other work in econ, there isn't maybe a clean demonstration. Sometimes the nice thing you have in econ is these very clean demonstrations, empirically with data, where we see the J happen. I'm not aware of a specific measured instance where we've actually seen the J happen for a specific technology in TFP or some measure of productivity. My general understanding is that economists think the innovation complementarities bundle into two categories. First, the thing that's slow with electricity or something is that you just need to build some of these things downstream of it, like electrical grids or power outlets or ways of transmitting electricity throughout the economy. And then the slower thing is the organizational change to actually leverage this, even with some set of complementary innovations already built. So I think one view is that it's the organizational, human factors that are the slowest thing, the thing that happens later in the process before you actually see productivity gains. Yeah, I don't know. I think it's a good question. I don't know if I know anything else more specific than that.
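As a complement to the verbal description of the J curve, here is a minimal toy sketch. This is my own illustration with stylized numbers, not drawn from the J-curve paper itself: measured productivity dips while effort is diverted into intangible complementary investment, then rises above the old level once that investment pays off:

```python
# Toy productivity J-curve sketch. Assumption: stylized numbers of my own,
# not taken from the J-curve paper; only the qualitative shape matters.

def measured_productivity(steps=20, invest_until=10, invest_share=0.3):
    """Each period a firm can divert a share of effort into intangible
    complementary investment (process redesign, retraining). That effort
    produces no measured output now but permanently raises efficiency."""
    efficiency = 1.0
    path = []
    for t in range(steps):
        if t < invest_until:
            path.append(efficiency * (1 - invest_share))  # measured dip
            efficiency *= 1.06  # intangible capital accumulates
        else:
            path.append(efficiency)  # payoff phase: above the old level
    return path

path = measured_productivity()
# The "J": early values fall below the no-investment benchmark of 1.0,
# later values rise above it.
print(min(path) < 1.0 < path[-1])  # → True
```

The point of the sketch is the measurement story: the early-period output that "disappears" is really investment in organizational change, which is exactly what standard productivity measures miss during the trough.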
