BS: I mean, speaking of the sheer compute requirements of these systems, let's talk about scale briefly. I kind of think of these AI systems as Hungry Hippos: they soak up all the data and compute we throw at them. They've already digested all the tokens on the public internet, and it seems we can't build data centers fast enough. What do you think the real limits are, and how do we get ahead of them before they start throttling AI progress?

ES: There's a real limit in energy. I'll give you an example. There's one calculation, and I testified on this in Congress this week, that we need another 90 gigawatts of power in America. My answer, by the way, is: think Canada. Nice people, full of hydroelectric power. But that's apparently not the political mood right now. Sorry. So 90 gigawatts is 90 nuclear power plants in America. Not happening. We're building zero. How are we going to get all that power? This is a major national issue. Look at the Arab world, which is busy building five to 10 gigawatts of data centers. India is considering a 10-gigawatt data center. To understand how big a gigawatt is, think one city per data center. That's how much power these things need. And people look at it and say, "Well, there are lots of algorithmic improvements coming, and you'll need less power." There's an old rule I'm old enough to remember: Grove giveth, Gates taketh away. The hardware just gets faster and faster. The physicists are amazing; it's incredible what they've been able to do. And we software people just use it and use it. When you look at planning, at least with today's algorithms, it's back and forth, try this and try that; just watch it yourself. There are estimates, and this has been well studied, including in Andreessen Horowitz reports, that this kind of planning alone requires an increase in computation of at least a factor of 100, maybe a factor of 1,000.
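The power arithmetic above can be checked with a back-of-the-envelope sketch. The per-plant and per-city figures below are rough order-of-magnitude assumptions on our part, not numbers from the conversation:

```python
# Back-of-the-envelope check of the 90-gigawatt figure cited above.
# Assumptions (ours, not from the transcript): a large nuclear reactor
# produces roughly 1 GW, and a large city draws power on the same order.

GW_NEEDED = 90               # additional US demand cited in testimony
GW_PER_NUCLEAR_PLANT = 1.0   # assumed output of one large reactor
GW_PER_LARGE_CITY = 1.0      # assumed draw of one large city

plants_needed = GW_NEEDED / GW_PER_NUCLEAR_PLANT
cities_equivalent = GW_NEEDED / GW_PER_LARGE_CITY

print(f"~{plants_needed:.0f} plants, or the draw of ~{cities_equivalent:.0f} cities")
# → ~90 plants, or the draw of ~90 cities
```

Under these assumptions, the "think cities per data center" framing follows directly: each gigawatt of data-center demand is roughly one city's worth of power.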
The technology goes from essentially deep learning to reinforcement learning to something called test-time compute, where you're not only doing planning, you're also learning while you plan. That is, if you will, the zenith of computation needs. So that's problem number one: electricity and hardware. Problem number two is that we ran out of data, so we have to start generating it. But we can easily do that, because generating data is one of the things these systems can do. And then the third question, which I don't understand, is: what's the limit of knowledge? I'll give you an example. Let's imagine we are collectively all of the computers in the world, and we're all thinking based on knowledge that already exists, that was previously invented. How do we invent something completely new? So: Einstein. When you study the way scientific discovery works, in biology, math and so forth, what typically happens is that a truly brilliant human being looks at one area and says, "I see a pattern in a completely different area, one that has nothing to do with the first. It's the same pattern." And they take the tools from one and apply them to the other. Today, our systems cannot do that. I'm working on this; a general technical term for it is non-stationarity of objectives: the rules keep changing. We will see if we can solve that problem. If we can, we're going to need even more data centers, and we'll also be able to invent completely new schools of scientific and intellectual thought, which will be incredible.
BS: I think that brings us nicely to the dilemmas, and let's just say there are a lot of them when it comes to this technology. The first one I'd love to start with, Eric, is its exceedingly dual-use nature: it's applicable to both civilian and military applications. So how do you think broadly about the dilemmas and ethical quandaries that come with this tech and how humans deploy it?

ES: In many cases, we already have doctrines about personal responsibility. A simple example: I did a lot of military work and continue to do so. The US military has a rule called 3000.09, generally known as "human in the loop" or "meaningful human control." You don't want systems that are not under our control. It's a line we can't cross, and I think that's correct. I think the competition between the West, particularly the United States, and China is going to be defining in this area. I'll give you some examples. First, the current government has now put in essentially reciprocal 145-percent tariffs. That has huge implications for the supply chain. Our industry depends on packaging and components from China that are boring, if you will, but incredibly important: the little packages and the little glue things and so forth that are part of the computers. If China were to deny access to them, that would be a big deal. We are trying to deny them access to the most advanced chips, which they are super annoyed about. Dr. Kissinger asked Craig and me to do Track II dialogues with the Chinese, and we're in conversations with them. What's the number one issue they raise? This issue. Indeed, if you look at DeepSeek, which is really impressive, they managed to find algorithms that got around the restrictions by making the models more efficient. And because China is doing everything open source, open weights, we immediately got the benefit of their invention and adopted it into US systems.
So we're in a situation now, which I think is quite tenuous, where the US, for many good reasons, is largely driving closed models, largely under very good control. China is likely to be the leader in open source unless something changes. And open source leads to very rapid proliferation around the world. This proliferation is dangerous at the cyber level and the bio level, but let me explain why it's also dangerous in a more significant way, in a nuclear-threat way. Dr. Kissinger, who we all worked with very closely, was one of the architects of mutually assured destruction, deterrence and so forth. And what's happening now is you've got a situation where -- I'll use an example; it's easier if I explain. You're the good guy, and I'm the bad guy, OK? You're six months ahead of me, and we're both on the same path to superintelligence. You're going to get there; you're that close. And I'm six months behind. Pretty good, right? Sounds pretty good. No. These are network-effect businesses, and in network-effect businesses, it is the slope of your improvement that determines everything. Take OpenAI or Gemini: they have 1,000 programmers, and they're in the process of creating a million AI software programmers. What does that do? First, you don't have to feed them, except electricity. So that's good. And they don't quit, and things like that. Second, the slope is like this. And as we get closer to superintelligence, the slope goes like this. If you get there first, you dastardly person --

BS: You're never going to be able to catch me.

ES: I will not be able to catch you. And I've given you the tools to reinvent the world and, in particular, destroy me. That's how my brain, Mr. Evil, is going to think. So what am I going to do? The first thing I'm going to do is try to steal all your code. And you've prevented that, because you're good. And you were good. So you're still good, at Google.
Second, I'm going to infiltrate you with humans. Well, you've got good protections against that; we don't have spies. So what do I do? I'm going to go in and change your model. I'm going to modify it, actually screw you up, to get myself one day ahead of you. And you're so good, I can't do that either. What's my next choice? Bomb your data center. Now, do you think I'm insane? These conversations are occurring among nuclear opponents today, in our world. There are legitimate people saying the only solution to this problem is preemption. Now, I just told you that you, Mr. Good, are about to have the keys to control the entire world, in terms of economic dominance, innovation, surveillance, whatever it is that you care about. I have to prevent that. We don't have any language in our society for this; the foreign-policy people have not thought about it, and it is coming. When is it coming? Probably five years. We have time. We have time for this conversation. And this is really important.
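The "slope" argument in the scenario above is about compounding: if capability improves multiplicatively, a fixed six-month lead means a constant capability ratio but an ever-widening absolute gap, so the follower never catches up at the same growth rate. A toy illustration, with a growth rate we invented purely for the example:

```python
# Toy model of the catch-up problem described above: both labs improve
# at the same compounding rate; the leader is a fixed six months ahead.
# The 20-percent-per-month growth rate is an illustrative assumption.

GROWTH_PER_MONTH = 1.2
LEAD_MONTHS = 6

def capability(months):
    return GROWTH_PER_MONTH ** months

ratios, gaps = [], []
for t in range(0, 37, 6):  # check every six months for three years
    leader = capability(t + LEAD_MONTHS)
    follower = capability(t)
    ratios.append(leader / follower)
    gaps.append(leader - follower)

# The ratio never changes (1.2**6, about 3x), so the follower never
# closes it, while the absolute gap grows without bound.
assert all(abs(r - ratios[0]) < 1e-9 for r in ratios)
assert all(later > earlier for earlier, later in zip(gaps, gaps[1:]))
```

This is why "six months behind" is not "pretty good" in a network-effect race: under compounding, the only way to close the gap is to grow faster, not merely as fast.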
BS: Let's say we don't screw it up. Let's say we get into this world of radical abundance. Let's say we end up in this place where we hit that point of recursive self-improvement, and AI systems take on the vast majority of economically productive tasks. In your mind, what are humans going to do in this future? Are we all sipping piña coladas on the beach, engaging in hobbies?

ES: You tech liberal, you. You must be in favor of UBI.

BS: No, no.

ES: Look, humans are unchanged in the midst of this incredible discovery. Do you really think that we're going to get rid of lawyers? No, they're just going to have more sophisticated lawsuits. Do you really think we're going to get rid of politicians? No, they'll just have more platforms to mislead you. Sorry. I could go on and on. The key thing to understand about this new economics is that we collectively, as a society, are not having enough humans. Look at the reproduction rate in Asia: it's essentially 1.0 child for two parents. This is not good. So for the rest of our lives, the key problem is going to be making the people who are productive, those in the productive period of their lives, more productive, to support old people like me, who will be bitching that we want more stuff from the younger people. That's how it's going to work. These tools will radically increase that productivity. There's a study saying that, under a set of assumptions around agentic AI, discovery and the scale I'm describing, and there are a lot of assumptions, you'll end up with something like a 30-percent increase in productivity per year. Having now talked to a bunch of economists, they have no models for what that kind of increase in productivity looks like. We've just never seen it. It didn't occur in the rise of any democracy or kingdom in our history. It's unbelievable what's going to happen. Hopefully we will get it in the right direction.

BS: It is truly unbelievable. Let's bring this home, Eric.
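For scale, the 30-percent-per-year figure quoted above compounds dramatically. The arithmetic below is ours; only the 30-percent rate comes from the study Schmidt cites:

```python
# Compounding a 30-percent annual productivity gain over a decade.
RATE = 0.30
YEARS = 10

productivity = (1 + RATE) ** YEARS
print(f"after {YEARS} years: {productivity:.1f}x")  # → after 10 years: 13.8x

# For comparison, a historically strong 3-percent rate over the same
# decade compounds to only about 1.3x, which is why economists have
# no models for sustained 30-percent growth.
```

A roughly fourteen-fold productivity increase in ten years is the kind of discontinuity the economists in the conversation say they cannot model.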
You've navigated decades of technological change. For everyone navigating this AI transition, technologists, leaders, citizens feeling a mix of excitement and anxiety, what is the single piece of wisdom or advice you'd like to offer for navigating this insane moment we're living through?

ES: One thing to remember is that this is a marathon, not a sprint. One year I decided to do a 100-mile bike race, which was a mistake, but I learned about spin rate: every day, you get up, and you just keep going. You know, from our work together at Google, that when you're growing at the rate we were growing, you get so much done in a year that you forget how far you went. Humans can't understand that. And we're in a situation where the exponential is moving like this. As this stuff happens quicker, you will forget what was true two or three years ago. That's the key thing. So my advice to you all is: ride the wave, but ride it every day. Don't view it as episodic, something you can end; understand it and build on it. Each and every one of you has a reason to use this technology, whether you're an artist, a teacher, a physician, a business person or a technical person. If you're not using this technology, you're not going to be relevant compared to your peer groups, your competitors and the people who want to be successful. Adopt it, and adopt it fast. I have been shocked at how fast these systems move. As an aside, my background is enterprise software, and nowadays there's the Model Context Protocol from Anthropic: you can connect a model directly into databases without any of the traditional connectors. I know this sounds nerdy, but there's a whole industry there that goes away, because you have all this flexibility now. You can just say what you want, and it produces it. That's an example of a real change in business. There are so many of these things coming, every day.

BS: Ladies and gentlemen, Eric Schmidt.
ES: Thank you very much. (Applause)