3 Possible Futures for AI — Which Will We Choose? | Alvin W. Graylin | TED

After decades working in technology across both the US and China, Alvin W. Graylin sees three possible paths for the future of AI: one where tech giants create a class of trillionaires, one where competition escalates into war, or one where humanity builds and shares this technology for the common good. In conversation with TED Radio Hour host Manoush Zomorodi, Graylin cuts through the hype to clarify how we choose the right path. (Recorded at TEDNext 2025 on November 9, 2025)

Table of Contents (3 segments)

  1. 0:00 Segment 1 (00:00 - 05:00) 883 words
  2. 5:00 Segment 2 (05:00 - 10:00) 986 words
  3. 10:00 Segment 3 (10:00 - 10:25) 85 words

Segment 1 (00:00 - 05:00)

Manoush Zomorodi: Humans, still amazing. Alvin W. Graylin: I don't think we're replaceable yet. MZ: No, not yet. But, Alvin, we're going to get some straight talk. To me, for normal people, this was the year of AI. And I don't think anyone really knows what to think right now. So, Alvin, you have been in this field -- AI, cybersecurity, VR, semiconductors -- 35 years you've been doing this. But what makes you very different is that it's been both in the United States, as a US citizen, and in China a lot of the time. I think a lot of people feel ambivalent about AI. They feel like, what is actually really happening, what is hype and what is transforming our existence? Where are we right now, according to you? AWG: I mean, this is one of the biggest questions that we have as a society today. And unfortunately, there's just a lot of misinformation. And my answer to you is probably going to be a little different than the Silicon Valley consensus, even though I work at Stanford, and it's probably going to be a little scary to a lot of you. But hopefully, by the end of this, it will convince you to take action. Just like the little note I saw at TED -- it says, what action are you going to take after this event? We are really at an inflection point, and not the traditional one that just keeps going up. We are essentially at a fork in the road between three possible futures right now. One where the big labs essentially take control of the government by growing their power and their resources as much as possible, then creating essentially a class of trillionaires and everybody else. This is kind of the Elysium future that's ahead of us. The second option is that actually we are heading towards a Mad Max future, where we intensify the conflict between countries, going from AI race to AI war to kinetic war and potentially to nuclear war. And I've talked to people in DC who actually see that as "inevitable," which is a little scary.
And the third option that we have right now is potentially the Star Trek option, the option where technology is being used and shared -- you know, in the Star Trek stories, essentially, the Vulcans bring us advanced technology. A peaceful, rational species brings us technology, saves us from ourselves and brings on this century of discovery, or millennia of discovery. We have a potential to get there. Unfortunately, today, we are heading towards the first two. And given the forces driving that, it's actually going to take a lot of work for us to move from the first two towards that last one. MZ: Can we get into that a little bit more? Because I think the narrative we've all been told, at least certainly by Sam Altman and maybe some other AI executives, is that we've got to lock this technology down, we've got to grow it fast, because if we don't, China will. Would you agree with that? AWG: That's actually one of the biggest myths out there, and one of the scariest. In fact, two days ago, I just came back from China. I've worked there half my career, and I think essentially, the AI industry today is using the same tools that the military-industrial complex has used over the last century: you have to create an enemy, and once you do that, then you get funding, you get support, you get deregulation, you get to move faster, and then you get to make money. And what the AI labs are actually trying to do is not to save the world. It is actually to create billions, actually trillions of dollars. In fact, they specifically said AI is worth trillions of dollars. And they want to be the first one to create AGI, artificial general intelligence. And it's defined, actually by Sam, as a technology that can replace the average worker. And what that means is he wants to create a technology that can take everybody's jobs here.
Now on the surface, that may be scary, but I think if it's coming from the right place, it actually could be an amazing thing, because that means we get liberated so that we can spend time doing art and music and coming to TED. But, unfortunately, I think right now the other side of the story isn't being told, which is: how do we protect the people who are going to be displaced by it? MZ: OK, so I mean, despite what we've just talked about so far, Alvin is actually an optimist. (Laughter) He is, I promise. Explain the vision that you have come up with about how we take the right track, how we take this moment of inflection and actually pivot in a good way. AWG: Yeah, so I actually just turned in a paper to Stanford, an AI policy paper about what we need to do going forward and how we move from today's trajectory onto something better.

Segment 2 (05:00 - 10:00)

And it's a three-part story, which sounds simple, but it's actually very hard to execute. One: instead of competing over resources and creating hundreds of labs around the world, all trying to duplicate the same work while facing an undersupply of chips and memory and talent, we need to come together and create what some people call the CERN of AI. Essentially a single lab that aggregates all of the talent around the world. MZ: Like the space station. AWG: Like the space station, like CERN, like ITER -- the kinds of labs we've built for other types of technologies. It is very doable. And then, whatever comes out of it, rather than one company or one country hoarding it, we share it with the world, which is the whole idea of open science. This is what's made progress in this world happen. MZ: Woo for open science, yeah, TED crowd, alright, nerds, I love it. AWG: Yes. And then two: we need to put together everybody's data from around the world. In fact, the thing a lot of people want to do today is create "sovereign AI," which means an AI that works for your country and culture and represents you. And it's essentially a subset of data feeding into it. And it sounds like, OK, that's good, because I have something on my side. But what research is showing right now is that the less data you give these AIs, the more biased they become. What we really need to do is make sure the entire world's data -- all of our history, all of our languages, all of our culture -- is represented, because then the AI can find an optimal balance for everyone, a way to meet everybody's needs without taking other people down. MZ: So how are we going to convince people -- technologists, governments -- to go along with this? AWG: That's the hard part, I think.
The thing is, we need to understand -- or we need them to understand -- that the world is not zero-sum, and that working together is not weakness. Working together is enlightened self-interest. Because when you work together, you actually raise everybody up. And when you raise everybody up, there's a lot less reason to have conflict, to have my children fly 10,000 miles around the world to kill your children. Why would I need that when I have everything around me from what this technology is going to give us? Because this is amazing technology. It's going to solve cancer, it is going to bring us better energy sources, it's going to solve hunger, all these things. But we have to choose to share it with the world, and use it for humanity's good, not for one country's good. But there was a third part to the plan. MZ: Sorry. AWG: The third part of the plan is something called the GI Bill for the AI age. So why do I say that? Because in 1944-45, there were about 15 million American service members coming back from World War II, and they were going to create a giant employment shock, because they were going to come home and be unemployed. What did America decide to do? The government said, we're going to give you free education, zero-interest loans, free medical care, and then we're going to help you, essentially, buy homes, because that's what's needed for people to have secure lives. And it created the American middle class. It created a boom in our economy and turned us into what we are today, which is the most successful and most powerful nation in the world. We can do that again, but not for 15 million people -- maybe for 1.5 billion people. Because America has 170 million workers, and the displacement people are predicting could reach 100-plus million people affected just in this country. And globally, it will be billions of people. And we have to take care of them.
Because if we don't, this world is not going to be a very good place for us to hang out in. MZ: OK, that's a lot to take in. (Laughter) I do want to give us something actionable, right? Because it can feel like, oh, this AI thing is happening to us and it's inevitable -- but what can we do? Like, when we walk out of here. AWG: I think what you need to do is actually start to change your mindset, to start to understand that the world is not zero-sum. And you actually have a responsibility as business owners -- most of you own businesses or work in very senior positions in businesses. You need to think about how your company integrates AI not in a way that replaces people, but in a way that makes things more efficient. I've talked to 50 companies in the last two months about how they were implementing it, and rather than saying, I'm going to lay off 30 percent of my staff -- which some companies are doing; a lot of them are saying, I'm just going to replace my people -- we could be giving people four-day workweeks or reskilling them for other roles. We need to reduce the shock of what this technology is going to do to our society. The prior industrial revolutions took 80, 60 and 40 years to play out. This one is going to happen in the next five to 10 years, maybe shorter, and our society is not equipped to move at that speed. MZ: So play with the models, see what they're like, know what these companies are talking about.

Segment 3 (10:00 - 10:25)

Do you recommend that? AWG: Oh, you have to do it. You have to actually use these models, because you'll hear people say, oh, this thing is not that scary, this thing will never replace humans. The reality is, the more you use them, the more you understand how powerful they are and how quickly they're changing every day. And if you don't use them, you won't understand them. MZ: Alvin Graylin, thanks for giving us a glimpse into our future. AWG: Thank you, Manoush. (Applause)
