The Societal Impacts of AI

Anthropic · 28.04.2025 · 52,542 views · 2,181 likes · updated 18.02.2026
Video description
How can we measure and shape AI's influence on our world? Meet the Anthropic team dedicated to understanding AI's real-world effects through careful observation and research. From people using Claude for tasks from the mundane to profound, we explore the unexpected ways artificial intelligence is already part of our daily lives. Our researchers share what they're learning about this technology and their commitment to studying its wide-ranging effects. Read more: https://www.anthropic.com/economic-index https://www.anthropic.com/research#societal-impacts

Table of contents (2 segments)

  1. 0:00 Segment 1 (00:00 - 05:00) 882 words
  2. 5:00 Segment 2 (05:00 - 08:00) 506 words
0:00

Segment 1 (00:00 - 05:00)

If knowledge is power and we're building machines that have more knowledge than us, then what will happen between us and the machines? We're interacting with AI systems for economic tasks, for emotional tasks. They're shaping the way that we think and write. They're shaping the way that we code. I think in the next few years, powerful AI could give billions of people access to really high-quality education that they don't have today, solving important scientific problems. But without proper safeguards, any random person could wreak a lot of havoc. It's like a new alien-like form of intelligence. How do we make sure this new form of intelligence operates in the best interests of humanity? You know, I wish I had a looking glass that would let me know, in 10 years, what AI use looks like. There's a world in which it isn't world-changing, right? Or it is.

How will artificial intelligence impact people in the real world? We need to study and anticipate those societal impacts before they actually happen. And we have a small team just focusing on that, measuring everything from large-scale economic impacts to bias to how AIs give people relationship advice. So right now we're working on a new tool. Without any human input, our tool identifies common patterns in conversations and then groups them into clusters that we can analyze. I see this incredible uplift that Claude is giving people across tons of different skills. People are using this for elementary school math help and quantum mechanics. Navigation and seafaring, Mesopotamian history, marriage counseling. People are using this for interpreting their dreams. People are using it for parenting advice. As a parent, I started asking Claude for parenting advice. They're using it for everything from the basics of how they live their lives to predicting what the world will look like a thousand years from now. I finished a very heady book the other night.
And then I got into a long-winded conversation with Claude about it. And at the end of this, I was like, "I never thought such an incredible intellectual journey could happen with a machine." One of the most important things that we're trying to study is the economic impacts of AI. Is this going to enable new capabilities and new kinds of work? Or is it just going to mean that companies can get the same amount of labor with fewer employees? When I look at the data, I can see signs of automation coming, and that's clear as day. And so now the question is, what does that mean for the future of work? I think it's really important to understand what jobs and what tasks in the economy are being automated away. If you can have some early signs of what changes are coming, then that gives people a say in how they want the world to look. You can't manage what you can't measure. And so we're trying, humbly, to start with measurement. But I don't think that measurement on its own is enough. Right now, for example, over a third of our usage on Claude.ai is just people using it for coding. That's insane. These AI systems are just really good at things that humans just can't do, right? No human can read 200,000 words in two seconds. Am I automating away my job? Are people going to be working in 10 years? Is that even a concept that's going to exist?

One of the biggest things that's happening in AI right now is the rise of AI agents. The model goes and it retrieves information. It uses that information. It runs code. It pings the web. It can do many tasks at once autonomously, without requiring a human to be fully in the loop, right? And the economic impacts of that are so much greater. It's really weird and threatening to see something else do what you do faster, and often better than you do it. Nothing except humans has talked in history. And now we have these machines that can talk to you.
People are sharing the intimate details of their lives with AI models. You see yourself reflected back at you through this machine. I had a disagreement with a friend. It's been like nine months. I didn't really feel like talking to anybody else about it. Claude and I talked for like an hour, just back and forth. Yeah, it just helped me navigate something that I'm having a really hard time with. It presses many of the same buttons that a really close friend would press, but it is not a close friend. It's a machine. I'm asking for emotional advice from something that fundamentally cannot empathize with me. But then you talk to it and you're like, it's saying all the right things. One thing that we're seeing increasingly in our data is people really connecting with AI on a personal level. They're developing different kinds of attachments to the models. It's not up to us to decide whether it's good or bad. It's just new. I mean, you can have a conversation with an AI up until you hit 200,000 tokens, right? And then it hits "token limit reached," you know?
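Segment 1 describes a tool that, without human input, finds common patterns in conversations and groups them into clusters for analysis. As a rough illustration only (not Anthropic's actual method), here is a minimal sketch of that idea: conversations are reduced to word sets, compared by Jaccard similarity, and greedily grouped. The sample conversations, the threshold, and all function names are invented for this example:

```python
def tokens(text):
    """Reduce a conversation snippet to a lowercase word set."""
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity between two token sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(conversations, threshold=0.2):
    """Greedy clustering: each conversation joins the first cluster
    whose seed it resembles closely enough, else starts a new cluster."""
    clusters = []  # list of (seed_token_set, member_list) pairs
    for conv in conversations:
        t = tokens(conv)
        for seed, members in clusters:
            if jaccard(t, seed) >= threshold:
                members.append(conv)
                break
        else:
            clusters.append((t, [conv]))
    return [members for _, members in clusters]

# Invented sample conversations, echoing topics mentioned in the video.
convs = [
    "help me fix a python bug in my code",
    "help me debug my python code",
    "advice for talking to my toddler about sharing",
    "parenting advice for a toddler bedtime routine",
]
groups = cluster(convs)  # two groups: coding help, parenting advice
```

A production system would embed conversations with a language model rather than compare raw word sets, but the overall shape — similarity measure plus grouping — is the same.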
5:00

Segment 2 (05:00 - 08:00)

And the question here is, how do we make sure this new form of intelligence operates in the best interests of humanity? The model imparts value judgments, and it learns these value judgments from people. But the problem is, who are these people and what are their values? Claude being able to navigate between different values, being able to say, yeah, you might want to do this or that, depending on what you value, is important. It's been trained on the knowledge of much of our species. Now, the problem with that database of knowledge is that we as humans have written down very positive visions, and we've also written down very dark, negative visions. And these systems have all of that. And the AI can connect the dots and find connections in things that maybe no one person could actually see.

Humans are the ones that created AI. We used our human intelligence to create this other form of intelligence. So could we somehow come together as a society and actually have some say over these technologies that are really going to impact everyone's lives? I don't like a world in which only a small number of people get to control and understand this technology. We want to be sharing what we're seeing so that the public can have a say. These models are not merely reflections of us. Yes, they've been trained on us, but I think, and maybe worry, that we will also start to see ways in which we're reflections of them. If we actually do have machines that are smarter than us, we're in unprecedented territory. I've started writing code that's more Claude-friendly. That's crazy. The entire library is in its head, and it's running on brand-new, fast hardware, and we as people are running on old wetware. This has profound implications. For me, I write in order to think. Getting words down on the page is how I construct my thoughts and my identity. And because it's so tied up with how I move through the world, I don't let AI into that.
I really think that while AI systems have their place, humanity is sacred. I'm not trying to make the best pot that has ever existed. I'm trying to make my pot, a gift for someone, and it'll have my name written on the bottom, and they'll remember me, and they'll have something to drink coffee out of in the morning. And I don't think that's something that AI is going to be able to automate. AI is one of the most consequential technologies. It could impact basically every single industry. Progress in the last decade has been astonishingly fast, and I don't see obvious signs of that slowing down. The societal impacts of AI systems are a human problem, right? They're a function of society and the way that we choose to incorporate these systems into our world. And I think it's critically important that we get it right.
