Algorithms and AI don't just show us reality — they warp it in ways that benefit platforms built to exploit people for profit, says etymologist Adam Aleksic. From ChatGPT influencing our word choices to Spotify turning a data cluster into a new musical genre, he reveals how new technology subconsciously shapes our language, trends and sense of identity. "These aren't neutral tools," he says, encouraging us to constantly ask ourselves: How am I being influenced? (Recorded at TEDNext 2025 on November 11, 2025)
Join us in person at a TED conference: https://tedtalks.social/events
Become a TED Member to support our mission: https://ted.com/membership
Subscribe to a TED newsletter: https://ted.com/newsletters
Follow TED!
X: https://www.twitter.com/TEDTalks
Instagram: https://www.instagram.com/ted
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences
TikTok: https://www.tiktok.com/@tedtoks
The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.
Watch more: https://go.ted.com/adamaleksic25
https://youtu.be/ZkXrTHpnQrQ
TED's videos may be used for non-commercial purposes under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0) and in accordance with our TED Talks Usage Policy: https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at https://media-requests.ted.com
#TED #TEDTalks #Language
How sure are you that you can tell what's real online? (Laughter) You might think it's easy to spot an obviously AI-generated image, and you're probably aware that algorithms are biased in some way. But all the evidence suggests that, on a subconscious level, we're pretty bad at understanding this. Take, for example, the growing perception gap in America. We keep overestimating how extreme other people's political beliefs are, and this is only getting worse with social media, because algorithms show us the most extreme picture of reality. As an etymologist and content creator, I always see controversial messages go more viral, because they generate more engagement than a neutral perspective. But that means we all end up seeing this more extreme version of reality, and we're clearly starting to confuse it with actual reality.

The same thing is currently happening with AI chatbots. You probably assume that ChatGPT is speaking English to you, except it's not speaking English, in the same way that the algorithm is not showing you reality. There are always distortions, depending on what goes into the model and how it's trained. We know, for example, that ChatGPT says "delve" at much higher rates than usual, possibly because OpenAI outsourced its training process to workers in Nigeria, who do actually say "delve" more frequently. Over time, though, that little linguistic overrepresentation got reinforced into the model even more strongly than in the workers' own dialects. Now it's affecting everybody's language. Multiple studies have found that, since ChatGPT came out, people everywhere have been saying the word "delve" more in spontaneous spoken conversation. Essentially, we're subconsciously confusing the AI version of language with actual language. But that means the real thing is, ironically, getting closer to the machine version of the thing.
We're in a positive feedback loop: the AI represents reality, we mistake that representation for the real thing, and we regurgitate it so the AI can be fed more of our data. You can also see this with the algorithm through words like "hyperpop," which wasn't part of our cultural lexicon until Spotify noticed an emerging cluster of similar users in its algorithm. Once Spotify identified that cluster and introduced a hyperpop playlist, however, the aesthetic was given a direction. People began to debate what did and did not qualify as hyperpop. The label and the playlist made the phenomenon more real by giving people something to identify with or against. And as more people identified with hyperpop, more musicians also started making hyperpop music. All the while, the cluster of similar listeners in the algorithm grew larger, and Spotify kept pushing it more, because these platforms want to amplify cultural trends to keep you on the app. But that means we also lose the distinction between a real trend and an artificially inflated one.

And yet this is how all fads now enter the mainstream. We start with a latent cultural desire: maybe some people are interested in matcha, Labubu or Dubai chocolate. The algorithm identifies this desire and pushes it to similar users, making the phenomenon more of a thing. But again, just like ChatGPT misrepresented the word "delve," the algorithm is probably misrepresenting reality. Now more businesses are making Labubu content because they think that's the desire. More influencers are making Labubu trends, too, because we have to tap into trends to go viral. And yet the algorithm is only showing you the visually provocative items that work in the video format. TikTok has a limited idea of who you are as a user, and there's no way that matches up with your complex desires as a human being. So we have a biased input. And that's assuming social media is trying to faithfully represent reality, which it isn't.
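[Editor's note: The "rich get richer" dynamic described above can be made concrete with a toy simulation. This is not from the talk and not any platform's real recommendation algorithm; the starting numbers and the superlinear boost are assumptions chosen only to illustrate how a small initial edge compounds into an artificially inflated trend.]

```python
# Toy sketch of algorithmic trend amplification (hypothetical numbers).
# Two equally appealing trends; trend A starts with a small 5% edge.
# Each round, the platform boosts each trend superlinearly in its
# current share of engagement, so more exposure earns more exposure.
engagement = {"A": 105.0, "B": 100.0}

for _ in range(20):
    total = sum(engagement.values())
    for trend in engagement:
        exposure = engagement[trend] / total
        # Superlinear feedback: the boost grows with exposure squared,
        # so the leading trend pulls further ahead every round.
        engagement[trend] *= 1 + exposure ** 2

share_a = engagement["A"] / sum(engagement.values())
print(f"Trend A's share of engagement after 20 rounds: {share_a:.0%}")
```

Under these assumed dynamics, a 5% head start snowballs until trend A dominates almost all engagement, even though neither trend was intrinsically more popular; a linear boost, by contrast, would roughly preserve the initial proportions.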
It's only trying to do what makes money for the platform. It's in Spotify's interest to have you listening to hyperpop, and it's in TikTok's interest to have you looking at Labubus, because that's commodifiable. So again, we have this difference between reality and representation, where the two are constantly influencing one another. But it's incredibly dangerous to ignore that distinction, because this goes beyond our language and consumption habits. It affects the world we see as possible. Evidence suggests that ChatGPT is more conservative when speaking Farsi, likely because the limited training texts from Iran reflect the more conservative political climate in the region. Does that mean Iranian ChatGPT users will think more conservative thoughts? Elon Musk regularly makes changes to his chatbot Grok when he doesn't like how it's responding, and then uses his platform X to artificially amplify his tweets. Does that mean the millions of Grok and X users are subconsciously being trained to align with Musk's ideology?

We need to constantly remember that these aren't neutral tools. Everything that ends up in your social media feed or in your chatbot responses is filtered through many layers of what's good for the platform, what makes money and what conforms to the platform's incorrect idea of who you are. When we ignore this, we view reality through a constant survivorship bias, which distorts our understanding of the world. After all, if you're talking more like ChatGPT, you're probably thinking more like ChatGPT as well, or like TikTok or Spotify. But you can fight this if you constantly ask yourself: Why? Why am I seeing this? Why am I saying this? Why am I thinking this? And why is the platform rewarding this?