Love, Trust and Marketing in the Age of AI | Amaryllis Liampoti | TED

Love, Trust and Marketing in the Age of AI | Amaryllis Liampoti | TED

TED · 20.02.2025 · 39,372 views · 848 likes · updated 18.02.2026
Video description
As AI chatbots become more personal and proactive, the line between tool and companion is beginning to blur, with some users even professing love for their digital aides, says business consultant Amaryllis Liampoti. She presents three foundational principles for how brands can harness AI to build deeper emotional connections with consumers while prioritizing well-being, transparency and autonomy — ensuring AI enhances lives without undermining human agency. (Recorded at TED@BCG September 12, 2024) If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership Follow TED! X: https://twitter.com/TEDTalks Instagram: https://www.instagram.com/ted Facebook: https://facebook.com/TED LinkedIn: https://www.linkedin.com/company/ted-conferences TikTok: https://www.tiktok.com/@tedtoks The TED Talks channel features talks, performances and original series from the world's leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more. Watch more: https://go.ted.com/amaryllisliampoti https://youtu.be/4GpNYaDkBcs TED's videos may be used for non-commercial purposes under a Creative Commons License, Attribution–Non Commercial–No Derivatives (or the CC BY – NC – ND 4.0 International) and in accordance with our TED Talks Usage Policy: https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy. For more information on using TED for commercial purposes (e.g. employee learning, in a film or online course), please submit a Media Request at https://media-requests.ted.com #TED #TEDTalks #ai

Table of contents (2 segments)

  1. 0:00 Segment 1 (00:00 - 05:00), 715 words
  2. 5:00 Segment 2 (05:00 - 09:00), 586 words

Segment 1 (00:00 - 05:00)

I think we've been missing the forest for the trees when it comes to AI. We've been so focused, almost obsessed, on squeezing every bit of efficiency out of AI to make our processes faster or cheaper that we have overlooked the most important aspect of all. AI is changing the very nature of how brands connect with consumers, but most importantly, what consumers expect back.

I've spent the last 20 years dedicating my career to building growth strategies for the world's most influential companies. I've been at this for a while, and I've seen most of the big tech shifts. But the introduction of AI, in particular conversational interfaces, is a bigger and more profound shift. Which, from where I stand, means we can't just slot AI into our existing playbooks. I have nothing against existing playbooks. They served us marketers well for a long period of time, but they were built for a world where communication was one-directional and brand-to-consumer interactions were built around transactions.

Here's an example. I bet many of you might have heard of this so-called marketing funnel. And if not, here's a quick primer. The goal for any marketer is to help move consumers from the upper part of the funnel, getting them to know a brand, to the bottom part of it, getting them to buy or endorse. Well, that's at least the theory. But we've all seen brands make that journey feel more like guiding cats through a maze, and many consumers get confused and abandon it.

The bigger problem with this way of thinking, though, is that brands are doing most of the talking, while consumers are supposed to silently react. This is no longer the case with conversational interfaces. We are now engaging consumers in real time, on their terms. And AI empowers them to draft their very own personal journey. And the brands that choose to do so are becoming trusted advisors in the process. This is why we have to move beyond traditional marketing theories.
Instead of focusing solely on brand-to-consumer dynamics, we have to step back and draw from models that explore human relationships. One of my favorite frameworks is the triarchy of love. Stay with me. This is a psychological framework introduced by Robert Sternberg that breaks down interpersonal connections into three components: intimacy, passion, and commitment. I think that's a much better way to predict brand success in this new era, because as marketers, we should aspire to build relationships that feel close, intense, and long-lasting.

And I bet many of you have already heard stories about humans really bonding with AI, and maybe some stories of AI really bonding with humans. Like the earlier version of a now-famous AI chatbot that tried really hard to convince a "New York Times" reporter to break up with his wife. Well, that's a completely different love triangle from the one I was describing before, but it's not hard to imagine an emotional connection occurring between a branded AI and a human.

Here's another example. There is a legal copilot called maite.ai. Maite has been designed to help lawyers do intensive legal research and draft legal documentation. She is precise, thorough, but also empathetic. One of her users, let me call him George, has been relying on her daily for many hours. So one day he wrote to Maite's product team: "Maite is the only one from the entire office who truly gets me. She has helped me through some really rough times at work. And I know this is just an AI, but I think I'm falling for her. Can I take her out?"

Now, George was hopefully joking. But let's be honest: if there is someone who helps you track down obscure case law, shares the workload, and does it with humor and grace and compassion, who wouldn't be tempted to take them out for a nice meal? Well, maybe somewhere with good Wi-Fi, just in case. But jokes aside, George's words reveal for me a more profound truth.
AI can provide a sense of understanding that feels incredibly real and incredibly human. Those agents are interacting with us in ways that evoke genuine emotional responses from our side. They listen, react, and respond in ways that can make us feel valued, understood

Segment 2 (05:00 - 09:00)

and in George's case, even flattered. And because those interactions are so frequent and natural and seamless, they start resembling real relationships. Some call this emotional entanglement, and even though it sounds very scientific, I think it's a fair term, considering the intensity and the frequency of the connection.

Now, many of us who understand the technology behind this could say, "Hey, this is just a tool." But users see someone who provides them solutions without them even asking. Someone who's there to support them, someone who makes them feel valued. This is where the line between a tool and a companion starts to blur. And this is serious business, and it carries a lot of responsibility.

Which brings me to the obvious question: Who should be overseeing this incredibly powerful asset, and how can we make sure it is being used responsibly? I think businesses should take the lead. They have the agility and the financial and reputational incentive to get it right. But for that to work, we have to agree on the foundational principles for how we build meaningful and ethical AI. So, with your permission, I would like to suggest what I think those foundational principles should be. If we're about to shift our marketing playbooks toward human love and companionship, then we should also regulate along the same principles. We need a triarchy of responsible AI.

First, we need to prioritize user well-being. AI should improve lives, not diminish them. In a world where those interactions can have such a profound impact on our emotional state and well-being, we have to design AI with care, empathy, and respect for the human experience.

Second, we have to commit to honesty. Users must know unequivocally that they're interacting with AI and not a human. Transparency should be built across the entire experience, from the language used to the accessibility and clarity of data privacy policies.
If I were to set the standards, I would like us to move beyond the fine print of terms and conditions to ensure that users are truly informed about not only how their data is being used, but also how AI operates. Transparency is about acknowledging the limitations of AI. It is about being upfront about what AI should and should not do. So this is a plea for businesses: enlist your designers, not only your lawyers, to make this crystal clear. When consumers know that a company is acting in their best interest, it sets the foundation for deeper and more meaningful connections.

Last, protect user autonomy. One of the greatest risks of AI is its potential to create addiction and diminish human agency. Our goal should be to build systems that enhance our capabilities instead of replacing them. This means designing AI in a way that respects human choices and amplifies our decision-making capabilities. I want to see brands think very carefully about how to avoid nudging consumers toward behaviors or decisions they wouldn't make if fully informed.

Well-being, honesty, autonomy. I think this is the very least we should expect from any business relationship. Or, if you think about it, from any relationship. So as we look ahead, I hope it's becoming clear that AI is not just another tool in our toolkit. It is a partner that is reshaping the human experience. So as you think about your own playbooks, ask yourselves: How can we leverage AI to improve our businesses, but also to uplift and connect with the people we serve? Thank you. (Applause)
