# What is the Hidden Cost of AI Efficiency? - Future of Work Podcast

## Metadata

- **Channel:** Workday
- **YouTube:** https://www.youtube.com/watch?v=vIBGn5DyVaM
- **Source:** https://ekstraktznaniy.ru/video/35439

## Transcript

### Segment 1 (00:00 - 05:00) [0:00]

Welcome to the Future of Work podcast. I'm Kathy Pham, vice president of AI and open technology here at Workday, and I'm so thrilled to be joined today by Dr. Kate Niederhoffer, chief scientist and head of BetterUp Labs. Thanks for joining us, Kate.

— Thank you for having me, Kathy.

— So Kate, we're so lucky to have you here, because you bring over two decades of experience as a social scientist and have been an absolute powerhouse and leader of our time in all things at the intersection of behavioral science and technology. You've led so much groundbreaking research on how we think, feel, and perform, especially in this era of AI-driven work. You can all find so many examples of the ways she's translated complex psychological research into tech-enabled behavior change, and your work at BetterUp has really shaped the future of organizational culture and our personal performance and how we think about it. One thing that has come up in your work is this concept of workslop, how 40% of the workforce is now drowning in it, and how it's behind some of the AI adoption struggles we're seeing. Can you tell us a bit more about that?

— Yes. Well, thank you for those really kind things you said about our research. It is very much a reflection of my work with my whole team at BetterUp Labs and our collaboration with the Stanford Social Media Lab, in particular Jeff Hancock and Angela Lee, who have been integral to all the work we've done on this new concept we refer to as workslop. I'll define it so that we're on the same page, because a lot of people think about AI-generated content that is simply full of consistent mistakes or particular stylistic cues, and that misses the point. The phenomenon we're talking about is AI-generated work content that masquerades as good work.
It looks like it completes the task, but it really lacks the substance to meaningfully advance that task. And that's really important, because part of this definition shows how it shifts the burden from sender to receiver, and that's a new behavior at work that violates some of the principles on which we work.

— So it sounds like the onus is now on the organization. Can you tell us more about what you've seen on the organizational-change side of it?

— I'm so glad you said that, because I think we have a tendency to blame people as lazy when we have this sort of effort asymmetry. When you receive a document that looks to be AI-generated and lacks the context you need to get the work done, we all have that eye-rolling experience: this is so annoying, you're wasting my time. But what we've come to realize in our research is that it very much is a product of the situation. Right now in organizations we have so much pressure to use AI. We have mandates that sometimes encourage, and sometimes more forcefully require, us to use AI or to begin with AI. And those mandates sit on top of an environment with low psychological safety: essentially a culture of low trust, where you can't really make mistakes or ask questions about where you're going wrong. The pernicious combination of those two things, the mandates on top of this kind of shaky ground, is what really predicts the creation of workslop. We can talk about other reasons why the ground feels shaky underneath us right now.

— Yeah. Well, can you tell us more about that shaky ground? What are you seeing?

— Some of our research over the past 10 to 12 years has been tracking the foundational mindsets and behaviors that predict performance. And we think about performance in three different ways.
First is basic performance: your ability to fulfill your job requirements with a given amount of quality, or, if you're more in the widget-production space, simply whether you're fulfilling the task at hand. Second is collaborative performance: the way a knowledge network operates, and the extent to which you need to champion the people on your team and really get to know their strengths and weaknesses so that you can be aligned and get that knowledge work done. And the third type of performance, which is really of the moment today, is adaptive performance, and that's really about innovation. What we find is that all of the predictors, the mindsets and behaviors that drive those types of

### Segment 2 (05:00 - 10:00) [5:00]

performance, are slowly, insidiously declining over about a 10-to-12-year period. In fact, we never really recovered from what we all experienced during COVID. Because of that, we have a workforce that's really low in something we refer to as psychological fuel: essentially agency, optimism, and motivation to do work. So it feels a bit like burnout. We haven't really re-established connection, and we're not in a trusting culture where we can simply receive these new ideas about using new, really powerful tools and take to them.

— It's so helpful to hear your framing about how these recent historical events have shaped how we show up at work, and how, when you add a new technological shift on top of that, it affects how we're able to take in that moment. Building on the topic of burnout: we've had research at Workday showing that some of the highest adopters of the technology, the people most excited about it, also experience the most burnout and the most rework in this era of new AI tools. What are you seeing?

— Yes, I was actually excited to see that your research is in some ways very consistent with ours. We have found over the past few years that what we refer to as high AI optimism, which you could classify as enthusiasm on its own, is not the same thing as having high optimism and high agency together. To back up a step: we've been researching what we refer to as AI mindsets, and this idea of having a pilot mindset is having a unique cocktail of high optimism and high agency together, which leads to the most powerful usage: higher adoption and more discerning usage, productivity, creativity, things like that. When you have enthusiasm alone, what you tend to see is over-reliance, or what people would call blind faith.
It's just too much excitement to use these tools for everything. And I think a lot of the research coming out from Anthropic, for example, corroborates this, showing that when we have this perception of things looking really polished and done, it's easy to walk away and not iterate, ask questions, and refine your thinking. It's almost too tempting. So it doesn't surprise me that AI optimism alone, or even high AI trust, is what predicts the creation of workslop and also more rework. It's like you're doing whatever it takes to work with what other people give you, because you want so badly for this tool to be a panacea.

— Yeah. Let's build on that a bit, and perhaps on the thread about agency. What happens to our culture when we give our employees, or even people in our personal lives, these tools but skip the meaningful enablement around them? Can you tell us more about that from your research?

— Yeah, I think what we're finding is that training is really important, and AI literacy is critical to using a new technology, but it's really only part of the story. The other piece is coupling it with what I would think of as coached skills: essentially having the right mindset and approach. Part of that is how agentic and optimistic you feel at your core and how much motivation you have for work. So it is that psychological fuel, and then, on top of that, it's your mindset toward these tools and the way you're thinking about them. So it's really about discernment: the ability to go in and use judgment to edit the output of a given text, or to ask whether it's missing the contextual clues you need to get this work done.
These are really deeply human abilities that, for some reason, get short-circuited, I think because of the situation: the temptation to have somebody, or an entity, complete the work for you, and then it just looks so polished. We have so much going on right now, so much that feels important and urgent, and it's just so compelling to have a tool that appears to complete the task without your having to put in what would normally be the agency with which you approach your work.

— That resonates with me so much right

### Segment 3 (10:00 - 15:00) [10:00]

now. And if any of my students are listening to this, this is what I've seen with some of the apps. I teach a product management class, and there are so many brilliant, incredible students, and even the brightest students who are really there to learn, at a time when perhaps they're taking too many credits, or they're just focused on graduating, or all the things in life have taken over. An assignment can be completed pretty quickly by putting it through one of your favorite chat tools. It's easy to just rely on that, get it out the door, and get your grade, right? Even if you know that you're here to learn, and that is the purpose of this education. It can be easier and just so much quicker. To your point, Kate, it's polished, it's quick, and you can just get it out and solve that quick problem, but then in the long term you've missed out on this learning opportunity.

— Yeah, it's trickery of some form, right? There's an illusion of competence there. And I really want to reiterate, and study more about this, but it's no fault of our own. I think there are new cues for our brains to adapt to. There's really a new equation that has decoupled effort and quality, and our brains have become so accustomed to those things being very intertwined. So when they are separated, it's almost like you trip over yourself a little. It's like: wait a second, I'm being asked to use these tools, I don't really have the ability to ask questions, it looks really good, and it's actually pretty cheap to produce. It's just unfortunately expensive to consume.

— Yeah. So what can we do? How do we think about this, knowing there are all these different levers that bring us to this moment of workslop, whether it's the encouragement to use the tools, or that they give you polished results, or the illusions?
Are there guardrails, or maybe something short of guardrails, that we can put in place to prevent some of that while also leaving space for the experimentation, the trying, and the benefits of all this?

— Yeah, there are so many things we can do right now. The first is probably more foundational, and you can think of it as an investment in your culture. It's really thinking about your talent infrastructure and trying to refuel the organization: infusing some agency, optimism, and motivation. We're finding in some of our new research that the way people perceive an organization's communication about its AI strategy holds really important signals, and it sends people a sense of whether the organization will be resilient in the future. People are highly attuned to organizational resilience right now through the lens of an AI strategy. So I think it's really important to invest in the culture and show that you have an augmentative approach: that you're really thinking about augmenting individuals while infusing that sense of optimism, agency, and motivation. That you're also saying, hey, here are some powerful tools that can be like salt to the flavor that you bring to this work.

— Oh, that visual of it being salt to the flavor, because salt alone is not what you want for your meal, but, assuming you like salt, it enhances.

— I've been searching the world for metaphors for AI. We've actually done some research about the metaphors people have for AI that I can talk about, but this week one came to me that I was sharing with some people. My son was watching a video of a woman who was recreating ancient recipes, and this one was for something called panis fortis. It's just an old bread.
And she was kneading the dough in this way to show that in the old days we didn't have KitchenAid mixers and all the baking equipment. It was sort of out on the slab and you just had to knead it. She got up on a stool and was really putting herself into it to knead it more. And I thought: that prevents workslop, that kind of effort and engagement and leverage, and coming from a culture where you've been invested in, so that you put that type of energy into your work.

— It sounds cliché, but it's like the journey. Or, if you watch the Olympics, the Olympic gold medalist says the journey is what's important, not just the end. And we've been given these tools that get us to the end.

— Yeah.

— And that effort of kneading that bread,

### Segment 4 (15:00 - 20:00) [15:00]

which I will now never get out of my head, such a great visual from the video.

— It's just as important, though, to get to the bread as well, you know.

— Yeah. I mean, Kathy, you're giving me ideas for other intervention studies. I do think priming people to think about the process is an interesting idea. We're playing with these ideas to intervene on someone's sense of mattering, and whether you really make salient to them the unique human qualities that they have, or their skill set, and this opportunity for development. Because again, it is really easy to use AI in so many different use cases and to abandon the whole process of creating a beautiful work product, or the difficulty of development that really comes, again, from the coupling of effort and quality and what you put into it.

— Well, let's bring this back to the topic of work that we're in. One thing that comes up at work is the changing dynamics between us and our colleagues as these tools are introduced, and one very specific example is using the tools to create, let's say, difficult feedback, or "better" (in quotes) ways to communicate with each other. Some may argue that risks eroding our ability to deal with conflict, because now we have that kind of aid. How do you see workplace dynamics changing now that we have these tools to help, or perhaps not help, us navigate them?

— Well, I think that thinking about the existing workplace dynamics and the way in which you will or will not use AI is part of having a pilot mindset. So approaching each of these use cases with a sense of optimism and agency, that pilot mindset, is sort of the first step. What we're seeing in our research right now is that people tend to use AI because it's just faster and easier, right? It's very convenient, and it's a great way to get a draft. That's the most common use case.
But the next reason, something we've been seeing for years now, is this judgment fear, this anxiety around interacting with other people. And I'm just hypothesizing and wondering here, but part of me thinks that's because we're a little out of practice, from COVID and from differential return-to-office policies. So people have this tendency to want to explore ideas privately first. That's the number-one reason people are interested in AI coaching over human coaching, too: this fear of judgment. People are really concerned about the way other humans will react to the situations we're in. Taking that into account, I think it's giving us a really nice stepping stone to prepare for human interactions, and that's fantastic, and we really support that. Where the pilot mindset comes into play is in not doing it instead of having the human interaction. It's making sure that you explore those ideas privately, get that boost in preparedness and confidence, and then take it to a human coach, a human manager, or a peer, understanding that you're not bothering people. You come across as prepared, and we need that human connection to collaborate productively.

— That's such a powerful thread to highlight: how the era of COVID created this muscle of perhaps wanting to be a bit more introspective or private in our thinking, and now there's a tool that allows you to do that a little, but then we also have to make sure we lift up and include the human-connection part of it as well. It kind of started in how we were retrained to think about work and communicating with our colleagues.

— Yeah, I know.
We're such evolutionary beings. We're motivated by connection, we're such social animals, and even with that, we're still so awkward in human-to-human interactions right now. We have such a tendency to worry about judgment, or to think we're bothering people. And yet we see the cost of that with workslop. It costs us cognitively when you avoid putting the human into it. It costs us emotionally: we have all sorts of negative judgments when we receive workslop. And it costs us interpersonally, because we're less likely to want to work with someone who has produced this sort of low-effort, low-quality AI-generated work that doesn't have their human judgment and context cues in it.

— Yeah. Thanks for sharing that.

### Segment 5 (20:00 - 22:00) [20:00]

Let's wrap up on that note of being human. You've shared that one of the greatest ironies of all is that for AI to work, we need to be better at being human. As we pass more tasks along to AI, in the same way that you might outsource bread-making rather than making the bread yourself, how does that change how we feel about the work we're doing?

— Yeah, it's a really powerful reframe: the better we think about our essential human qualities, and about the humans on the other end of the work we produce, the better we'll be at using AI. I used to love this idea of AI as a multiplayer tool, where if you can simply remember that AI is mediating your communication, and there's a human on the other side who's going to receive the work, that will probably lead you to exercise a little more judgment, of course given the sort of neural rewiring that's happening with the illusion of competence and the polish that's in the doc. But what we're finding is that when people feel fueled, like they matter and their work matters, and they approach these tools remembering that they're in the pilot seat, if you will, they can edit the outcomes, work with them, ask questions, ask for divergent ideas, and really put themselves into it, like they're kneading that dough. And the better the outcomes: it doesn't create the effort asymmetry where it's really cheap to produce and expensive to consume, and it allows for a much more effective collaboration where people feel comfortable going back and responding to that work, augmenting it, and making it better.

— A great note to end on, about human collaboration and making sure the people we work with are always valued. Kate, thank you so much for being here.

— Thank you for having me, Kathy. And thank you for sharing all your ideas. I look forward to some human-to-human collaboration with you.

— Yeah, I would absolutely love that.
I already have some ideas percolating about agency and AI and humanity. To everyone listening, thank you so much for joining us, and don't forget to subscribe to the Future of Work podcast wherever you get your shows. Thank you.
