# A Conversation with Eric Horvitz, Chief Scientific Officer, Microsoft

## Metadata

- **Channel:** Stanford Graduate School of Business
- **YouTube:** https://www.youtube.com/watch?v=aWqfH0aSGKI

## Contents

### [0:00](https://www.youtube.com/watch?v=aWqfH0aSGKI) Segment 1 (00:00 - 05:00)

(upbeat electronic music) [JENNIFER AAKER] I'm Jennifer Aaker, and I'm so pleased to be able to introduce Eric Horvitz, who's here with Sarah Soule for a fireside chat. He is the chief scientific officer at Microsoft, and he's been working at the edge of AI, society, and science for some time. I got to know Eric many years ago. He's a board member at HAI, and he was one of the most thoughtful, intentional, humanistic thinkers that I met in that context and beyond. For decades, he's been an influential voice for human-centered AI. He founded Stanford's One Hundred Year Study on AI. He co-founded the Partnership on AI. He served as a congressionally appointed commissioner on the National Security Commission on AI, is a distinguished fellow at the Stanford Institute for Human-Centered AI, and is just a really nice person. So it's really wonderful to have him here. What I find most remarkable about him is his deep commitment to human flourishing, which he was talking about over a decade ago. Within much of the AI conversation going on today, we think about: What is agency? What does it mean to have empathy? How do we design these technology tools with that in mind? And ultimately, how might technology augment rather than replace humans? Eric doesn't just write about this. Someone once told me he has had more mentees at Microsoft than anyone else at the company, so he also acts on it, with kindness. And with that, I want to welcome Eric Horvitz and Sarah Soule. (applause) [SARAH SOULE] Well, thank you, Jennifer, for that incredibly generous introduction to our distinguished guest today. It is such a pleasure, Eric, to be in conversation with you and to see so many interested students, staff, and faculty alike. So, welcome, everybody, to this fireside chat. [ERIC HORVITZ] And thanks for having me, everybody. [SARAH SOULE] We're so happy. So Eric, I wanted to begin with something you have said in the past: that this is going to be one of those rare moments of rapid transformation that will fundamentally change the trajectory of human existence. I just want to ask: if you were to look back, say, 20 years from now, how do you think this period is going to be remembered? [ERIC HORVITZ] It's interesting you mention 20 years. I often think about it looking back from a point in time 700 years from now, that this will be a named period. Like, in history books, there'll be graphics and there'll be some color about this time, and it'll have a name. I'm not sure what that name would be. 20 years is interesting. When I hear 20 years, I think in terms of other general-purpose technologies like electricity and steam. You know, the first working steam inventions were, I think, 1769 or 1770 or so, and it was about 100 years later before they really came into their place, showing the transformative power they would have in industry and beyond. Electricity, 1880s, and it kind of percolated around for decades before it had impact. Okay, so I'll give you that AI might move faster than steam or electricity, especially now that we're all electrified, most of us. But I think 20 years from now, looking back, we'll look at this time as a time of early deployment, early implementations. There'll be lots of interest in how much foresight and anxiety and expectation there was in this time.
I don't think that during the rise of steam or electricity there was the kind of audience, for example, sitting to hear about this topic. Or even with the advent of flight, I don't think there was much thinking going on about whether or not we would have to have norms about dropping incendiary devices from these things. Now we have all sorts of interesting deliberation, which is really, really just heartwarming to see

### [5:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=300s) Segment 2 (05:00 - 10:00)

how much interest there is in how to guide this technology. So I think looking back, we'll say, "Wow, that's where it all started." But we'll still be in a time, even 20 years from now, of pretty fast-paced transformation. I don't think we're gonna converge by 20 years. [SARAH SOULE] Well, I know I, for one, am very glad that you're part of the discussions and deliberations that are going on right now, and I would like to say that 20 years from now, people will be thinking of you as one of the heroes of this moment. [ERIC HORVITZ] Oh, I wouldn't go that far. You know, I would be happy to be forgotten as long as I made some contributions that helped along the way, helping things go a little bit better. [SARAH SOULE] Well, speaking of ways to make things better, students here and all over the country, and in fact all over the world, are hearing a lot of advice about making sure that they become AI literate, and in fact, our initiative here, AI at GSB, has been really intentional about making sure we do that with some workshops, including one this evening with our own Celeste Bin. So you're involved in a lot of these debates and discussions across industry, policy, academia, and so on. What advice would you give to our students, both MBA students and undergraduate students, about what they should be thinking about right now in a world where knowledge and capabilities are easily being commoditized by AI? What would you say to these students about what they should be investing in, and also to students who feel a genuine sense of anxiety in this moment, given how rapidly this is changing? What's your best advice for our students? [ERIC HORVITZ] Well, the first thing I'd be curious about would be to hear how much the anxiety and excitement are at odds in your minds, in terms of being at a very special time in human history, really at the crux of the moment, at the vertex, when you're experiencing and watching and observing and absorbing the changes that are happening and that could happen. And all the discussion about what that might mean for career, especially being in an educational program, especially being, you know, at the GSB in an MBA program. I guess there are several different demographics here, but the MBA program comes out of the world of Peter Drucker and thinking about business as a science: how do you really guide and think about production and value and profit. My first reaction is that there is a huge opportunity in thinking about the role of people in the management and business space more generally when it comes to thinking about how these technologies will begin to be deployed. Back to my prior comments: people really didn't figure out how to deploy electricity in industry. You've all seen those pictures of the big central pulley with leather straps going out to all the workstations, and how people weren't really thinking about even separate motors and what that would mean. That's just an analogy for all the thinking we have to do when it comes to the impedance mismatch between where business processes and organizations are, how it all works down to the basic technologies and the core technologies of production, and how to interweave these AI technologies, which are themselves evolving. We also have, in our minds, a sense of what AI is. You know, for me, I was at Stanford in the mid-'80s and finished my PhD in 1990.
AI was this really rich tapestry, or constellation, of technologies. And even though there's some homogenization going on with this idea of deep neural networks and how they're trained at the top level, it's still a broad tapestry, and it's gonna become even more differentiated over time. So it's not just about thinking, oh, how will Claude or ChatGPT or Gemini be used in this way or that. It's about getting to know the technology more deeply in terms of where the opportunities are for deep thinking, decision-making, management, oversight, and creativity, moving us toward 20 years from

### [10:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=600s) Segment 3 (10:00 - 15:00)

now and on to the 700. You know, it's such a rich space of opportunities. And from what I understand about what this program is part of, and what you're doing with AI at Stanford GSB, it sounds like just the right kind of thinking. You know, one more thing. There are so many opportunities. I've run into several startups now that are finding a very important, I was gonna say niche, but it really is bigger than a niche, because niche implies a small little spot. These are startups that are looking at what companies are doing, figuring out how to gain access to data sets and processes at individual companies across sectors, working to become expert in one sector or two and then trying to generalize. They're looking at all the AI technologies, looking for early sparks of positive gain, or traction, or maybe places where there wasn't repulsion, and sharing those insights out, including specialized data sets that they use to fine-tune and train some of these models so they can actually perform in a very, very industry-specific way. This is a hugely transformative time and a huge transformative need right now, and it takes understanding specific areas in depth and figuring out how to do the applications, which will be just a tiny piece along the way of long-term transformation. So yeah, find your passion, go deep on the passion, think about the relevance of AI, go interdisciplinary because you're gonna have to think more broadly, and then look at what's going on with people trying to really go beyond the hype to really integrate and apply. [SARAH SOULE] I love that, and I also think it hits on something that I think a lot about myself when I consider what we should be offering to our students in this moment. And part of that, the way I often talk about it, is: yes, you need some technical knowledge in this space for sure, and in some areas the ability to dive very deeply. But you also need the leadership skills to understand what needs to happen in an organization to actually roll this out in a responsible way, asking not just what AI can do, but what it should be doing in organizations. [ERIC HORVITZ] Yeah, and you're seeing all this discussion happening more frequently right around now, which focuses on: wait a minute, have we gone too far with hyperbole? Well, maybe not. No, I think we might have. Well, where's the payoff? We're not sure. No, look over here. That's the kind of chatter we're seeing right now. And look, nobody doubts that in the longer term we will be seeing huge transformative changes, I like to say for the better overall, per the theme of human flourishing, but there are lots of rough edges and pits and little veins of opportunity scattered everywhere right now. [SARAH SOULE] I want to come back to something else that I've often said to our students, and it came up when we had a prep call a couple of weeks ago. One of the things that I've said to our students is that we really need to work on building a culture of curiosity and generosity. And, you know, one of the things that I often say is that our students are at their best when they're allowed to be curious and to try to satisfy that curiosity, and our faculty are at our best when we are curious and asking really interesting research questions.
Our staff are at their very best when they are engaging with curiosity with faculty and students, and this is one of the ways in which we build the sort of culture we want. And that also requires generosity, of spirit, being generous with one another and giving each other grace, and just trying to bake that into our culture. Well, when we spoke a couple of weeks ago, you talked a lot about curiosity, and you remember I kind of jumped in at that moment when you started talking about it. Maybe you could reflect a little bit on what you and I talked about, because I think everybody here will be interested, in terms of thinking about human-AI collaboration as a way to push people to tackle problems and ideas that they couldn't have approached before. [ERIC HORVITZ] Yeah. I've been passionate about thinking through the

### [15:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=900s) Segment 4 (15:00 - 20:00)

way that computers could help people with cognition, problem-solving, and exploration for decades now. The phrase human-AI collaboration came up as a strange concept maybe 23, 25 years ago, when we started showing demos of technologies that made it clear what the opportunity space might look like. Where is the border, per what we know about the cognitive substrate of human minds, as characterized typically by cognitive psychologists? You know, the gaps in our abilities, our strengths as humans. Of course we're all different in different ways, but there's a human substrate that we've been trying to come to understand over the years. And early on I thought, oh, wouldn't it be beautiful to build computing systems that understood that deeply, to know where to step forward, where to bolster, where to hang back? And on top of that, not just the complementarity challenge and opportunity, for computer scientists and AI now more specifically, but also initiative: how and when should computing systems come forward, and when should they hang back? And the same for the human driving these machines: understanding how and when to use them in the pursuit of augmenting their own curiosity, getting answers to questions, and problem-solving more generally. It's interesting to think today about how some people are learning to use these general tools, the GPTs and the Claudes and the Geminis, to appropriately prompt them in ways that keep themselves in place as the drivers, coming to the system with sets of goals which are very much human in nature, preferences, curiosities, and directions, and using the systems to introduce new efficiencies for simulation, exploration, and expanding the sets of possibilities under consideration. So I find these generalist AI tools mind-blowing. At the same time, at Thanksgiving a couple of years ago, right after GPT-4 shipped in early 2023, and I think it was Thanksgiving that year, my sister, who's a professor of literature at UNC Asheville, showed up with her hands on her hips saying, "What are you doing to my students?" You know, she knows I'm a long-term AI researcher; I think it wasn't clear what I was doing until recently, in terms of the topic in general. And she said that, no matter what she tells her students in freshman writing, they're just using these tools, and they're not thinking deeply. She believes, and I think she's correct, that learning how to write is probably part of learning how to think as you mature into an adult. And so we talked about, you know, the roadmap for what it might be like to design tools that don't just depend on human volition, on people wanting to keep their goals separate from the machines and wanting to stay in charge of the critical thinking, but machines that are more insightful about the goals of celebrating and nurturing the human, and individuality and intellect. [SARAH SOULE] You used a great phrase in that conversation, and I think it was something about the edge of possibility. Was that the right phrase? (laughter) [ERIC HORVITZ] Yeah, so as Microsoft Research director, one thing I would say a bunch to our people across the world at our labs was, "Are you really working at the edge of doability?" And we would pause and think about what that meant.
Like, it was at the edge, at the very frontier, where just a few months ago, or a year ago, this would have been considered not just impossible, but you wouldn't even think about working in that space because it was ruled out. And I do think some of these models tend to very comfortably walk with humans if they're pushed to the edge of doability, and help one think about feasibility at the frontier. People have often said of the early versions of these generalist tools, "Oh my gosh, they hallucinate."

### [20:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=1200s) Segment 5 (20:00 - 25:00)

Yeah. Well, these stochastic engines, they can certainly do that, and we're working on making sure that it's not harmful in a high-stakes decision setting like a healthcare setting. But when it comes to writing fiction, or imagining what's possible even, you know, I've spent time sitting with Nobel laureates in physics, for example, and it's pretty wild what these systems might come up with at the edge, where you might think they're hallucinating, but we're pushing them into unknown spaces and we say, "Use your stochastic engines to help us out as humans, and we will be the filters." We have the aesthetics and the understandings to help guide. It's the guiding and driving, and learning how to do that well with these tools, that's gonna be really important. [SARAH SOULE] Yeah. I think so too. That's really great. Thank you. So, you know, when Jennifer introduced you, she talked about AI and human flourishing, which of course has been something that you have been very passionate about and written about and spoken about over the years. And I wanna talk a little bit about that, and I wanna ask you to tell us, first, whether it's possible for AI to really enhance human flourishing and human agency, and give us some insight into some of the wonderful things that you have talked about over the years in this space. It's very much on my mind certainly, but I think on the minds of most people in the room. [ERIC HORVITZ] Well, let me start cosmically. I gave the Tanner Lecture at Michigan a couple of years ago, before GPT-4, where I spoke about AI technologies perhaps, maybe not exactly, not to the same extent, being as powerful over several thousand to tens of thousands of years as another invention of ours: language. You know, we co-evolved with language, and it's changed everything, how we coordinate. It really is the secret tool that gave us civilization and allowed humans to go from where they were as Homo sapiens on the planet, largely in small groups, into deep-thinking collectives. I do think that in the long term we will see artificial intelligence, and I don't actually like the term artificial intelligence; several of my colleagues in AI agree with me on this, I wish the field were called computational intelligence, because I think it applies to biological nervous systems as well as machines, and together we can go far. With humans, and I'm gonna take a humanistic standpoint here, always being on top of things and guiding with our values and our preferences and our goals, as much as they might be shaped over time by the machines we work with. Let me bring it in from cosmic to today. Look, I think in our own lifetimes we will all experience, we're seeing a little bit of this now, but it's gonna accelerate over the next 10 to 15 years, incredible breakthroughs in understanding biology, with applications in medicine and healthcare, that will be named as AI breakthroughs. I expect there to be one or more breakthroughs in our lifetimes on the long-term challenge and prevalence of neurodegenerative diseases like Alzheimer's, ALS, and FTD, based on AI-powered insights and therapies. Same for cancer. I'd like to see numerous cancers, in the next decade, come to be known as chronic diseases, or cured.
You know, we can now design molecules, we can design proteins, we can come to better understand biological networks in ways that would not have been possible without tools that we refer to as being in the artificial intelligence family. You know, we've seen snippets of how people are applying AI systems in education, but we all know, man, this is just interesting general power for shaping into these incredible tutoring systems: for helping people to learn, for personalizing how they get through a conceptual space, for understanding that specific kind of math problem we've all

### [25:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=1500s) Segment 6 (25:00 - 30:00)

seen, the one that bothered us, that we couldn't get through easily, and maybe because of that we moved away from math. Break me through. I want to understand that piece that may have repelled me in the past. Take me forward in new ways. So I think educational systems will even help us re-skill very quickly as the workforce adapts to all sorts of interesting changes we'll be buffeted by over time. You know, in the whole world of self-understanding and knowledge and daily life and relationships, we're just seeing glimmers now of how systems can help people communicate with each other better, to get on the same page, to come up with clarity in viewpoints that can be displayed. So I see these systems as being very helpful in person-to-person and nation-state-to-nation-state engagement over time, hopefully leading to a renaissance in what it is we're here for anyway, in terms of, you know, optimizing goals and promoting empathy. So, you can name the space: materials science, biology and healthcare, education, pick your sector, production efficiencies. I think that these tools of optimization, of integration, of evidence gathering and synthesis, of specialization, of generalization, of emergent concepts will change everything. [SARAH SOULE] Thank you. That's inspiring, and I actually tend to agree with you. It's really been amazing to see, even just in the last two years, how powerful and useful these have become. I wanna come back to MBA students again. I asked you a little bit about advice for them generally, but thinking even more specifically, which domains do you think are on the verge of non-obvious transformation, where AI changes the economics of work rather than just efficiency? [ERIC HORVITZ] That's a really good question. First of all, I think we should prepare for surprises framed by that question, and maybe we can talk in smaller groups about what surprises might look like, so that maybe they'll be less surprising, if we can characterize them in some way in advance of the surprise happening. One thing to keep in mind is there might be whole new fast-rising infrastructures, we'll call them, for right now. One that I'm looking at very carefully is agentic marketplaces. So, what will it mean if, within a decade, there are proxies for buying and selling, and middle processes for making those things happen, where we each have our agent proxies and there are whole economies of this agentic interaction? We recently published a paper from our teams at Microsoft called, you know, The Coming Agentic Marketplace. We also developed a simulation tool where we can actually watch what happens when we unleash prototype agents that are buying and selling, among other things, scheduling, calendaring, and so on, that you can download. It's open source to experiment with. HAI, you know, the Human-Centered AI Institute, hosted a whole-day workshop, going into the evening, on the future of agentic interactions, sponsored, of all groups, by Consumer Reports, because that group believes this is something important to invest in and understand, like, now. Other areas that I think we want to look at as prototypes are sectors like healthcare. We often think, oh, AI and medical diagnosis, or, oh, we see transcription systems and people using tools that can actually make the patient encounter easier on the administrative front by capturing the conversation and summarizing notes. But if you look at the end-to-end of how healthcare works: payers, prior authorization, the way

### [30:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=1800s) Segment 7 (30:00 - 35:00)

responsibility is attributed and required by the different actors in that system, the patient-facing component of it, the connecting of the patient-facing component to the physician-facing component, where there's some sort of a system in between. That's just a prototype for thinking deeply, in a holistic way, about complex systems, where today we see pieces of the friction points being addressed by startups of various kinds, and then responded to with HHS regulatory activity, and it's all, right now, a huge exploration, even in healthcare; and you can just take another sector, and the same thing is going on. There might be some shared aspects across these worlds, and of course, we've seen higher-level statements of regulation, in terms conducive and maybe not so conducive to advancement of the technology, in the European community, in the United States, and among the states now separately, that I think should be viewed as also part of the exploration right now, rather than the final word. So when it comes to the MBA students, I mean, back to my first comment, there's so much to be done right now in understanding what's the role of insight, management, good decision-making, understanding goals, clarity about what happens to an organization when you make a change in workflow or process. There's anxiety and uncertainty about this right now, so people coming to the table with skills in artificial intelligence and in management and business will be at a premium. [SARAH SOULE] I love that answer, in part because as we think about what we need to be offering our students right now, you've given us a lot of really nice advice, so I really appreciate that. It's good for the students, but it's also good for those of us who are trying to think about what we wanna offer in our courses and so on, so I really deeply appreciate that. On our prep call, Eric and I talked a lot about authenticity, and in particular, we talked about this in terms of deep fakes and your warnings about deep fakes, and so I'm wondering if you can say a little bit about how you think the kind of lack of trust and misinformation, in the presence of various kinds of deep fakes, is going to affect how we live and work, and more importantly, what we can all do about it. [ERIC HORVITZ] Yeah, around 2015 or so, I started seeing snippets. In fact, one of the first deep fakes that I looked at, they weren't named that just yet, was by a Stanford University computer science team, who had shown that you could build a system that could put words in a politician's mouth, and I was just really impressed by this. You know, of course, it's like, no worries, this is a CVPR demo, the conference on computer vision. You know, these were behind closed doors in laboratories, and they're kinda cool because they're a dissertation-level advance and someone can write about this, and it was all super exciting. And I gave a talk saying, "Where might this go?" And again, it's only been about nine or ten years, and here we are. Along the way, I've raised the concern among our teams at Microsoft, as well as in groups that I work with, both in civil society and government, that there'd come a point very soon where it'd be difficult to discriminate fact from fiction, and to ask the question: what would that mean? But secondly, what might we do technically, and in policy and law, when it comes to protecting, as best we can, the veracity of information?
And for the first version of this that our teams worked on, we called together a set of groups to answer this challenge; I had three or four teams at Microsoft in front of whiteboards. How can we basically, I'll have to put the mic here, build a system whereby every photon hitting the light-sensitive surface of a camera could certifiably be linked to a photon coming out of a display anywhere on the internet? What can we do to make that happen? And that led to a technology we called media provenance technology, or secure cryptographic provenance, which puts a kind of wax seal on what a camera or microphone captures.
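As a concrete illustration of the idea just described, here is a minimal sketch of cryptographic media provenance: hash the captured bytes, sign the hash at the source, and check the signature wherever the content is displayed. Everything below (the key handling, the file contents, the function names) is an assumption for illustration; it is not the actual C2PA implementation.

```python
# A minimal sketch of the "wax seal" idea (illustrative only, not C2PA itself):
# sign a hash of the captured media at the source, verify it at display time.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def seal(media: bytes, signing_key: Ed25519PrivateKey) -> bytes:
    """Capture side: sign the content hash, producing the 'wax seal'."""
    return signing_key.sign(hashlib.sha256(media).digest())


def seal_is_intact(media: bytes, signature: bytes, verify_key: Ed25519PublicKey) -> bool:
    """Display side: the seal breaks if even one byte of the content changed."""
    try:
        verify_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False


# Hypothetical capture: in practice the key would live in the camera and be
# backed by a certificate chain so a verifier knows which device signed it.
camera_key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes from the sensor..."
signature = seal(photo, camera_key)

print(seal_is_intact(photo, signature, camera_key.public_key()))               # True
print(seal_is_intact(photo + b" edited", signature, camera_key.public_key()))  # False
```

In the C2PA standard itself, what gets signed is a manifest with richer metadata and a certificate chain back to the signer, but the basic property is the same: change a single byte of the content and the seal no longer verifies.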

### [35:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=2100s) Segment 8 (35:00 - 40:00)

You can see that the seal is unbroken when you get the content at the end, on the display side. That became what's now called C2PA Content Credentials, which is a standard used by all the big tech companies and beyond, camera companies and so on. But we have to go further, because once you create a solution like that, which has promise, you have to also red-team it and attack it to make sure that the solution itself doesn't become a problem. So just two days ago, we released a 54-page report which came out of an internal Microsoft study where I asked people, "Red-team this. How could it go wrong?" How could people use this wax seal to make you think that there wasn't really a crowd in Detroit greeting Kamala Harris, that it was AI, or vice versa? And I think we need to work through the possibilities. And I have to say, there was good news at the end of that study, which came up with whole new methods for very high-confidence authentication of content; at the least, even if we don't say it's true or false, we know where it came from, which is interesting and valuable. And that's just one piece of authenticity, because now we're studying, our teams and other teams, the last meter. Here's the display. The technology works; we can kind of certify it works. What do people think? What do end users think when it comes to seeing these various symbols and wax seals? Do they buy it? Do they believe it? What will it all mean? And we also have human rights people in organizations like WITNESS, Sam Gregory who runs WITNESS, raising concerns: what does this mean for people taking pictures of violent acts against people and human rights violations that aren't certified with that stamp? Does that make those pieces of evidence less believable and less valuable, and what's the cost of that? So, we have to think through these technical solutions very much socio-technically, and also throw them out for red-teaming and for attack. And in this case, we attacked our own cool solutions to make sure that the world knew their weaknesses. There's another whole dimension of authenticity, which is, you know, if I asked this audience right now how many people have gotten, like, a beautiful note or poem from a family member, it's not the same thing anymore when you get it now, because there's no proof of effort there. Anybody raise your hands if that's happened to you. Yeah, so I guess for most of this audience that's not happened to you yet. Well, I don't know if you just didn't want to raise your hand or you've already experienced that. But there's the idea of thinking about what's the future role of authentic communication, where you might have cultural norms. One thing I asked a designer at Microsoft to do recently, and I've been using this, is to create me a little icon. It's a little circle that says, "100% human crafted." And I'll put it at the end of emails sometimes when I craft a nice message and I wanna make sure people know that was, you know, that was really me. (laughter) [SARAH SOULE] I love that, and I love that example as well. I'll ask one more question, and then we're going to open it up to take some student questions as well. And this is really a question about mentorship. So, Eric is also known at Microsoft as one of the most generous and prolific mentors, if that's a word, and has had lots of mentees over the years.
And I think this really is important to our students, who are being mentored and will eventually mentor people, and also to our faculty here, who are mentoring PhDs and post-docs and pre-docs and so on. Can you reflect a little bit on how AI is changing how humans are mentored and the role of mentors and mentees in this relationship, and how you're thinking about that, perhaps as you think about the people that you are mentoring currently? [ERIC HORVITZ] Well, for me, mentoring has been a way that I stay on top of things, and I learn a lot. We have a beautiful intern program at Microsoft for PhD-level graduate student interns. I've had over 150 now. I'm proud to say that my interns include, like, Jure Leskovec at Stanford. Who else is in the department? Michael Bernstein was my intern.

### [40:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=2400s) Segment 9 (40:00 - 45:00)

Johan Ugander, who just left to go to the East Coast. But it gives me pride to know that I possibly could have helped these careers, even if I've learned more from them than they learned from me. And I've had grandchildren interns and great-grandchildren interns; so now professors of professors have been my interns over the years. And it's just been a beautiful web, as in a family situation. I remember when I turned 60, we had this kind of festschrift-like thing, and I said, "Just invite my interns from the past." And the room was filled, and I could look across the room and see in every face, like, our two-and-a-half-month project, and then sometimes it goes beyond that, to dissertation committees and so on. So mentorship, apprenticeship, helpful relationships, collaborations are going to continue to be central, even in a world of rising automation. I like to think, in my more optimistic moments, that as the world fills with AI tools and tools of cognition that chatter and think, there'll be even more of a focus on what makes us human. What it means to be working with somebody to learn from them. What it means to work with a team that you're very close with, that you produce creatively in a joint way with. I think that the idea of the rise of a caring economy among humans, getting even richer, is not unlikely in a world of automation. I also see the rise of more of a focus on artistry and mastery of what people do with their own hands, and that leading to more important opportunities for apprenticeship and mentoring over time. I would just love to see, like, what kids and adults and adolescents are doing 35 years from now, and whether or not something has changed about human agency and self-dignity. I'm guessing that we will be nurtured in that by the machines we build, as opposed to having those machines take over our independence and autonomy. And so I like to be optimistic about that, moving into the future, and mentoring will always stay central, I think. [SARAH SOULE] That makes me feel much less anxious, so thank you. Yeah, I think many of us who are on the academic side and have mentored people often say, at least I often say, that the measure of my career will not be what I've accomplished, but what the people I've mentored have accomplished. [ERIC HORVITZ] Yeah. I should say that we just had a meeting, an international meeting, at Microsoft here, one of these that they call Foo Camps, a social science Foo Camp. O'Reilly puts these on. And one of the discussions that I led was called, you know, Protecting and Nurturing Human Agency in an Era of Artificial Intelligence. And we had a really interesting discussion, and people started saying, like, what they really care about, what they will value in a world of automation. And I have to say that for me, what came up was: I will always value mentoring others. That's one of the things that I view as part of my production rules, and AI will not take that away. [SARAH SOULE] Agree. Foo Camp is cool, by the way; if you ever get invited, it's really fun. [ERIC HORVITZ] FOO. [SARAH SOULE] Yes. [ERIC HORVITZ] Kind of interesting. [SARAH SOULE] Good. Okay. Well, Eric has generously agreed to take some student questions, and we have microphone runners, and I will let the runners go ahead and choose some folks. [SERENA] Thank you so much for joining us today. My name is Serena, and I'm a master's student in the Department of Management Science and Engineering.
I also did my undergrad in data science. I'm curious, as you look toward the next phase of deployment in AI, what do you think are some of the most important open questions in evaluations and safety assessments, particularly when it comes to governance and emerging standards in the AI ecosystem? [ERIC HORVITZ] I have a lot to say about that. But I'll just start by saying that for all the fanfare and celebration and investment going into large-scale language

### [45:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=2700s) Segment 10 (45:00 - 50:00)

models, or large language models, we don't know how to calibrate them probabilistically. These systems are being used to make recommendations and in decision-making contexts where the world has to demand: if you're gonna say something, I want a probability of its truth. I want you to have well-calibrated confidence. To me, that would go a long way across the board toward safer systems, because we could then fold these outputs into our own utility functions, and I'm speaking Management Science and Engineering here because that's the master's program. We can then fold them into our utility functions, our cost-benefit models, and understand how to use these systems and how seriously to take them when it comes to an assessment. So, there's a lot more to say about safety and controls. One of my recent thoughts on that, in a broader brushstroke, is that we've spent a lot of time and effort at Microsoft on the safety front, both metrics and evaluations. We've collaborated with Stanford folks on MedHELM, for example: metrics that we can use to measure and to characterize the performance of these systems. Coming up with interesting tools to keep the general models we ship safe in terms of the content they can generate, ensuring that they perform well in terms of potential harm to people, psychologically or physically, per their recommendations. But it's getting to the point now where these models are getting very powerful, and I have this sense, we haven't given up just yet, 'cause we're gonna keep on pushing on safety, that at some point the companies producing these models become like electric power companies that can't guarantee all the safety in how they're used, and we have to sort of turn towards governance, towards practices and norms, to electricians and Underwriters Laboratories and "I can't believe you put that radio near the tub." Like, practices that are outside of the actual general power of the models. And I'm sort of getting myself prepared for helping with that transition to where people, society, culture, norms, regulatory activity, laws, and practices become the core aspect of safety and best practices of use.
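To make the calibration point concrete, here is a minimal sketch of folding a model's stated probability into a simple cost-benefit (expected-utility) decision. The scenario, utility numbers, and function below are invented for illustration, not anything from Microsoft or the talk; the only point is that the decision threshold they imply is meaningful just when the probability is well calibrated.

```python
# A toy illustration (assumed numbers, not clinical guidance): turning a model's
# calibrated probability of a condition into a treat / defer decision.

def expected_utilities(p: float, u: dict[str, float]) -> dict[str, float]:
    """Expected utility of each action given probability p of the condition."""
    return {
        "treat": p * u["treat_sick"] + (1 - p) * u["treat_healthy"],
        "defer": p * u["defer_sick"] + (1 - p) * u["defer_healthy"],
    }

# Hypothetical utilities on a 0-1 scale.
U = {"treat_sick": 0.9, "treat_healthy": 0.6, "defer_sick": 0.1, "defer_healthy": 1.0}

for p in (0.05, 0.20, 0.50, 0.80):
    eu = expected_utilities(p, U)
    choice = max(eu, key=eu.get)
    print(f"p={p:.2f}  EU(treat)={eu['treat']:.2f}  EU(defer)={eu['defer']:.2f}  -> {choice}")
```

With these particular utilities the break-even point is p = 1/3, so the recommended action flips between p = 0.20 and p = 0.50; if the model's stated probabilities are systematically off, the whole cost-benefit analysis inherits that error, which is the calibration concern raised above.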
I said, "Everybody, big secret, these medical models are not portable. You can't just take them from hospital A to hospital B and expect they'll work just fine. " We found that out years ago with, with Bayesian network models and machine-learn-- you know, traditionally machine learned models. So, thinking deeply about not just decision-making, even transcription you have to really think deep, you know, think clearly about potential errors and what it might mean and how that compares per uplift or down draft from human-only systems. In fact, it's gonna, it takes for years to

### [50:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=3000s) Segment 11 (50:00 - 55:00)

come, I think it's gonna take clinical trials, randomized clinical trials, to understand how much these AI tools can help in healthcare delivery in different kinds of settings. I have a lot more to say about that, but I'll say one last thing, which is: in medicine, the FDA's and others' view of healthcare tools, or AI in medicine, is that you separate out the average performance that you might see, the means and medians on, let's say, a diagnostic task, from issues of safety. And the analogy with self-driving cars, with Teslas for example, is: oh, Teslas make the world safer, look at the statistics, says person X or person Y, I won't mention who the people are, you know them. But then someone else will point out, wait a minute, that was a dramatic failure; that would never happen with a human driver, driving under a truck like that. So these are the safety issues at the edges that will not be acceptable to society. We have to address not just, again, average specificity and sensitivity in diagnosis, but what the dramatic failures are, and understand and characterize them. [BEN] My name is Ben. I'm an MBA student here at the GSB. And I wanted to ask about your comparison to steam and electricity, saying that even 20 years out, we'll still be, you know, seeing meaningful, or I guess, like, seismic shifts. How do you bring that together with the current excitement, hype, and all of the money being thrown into this sector? Do you think that most people share your view that, you know, we're playing a really long game here? Or do you think that we are gonna see short-term, sort of seismic, shifts still to come, while also expecting a long-term horizon for seeing the true value of the technology? [ERIC HORVITZ] Yeah. Thanks for your really pressing question. I think we're going to be surprised by how fast AI moves in certain ways, and also surprised by how slowly it moves in certain ways. And it's gonna depend on many, many factors. I do think that overall, things will move faster than steam and electricity did. However, let me just say that on the front of dynamism and transformation, unlike steam and electricity, we're dealing with a fundamental substance that's changing under our feet as it evolves as well. So what would I say to people who are excited about investing? I'd say go for it. There'll be some disappointments and some big, big wins. And what I hope doesn't happen is some sort of premature retraction and disappointment of the form that's happened in the past in other industries, and we've all seen the Gartner Hype Cycle and the curves and so on. And the question is, you know, might there be enough embers in the fire right now, even if it cools a bit, to keep things moving, to get to actual delivery across many fronts when it comes to value. I am intrigued and excited about the public level of interest. It's almost like for years we've heard about the possibility of AI, and people sort of haven't really known what it is, and all of a sudden there's a version of it that's easy to interpret sitting in their pockets. So you can imagine why there's excitement, along with early deployments that show things like: wow, we can do transcription in medicine and save on administrative time, and we can draft outgoing mail in the patient inbox, and that's like a big pain point for physicians. And, oh my gosh, we're going a step at a time and we're seeing some really great applications.
We've seen materials science applications. We've seen incredible protein design work going on. So, I think there's enough heat to keep appropriate levels of investment moving right now. [SARAH SOULE] Great. Well, we are at time. I will say, I just had my physician use one of those, and then I was reading the notes and they said that I presented as confused and delirious. I immediately wrote her, and it was actually flagged in red, and she said, "Oh my gosh, I'm so sorry." So, do check. [ERIC HORVITZ] Well, we-- [SARAH SOULE] Somebody was hallucinating, and it wasn't me. [ERIC HORVITZ] We won't make any comments about what the doctor really thought.

### [55:00](https://www.youtube.com/watch?v=aWqfH0aSGKI&t=3300s) Segment 12 (55:00 - 55:00)

[SARAH SOULE] Exactly. Well, please join me in thanking Eric for joining us here today. (applause and cheering) (upbeat electronic music)

---
*Source: https://ekstraktznaniy.ru/video/24756*