Moving from single-purpose systems that can recognize patterns in data to general-purpose intelligent systems with a deeper understanding of the world will enable us to tackle some of the greatest problems humanity faces. For example, we'll be able to diagnose more diseases; we'll be able to engineer better medicines by infusing these models with knowledge of chemistry and physics; we'll be able to advance educational systems with more individualized tutoring that helps people learn in new and better ways; and we'll be able to tackle really complicated issues like climate change, and perhaps the engineering of clean energy solutions. All of these systems are going to require the multidisciplinary expertise of people all over the world: connecting AI with whatever field you are in, in order to make progress. I've seen a lot of advances in computing over the past decades, and how computing has really helped millions of people better understand the world around them. AI today has the potential to help billions of people. We truly live in exciting times. Thank you.

(Applause)

Chris Anderson: Thank you so much. I want to follow up on a couple of things. This is what I heard: most people's traditional picture of AI is that computers recognize a pattern of information, and with a bit of machine learning they can get really good at that, better than humans. What you're saying is that those patterns are no longer the atoms AI is working with; it's much richer, layered concepts that can include all manner of things that go to make up a leopard, for example. So what could that lead to? Give me an example: when that AI is working, what do you picture happening in the world in the next five or ten years that excites you?

Jeff Dean: I think the grand challenge in AI is how you generalize from a set of tasks you already know how to do to new tasks, as easily and effortlessly as possible.
And the current approach of training separate models for everything means you need lots of data about each particular problem, because you're effectively trying to learn everything about the world and that problem from nothing. But if you can build systems that are already infused with how to do thousands or millions of tasks, then you can effectively teach them to do a new thing with relatively few examples. So I think that's the real hope: you could have a system where you just give it five examples of something you care about, and it learns to do that new task.

CA: So you can do a form of self-supervised learning that is based on remarkably little seeding.

JD: Yeah, as opposed to needing 10,000 or 100,000 examples to figure everything in the world out.

CA: Aren't there kind of terrifying unintended consequences possible from that?

JD: I think it depends on how you apply these systems. It's very clear that AI can be a powerful force for good, but if you apply it in ways that are not so great, it can have negative consequences. So I think that's why it's important to have a set of principles by which you look at potential uses of AI, and to be really careful and thoughtful about how you consider applications.

CA: One of the things people worry most about is that if AI is so good at learning from the world as it is, it's going to carry forward into the future aspects of the world as it is that actually aren't right, right now. And there's obviously been a huge controversy about that recently at Google. On some of those principles of AI development, you've been challenged that you're not actually holding to them. I'm not really interested in comments on a specific case, but... are you really committed? How do we know that you are committed to these principles? Is that just PR, or is it real, at the heart of your day-to-day?

JD: No, that is absolutely real.
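The pretrain-then-adapt idea Dean describes above can be sketched in miniature. Everything here is invented for illustration: the "pretrained" feature function stands in for a representation learned across many tasks, and the five-example task is a toy; this is not Google's actual system.

```python
# Toy sketch of few-shot adaptation: a frozen representation (the
# stand-in for a model pretrained on many tasks) plus a tiny head
# fitted from just five labeled examples of a brand-new task.

def pretrained_features(x):
    # Stand-in for features learned across thousands of tasks:
    # maps a raw input (a number) to a few derived features.
    return [x, x * x, 1.0]

def train_head(examples, lr=0.01, steps=2000):
    # Fit a linear head on top of the frozen features with plain
    # stochastic gradient descent on squared error.
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        for x, y in examples:
            f = pretrained_features(x)
            pred = sum(wi * fi for wi, fi in zip(w, f))
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

def predict(w, x):
    f = pretrained_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) > 0.5 else 0

# Five examples of a new task: "is the number negative?"
few_shot = [(-2, 1), (-1, 1), (0, 0), (1, 0), (2, 0)]
w = train_head(few_shot)
print(predict(w, -3), predict(w, 3))  # generalizes beyond the 5 examples
```

The point of the sketch is the division of labor: almost all the knowledge lives in the shared representation, so the per-task learning problem becomes small enough that a handful of examples suffices.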
Like, we have literally hundreds of people working on many of these related research issues, because many of them are research topics in their own right. How do you take data from the real world, the world as it is, not as we would like it to be, and then use it to train a machine-learning model, adapting or augmenting the data so that the model better reflects the values we want the system to have, not the values it sees in the world?

CA: But you work for Google; Google is funding the research. How do we know that the main values this AI will build are for the world, and not, for example, to maximize the profitability of an ad model? When you know everything there is to know about human attention, you're going to know so much about the little wriggly, weird, dark parts of us. In your group, are there rules about how you hold a kind of church-state wall against a commercial push, "You must do it for this purpose," so that you can inspire your engineers and so forth to do this for the world, for all of us?

JD: Yeah, our research group does collaborate with a number of groups across Google, including the Ads group, the Search group, the Maps group, so we do have some collaboration, but we also do a lot of basic research that we publish openly. We published more than 1,000 papers last year on different topics, including the ones you discussed: fairness, interpretability of machine-learning models, things that are super important. And we need to advance the state of the art there to make sure these models are developed safely and responsibly.

CA: It feels like we're at a time when people are concerned about the power of the big tech companies, and if there was ever a moment to really show the world that this is being done to make a better future, it's now. That's actually key to Google's future, as well as to all of ours.

JD: Indeed.
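One common technique behind the data adaptation Dean mentions, so a model does not simply inherit the skew of the world as collected, is example reweighting. This is a minimal sketch under invented assumptions (the groups and counts are made up; real fairness interventions are far more involved):

```python
from collections import Counter

def reweight(examples):
    # Give each group equal total weight, regardless of how often it
    # appears in the raw data, so training reflects the balance we
    # want rather than the skew of the data as collected.
    counts = Counter(group for group, _ in examples)
    n_groups = len(counts)
    total = len(examples)
    return [(group, label, total / (n_groups * counts[group]))
            for group, label in examples]

# Invented skewed data: group "a" appears four times as often as "b".
data = [("a", 1)] * 4 + [("b", 0)]
weighted = reweight(data)
for group in ("a", "b"):
    print(group, sum(w for g, _, w in weighted if g == group))
# both groups end up with total weight 2.5
```

A model trained with these per-example weights sees both groups as equally important, even though the raw data over-represents one of them.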
CA: It's very good to hear you come and say that, Jeff. Thank you so much for coming here to TED. JD: Thank you. (Applause)