Aicha Evans: Your self-driving robotaxi is almost here | TED
11:02

TED · 01.02.2022
Video description
We've been hearing about self-driving cars for years, but autonomous vehicle entrepreneur Aicha Evans thinks we need to dream more daringly. In this exciting talk, she introduces us to robotaxis: fully autonomous, eco-friendly shuttles that would take you from place to place and take up less space on the streets than personal cars. Learn how this new technology works -- and what a future where we hail robotaxis would look like.

Table of contents (11 segments)

  1. 0:00 Intro
  2. 1:23 Transportation and technology
  3. 3:04 Computer vision
  4. 4:04 Camera systems
  5. 4:40 The reality
  6. 5:26 Sensors
  7. 6:17 Confidence
  8. 7:06 Scenarios
  9. 8:07 Human in the loop
  10. 8:52 Vision systems
  11. 9:40 Conclusion
0:00

Intro

I’m Aicha Evans, I am from Senegal, West Africa, and I fell in love with technology, science and engineering at a very young age. Three things happened. First, I was studying in Paris and, starting at seven years old, flew back and forth between Dakar, Senegal and Paris as an unaccompanied minor. So it wasn't just about the travel. It was really about a portal to knowledge, different environments and adapting. The second thing that happened was that every time I was at home in Senegal, I wanted to talk to my friends in Paris. My dad got tired of the long-distance bills, so he put a little lock on the phone -- the rotary phone. I said, OK, no problem, hacked it, and he kept getting the bills. Sorry again, Dad, if you’re watching this someday. And then, obviously, the internet was also emerging. So what really happened was that I saw technology as something that shapes your experiences and how you understand the world, and I wanted to be part of it. And for me, the common thread is that physical and virtual transportation -- because that’s really what that rotary phone was for me -- are at the center of the innovation flywheel.
1:23

Transportation and technology

Now, fast-forward. I’m here today, I’m part of a movement and an industry that is working on bringing transportation and technology together. Huh. It’s not just about your commutes. It’s really about changing everything in terms of how we move people, goods and services, eventually. That transformation involves robotaxis. Driverless cars again, really? Yeah, yeah, I’ve heard it before. And by the way, they are always coming the next decade, and oh, by the way, there’s an alphabet soup of companies working on it and we can’t even remember who’s who and who’s doing what. Yeah? Audience: Yeah. AE: Yeah, OK, well, this is not about personal, self-driving cars. Sorry to disappoint you. This is really about a few things. First of all, personally and individually owned cars are a wasteful expense, and they contribute to, basically, a lot of pollution and also traffic in urban areas. Second of all, there’s this notion of self-driving shuttles, but frankly, they are optimized for many. They can’t take you specifically from point A to point B. OK, now we have -- hm, how am I going to say this -- the so-called “personal, self-driving” cars of today. Well, the reality is that those cars still require a human behind the wheel. A safety driver. Make no mistake about it. I own one of those, and when I’m in it, I am a safety driver.
3:04

Computer vision

So the question now becomes: What do we do with this? Well, we think that robotaxis, first of all, will take you specifically from point A to point B. Second of all, when you're not using them, somebody else will be. And they are being tested today. When I say that we’re on the cusp of finally delivering that vision, there's actually reason to believe it. At the core of self-driving technology is computer vision. Computer vision is a real-time digital representation of the world and the interactions within it. It has benefited from leaps and bounds of advancement thanks to computing, sensors, machine learning and software innovation. At the core of computer vision are camera systems.
4:04

Camera systems

Cameras basically help you see agents such as cars -- their locations and their actions -- and pedestrians -- their locations, their actions and their gestures. In addition, there have also been a lot of advancements. One example is that our vehicle can see the skeleton framework of a pedestrian to show you their direction of travel, and also to give you details, like: Are you dealing with a construction worker in a construction zone, or are you dealing with a pedestrian who is probably distracted because they are looking at their phone?
4:40

The reality

Now the reality, though -- and this is where it gets interesting -- is that the camera and the algorithms that help us really cannot yet match the human brain’s ability to understand and interpret the environment. They just can’t. Even though they provide you really high-resolution imaging that really gives you continuous coverage, that doesn’t get fatigued, impaired or, you know, drunk or anything like that, at the end of the day, there are still things that they can’t see and they can’t measure. So if we want autonomous-driving robotaxis soon, we have to supplement cameras.
5:26

Sensors

Let me walk through some examples. Radar gives you the direction of travel and measures an agent’s movement to within centimeters per second. Lidar gives you objects and shapes in the real world using depth perception, as well as long range and the all-important night vision. And let me tell you about this, because this is important to me personally and to people who look like me. Then you also have long-wave infrared, where you are able to see agents that emit heat, such as animals and humans. Again, especially at night, that’s super important. Now, every one of these sensors is very powerful by itself, but when you put them together is when the magic happens.
6:17

Confidence

If you look at this vehicle, for example, you have these multiple sensor modalities at all four top corners of the vehicle that basically provide you a 360-degree field of vision, continuously, in a redundant manner, so that we don't miss anything. And this is that same thing with all of the different outputs fused together. Looking at what we see and how we are able to process the data, then learn, then continue to improve our driving, is what tells us that we have confidence, that this is the right approach and that this time it’s actually coming. Now, this is not, by the way, a brand-new concept, OK? Humans have been using vision systems to assist them for a long time.
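The redundancy she describes -- several sensor modalities observing the same agent, with agreement across them raising overall confidence -- can be sketched in a few lines. This is a minimal illustration, not Zoox's actual pipeline: the `SensorReading` type, the confidence-weighted averaging and the redundancy formula are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor: str        # "camera", "radar", "lidar", "lwir"
    position: tuple    # (x, y) in metres, vehicle frame
    confidence: float  # 0..1, per-sensor detection confidence

def fuse_detections(readings):
    """Fuse overlapping detections of one agent from several sensors.

    Positions are averaged, weighted by each sensor's confidence;
    the fused confidence grows with the number of independent
    modalities that agree (the "redundant manner" in the talk).
    """
    total = sum(r.confidence for r in readings)
    x = sum(r.position[0] * r.confidence for r in readings) / total
    y = sum(r.position[1] * r.confidence for r in readings) / total
    # Each extra agreeing modality shrinks the residual doubt.
    fused_conf = 1.0 - (1.0 - max(r.confidence for r in readings)) ** len(readings)
    return (x, y), fused_conf

# Three sensors report roughly the same car ahead of the vehicle.
readings = [
    SensorReading("camera", (12.1, 3.0), 0.7),
    SensorReading("radar",  (12.3, 3.1), 0.6),
    SensorReading("lidar",  (12.0, 2.9), 0.8),
]
position, confidence = fuse_detections(readings)
```

The fused estimate lands between the individual readings, and the combined confidence exceeds any single sensor's -- which is the point of putting the modalities together.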
7:06

Scenarios

Let me back up the boat a little bit, because I know there’s a question that everybody’s asking, which is, “Hey, how are you going to deal with all the scenarios out there on the streets today?” Most of us are drivers, and it’s complicated out there. Well, the truth is that there will always be edge scenarios that sit at the boundary of our real-world testing or that are just too dangerous to test on real streets. That is the truth, and it will be the truth for a very long time. Human beings are pretty underrated in their abilities. So what we do is we use simulation. And with simulation, we’re able to construct millions of scenarios in a fabricated environment so that we can see how our software would react. And that’s the simulation footage. You can see we’re building the world, we’re putting in scenarios and we can add things, remove things and see how we would react.
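The simulation loop she outlines -- fabricate a scenario, run the driving software against it, inspect the reaction -- can be illustrated with a toy example. Everything here (the scenario fields, the `plan` policy, the expected reactions) is invented for illustration; a real simulator models physics, sensors and other road users in vastly more detail.

```python
def plan(scenario):
    """Toy driving policy: yield to pedestrians, slow in construction."""
    if scenario.get("pedestrian_crossing"):
        return "stop"
    if scenario.get("construction_zone"):
        return "slow"
    return "proceed"

def run_simulation(scenarios):
    """Run the policy against every fabricated scenario and record
    how the software would react -- including edge cases too
    dangerous to stage on real streets."""
    return {name: plan(s) for name, s in scenarios.items()}

scenarios = {
    "jaywalker":    {"pedestrian_crossing": True},
    "roadworks":    {"construction_zone": True},
    "clear_street": {},
}
results = run_simulation(scenarios)
```

Because scenarios are just data, you can "add things, remove things" -- flip a flag, rerun, and compare reactions across millions of variants.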
8:07

Human in the loop

In addition, we have what's called a human in the loop. This is very similar to aviation systems today. We don’t want the vehicle to get stuck, and there are rare times where it’s not going to know what to do. So we have a team of teleguidance operators that are sitting at a control center, and if the vehicle knows that it’s going to be stuck or it doesn’t know what to do, it asks for guidance and help and it receives it remotely and then it proceeds. Now, none of these really are new concepts, as I alluded to earlier. Vision systems have been assisting humans for a long time, especially with things that are not visible to the naked eye.
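The fallback she describes works roughly like this sketch: when the vehicle's confidence in its own plan drops below some threshold, it requests guidance from a remote teleguidance operator and then proceeds. The threshold, function names and operator response here are hypothetical, not Zoox's interface.

```python
GUIDANCE_THRESHOLD = 0.6  # assumed cutoff below which the vehicle asks for help

def remote_operator(situation):
    """Stand-in for a human teleguidance operator at a control center,
    who sends back a maneuver the vehicle then executes itself."""
    return "route_around_" + situation

def next_action(situation, confidence, autonomous_plan):
    """Proceed autonomously when confident; otherwise ask for remote
    guidance instead of getting stuck."""
    if confidence >= GUIDANCE_THRESHOLD:
        return autonomous_plan
    return remote_operator(situation)
```

A confident vehicle keeps its own plan, while a stuck one gets an operator-supplied maneuver -- mirroring how a pilot defers to air traffic control in rare, ambiguous situations.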
8:52

Vision systems

So... microscopes, right? We’ve been studying microbes and cells for a long time. Telescopes: we’ve been studying and detecting galaxies millions of light-years away for a long time. And both of these have caused us, for example, to transform industries like medicine, farming, astrophysics and much more. So when we talk about computer vision, when it started, it was really a thought experiment to see if we could replicate what humans see using cameras. It has now graduated with sensors, computers, AI and software innovation to be about surpassing what humans can see and perceive.
9:40

Conclusion

We’ve made a lot of progress in this field, but at the end of the day, we have a lot more to do. And with an autonomous robotaxi, you want it to be safe and reliable every single time, which requires rigorous testing and optimization. And when that happens and we reach that state, we will wonder how we ever accepted or tolerated 94 percent of crashes being caused by human [error]. So with computer vision, we have the opportunity to move from problem-solving to problem-preventing. And I truly, truly believe that the next generation of scientists and technologists -- in, yes, Silicon Valley, but also in Paris, in Senegal, West Africa and all over the world -- will be exposed to computer vision applied broadly. And with that, all industries will be transformed, and we will experience the world in a different way. I hope you can join me in agreeing that this is a gift we almost owe the next generation that is coming, because there are a lot of things that computer vision will help us solve. Thank you. (Applause)
