I believe it will happen, but my big question is when. I think it's really important to be prepared for the reality. There are a lot of people who say, "Hey, this makes sense. I want this. We should have this soon." But remember, there are a lot of cases where people have talked that way in the past. Fusion energy, nuclear fusion: the technology is pretty obvious, but you have to contain this plasma. That seems like a technical issue. We can figure that out. 50 years later, we're still working on it. So there are problems that are extremely difficult, and they take much longer than anyone expects, and it seems like robotics is like that. I want to be a voice to say, hey, it might not happen, and let's just think about that and be a little bit realistic, because I know a lot of people are thinking that it's inevitable. Before we started chatting, you sent over an article that I think is very pertinent to set the stage for probably most of the conversation we're having today. It was an article in the New York Times, and in it Rodney Brooks has, quote unquote, said the field has lost its way. For people that aren't familiar with Rodney Brooks, he's the former director of MIT's Computer Science and Artificial Intelligence Laboratory. He's the Roomba inventor, and he ran that entire company and product line, so for him to come out and say something like this is kind of a big deal, and I just really want to capture your take: what is he talking about, the field has lost its way? — Well, he's a very respected individual. He's a good friend of mine, and I agree with him very much, and I think he's provocative. He's put it in his own words. I sent that to you because I think it's very relevant for us to start this conversation about what's real and what's hype — in robotics. And I have to be careful about the word hype, but I want to say there are, let's call it, inflated expectations — that are out there, and I understand where they come from. I think people are excited about technology. I am too. We all grew up with science fiction and we love it and we love new things, and there have been some breakthroughs. I mean, there's no doubt that the advances in artificial intelligence, in particular deep learning and then generative AI with the transformer model, have been transformative. In the field, AI systems are doing things that no one thought would be possible by now. — So I will be the first to admit they're capable of creativity. They're immensely valuable. But people then take the next logical step and say, "Okay, these systems have solved language, so therefore they'll solve robotics, too." — And that is where I have a lot of concerns. We could get into the details, but Rod and I agree that it is not at all obvious that the advances in language AI will extend to robotics. What would you say is the number one thing that you're seeing that's just grossly out of touch with reality when it comes to robotics not being as far along? The talking point is, in 5 years we're going to have humanoid robots doing everything, right? So what are the big chunk pieces that people who aren't intimately familiar with the space are missing on that particular topic? Okay.
So, let me tell you first of all where some of the advances have been made. One of them is in quadrupeds, that's walking dogs basically, and bipeds, that's walking machines, and navigation, what I would call mobility. — So the ability to get around with robots with legs has made immense progress. That's been very exciting, and there's no doubt about it. Those machines are capable of doing back flips, as you know, side flips, parkour, all kinds of things that I certainly can't do. There have also been huge advances in drones. — Over the past decade, we've seen drone technology take off from something that was very experimental, and it's been a number of advances that have made that possible. In both cases, a lot of it has to do with motors and the hardware, but also advances in simulation and the ability, for example, for drones to stabilize themselves and then to control very accurately the motors on the four or six rotors that are there. And the same is true for robots that have legs, quadrupeds or bipeds. — So, these are big, undeniable, and major advances. And if you just look at the field, you say, "Okay, all this is coming, and now the next thing is we're going to have home robots taking care of us, and this is around the corner," according to Elon Musk, right? And I'm sure I'm going to get some pushback from some of your listeners, who are going to say I don't know what I'm talking about. Okay, I've had
Segment 2 (05:00 - 10:00)
that happen from a number of very confident, quote unquote, experts from Silicon Valley. But I've been working in this field for 45 years, and I've studied very closely and understand where the gaps remain, in particular for manipulation. — Mhm. — And manipulation is being able to pick things up — all kinds of things that just happen to be in your environment — and then being able to manipulate them to do things. That skill is very nuanced and tricky — and it's not clear that the current methods for doing AI are going to get us there. — I've heard Elon in particular say in interviews that mimicking the hand and the tendons and being able to have that tactile ability is extremely difficult; that's the way he has put it. But I suspect that when it really comes down to it, what you're seeing is a lot of demos online, like a video where somebody picks up a pencil and the robot did it. What's actually happening behind the scenes, whether that was a programmed publicity stunt or something that the robot can just do quite well, there seems to be a large gap there. So talk to us about where you see that gap and what the reality is, in your humble opinion. — Okay. So, and this is understandable. Again, I don't want to say people are naive. I get where they're coming from. They see something and it looks humanlike, and so they attribute human-like qualities and skills to it. I understand that. And by the way, when Elon says the hands are hard: people are designing hands that look very much like human hands, that is, they have 22 degrees of freedom, and they can move all these joints independently, very quickly, and they look almost identical to human hands. So we can reproduce that. In fact, there are like a hundred different hands being produced by different companies in China right now. Okay. So the advances in the hand itself are very sophisticated, but the control of the hands is where the challenge is. You can have this hand doing this, but then get it to actually tie your shoelace, that is where the challenge is. And this is because there are so many nuances in the interactions that we have with these fingers with the environment. We are sensing the environment. We are exerting forces on the environment. And this is very subtle and very nuanced. And we perceive this through a variety of techniques. We have something like 15,000 sensors in our hand. In every hand. Yeah. I know. It's remarkable. We don't even think about it because it's subconscious. — Yeah. — But then we also have sensors in our joints. Every one of our joints. So we are able to perceive very subtle forces — slip, and in particular one very nuanced thing, which is deformation. So if you look at your fingertips, they've evolved in a really interesting way; those pads are extremely helpful. If you put on, let's say, thimbles on your fingers, right? Like you're sewing. — That makes it much more difficult to do anything, — right? Or just heavy gloves as well, right? But we can do these things very subtly. We have learned this ability to interact with the forces of objects, where the objects are constantly being moved and deformed. So if you think of the shoelace, right, the object is being deformed. The fingertip is being deformed as well.
— This mutual deformation is something that's really nuanced and subtle, and we don't even know how to simulate it accurately. So we can't even simulate the forces and torques and deformations that are occurring, and then we don't have the sensing capabilities to perceive these nuances, like how you can feel a shoelace if it's slipping a little bit out of your fingertip. — Mhm. — No robot can do that. What happens is that when you now have this hand and you actually try to execute something, sometimes it works, but a lot of times it doesn't work. — And now you have the issue of reliability. Yes, — that's what we're seeing. And by the way, you can see robots all day long picking up stuff off a table and moving it somewhere. That's actually not so difficult. Especially a stuffed animal, by the way; stuffed animals are very easy because you almost can't go wrong. You just put your gripper anywhere near them and close it, and you'll pick that thing up. — Okay, so those are sitting ducks, right? That's super easy, low-hanging fruit, let's call it. And that makes it very easy to pick up and move things. But when you want to start doing things like inserting things, like repairing a stuffed animal by opening it up, pulling out the stuffing, or sewing it back up, this is totally different and much more difficult. Yeah. Your example of a shoelace is really profound
Segment 3 (10:00 - 15:00)
because until you take a step back and just think, if I had to design or build a robot to tie a shoelace, I can't even imagine how incredibly difficult something like that would be, because it is such a complex task. And I've never even thought about how difficult something like that is. — Well, here's the thing: the shoelace we all do. We learn it when we're young, and we kind of do it without even thinking about it. It's subconscious, right? I can be on the phone tying my shoelace. Don't even think. But think about this one. I don't know about you, but do you know how to tie a bow tie? — Uh, a tie but not a bow tie. Yeah, — bow tie. Okay, because I thought you might, because you seem like a fashionable guy. I have tried it. It's very tricky. — Yeah, — it's very tricky business and it's subtle. You have to be able to feel and pull in all these different directions. — Yeah, — forget it. There's no robot that's going to be able to do that for a long time. I would love to have it happen, because that would be something: I would love to have a robot tie my bow tie. And here's another one that's very simple: just buttoning your shirt. — Yeah, — it's actually a little tricky for humans, if you think about it, how you have to fiddle with it a little bit to get a button on and off, — especially a small button. — So that's way beyond robotics. You'll never see a demo of a robot buttoning up a shirt. We're actually working on it in my lab, but it's really hard. Wow. — Yeah. These are things that you just really take for granted. Now, when you get into solving that problem, it seems like, and you mentioned this earlier, that it's almost a sensing issue, that we need a lot of development on whatever type of sensors you have in the fingertips, or whatever you're using for the manipulation. Is that the biggest hurdle right now, just replicating how our fingertips can have so much sensing capability? — Okay, so that's one. But here's something that's somewhat encouraging, for me at least, which is that if you look at the realm of robot surgery — and by the way, there are a lot of misconceptions about that. I give talks where people say, "Well, a robot took out my nephew's appendix." — And I'll say, "That was not a robot. That was a surgeon using a robot as a tool." Yeah. — To do that operation. Right. So they call it a robot, but it's really a telerobot, or more literally a puppet. — A very important and very useful and expensive puppet, but it's a puppet. — And so that's very important to understand. Surgeons can do remarkable tasks. They can sew up a wound. They can take out an appendix or a gallbladder with these tools. Now, they do not have tactile sensing. — Actually, in the very latest versions they've started to introduce some, but for many years they didn't. And surgeons are still able to do amazing things. — So this is evidence that maybe we don't need tactile sensing. It's just a hypothesis, but we have an existence proof that dexterous, complex manipulation with very complex deformable surfaces (and taking out an appendix, I mean, is harder than tying a bow tie) can be done without tactile sensing. Now, what's fascinating is the way surgeons seem to do it, which is by accommodating the lack of tactile sensing and using vision — their eyes. They have cameras in there, and they're watching what's happening, and they have a feedback loop based on vision.
So they can see very small deformations of the tissues, and they infer what's going on. This is remarkable, because this I think is the most exciting path to getting to manipulation: rather than trying to reproduce tactile sensing, which is extremely difficult for all kinds of reasons, but which I think is interesting and worth pursuing, there's another path, which is to understand the visual-tactile interactions. If we can do that, we might be able to get away with just using cameras. — Interesting. So this week, or last week, I'm sorry, I saw an article that was talking about the difference between Elon's approach, particularly on the hands, and what Figure AI is doing, where they put a camera right here in the palm of the hand, to your point, and Elon is refusing to put a camera in the hand. The person who posted this was saying this is akin to him not using lidar in the cars — because in the end it's going to come down to a cost thing, and he wants to force his team to figure it out without additional sensors, for all intents and purposes for manufacturing costs, and he's playing this longer game. — What are your thoughts on that? — Well, that's a brilliant point, Preston, and I'm really glad you made it. The analogy really works there. Elon is, in some sense, very confident. He's done amazing things. Understandably, he should be confident, but
Segment 4 (15:00 - 20:00)
— sometimes that can blind you. So in this case, his decision not to use lidar has really, I think, put a limitation on the Tesla driving systems. Lidar can be very helpful for filling in the edge cases, in certain conditions when cameras can be distorted or blinded by light flares, or especially in rain. — So lidar actually is a great addition there, and also the cost, I don't know, I think it will come down over time. Listen, I'm not in the car business, so I defer to his expertise there. But in the same way, you might remember when he first started Tesla, he wanted all the cars to be made in robotic factories — and he had a decree: we will have no humans, everything must be done by robots. And I remember engineers coming in from Tesla to my lab and saying, can you help us, we're trying to do this thing and we can't get it to work with a robot — and he was just unrelenting, and then finally he said, I was wrong. Yeah, — he was wrong. He said he was mistaken: humans are underrated. Do you remember that? — Yes. Yeah. — So that was really interesting, because it was one of the rare times he admitted it. — But it was a great example of the idea that you can't do everything with robots, and even if you will it (you know, he can will things into existence by demanding it), it doesn't always work that way. And so the lidar story is very analogous. And I think you're right about the cameras. Having cameras in the hand makes a lot of sense. It's not how humans or animals work, right? — Mhm. — They don't have eyes in their hands. But cameras are something we understand very well. We have very high quality cameras. They're very fast. They're very accurate. And they're really low in cost, comparatively. — So I'm for more cameras. Put a lot of cameras in there, because the other issue is when you walk, it's one thing to have a camera on the head. You can sort of see what's around you, right? Or drive. By the way, driving, I should have mentioned this earlier, but driving is much easier than manipulation, because in driving you're just trying to avoid objects, — avoid hitting anything. In manipulation, you must make contact with objects. You must manipulate them, right? So it's very, very different. To your point, this is really fascinating, — because when you talk about this idea of, if I'm holding a shoelace and the tip of my finger is indented, I can see the compression of that, I can feel it. If I'm tying my shoe, I'm relying on that sense of touch. But if you were going to try to build a sensor that can do that and you're hitting a roadblock, or can't find something that can provide that tactile feedback, I could look at a camera and say, "Okay, it went in by half a millimeter. Therefore, it's about this much pressure." And you can substitute that sensing capability through an image or a video, by being able to see it. So it is kind of interesting that — we have Figure going that path. — Well, okay, so let me add on to this. You just made a very nice, nuanced point. You said if you were just looking at your fingertip and you saw the shoelace pressing into it, then by looking at the shadow structures and so on, you could probably figure out whether it was — slipping away or firmly grasped. — Absolutely. That's what surgeons do. And they, by the way, work with surgical thread, which is really thin, and they have to use a needle.
It's very complicated, right? But they're doing a lot of this with their eyes, with their intuition. Now, it's not just a matter of putting cameras around, because that doesn't solve it alone. You now need to be able to understand that imagery, the video, and you need to interpret it — and that's also extremely difficult. — Yeah. — Because humans have this incredible ability, and we shouldn't underestimate it. It's just amazing what humans can do. — Yeah. From an inference standpoint, as far as, if I hold the shoelace this way, I can also kind of intuitively infer that if it was held 90° from that, it's going to have this same slipping sensation. And that's something that's really hard to train a robot on, versus humans can just kind of figure it out very easily. Is that what you mean by that? — That's what I mean. Yeah. And here's the thing. We don't have good language for describing this, right? Because it's all intuitive for us. If you ask me, tell me how to tie a shoelace, right? I'd be like, — you know, it's not easy, right? We don't have language for it. And that's part of the reason, by the way — this is the other issue, and I'll come to it — for the data gap, — the gap between the amount of data we have for language versus robots. — Maybe this is a good time to — Yeah, let's talk about that. — Okay. All right. Well, there's a way of quantifying all this. And this is something that I call the robot data gap. And it's the following: if you put together all of the data that was used to train language models, now it's vast, but it's hard to wrap your head around how much data that is. Well, my students and I were able to calculate
Segment 5 (20:00 - 25:00)
that if you actually look at it, and actually there's another researcher, Kevin Black, who's at Physical Intelligence, a very smart guy, he had the first insight about this and then we've been taking it a little further, but basically it's this: if you added up all the hours it would take an average human to read all the text that's available to train the language models, right, so it's all the books that are out there, it's all of Wikipedia, it's everything that's on the internet, if you add up all those tokens, if you will, and then figure out, well, a human can read at the average speed of 238 words per minute, right, you can do the math, and you end up with 100,000 years. — Okay, so you could sit down and read everything that's used, and 100,000 years later you'd be done. Okay. Now, we don't have such data for robot manipulation. — Oh, — it doesn't exist. It's not like we can just find it on the internet. The data is very different — there. We want to start with vision images and then end with control signals to the robot. — This doesn't exist. So we have to start and basically generate this data. But what we're up against, right, is that 100,000 years: we're 100,000 years behind the language models. — So again, I'm sort of exaggerating to make a point. Certainly there are a number of ways to accelerate that, and I think we can eventually get there. By the way, I'm not saying this will never happen. Please don't get me wrong on that. I believe it will happen, but my big question is when. I think it's really important to be prepared for the reality. There are a lot of people who say, "Hey, this makes sense. I want this. We should have this soon." But remember, there are a lot of cases where people have talked that way in the past. — Fusion energy, nuclear fusion, — right? Makes a lot of sense. The technology is pretty obvious, — but you have to contain this plasma. That seems like a technical issue. We can figure that out. — Well, 50 years later, we're still working on it. And it's hard. It's one of these very, very nuanced problems. Another one is curing cancer. When I was a kid, they used to say, "We're going to have a war on cancer, just like we got to the moon. In 10 years we'll solve cancer." — We haven't solved it. So there are problems that are extremely difficult, and they take much longer than anyone expects. — Yeah. — And it seems like robotics is like that. We don't know. And listen, I'd be the first to celebrate if I wake up and read that someone solved it, right? It could happen. — Yeah. — And then you'll look back on this podcast and say Goldberg was completely wrong. — No, it could totally happen. But I want to be a voice to say, hey, it might not happen. And let's just think about that and be a little bit realistic, because I know a lot of people are thinking that it's inevitable, it's going to happen, hopefully by next year according to Elon and many of his followers, but they have to be ready for that maybe not to happen. And I'm worried about a backlash, that people will say, hey, this whole robotics thing is hocus pocus, and they'll move out of this field in droves. I don't want to put words in your mouth, so correct me if I'm stating this wrong: I don't think you're saying it's not going to happen. You're just really suspect of the timeline that everybody seems to be on. — Yeah. That's it. Exactly.
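To make the back-of-the-envelope arithmetic above concrete, here is the 100,000-year figure recomputed in a few lines of Python. The 238 words-per-minute reading speed comes from the conversation; the total corpus size is an assumption chosen only to be in the rough ballpark of modern language-model training sets, not a figure from the interview.

```python
# Sketch of the "robot data gap" arithmetic described above.
# WORDS_IN_CORPUS is an assumption for illustration (~12.5 trillion
# words, roughly the scale of modern LLM training corpora);
# 238 wpm is the average reading speed quoted in the conversation.
WORDS_IN_CORPUS = 12.5e12
READING_SPEED_WPM = 238

minutes = WORDS_IN_CORPUS / READING_SPEED_WPM
years = minutes / (60 * 24 * 365)
print(f"~{years:,.0f} years of nonstop reading")  # ~100,000 years
```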
That's where I line up with Rod Brooks, for very similar reasons: we have experience. We've both been working in this field, for like 40 years in my case; he's been working slightly longer than me, 50, 60 years. But we have a lot of experience with trying to solve these problems. And they're much more nuanced than they seem on the surface, especially because a child can pick things up and manipulate them. — Yeah. Right. It seems obvious. So why can't robots? It's very counterintuitive. But when you work with these things and you really see their limitations, you start to understand that this is a very complex problem. Ken, if you were, say, a program manager looking at all the different swim lanes to get there, the hand seems to be on the critical path, if you will. Is there anything else that you would define as being on that critical path, or is the hand so far out there, as far as difficulty goes compared to everything else, that it's really the limiting factor? — Okay, great question. When you say hand, it's the manipulation ability. — Yeah. Right. By the way, I do have another thing to say here, another opinion, which is I think that we will get much more out of very simple grippers than we will out of hands. Again, if you look at surgery, the tools that surgeons use to perform an appendectomy are very simple grippers, like this — and they can do immensely complicated things. So I believe you don't need complex hands; I'm not saying that's the path to go. I believe you can do it with simple grippers. In fact, my company Ambi Robotics uses an even simpler gripper, which is a suction cup — and you can do incredible things with them. So it's not necessarily the hardware, it's the software. It's the control of this nuanced interaction that is very challenging. I think many of the other
Segment 6 (25:00 - 30:00)
aspects are addressable. — We have the ability to tell a robot, go pick up the orange jumper off the table. We can solve that now. Computer vision systems are good enough to know that a jumper is a sweater and there's an orange one, and it will pull that out. No problem. — But being able to actually pick it up and maybe put it on you and then button it up for you, that's where it's difficult. — Yeah. Maybe what we see in the interim is humanoid robots that go to market with simplified hands, but where the range of activities or things that they can actually perform is very limited relative to a real human being in there doing it. I don't know if that's how they go to market or not. — I want to talk a little bit more about your company, Ambi. This is really fascinating. So you guys have gone to market primarily focusing on logistics and warehouse-type activities for robotics. Is that correct? — Correct. This started about 7 years ago. We had a breakthrough in robot grasping, and that was simply the ability to pick things out of a bin. — Okay. — So it's not manipulating, not doing surgery, just picking things out of a bin. That is a very old problem. It's been known as the bin picking problem, and people have been looking at it for decades. Mhm. — But we made an advance, and this was especially the work of Jeff Mahler, who was a PhD student of mine and the lead researcher on this, and we can go into more detail on the technical aspects, but the system was called Dex-Net — dexterity network — and it was based on collecting data, lots of examples. It was analogous in many ways to ImageNet, — which was a breakthrough for computer vision. So we did something similar. We synthesized this data set. We added noise in a very specific way. And the system started working remarkably well. It could pick up almost any object that you put into a bin. It would just pull it out. And you could throw in a whole pile of objects. We were digging around in our garages and closets and throwing everything we could into it, and it was consistently able to pull these things out. And so that was a very exciting moment for us. We got some publicity, it was in the New York Times and other places, and then we were approached by a number of companies, and we decided to form our own company. — That's awesome. — I'm curious where you've seen just good old-fashioned engineering matter more than additional data or larger models, and then, to the converse of that, when did data actually really surprise you? — Okay, good. Well, that was a case where data really did surprise us. We were able to generate over 6 million example grasps, because we had collected 10,000 object models and then we could generate grasps on those, and we trained a network to learn essentially where to grasp an object. So that was a data-driven approach. But I will tell you that when you take that and you have to move it into an experimental system or into a commercial system, then you need a huge amount of what I call good old-fashioned engineering. And this is where you have to really sweat the details. You have to make sure that the sensors are calibrated correctly, that your robot arms are calibrated and accurate. You have to be able to do the computation to move the arms very quickly. You have to control the surfaces of the grippers, the suction cups, the lighting, a myriad of details like that.
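As a rough illustration of the data-synthesis idea just described (sample grasp candidates on each object model, inject noise, label each with a quality score, then train on the results), here is a toy sketch. Everything in it is a placeholder: the real Dex-Net pipeline scored grasps on 3D meshes with physics-based quality metrics, whereas this reduces each object to a single point purely to show the shape of the data-generation loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_grasp_dataset(object_models, grasps_per_object=600):
    """Toy sketch of Dex-Net-style data synthesis. Each 'object model'
    is reduced to a 3D point here; the real system sampled grasps on
    meshes and scored them with physics-based quality metrics."""
    examples = []
    for center in object_models:
        for _ in range(grasps_per_object):
            nominal = center + rng.uniform(-0.05, 0.05, size=3)  # candidate grasp pose
            noisy = nominal + rng.normal(0.0, 0.005, size=3)     # injected sensor/pose noise
            # Placeholder quality metric: grasps near the object center score higher.
            quality = float(np.exp(-40 * np.linalg.norm(noisy - center)))
            examples.append((noisy, quality > 0.5))              # binary success label
    return examples

# 10,000 object models x 600 grasps each ~= 6 million labeled examples,
# the scale mentioned in the conversation.
models = [rng.uniform(0, 1, size=3) for _ in range(10)]  # small demo, not 10,000
data = synthesize_grasp_dataset(models)
```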
We had a little scale underneath the system that would recognize when an object was removed from the bin, right? A digital scale. Just another piece of engineering that had to go in there. All of that was just in our demonstration system in the lab, the experimental system. But then we moved into Ambi Robotics, and by the way, I want to give credit also to the other students who were involved. Matt was another computer scientist working closely with Jeff, and then I also had two other PhD students from mechanical engineering, Steve McKinley and David Gealy. All four of these guys were extremely brilliant engineering students. They were very good friends, they remain good friends, they all worked very closely and spent a huge amount of time camping and hanging out together too, but they were perfectly complementary, because we had the computing skills — and the mechanical guys knew how to design machines that could work reliably over a long period of time. And that's when we moved into building the Ambi Sort system — which sorts packages for e-commerce. — We didn't go in with this plan, but what we saw very quickly was that e-commerce was growing and there's a huge demand for sorting packages; it's very challenging to get packages out to the customer fast. — Mhm. — So we started using that technique,
Segment 7 (30:00 - 35:00)
Dex-Net. We evolved it, commercialized it, and then we could make it work very fast. And then all kinds of other elements had to come in. We had a gantry system: pick an object out of a bin, it had to be scanned for its zip code, figure out which bin it should go into, then put it into the right bin, avoid jamming the whole time, and make the system reliable, safe, and easy to use. Right? All this is what I call good old-fashioned engineering. Yeah. And I've become a big advocate for this, because after all, this is a body of research and ideas and insights that has been developed over 400, 500 years in engineering, and it's still what we teach — at Berkeley and all the major universities. We teach the engineering principles. And my point is, let's not forget about those. Those are still extremely valuable for engineering and for robots and getting them to actually work in practice. — Yeah. — And anyone working in robotics, I think, will acknowledge that. Although the public perception is, oh, we're just using AI now and that's solving everything. It's not. It's solving certain little pieces of it. And as I said, there are certain pieces that are very, very difficult and still remain very difficult. And this comes back to what I was saying earlier, Preston, about my fear, which is that there's so much expectation around humanoids right now that if companies can't deliver on that ability, — then there might be a big backlash, and that's going to hurt companies like Ambi, who are not trying to do that. Ambi is trying to solve a real practical problem and do it efficiently and cost-effectively, something that's very valuable for everyone who shops at Amazon or online companies, right? We've sorted 100 million packages — so far. — And I'm very proud of that, because these machines, as we're talking, are out there sorting packages, and they're very reliable. They're not featured in the videos; there are no humanoids doing this, — by the way. Although some have said, you know, we'll have a humanoid doing that. But a humanoid with hands, it's going to be a long time before that's — even close to the efficiency of the systems that we have with suction cups. — Yeah. After shipping robots that work every day in the warehouse, what's one belief that you held earlier in your academic career that you've had to revise based on that? — Lots. I would tell you, one of the things that is very interesting is that you think, okay, I have this technology, that's the breakthrough that really solves an important problem, therefore I can rush out into the commercial world and build a company around it. Well, it turns out that technology is only a very small core part. It enables, but then there are all these things that have to come around it that are equally, if not more, important. And actually, when you go to customers and you say, "Hey, we have this new AI thing," they're like, "Wait a second. I don't care about that. How much money is it going to save me?" — That's all they care about. And that's — that's business. Both my grandfathers were entrepreneurs, and so was my father. I grew up in these kinds of environments, and it's tough. Mhm. — One grandfather was very successful in electronics, and my other grandfather was in the housing business, building homes. But my father struggled — in his business.
He was a metallurgist, and he had a company doing chrome plating, and it was very difficult. He was buffeted by things way beyond his control; the recession of the '70s actually hurt his business very badly. So he struggled. There are a lot of factors, and it has to do with competition and timing. There are many factors. What I would also say about industry, and this is going to come back to the data aspect, is that you can do things in a lab where you think you've really explored the full range of a problem. So let me give you this example. We were addressing the bin picking problem, remember, — and we were dropping all kinds of objects in there. In fact, when people would come visit the lab, I'd say, well, you have your car keys, drop them in here; if the robot picks them out, we keep the car. And it would always do that. It was no problem picking out someone's car keys, right? And we tried it on all kinds of things: toys, 3D-printed weirdly shaped objects, all kinds of things we could think of. We tried to basically consider everything. And we were just trying to push the envelope, right? — Mhm. — Well, the envelope was the key word, because it turned out that one thing we never really experimented with was bags. — Oh. — And bags are extremely common in shipping. You know this if you receive packages from e-commerce, from Amazon or others; you get bags of all kinds of forms. Now, bags are often plastic or paper, but the issue with bags is they're loose — so they have objects in them, but there's a lot of slack, and they tend to fold in interesting ways. — So we weren't really testing those in the lab. — That wasn't something that we would have thought about too much, but it's so much more common in real shipping. So my point is, we had to adapt all of our
Segment 8 (35:00 - 40:00)
systems to the reality of the consumer market — which in this case is bags — and that was something we didn't have a lot of data on. So we had to adapt our systems to work on data from real bags, and real bags are very difficult to even simulate and model, because they fold. And by the way, the folding matters, because if you go to pick up with a suction cup right on top of a fold, as you lift it, the fold will unfold and you'll lose the suction and drop the object, right? So we started collecting data as we started putting these robots to work. As our customers were putting these systems into production at Ambi, we also had an agreement that we would maintain these systems at very high performance levels, because we were constantly monitoring them. So we have a dashboard at the central headquarters in Berkeley where the team keeps an eye on every machine that's in operation out there. And what we do is we get data on every single pick operation: what happens, how long it takes, whether it dropped the object, whether it was classified correctly, all kinds of things like that, right? — And we use that so that we can immediately tell when the performance, let's say the picks-per-hour performance, that's how it's often measured, drops. We can spot that early, and we call the company and we say, "What's going on? Did something change? Did the camera get knocked? Is the suction cup getting worn?" — And so we're constantly on top of it. Part of it is that that's a source of big pride for us. We're really customer focused, and we want to make sure our machines work completely reliably. — The amazing side effect of this is that we've been able to collect data from all these real systems in real environments over the last four years, and we now have about 22 years of robot data.
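The fleet-monitoring idea described here, watching picks per hour on every machine and flagging drops early, can be sketched in a few lines. This is a hypothetical illustration only: the window size, drop threshold, and baseline update rule are invented for the example and are not Ambi's actual values.

```python
from collections import deque

class PickRateMonitor:
    """Hypothetical sketch of per-machine fleet monitoring: track a
    rolling picks-per-hour rate and flag when it falls well below a
    slow-moving baseline (e.g., a worn suction cup or a knocked
    camera). All constants are illustrative assumptions."""

    def __init__(self, window=200, drop_fraction=0.8):
        self.times = deque(maxlen=window)   # recent pick timestamps, seconds
        self.baseline = None                # slow-moving picks-per-hour baseline
        self.drop_fraction = drop_fraction

    def record_pick(self, t_seconds):
        self.times.append(t_seconds)
        if len(self.times) < 2:
            return None
        hours = (self.times[-1] - self.times[0]) / 3600.0
        pph = (len(self.times) - 1) / hours if hours > 0 else 0.0
        if self.baseline is None:
            self.baseline = pph
        elif pph < self.drop_fraction * self.baseline:
            return f"ALERT: {pph:.0f} picks/hr vs baseline {self.baseline:.0f}"
        else:
            self.baseline = 0.99 * self.baseline + 0.01 * pph  # update slowly
        return None
```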
So — remember I talked about the 100,000
Segment 9 (40:00 - 45:00)
years. — Yeah. — We have 22 years, though, as a start. Okay. But it's real robot data. It's extremely valuable. It's high quality. — It's the gold standard for data. — And so we're now using that to refine our systems, — to make them better and higher performing, more reliable, but also allowing us to branch out into new, related types of products. We introduced a new product called AmbiStack that stacks boxes very efficiently, very densely, — and we sold out our first batch of these systems this year. — Amazing. On this idea of robot data, of covering this 100,000-year gap that you're talking about, — for a company that would be trying to overcome this, because the data just isn't there: are they just having to construct a bunch of physical, real-world setups? Going back to the hand example, would they have to have a bunch of physical hands and a bunch of physical objects to collect this, or is this something that you think we could simulate in a virtual environment to accelerate that, or kind of a combo of both? — Good. So for grasping, it turns out simulation works pretty well, because there — you just need to know the geometry of the environment fairly accurately, and the object and the gripper, and then you can model that fairly well. Now, grasping is just lifting an object off a table, okay, or out of a bin. — Mhm. — That's very different from tying the shoe that we talked about earlier. — There, it turns out, we can't simulate so well. As I mentioned, we don't know how to model and simulate the deformations, — the minute forces that are going on in the process of interacting with that object. — So that's a challenge. — This is a little nuanced, and I know that your audience might say, what is Goldberg talking about? He said this couldn't be solved; now he says it can be solved. Well, it depends. There are certain categories of problems that can be addressed. And I think that picking objects out of a bin is something we've made an enormous amount of progress on in the last 5 years. — So I'm very optimistic about that. We're getting faster, more reliable. That's the cutting edge of robotics, and it's real. — Mhm. — But then tying shoes and doing things around a home, or in a factory where you're actually trying to put together electronic parts or car bodies, or installing upholstery and wiring inside a car. These are extremely difficult, by the way, and even in Detroit, or anywhere in the world, there are still humans doing those jobs, because they're very hard. So those are hard to simulate, and I do think everything is pointing toward this: deformation is a key obstacle — to doing it. And I've talked with people who are physicists and experts in deformation, and they agree this is a very hard problem. We don't even understand the physics of friction and deformation very well. — Interesting. You've said your views on AI creativity have changed. Walk us through some of the timeline, what's changed, and your overall opinion today. — Okay. Well, on a very different note, I have been working as an artist in parallel with my work as a researcher and engineer. I like to say my day job is teaching at Berkeley and running a lab there. But I have another passion, which is making art. And I've worked on this for almost the same amount of time. I make installations and projects.
We did a project called the Telegarden, where we had a robot that was controlled by people over the internet. The robot could tend a garden, a living garden. We put this online in 1995, which was the very early days of the internet. And I'm very proud of that project, because it stayed online for 9 years. Huh. 24 hours a day, people could come in and explore this garden and plant seeds and water them. So it was very interesting: it was an artistic project, but it was also an engineering proof of concept, and it had to work reliably, and so it really pushed us. I sometimes say, people think doing engineering is hard; try art, it's really hard, — because you have to deal with the public, and they're going to interact and do all kinds of crazy things. So we had to really spend a lot of time designing that system. But I continue my interest in art, and I have a new show coming up. It's a joint project with my wife, Tiffany Shlain, who's an artist, and she and I are collaborating on an exhibition that's going to open in San Francisco on January 22nd. — Okay. All right. — So this is a big passion of mine, and it's using technology like AI and robots to ask questions about technology. Uh-huh. — And I'm very interested in this contrast between the digital and the natural world, — which seem very symmetric and similar, but there are very profound differences between them. — So that's what I think about, or I try
Segment 10 (45:00 - 50:00)
to express in my artwork. And so, your question about creativity. I've always said AI won't be creative; you can ask it questions, but it's not going to actually come up with original ideas. But I've actually shifted my view on that. — Yeah. And I give this example where I asked ChatGPT, in the early days, hey, give me a hundred uses for a guitar pick. And I just thought it would start repeating the same thing over and over again. And it did, you know, it started with things like a screwdriver, scraping ice off a windshield, things like that, which all made sense. But then it started listing these as fast as I could read them, or faster. And then it came up with one that stopped me: it said, a miniature sail for a toy boat. And when I saw that, I was like, "Oh my god, that is a genius idea." And I would not have thought of it. And immediately, when you see something that's original and creative like that, you spot it and you say, "Ah, why didn't I think of that?" — Those are those rare ideas. And AI is capable of that now. And so it's very — Yeah, it is exciting. — It's super exciting. So I'm not negative about AI at all. I love it. I use it. I advocate for it. My daughters, my wife, everyone uses it. And so I'm 100% for it. I do think it's going to help with robotics. But the question is, is it going to do everything that people are hoping? And that's where I hope that this conversation, Preston, will put things into context for your audience. — So, I don't know if you're going to like this question or not, but I'm going to throw it over, because I'm curious what you think of this. Figure AI recently sold their humanoid robot to put into the home, and there was a lot of pushback, at least in the comments that I saw online, that this was just a giant marketing scheme, or maybe they're trying to raise their next round. I don't know. But for the audience, I'll just frame it. It's a humanoid robot. All the demos that I've seen to date are extremely suspect as to its ability to actually do anything in the home. When you dig into it more, you're putting this thing in your house, and I guess it has this ability to fall back to a human who would actually be manipulating the robot inside the house, which I think has all sorts of security and privacy issues and everything else, right? But the reason I bring this up is because, and I'm pretty sure he's the founder and CEO of the company, he was suggesting that in order for AI to really start to accelerate its learning, it needs to start being embodied in physical form and to put itself into a challenging learning environment. And what he means by some of this, and Ken, correct me if I'm wrong in the way that I'm describing it, is that there are all these ambiguous situations that happen in the household with respect to social dynamics: the way that the family would interact and what they would ask of the robot, like, "Hey, go get me a cup of water," and the person who's asking for it always likes it half full, or they like it warm or cold or whatever. And so that learning that the robot would be forced to undergo from a social dynamic would assist in its ability to get smarter collectively, because I'm sure all this information is then going back up into the mothership and getting networked.
But his argument is the point of my question, which is: do you also agree that for AI to take this next step, this next quantum leap from where it is today, it really needs to be able to immerse itself in, and basically be embodied in, physical form? — Well, I think that is helpful, and certainly understanding the dynamics of human interaction, especially in a home. The social dynamics are very subtle and, as you said, very important. Just understanding tone of voice, gestures. Like, my daughter will do this, right, which means, don't bother me, I'm a teen. She's a teenager, okay, — or she just rolls her eyes, right. — Rolls her eyes, oh yeah. The subtle body language is super complex when you think about it. — Super complex, and we pick up on it in a myriad of ways we don't even recognize, right? One thing I always notice is, when I'm teaching, I can pick up if students are starting to lose interest or get tired or bored, — right? Yeah. — I just feel it, you know. I look around, and I'm always watching. That's why it's tricky to teach online. — But all these things are very nuanced. Body language can tell you a lot about what's going on. And just interpreting, what's the dog doing, how's the dog feeling, right? There's a lot of nuance there. So for all of that, you have to be in real homes to be able to do it. And I think that actually makes sense. And I'm not opposed to having, let's say, a humanoid in a home that might be helpful for doing certain
Segment 11 (50:00 - 55:00)
things, like maybe fetching water or being able to pick up things around the house. — Remember grasping? I said that is actually something I think robots can do. So if you said, "Hey, pick up all the things that are on the floor," — right? — We would all like that. We have a Roomba, you mentioned the vacuum cleaners earlier, but the next step is to be able to actually pick things up and put them away. — Yeah. — And that, I think, we can get there. I do. I actually think that's going to come; — that can happen in the next decade. — And it's very valuable, because, by the way, if you're a senior citizen, you really want things off the floor. Mhm. — And if you're a young parent, you have kids, or if you have a teenager, it's like, can you clean your room? It would be great to have a robot go in and just pick up all their clothes. We call that the teenager's problem. By the way, we have a paper on this, — okay? — which is how to get a robot to efficiently pick up clothes. And it's not the obvious thing, because if you just program a robot to go in and pick up, it'll pick up one sock, take it to the bin, pick up the next sock, take it to the bin. You actually need to be able to pick up lots of socks together. And so, how do you do that? That's called multi-object grasping. It's a very complex and nuanced topic, and we're studying it in the lab (a naive sketch of the grouping idea appears after this exchange). So, just coming back to your bigger point, I think that robots will do something that's useful in the house. I think that's possible. — Mhm. — They could be useful for security also, maybe in some form of companionship somewhere down the road, or, and I can appreciate this more and more as I get older, I might want to have a robot that could help me shower or get changed or help me get out of bed in the morning. I think that would be nice. I'd rather have that than a stranger in my house. Let me put it that way. — Right. I think you can relate. — Just not one that's networked back to some other person on the controls. — Yeah. Right. I mean, well, the privacy issues are huge. You're right. And that's something a lot of engineers don't appreciate. I faced this at Berkeley a few years ago when I was talking about privacy. I made an art project about privacy, and we had surveillance cameras; we did a whole installation about this. And some of my friends were like, I don't care about privacy, I have nothing to hide. I said, "Oh, really? Okay, can I see all the letters of recommendation you wrote for the last 10 years?" "Oh, no, I'm not going to share those." I said, "Okay, then how about all the research proposals that you're working on?" "Oh, no. — I can't share those." Right? So, all kinds of stuff that you don't want to share. It's not that you're hiding something criminal or embarrassing; there's just a lot of stuff you don't care to share, because it's important and it's confidential. Same in your home: it's not that it's going to catch me naked, but in the morning, you know, a bad hair day, and I don't necessarily want that to be transmitted widely. It's all kinds of things like that. So I'm not opposed to humanoid robots. I think it's going to be interesting to see what happens in the next few years; we will probably start to see these. It'll be very interesting to see that roll out from Figure.
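A naive way to see why multi-object grasping is more than "pick one, repeat" is to think of it as grouping: items that lie within one gripper span can be collected in a single trip. The sketch below is a hypothetical illustration of that grouping idea only; it is not the method from the paper Goldberg mentions, and real multi-object grasping must also reason about geometry, contact, and forces.

```python
import numpy as np

def group_for_multi_object_grasp(positions, gripper_span=0.12):
    """Greedily group items (e.g., socks on a floor, as 2D positions in
    meters) so that each group fits within one assumed gripper span and
    can be collected in a single trip. Purely illustrative."""
    remaining = list(range(len(positions)))
    groups = []
    while remaining:
        seed = remaining.pop(0)
        group = [seed]
        for i in remaining[:]:
            if np.linalg.norm(positions[i] - positions[seed]) <= gripper_span:
                group.append(i)
                remaining.remove(i)
        groups.append(group)
    return groups  # one robot trip per group instead of one per item

socks = np.array([[0.0, 0.0], [0.05, 0.03], [0.9, 0.8], [0.93, 0.82]])
print(group_for_multi_object_grasp(socks))  # [[0, 1], [2, 3]]
```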
And Brett is very, you know, he's a very compelling businessman, like Elon. He has a lot of optimism, a lot of confidence, and he's definitely building something that's working to some degree. So it'll be interesting to see. — And I'm not a naysayer. I'm not saying that all this is going to fail. I just say, be patient. — Yeah. — It's going to take longer. The real science fiction stuff is going to take longer than we think. — Last question I've got for you, Ken. What's the most exciting or surprising thing that you've seen in the lab, or just in the space in general, that you almost gasped when you saw it, in the past, call it, year? — Okay. So, actually, I have a good answer for that. You know, I'm so proud of Ambi for being able to sort packages around the clock at very high speeds, right? But I recently saw a company called Dyna Robotics. — Okay. — And I'm friends with the founder, so I'm maybe slightly biased, but I have to tell you, they demonstrated folding napkins with a robot. — Okay? — They did it for 24 hours. So they just had the camera set up, and, now, this is, by the way, just two — Two grippers. Okay? — Right. No head. It has cameras, but it's two grippers, basically, folding a stack of napkins over and over again. And they did it for 24 hours, and they showed you the whole process. Now, that to me, as a roboticist, is a big deal, because they were able to do it fairly fast, reliably. The napkins would often get tangled up, and it would figure out how to untangle them and keep going. And the folds were actually pretty nice. — So that's impressive. And then I got to see a live demo of the new version of that, which can now fold shirts, — and it worked really well. They had a booth at a conference in Korea in September, and it was folding shirts just around the clock, — and it was fantastic. You could even bring your own t-shirt and it would fold it. — So that to me is very exciting, and again, it's a specific task. I do think we're going to make progress there. — And by the way, everyone wants to have something fold their clothes. — That's for sure. That is for sure. I'm so bad. — I'm so bad I got one of those little folding boards. — Oh, you do? Yes, I do. Cuz I'm so bad at
Segment 12 (55:00 - 57:00)
folding it. But when I use that, I'll actually, you know, do it. Okay. — You have a folding board. That's great. — Yes, I do. And I'm sure — you're going to be a great customer for this. But, you know, you have high standards, right? Because you want things just right. And that's where it gets tricky, right? But the Dyna Robotics guys, it's Jason and Lindon who are the leaders of this company, they really are pulling something off. And I think it's very interesting to keep an eye on. And a lot of the other robotics companies are now trying to emulate that, which is to show one task, one special task, and do it very reliably. — Making coffee or folding boxes. That's really exciting, and I think that is actually going to be important, rather than trying to do general robotics, do everything in a home, which I think is going to take a long time. But if you can get it to do certain tasks, like folding laundry or maybe making coffee, certain things like that, that's a way to go sort of bottom up: from certain tasks, learn other tasks, rather than top down. — I think that's going to be a path to getting progress. But again, it's going to take longer than most people think. — Ken, I can't thank you enough for making time. Your expertise is just off the charts, and I know the audience is going to love this. If you have anything else you want to highlight or point people towards that we can put in the show notes, just let us know what that is. — Okay. I'll send you some links, because I have a bunch of things online I can link to that follow up on this in various ways. And no, I think it's great. Thanks for doing this. I'm really glad you're also going to connect with my good friend Rich Wallace. — Yes. Because he's fascinating. He's a very original thinker, and he is, I would say, very much an unsung hero around chatbots. People don't know, but he was really a pioneer in doing this, — very early, and he still has a lot of really interesting insights and ideas. So you'll appreciate him. — Amazing. All right. Well, Ken, thank you so much for making time and coming on the show. — I think some of these AI tutors are going to profoundly change the way kids are able to be curious, because imagine being in a classroom but essentially having a one-on-one teacher at all times to support you. And if you're getting a question, I don't know, a math question or a physics question or a biology question, and they're able to phrase it in line with your interests, that is hugely going to increase your ability to learn.