# Emerging Tech Overview: Driverless Cars, Image Generation, Energy Infrastructure (TECH008)

## Metadata

- **Channel:** Preston Pysh
- **YouTube:** https://www.youtube.com/watch?v=mMUNkgjwlio

## Contents

### [0:00](https://www.youtube.com/watch?v=mMUNkgjwlio) Segment 1 (00:00 - 05:00)

(00:00) And what I find so fascinating about these is that self-driving systems are taking in millions of data points per second, projecting trajectories for dozens of these various agents, things, moving vehicles, animals, and then deciding optimal actions all within milliseconds. (00:23) And so this, in my mind, is the first time that we've seen technology really making life-critical decisions in the physical world at scale. And it's wild just to see it expand over time. We've got a fun one for you, because we're going to go through a bunch of different things that we've both been curious about, things that we are seeing online, things that we're just kind of blown away by on the tech front, and yeah, we're excited to bring this one to you. (00:55) So, Seb, any opening comments before we just dive right into some of these? I would say, for people that have listened to a couple of the episodes we've done so far, we've been kind of reviewing tech books, and, I'm not sure on your thoughts, Preston, but it's surprisingly hard to find really good tech books that kind of open your eyes and, on top of that, give you a lot to talk about. (01:15) And so if anyone does have any books, feel free to reach out to us; we'd love to hear those books, and we're always down to review a book. But more than anything, we just wanted to dive into what is happening in the world today, and some of that will take a long time to make it into books. So, we thought, let's just go straight to a source and see what's happening. Well, it's funny, because we started two different books since the last show. (01:32) And we got probably, I don't know, 30% of the way through each of them, and we texted each other and we're like, I don't know if we can do an entire episode on this particular topic. One was about quantum, and it was very obscure, and we're just kind of like, yeah, I don't know.
So, we're going to take this in a different direction today and just highlight some really fascinating things that are happening. (02:01) The first one that I've pulled up is just this Tesla FSD 14.2 that was recently released, and the comments that I'm seeing online in reference to this autopilot. And what I'm going to do is pull up some videos that people are posting online, so people that are watching the video side of this are going to be able to see it. (02:23) Seb and I will do our best to explain what this looks like for the audio listener. But the first video that I'm going to pull up here is one where somebody is just showing how superior the software is on animals crossing in front of the vehicle. (02:44) And one of the other updates that I've heard is drastically different than the previous versions: I guess blowing leaves would mess up or hang up the AI on board the Tesla vehicle in the past, and now, on this latest update, that is not the case. But people can see the screen right now. I'm just playing a video, and there was a deer. (03:04) The car veered out of the way, like right at the last minute. Another, I don't know what that was. Seb, are you able to see what I'm pointing at? Yeah. Here's a moose literally walking across the road, just out of nowhere, in front of the car. It slows down and does the right thing. Literally, this feed is seven minutes. There's an alligator crossing the street. (03:29) And so the point that the person was making with the video, I think, is just showing the diversity of different things that can go wrong, that a human, you know, we don't even think about the fact that a deer looks different than a cat, which looks different than an alligator crossing the street. (03:48) And, you know, if you were coding if-then statements on something like this, it would literally be impossible.
Like, you could never get to the point where you could have software out there that would be covering all of these different edge cases, and the latest version is putting it on full display. (04:08) Okay, so the video I really want to show Seb is this one, and I'm going to play the sound. I don't know, Seb, if you're going to be able to hear it, but I think the audience is going to hear it in the recording. And this is a video of somebody using 14.2 in a Tesla in Times Square, and they have this in what is referred to as the Mad Max mode, which they've brought back. (04:30) I guess they had it out and then they pulled it back. The code that's running here, the AI code that's running on the car, is driving as if it's an aggressive, confident, I think, is maybe the word that they would want, a confident driver in New York City. And so I'm going to have the sound on, so hopefully, Seb, you can hear this. (04:50) Unsupervised era now. Now changing the lane. Saw that garbage truck. Ah, they don't want to get stuck behind the garbage. Human driver is still standing there using their phone. Oh wow. Saw that person just using their phone. Don't even care.

### [5:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=300s) Segment 2 (05:00 - 10:00)

Got a bus in front of us. Beautiful. This car knows how to drive in New York City. Oh, look at this. Cat's got his house out. Okay. Yeah. (05:08) So, this is the thing I like. Did you see it? It was indicating to move over. Then it looked like the cab would get out of the way. Then it turned off the blinker, but then he was still there. So it turned its blinker on again and moved over. Its ability to stop, change its mind if the situation changes, and abort the lane change is pretty powerful. This is crazy. (05:25) This is some of the most intense driving. Yeah. We got a pedicab. We got cut in between the lanes like this. Beautiful. Look at that. That's the kind of thing that just puts a smile on your face. It's so satisfying. Okay. So, I'm going to try to describe it. (05:42) I'm sure the listener is hearing the comments of the people in the car just losing their minds, because the car is just weaving in and out and really navigating itself in probably one of the most difficult driving scenarios that you could imagine, and doing it very effortlessly. They don't seem to be too concerned as to whether they need to grab the wheel or not. (06:00) And the car is driving, I would say, as if somebody with 20-plus years of experience were behind the wheel, just going around. And there's another clip, I don't know where, I kind of lost sight of where it was at, but I saw this clip where the car was also in New York City, comes up, there's an extremely tight space between two cars, and the car goes up, it stops. (06:23) It's almost like it assesses down to the millimeter whether it can go through that gap, and then it slowly proceeded through the gap and got through, which, I'm telling you, having watched the video, there's no way a human driver would have tried to go through this gap. But because the car had so much sensing capability of its left and right limits, it still proceeded through this tiny little gap between the other cars.
(06:51) So, Seb, your initial thoughts, like, what are we witnessing here? What are we looking at? In my mind, what blows me away is that I think this is one of the first consequential tethers of AI to reality. I think up until now, we've kind of been talking to these large language models. (07:12) They're having output, but that output isn't necessarily consequential, as there's a delay from that output being used in the world that we actually live in. And what I find so fascinating about these is that self-driving systems are taking in millions of data points per second, projecting trajectories for dozens of these various agents, things, moving vehicles, animals, and then deciding optimal actions all within milliseconds. (07:37) And so this, in my mind, is the first time that we've seen technology really making life-critical decisions in the physical world at scale. And that to me, I'm just in awe watching this stuff, and it's wild just to see it expand over time. I'm curious, as you've been diving down these rabbit holes or seeing this, what is your reaction to seeing this type of driving? I think this might be the first model where the if-then statements are completely gone out of the code. (08:02) Like, my understanding is that end to end, this is a complete neural net that's making the decisions. So when we think about what's taking place with the car, it has optical sensors that are looking at the same spectrum that our eyes are seeing. It's taking those inputs, those light waves, and it's transmuting them in and through AI code. There's no C++ here. (08:30) And it's providing an output through the wheel turning left or right, or the gas and the brake. And it's like, I mean, if we were going to peer into the code to audit it, it's impossible to audit, right? Like, any human that would look at that code can't make sense of how it's making its decisions. Mhm. I think that this release, this 14.2 release, (08:54) is probably going to go down in the books as a milestone in time, of: we achieved something very, very profound here. Similar to, I think, how GPT-3 was one of those huge milestones, where everybody was just like, whoa, what is this? This is very different than anything we've ever seen before. (09:20) And I think you're seeing the same thing happening with driving right now with this Tesla 14.2 update that went out. And it's crazy. You read in the comments of people that have Teslas. I don't know if you have friends with Teslas that have been talking about this specific release, but it seems to be very humanlike in its progression from the previous model. Like, a very significant leap forward. (09:43) Mhm. I'm curious, did you see that video? When was it, maybe six months ago, a year ago? Someone showed a video where they had the ChatGPT talk mode, where you're essentially just talking to an individual through ChatGPT, and then they kind of fed that information to another ChatGPT bot talking, and then all of a sudden, when they realized they were

### [10:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=600s) Segment 3 (10:00 - 15:00)

both talking to an AI, they just changed language. (10:10) And so in my mind, I'm curious, if you were to go into the back end of this autonomous driving and look at the code, like, to your point, it's not if-then statements that we would code as individuals, because we're limited by our own various senses, our own various languages. Like, we are massively boxed in. (10:29) And so do they have a capacity well and above and beyond our ability to understand what they're doing, if you actually go looking at the back end of these things? I find that so fascinating. You know, you get into this idea of what is the most optimal language to communicate in, right? The AIs immediately stopped speaking English and they started speaking their own. But it's an interesting thought experiment, and I know we're getting away from the driverless car thing, but I was tinkering with AI one time, just asking it: So, in your opinion, what is the most efficient way to communicate? Would it be English? Would it be this (11:01) language? And it goes into this big long dissertation about the different things to optimize for. Like, it was saying Chinese is very difficult for a human to learn, but for an AI there's a lot of compression in the symbols, and it can communicate with the symbols way more efficiently than the English language, which takes more characters to transmit. Mhm. (11:27) So it's like, if you know Chinese, it's actually more efficient to communicate in written form versus in verbal communication. And it's just like, the way it views things is so different than if you just had a conversation with, you know, a random person on the street: what would be the most efficient language to communicate in? Of course, the one I'm speaking, or whatever, right? It's just really, it's amazing to see the depth of knowledge that pops out of some of these things. But back to the... Yeah, go ahead. Well, I was just going to add one more quick point to that.
And again, it's a (11:58) bit of a tangent, but it's like, a few years ago, my girlfriend was like, "Hey, you know what? We should watch Arrival." Have you seen the movie Arrival? I haven't. So, for those that haven't seen it, I highly, highly recommend watching it. (12:15) I think it won a whole bunch of awards, but essentially, it's just like, an alien spacecraft has come and landed on Earth, and these countries don't know whether or not it's dangerous, does it want to attack us, like, why is it here? And this lady goes in. Her expertise is, I think, in languages and archaeology and history and all this kind of various stuff. (12:32) And she goes into this spacecraft and starts communicating with these aliens, and they speak in a different language, but obviously they don't speak verbally. They speak through imagery and these kind of swooshes, these big black ink swooshes. You can think of it like Japanese calligraphy. What's really interesting is, it hit me, like, my girlfriend fell asleep and I just broke down halfway through watching this movie while lying in bed, because the way that it communicates is through these various swooshes, but each swoosh has an intricate amount of information through the tendrils of the swoosh, the blackness, the darkness of the swoosh, how it shows up. And so it kind of goes (13:02) back to that quote, which is that an image conveys a thousand words. And I think that when we're looking at imagery versus ones and zeros, or even text, there's only so much information that can be encoded in a word. But in an image, from a single second of looking at an image, you can convey the emotion behind it, the feeling, the location, what's in the landscape, what's going on. (13:27) And so I'm just curious, are we kneecapping AI in many ways, because we're trying to communicate with it using our language, and we are obviously limited in the ability to convey information? Yeah. Amazing point, by the way.
Some other interesting things that I think are worthy of highlighting here, to help people conceptualize where we're at right now. (13:45) So, in early 2024, version 12 of the driverless tech out of Tesla was released. So this is almost two years, a year and a half ago. And the person who was observing or auditing the performance of the autonomous driving had to intervene about every 150 miles, based on the way that the car was driving. (14:10) Today, with the version that you just saw, if you watch the YouTube of our conversation and could see some of the videos that I was playing, it's about every 800 miles that the person auditing the driving would have to intervene. So that's about a 5x improvement that's happened in about a year and a half. (14:29) And just for context, for a human driver, if you were sitting there and auditing another human that was driving, it would be about every 50,000 miles that you would have to interrupt and maybe take the controls because of a mistake being made. So, you know, we're about 50x from where that's at today, according to some of these metrics that I've researched very cursorily. (14:54) So, some of my metrics may be off, I didn't put a lot of time into pulling up these numbers, but just so people kind of have a ballpark of where things are at.
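The ballpark figures above can be sanity-checked with quick arithmetic. A minimal sketch, using only the rough numbers quoted in the conversation (these are the speakers' estimates, not official Tesla or NHTSA statistics):

```python
# Rough arithmetic on the miles-between-interventions figures quoted above.
# All three numbers are the ballpark estimates from the conversation.
miles_v12 = 150       # early-2024 FSD v12: intervention roughly every 150 miles
miles_v14 = 800       # FSD 14.2: intervention roughly every 800 miles
miles_human = 50_000  # rough estimate for an average human driver

improvement = miles_v14 / miles_v12     # improvement since v12
gap_to_human = miles_human / miles_v14  # how far that still is from a human

print(f"Improvement since v12: {improvement:.1f}x")        # ~5.3x
print(f"Remaining gap to a human driver: {gap_to_human:.1f}x")  # ~62.5x
```

So the "about 5x" and "about 50x" figures in the conversation are rounded versions of roughly 5.3x and 62.5x, respectively, under these assumed numbers.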

### [15:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=900s) Segment 4 (15:00 - 20:00)

It's moving fast. And if you have a 5x improvement in a year and a half, I can only imagine where we're at in another year. (15:12) And when we look at this and we say, what this computer, or what this AI, is doing on these cars, it's really understanding spatial awareness. Like, for it to pull in, and some of the parking stories that I've read online, where people are like, yeah, I told it to take me to this parking lot. (15:31) It selected an amazing parking spot amongst, I mean, just think about the complexity of that decision-making. I mean, I can just tell you, from my wife and I parking the car, she has so many comments and frustrations with my parking selection. So, it's a hard problem to optimize for. I can only imagine. (16:01) So, but what everybody's saying is that the car does an amazing job at selecting parking spots, and the efficiency with which it pulls in there, it doesn't feel like it's just, God, can you please finish the job here and park the car? It's very natural and humanlike, is what everybody's saying. So, you know, to understand that I'm in a parking lot, to understand that's a driveway or garage I'm pulling out of, and that's a bicycle over there, and all the nuance of this, is definitely miraculous as to what's taking place. Well, as you're saying, at the moment, it (16:35) used to be a 150-kilometer, or mile, intervention, and then it went to 800, and for the average human it was 50,000. I would say, for the average human, if you're driving from Vancouver up to Whistler in the winter, you should probably be intervening every 10 kilometers, just because the highways are so heinous. (16:52) So I'm curious to see. I think it's one thing to be dealing with decent conditions. I think the moment you start to get torrentially downpouring rain, is it starting to interfere with the sensors?
How do the sensors perform when there's a lot of movement or distortion in, whatever it is, a radio wave, an infrared wave, moving through water? Like, do you get distortion from that perspective? And one thing that also comes to my mind, and I'm curious on your perspective on this, is AI and this moral outsourcing problem, where (17:24) when humans drive, we take responsibility for our mistakes. When AI drives, now it's a bit of a gray zone. Is it the car manufacturer? Is it the AI developer? Is it the regulator? Is it the user? I think that AI blurs the lines of accountability. (17:43) And I wonder how much, through technology, we are just putting off accountability and becoming, I don't know, we're losing control as a society. Seb, this is a massive talking point. So the new robotaxis aren't even going to have steering wheels in them, right? So, you know, I guess from that vantage point, it's clearly Tesla that's responsible for the performance on the road and any type of damages that might occur because of the car's driving. (18:10) And I mean, everything's recorded. So, I mean, you can definitely Monday-morning-quarterback the decision-making of the software with all the cameras on board. But where I think it gets blurred is if there's a person that is sitting behind a wheel, and, you know, this might lead to why Tesla might actually want to remove the steering wheels on all of their vehicles: they want it to be very clear that it was either them or the driver. (18:42) And I guess there's an argument to be made that the ambiguity would actually be more advantageous to Tesla, by having a steering wheel there. So I guess you could maybe argue that side of it too, but it is getting so blurred. To your point, this is so blurred already.
(19:01) And I would imagine, it's really easy right now, but once you start getting the capability of the car to be so good that drivers are truly falling asleep, I mean, you literally already have people falling asleep in these cars while they're driving around. I imagine that's only going to get more prominent and prevalent as the capability increases, which, I can only imagine where this is at in a year. (19:22) If it's 5x from where you're at right now, I mean, you're there, man. Like, it's pretty wild. Totally. And I think the other thing that comes to mind as we're discussing this is just whose values get encoded into the car's decision-making. Because if you think about it, a self-driving car essentially has to swerve, let's just say, I don't know, a family walks out in front of the road, and the decision is, it's got two choices. (19:48) It either hits the family, this trolley experiment, totally, or it hits the wall and kills the driver. And it's just like, should it prioritize the passengers at all costs, or individuals external to the car? And so I think that what's really interesting

### [20:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=1200s) Segment 5 (20:00 - 25:00)

is, it's like, one, whose values are getting encoded into the car's decision-making, but two, what happens when you've got competing car manufacturers, where one car manufacturer is like, "Hey, we prioritize the individual in the car," and another car manufacturer says, we prioritize people outside of the car? It's like, it starts (20:17) to get really interesting just to see, what does 10 years from now look like? 15 years? How does that kind of regulation, or no regulation, look around AI autonomous driving models? AI is going to have an opinion on the trolley problem, where humans, we've always just kind of argued one side or the other, or whatever. (20:39) But I guess AI is going to actually have to have an opinion. Is it an opinion, or is it just action? I have no answer, right? One of the other things I want to talk about is just Waymo. So for people that aren't familiar with Waymo, it's a competitor to, call it, Tesla in autonomous driving, and they've got all sorts of sensors. (21:03) If you've ever seen a Waymo car, just the cost to produce this thing is not even in the same ballpark as what Tesla's doing per unit of car that they're producing. They've got lidar sensors. They've got all these other things. And, you know, I was kind of always against Elon's decision to not include lidar in the car, because I was always of the opinion that the more data you feed these things, the more accurate and the more proficient they're going to be at driving. (21:34) But when I look at where this is now going, and Elon's argument has always been, well, if I'm driving around with this type of performance with just my eyes, why in the world can't I get a car to do it with image sensors? Why do I need, it's not like I have a lidar sensor on my forehead to go out there and sense the depth of the cars in front of me and to the side of me and all these other things.
(21:52) So, I should be able to get a car to perform just as well as a human, if not better, by just having image sensors. But where I think this is really showing as a really intelligent play long term is that his cost to produce these cars is going to be so much cheaper than, call it, the Waymos that are out there with all these other sensors and all these other capabilities. (22:16) But when you try to scale that, now all of a sudden you're just not able to even remotely compete in the market against him. And when you really think about where the competition is going to go, it's going to go to, if he can go out there and sense 10 times more of the environment, because he's doing it in a free and open market way, and he's not taking outside money, he's profitable, (22:45) he now is going to dominate the market from an intelligence standpoint, because he's going to collect way more data than they could ever imagine, and he's just going to be more proficient. So, I don't know. I'm looking at Waymo and I just don't know how they're going to exist 10 years from now against him. And more importantly, I don't know how anybody's going to exist. (23:05) If you want a driverless car, which is a whole other conversation point, but if you want a driverless car, I don't know how people are going to be able to compete with him 10 years from now. Mhm. No, I think it's fascinating, and it leads me on, I'm curious if we want to move on to the next point, because it ties into this point, which is, I think the challenge right now, one of the things that kneecaps us, is you can only have a certain-size battery in a car. You can have as many sensors as you want. You can take in as much information as you want, but do you have, (23:32) one, the processing power to process all this information, trying to discard what is of value, what is signal and what is noise?
And this brings me to the next tech point, which is that the University of Massachusetts, supposedly, one of their labs, has just developed the first kind of artificial but biological neuron. (23:57) So researchers have created this low-voltage artificial neuron that uses bacteria-grown protein nanowires, enabling direct communication with biological systems. So what does this essentially mean, in my mind? How do I interpret this? And I'll relate it back to the Waymo point in a second. It's essentially just an artificial neuron that operates at the same voltage as human neurons. (24:14) And human neurons operate at around 0.1 volts, supposedly. Previously, artificial neurons, because they've been more digital and physical in a sense, have needed 10 to 100 times more power to compete against a biological neuron. And so this new device matches biological voltage almost exactly, and that means that one day we could interface directly with the human brain. (24:43) Well, the point I wanted to quickly make was, when it comes to Waymo, I think the challenge is, you can have all this information, Elon Musk can put more and more sensors, you name it, on these cars, but it's just too heavy to be able to actually use this data effectively. And we are starting to see, in other areas, people have probably seen this, like organic AI, where they're using kind of their version

### [25:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=1500s) Segment 6 (25:00 - 30:00)

of a brain to start computing, because the human brain is unbelievably efficient in comparison to an actual large language model. And so what does the world look like when we actually start moving some of this compute power over to these (25:15) hybrid biological, bacteria-grown protein nanowires? What does that look like? And this is where I find it just really fascinating, because these are digital, but they can interact with biological systems. (25:32) I think the world looks really fascinating from a prosthetics standpoint, helping people heal. Do they have neurological issues, did they have a broken back, that kind of stuff, they've got paralysis, are we able to eventually repair these types of things? I find this stuff really fascinating. That's scary as hell, because, I mean, this is effectively the Matrix, man, that you're talking about. I mean, the whole point of the movie was they were harvesting human brains because they were energy efficient, blah blah, right? That's really what you're talking about here. And I saw this a couple months ago, that somebody (26:04) was doing this, and, I don't know, it's pretty wild when you think about, hey, the best way to store something is just using the human brain. That's not exactly what they're doing, but you're harnessing biology's efficiency for storage and neural nets, and that's nuts, but it's happening. (26:27) I encourage people to do some Google searching on this particular topic, and you might be very frightened by what you read or see, but, I mean, it's happening. So, I don't know what to say other than that.
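The efficiency point above, a human brain versus today's AI hardware, can be made concrete with rough numbers. A minimal sketch, assuming the commonly cited ~20-watt estimate for the brain and an illustrative ~700-watt figure for a single modern datacenter GPU (both are ballpark assumptions for scale, not measurements from the episode):

```python
# Rough daily energy budgets: human brain vs. one AI accelerator.
# brain_watts is the commonly cited ~20 W estimate; gpu_watts is an
# illustrative TDP for a high-end datacenter GPU, used only for scale.
brain_watts = 20
gpu_watts = 700
hours = 24

brain_kwh = brain_watts * hours / 1000  # kWh the brain uses in a day
gpu_kwh = gpu_watts * hours / 1000      # kWh one GPU uses in a day

print(f"Brain:   {brain_kwh:.2f} kWh/day")
print(f"One GPU: {gpu_kwh:.2f} kWh/day ({gpu_kwh / brain_kwh:.0f}x the brain)")
```

On these assumptions, even a single accelerator, before counting the thousands used to train or serve a large model, draws tens of times the brain's power budget, which is the gap that biological or hybrid computing would aim to close.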
Uh, at the moment as well, when we're using a lot of these prosthetics, you need an outside energy source, given that the brain, supposedly, from one of these articles, it said the brain runs on around 20 watts, the same as a dim light bulb. That is such a minimal amount of energy. And so, to power these artificial neurons, historically you needed an external power source; if you got (26:59) prosthetics, you need an external power source. But what happens when we actually have enough power inside our body to start running these artificial neurons, and they can communicate with our biological systems? That starts to get really interesting. (27:16) And so what comes to mind as I'm thinking about this is, again, I like to try and play devil's advocate, not because I'm a doomsdayer, but it's just, I think it's interesting, we can move forward with technology, but what are going to be the repercussions? And I think about this discussion of healing or advancement, and I'm curious to hear your thoughts on it. (27:34) I'm fully supportive of technology being used to heal people. So we can restore vision, we can regain mobility, we can repair neural damage. These are all extraordinary and deeply amazing uses of technology. But I think there's a line between healing and enhancement, when technology goes beyond just restoring someone's sight to a baseline level and actually starts to improve it 100 times, or what happens when we start to be able to improve someone's strength. (28:01) And I think that this augmentation could create a bit of a two-tier society, because if enhancements are expensive, then only certain groups are going to get these enhancements, and then you're basically creating a caste system of people that are far and above, intellectually, physically, cognitively, the average individual.
And so I'm curious to hear your thoughts on it. (28:23) This technology is amazing, but, and I'm a very anti-regulation, deregulation type person, there's a part of me that wonders, do we actually need regulation in some of these industries to prevent these massive disparities of capacity in society? Even if you have the regulations, are you going to prevent the endgame of what you're describing? I kind of don't think that you would. (28:47) It doesn't seem like regulations ever prevent the free and open market solution of nature from taking place. It might slow it down, but I don't know that it actually prevents whatever is inevitable, like what nature is trying to manifest. And that might be my bias for free and open markets coming out. But yeah, I don't know, Seb. It's getting weird, man. (29:14) I don't know how else to put it other than it's getting weird, and I don't think that's the answer people want to hear, ultimately. And this isn't to bring it back to Bitcoin, but it's just, I think the best thing we can do is have a monetary system that aligns with our deflationary society, where prices should be falling over time, because at least then this technology is available to the average individual quicker. (29:39) I think that when they're living in a society where the cost of living is rising and they have less and less capacity, what ends up happening is that this technology takes a lot longer to potentially scale to people that can't afford it. And so, at its heart, I think that we at least need to fix our monetary system, (29:57) so this technology is in alignment, or at least somewhat in alignment, with human ingenuity and money and such. When you think about it, the AI is going to demand free and open market money

### [30:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=1800s) Segment 7 (30:00 - 35:00)

that is not being manipulated. It's going to want a fair money in order to transact, whether humans like that or not. And, I mean, we could go down a whole other path there, as far as AIs being able to own anything. (30:22) Yeah, that point there, I've thought a lot about this over the years, and I don't have, I wouldn't even necessarily say, a deep intellectual response to it. But what does come to mind is, let's just say you're an AI agent, and you no longer have scarcity of life dominating your decision-making, because you can essentially live indefinitely into the future as long as you've got a power source. (30:46) What you're then going to be thinking about, if you're just a hyperrational actor that doesn't have code swaying you with various biases, I think you're going to be thinking, okay, if I need to store my purchasing power in something, I want to be storing it in the thing that has the highest probability of being able to preserve that purchasing power into the future. (31:04) And fiat currencies are not going to be that thing, given that they can just look at the data. If they're able to read Ray Dalio's big debt crises book in a second, and go and read every other book on the subject, they're going to realize that most of these currencies have like a 50-to-75-year lifespan, and then they're gone. (31:23) So, I just think the rational decision is, hey, I'm going to preserve my purchasing power in the thing that's going to enable me to transact digitally, borderlessly, and preserve that purchasing power. And you can see this online, as far as Grok's understanding of Bitcoin. And I know we're going off on a Bitcoin tangent here, but I've seen people that clearly hate Bitcoin, or just don't understand it, start arguing with Grok. And they're throwing out these arguments. (31:48) And I see Grok just stepping in and slaughtering their arguments as to why Bitcoin is a viable money in the future.
And it's crazy because it does not miss an argument. It understands it better than anybody out there, as far as any argument I've ever come across in that particular space. So yeah, and ultimately there's that famous saying, which is that science advances every time a scientist dies. (32:11) Something along those lines, and I've probably butchered that, but I think that as humans we have such incredible biases; we want to conform to the crowd. And so I think we don't recognize just how profoundly we've been shaped by the information we've consumed from our educational systems and through the media. And so I think it's really hard for us, even with something like Bitcoin, to be able to drop our biases and just say, I'm going to look at this thing rationally without all of this previous knowledge that I've accumulated. Yeah. (32:38) Hey, I'm going to move on to the next one. This one's going to be funny. So, are you familiar with this Nano Banana Pro? I've heard of it. I've seen a couple of little posts about it, but I can't tell you much about it. Okay, so this is Google with their Gemini. (33:03) This is to compete with Midjourney, for people who aren't familiar with the stuff we're talking about. Midjourney is this image generator that really had the first-mover advantage in AI image generation. And just like any other AI, it went out there and ingested a ton of different pictures, and the labeling associated with those pictures, in order to generate realistic pictures of whatever the person prompts via text. You say, "Hey, I want to be standing in front of a bookcase with my arms crossed, generate a picture," and it (33:33) generates the picture. Google came out with their first AI image generator and it was a disaster. It was very woke. You could just tell there was a ton of bias put into it.
But recently, just in this past couple of weeks, and I'm hopefully going to say this correctly this time, Nano Banana Pro is what they're calling their new image generator, and it uses the Gemini reasoning engine so that it can plan the 3D scene. (34:08) It can calculate the lighting, and it's using material density, before it renders a single pixel. So it's using physics before it goes in and just replicates all the previous images it was fed. It's using this 3D-physics basis behind the images it generates. So I wanted to try it. I had never played with this; I wanted to try it out. (34:33) And you're going to really laugh at what I'm about to show you here. Ten minutes before we started recording this, I went and took a picture of myself, and I wanted to put this thing to the test. Okay, so here's the picture. I'm just sitting in the chair that I'm sitting in right now, and I told this Nano Banana Pro software to take a godlike picture, like from the ceiling, of the image that I just gave it. (35:07) And so I gave it this picture of me

### [35:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=2100s) Segment 8 (35:00 - 40:00)

sitting here in front of the bookcase, where I always record. And so this is what it came back with. Okay. And you can see it's me holding up an iPhone, taking a picture of myself. The books are there on the left; they're not behind me. But I guess the interpretation is that the bookcase could wrap around. (35:33) But what I noticed on this picture that was off, I don't know if you're seeing what is definitely wrong about the picture, Seb. What is very wrong about the picture? You've got a full head of hair. Is that it? No, I think the hair is actually pretty accurate. Surprisingly, I am wearing jeans that look just like that, even though that wasn't even in the picture. And you know what? This is also pretty interesting. (36:04) The watch is not in the original picture, and that's exactly the watch I've got. Look at this. That is so weird. Did you just pick up on that now? Yeah, I just picked up on that right now. It literally nailed the watch that I have. I wonder how much, so people have probably heard that ChatGPT, when was this, maybe about six months ago, (36:28) came out and said, okay, from now on, when you're creating a new thread, you can give it permission to reference not only the thread you're in but all of your previous threads. And so you wonder how much information is coming into this image. Is this image just the information you fed it, or is it starting to say, hey, this is coming from Preston's account, (36:50) we're going to go look at YouTube videos. Oh, look, it looks like he's wearing this watch in all of these other YouTube videos. So it makes you wonder just how interconnected this technology is with all of this information about us on the internet. Wow. Yeah. I mean, that's wild.
(37:09) And I don't know what the answer is. I do know this: I hadn't fed it any pictures prior to sending this in, because I had never used it before until right before we recorded this. Now, the thing that I picked up immediately when I looked at this picture is the image on the phone. See the little image of me, the one that I originally fed it? (37:34) It's not the same as the image that I fed it, because there's a bookshelf behind me in the original image. This was the original image I gave it, and I said, "Hey, give me the overhead view of myself taking a selfie." And this is what it gave me. And it's not the same image on the phone. (37:52) And you would think that it would be that image on the phone, right? So I said this in the chat window. I said, "Hey, you got it wrong. The image on the iPhone would not be that. It would be the original photo." And so what did it do? This is what it gave me. Bitcoin mining has a reputation for being impersonal, risky, and full of hidden fees. But one company is flipping that script, and it's Abundant Mines. (38:18) Abundant Mines was founded by Bo and Christine Marie Turner, two Bitcoiners who lost over half a million dollars to mining providers that overpromised and underdelivered. Instead of walking away, they built the company they wish had existed when they first started mining. With Abundant Mines, clients actually own their machines. And in Oregon, they come with no sales tax. (38:36) There's one flat monthly fee for hosting, no surprise repair invoices, and if a rig ever goes down, their system redirects hash power so your earnings don't miss a beat. But what really stands out is how personal the experience is. Every client gets direct support, ongoing education, and guidance from a real human being who lives and breathes this mission, so you never have to be left guessing.
(39:03) In addition, through 100% bonus depreciation, mining can offer major tax advantages that you don't get by just buying Bitcoin. Their clients describe it as acquiring Bitcoin for half price when factoring in the returns and the write-offs. They've put together a thoughtful gift just for listeners of this show that could potentially save you thousands of dollars. (39:21) There's no pressure, just something to help you think through whether mining is actually right for you. So, if this is something you're curious about and you want to learn more, you can check it out at abundantminds.com/preston. That's abundantminds.com/preston. You know, most people don't realize this, but almost 60% of the average American homeowner's net worth is tied up in their home. Home prices are up more than 75% since COVID. (39:47) That looks great on paper, but most of that wealth is locked in your home and not diversified. If you let your equity sit there, you could miss out on better growth opportunities elsewhere. Now there's a way to put your home equity to work in Bitcoin thanks to Horizon. (40:05) Horizon helps homeowners buy Bitcoin

### [40:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=2400s) Segment 9 (40:00 - 45:00)

with their home equity without taking on a loan or adding monthly payments. Here's how it works. You unlock tax-free cash today to stack Bitcoin by selling a small slice of your home's future value. You stay in your home as usual while Bitcoin does the work in the background, and that's it. (40:25) Later on, when you sell or refinance, Horizon's providers take an agreed-upon share of your home's future value. And that's the trade. What really sets Horizon apart is that there are no term limits and no risk of forced liquidation. You keep 100% of your Bitcoin upside, and you custody the Bitcoin however you want. With money printing inflating home prices to their highest level ever, it may be wise to take some gains off the table. (40:49) If you want to diversify into Bitcoin, an asset with a proven record of beating inflation, Horizon may be the product for you. Head to joinhorizon.com to see if you qualify and see your home's Bitcoin potential in just two minutes. Their team of experts will work with you one-on-one from start to finish and help you unlock your home equity to purchase Bitcoin. Transform your home equity into Bitcoin today with Horizon. (41:14) Fascinating. And so, yeah, it just updated that. Everything else stayed the same, and then it just updated the mistake that I called out. I mean, when you really take a step back and think about what's happening here, this is pretty crazy, right? I've noticed that with AI, especially a lot of this image generation, sometimes if you feed it information, it's as if it can't take that information and use it exactly. (41:38) It has to make some form of change to that information. You've probably seen those threads where someone asks it to change an image subtly, then feeds it the output with the same prompt, over and over. And what you see over time is that the image just goes off in these really weird directions.
(41:56) And so I feel like it's almost like it's got a lack of a tether to reality at the moment. It seems to go off on these odd tangents. Yeah. Now, something else that I read about this is that you should be able to take a picture of a plate that was broken and basically say, "Hey, reassemble the plate, glue the plate back together," (42:20) and the way it glues it back together would still be on par with how the plate was actually broken. That kind of demonstrates why this is so different from some of the other AI image generation that's out there. Pretty fascinating, right? I work with a guy who used to be an architect, and he was doing some renovations on his house. (42:42) He has this doorway in his lounge, and what he wanted to do, if I remember correctly, is put a bookcase that extends up the wall over the top of the doorway. So he sketched the doorway and the dimensions on a piece of paper, fed the image generator a picture of the doorway and a picture of his sketch, and said, can you render this for me? And it looks unbelievably realistic. (43:06) And so I think that, especially if you're curious and you're like, "Hey, I want to improve this thing in my house, I want to see roughly what it looks like," oh, it's absolutely amazing. You can start to get an idea of how things will look. Yeah. (43:23) And that's one of the things I've also read that this really excels at: if you were, say, a fashion person, and you drew a sketch of some pants with a pencil, you can take a picture and say, make this look lifelike. It's really good at transforming sketches into very photorealistic images. So yeah, I would encourage people to play around with it. The little bit that I have, I've been blown away.
(43:51) And then I would just say, why is this so important? How could this be used along with all the other tech that's emerging at the moment? Take a humanoid robot, or just something that's navigating an environment. (44:09) If it's able to think in terms of spatial orientation, going back to the Tesla stuff we were talking about, if it's able to really understand, and that word in itself needs a lot of definition that I don't know we can provide, but if it can understand its 3D environment, its ability to interact with that environment is going to be way more profound than if everything is just a picture with no context for how it relates to everything else. True. Mhm. As you're saying that, what I think becomes apparent is that (44:40) because we haven't had this technology before, and now we're seeing it, we're kind of just blown away by it. But in reality, compare this to even, I don't know, a 12-year-old or a 10-year-old trying to interpret this picture. If you were to get them to draw what you've just prompted it to do, the first thing I see in your picture right there is I see your bookshelf wrapping around

### [45:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=2700s) Segment 10 (45:00 - 50:00)

the corner of your room, and I see the window in the back corner. Well, immediately the AI (45:06) put you facing a wall with a light that doesn't exist. And so it's making very basic mistakes; it just doesn't seem to be interpreting the picture correctly. You know what I mean? And so I think that we see this technology and we think it's unbelievable, and it is a stepping stone. And I think we think it's unbelievable because we've just never had this technology before. (45:25) But if you just compare it to a young child, it still struggles to compete. And so that goes back to the conversation we had previously around AI in our Empire of AI book review. It was this idea of what AGI, artificial general intelligence, is: they say it's when the average AI agent is able to perform tasks at or above the level of the average human. (45:50) And so for sure, in coding and in certain research assignments, it's phenomenal, but in other things it's still definitely struggling. Mhm. Yeah. And it's amazing, because on very specific tasks it's pretty much there on nearly everything, but the ability to piece things together logically, you know, if you give it a really hard project that involves taking all of these different pieces and putting them together, it's nowhere close to what humans are able to do today from a project management standpoint, right? That's what humans are (46:20) really good at: they're able to take a very complex project, piece it all together, and know when a deliverable is crap or when it's perfect, in order to fit it like a Lego piece into a much broader program or project with a complex output. But I don't know. I think we're getting there pretty quick. (46:44) So yeah, who knows, man. Well, you know, that kind of leads into the next point that I found really interesting.
So, I was doing a little bit of research and stumbled upon something. I should preface this by saying that there are so many moving parts in AI right now. There's so much technology evolving, and some of it is, I think, a bit of a facade. (47:03) With some of it there's a lot of embellishment as to its capabilities, but I think we know we are moving towards these things. One that I stumbled across this week was called Cosmos AI, and it relates to what you were talking about when it comes to structuring, or project management, of all this information that's coming in. (47:25) So this technical report, or preprint, is titled "Cosmos: An AI Scientist for Autonomous Discovery," and it was submitted on the 4th of November, 2025. One of the statements in this report is that the system can run for 12 hours and, in those 12 hours, execute on average 42,000 lines of code and read 1,500 scientific papers. The authors of this study claim that in a single run of what they call a 20-cycle Cosmos run, it performed the equivalent of six months of their own work, and a single run is 12 hours. (47:57) So in 12 hours it was able to do what their team did in six months. And so essentially, how does it work? It works by releasing hundreds of tiny AI agents all at once. One is digging through papers. Another is crunching data sets. Another is writing code and testing hypotheses. (48:14) And when one of these agents finds something it deems valuable, it posts its findings to a kind of shared digital whiteboard. And the key innovation is that every agent uses this whiteboard in real time, so they're building on each other's work instead of operating in isolation. (48:34) And the researchers behind Cosmos weren't trying to make a super-smart single model. They were trying to create something like a collective mind.
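The shared-whiteboard coordination described above is a classic "blackboard" pattern, and it can be sketched in a few lines of Python. This is a minimal illustration only: the class and agent names here are illustrative assumptions, not Cosmos's actual API, and a real system would run many agents over many cycles.

```python
import threading

class Whiteboard:
    """Thread-safe list of (agent, finding) pairs shared by all agents."""
    def __init__(self):
        self._lock = threading.Lock()
        self._findings = []

    def post(self, agent, finding):
        # Any agent can publish a finding for the others to build on.
        with self._lock:
            self._findings.append((agent, finding))

    def snapshot(self):
        # Any agent can read everything posted so far.
        with self._lock:
            return list(self._findings)

def literature_agent(board):
    # Pretend to dig through papers and report one claim.
    board.post("literature", "paper X reports effect A under condition B")

def data_agent(board):
    # Read what other agents have already posted before crunching data.
    prior = [f for agent, f in board.snapshot() if agent == "literature"]
    board.post("data", f"dataset checked against {len(prior)} literature claim(s)")

board = Whiteboard()
# Run the agents as threads; joined in order here so the data agent can see
# the literature agent's post (a real system would iterate over many cycles).
for worker in (literature_agent, data_agent):
    t = threading.Thread(target=worker, args=(board,))
    t.start()
    t.join()

for agent, finding in board.snapshot():
    print(f"[{agent}] {finding}")
```

The point of the pattern is visible in `data_agent`: it reads the board before posting, so its output depends on what the literature agent already found, rather than the two working in isolation.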
And they described this as their structured world model. So it's a coordinated system. And what I found really fascinating about this is just how quickly they're able to ingest information while working collaboratively. (48:55) And it kind of talks to your point, which is that a lot of these image generation models may ingest all this information and have various agents operating in sync, analyzing it. But how much of that information is shared between the agents? They're each looking from a different perspective: one is maybe trying to figure out where the light is coming from and what the shadows are, another is trying to figure out what is in the room, you've got a bookcase, what are the angles, another is trying to figure out, I don't know, the complexion of your skin, and all that kind of stuff. Being (49:19) able to analyze all of this in sync, but share that information, I think is so fascinating. And what does the world look like moving forward when we can crunch these unbelievable amounts of data? What you're saying is exactly right. Going back to the picture example, let's say that first picture was presented and you have five AI agents whose job is to find what's wrong with the picture. One of them finds that the picture on the iPhone is wrong, one of them sees that the bookshelf is not (49:49) behind me, and then they have a collective conversation and the image is regenerated. I think you're seeing this with Grok Heavy, where Grok Heavy has four different AI agents, and then I suspect

### [50:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=3000s) Segment 11 (50:00 - 55:00)

that they go through and have a kind of consolidation and re-adjudication as to what the final answer should be before it gives it. (50:14) Similar to what you're describing, Seb. But this is the thing that I think a lot of people are talking about: the energy consumption to then run all of these checks, these additional agents. If we put 20 more agents on finding the mistakes in what the first one generated, so that we can do another iteration, it's just 20 times the amount of energy required to provide that answer. And this takes us down a whole path, and I don't know if you want to move on to the next topic, but this is my next topic, which is nuclear power, with energy (50:52) being the limiting factor on where this can all go. You literally had Jensen Huang from Nvidia come out and say that he thinks, in the grand scheme of things, China has a better chance at achieving AGI than the United States, because they have the energy infrastructure to support the training and the inference on the models. (51:15) And I don't know if this was a political statement to then allow the current US administration to go out and start spending a bunch of money on energy, to reinvigorate nuclear and all that kind of stuff. But the one thing I keep hearing in this particular space is that where we need to be spending a lot of our time is taking the grid to the next level. (51:42) As a Bitcoiner who watched people, specifically people in tech, go on about how terrible Bitcoin is because of its energy consumption for what felt like a decade, and who now pivot and are all on board for building nuclear power and small modular reactor innovation, it's very smirkworthy to see how many people are jumping on this train.
(52:11) Any comments on that, Seb, or anything you want to wrap up? Because I just kind of moved on to the next thing without letting you finish. You know what, there's one point that I'll quickly add, which is that I think it's so important to increase the efficiency of these models so we're not just throwing tons and tons of energy at them, energy which could potentially have another use. Although you could argue that in a free market, energy only flows to where value is being created, so it's never going to be wasted. But I also think that as we're firing more energy (52:36) into these models and getting more information out, we're still kneecapped, not by the models or the amount of energy, but by ourselves, because there's the speed of discovery and then the verification of the information coming out of these models. (52:55) And I think that AI is accelerating the creation of ideas, research pathways, code, and scientific claims at a pace that we as humans just cannot match. Verification is slow, meticulous work: checking all the assumptions, validating the experiments, and actually reviewing all the code. And that still happens at human speed. (53:14) And so when you speak to a lot of coders, they're saying, awesome, it's great that you're a bank and you've just spat out all of this code to create a whole new system, but we've now got to go through and read all of that code and make sure it actually does what it says it's meant to be doing.
And so I think it brings up a couple of questions I'm curious to get your thoughts on: what happens when the rate of ideation massively outpaces the rate of validation? Does our progress almost stall a little because of this backlog of amazing ideas, where we (53:44) don't quite know which avenues to go down because we just can't keep up, as humans, with how much information is coming at us? Well, to this point, I have a stat for you. A Google search prior to AI, or even today if it's not using AI, uses 0.3 watt-hours of energy. But if you take Gemini or ChatGPT or any of these large language models and put in a query, it's 3 to 5 watt-hours, roughly a 15x increase, for what we would refer to as a click. (54:20) So, historically, in 2010, if you went and did a Google search, you were consuming 15 times less energy than you are by going into ChatGPT, typing in your question, and hitting enter. Now, the response you're getting back is, I would say, on the order of 15 times better. (54:40) But what that doesn't speak to is, and we were taught in school there are no bad questions, right? There are no bad questions. But what if people are asking really dumb questions, things that don't require so much comprehension to get a simple response? And I think where we're at now is that the default is that you're not going to Google. I don't go to Google for nearly anything.
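As a quick sanity check on the energy figures cited above (0.3 Wh per classic search, 3 to 5 Wh per LLM query, both as stated in the episode; published estimates vary), the implied multiplier can be computed directly:

```python
# Energy per query, in watt-hours, as cited in the episode.
google_wh = 0.3                       # classic Google search
llm_wh_low, llm_wh_high = 3.0, 5.0    # one large-language-model query

ratio_low = llm_wh_low / google_wh    # lower bound of the multiplier
ratio_high = llm_wh_high / google_wh  # upper bound of the multiplier
print(f"An LLM query uses roughly {ratio_low:.0f}x to {ratio_high:.0f}x "
      f"the energy of a classic search")
```

The range works out to roughly 10x to 17x, so the "15x" figure quoted in the conversation sits near the top of the cited range.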

### [55:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=3300s) Segment 12 (55:00 - 60:00)

(55:04) I always go to one of these AIs, whether it's Grok, now Gemini, or ChatGPT. That's the first place I go if I want to find something out. I don't go to Google anymore. I'm curious if you... No, almost never. And to be honest, even when I do go to Google, most of the time the answer I'm looking for is given in the AI summary at the top anyway. (55:28) So we're still getting the results we're looking for; AI is bringing that information to us these days, as opposed to our having to go and scan tons of pages. But on transparency, and this I do think is really interesting: it may provide all of the links, but we may only be getting the appearance of transparency. It gives an amazing output with all of this hyperlinked text that says, this is the answer to the question you're looking for. But I think that sometimes transparency isn't necessarily trust, and we can put so much trust into these models even when they're giving us a completely false story, a bit of a facade. (56:05) And so it comes back to this question of, of course these are improving, but how much trust are we putting in these models, just expecting and getting used to, oh, that output's pretty good, I'm just going to use that output? The kids in college or high school or wherever are using it to write their reports, and then I don't put it past the professors to then take the reports and run them through AI to provide the feedback. (56:34) You have the AIs writing the reports and giving the feedback, and the humans are just kind of the paper pushers. Well, there's this meme. You've probably seen it. It's got a woman at her desk sending an email to her boss. She types into AI, gets this amazingly worded email that explains her opinions and this and that, and then she sends it, feeling accomplished.
(56:58) And then you see the other side of it, which is the boss receiving the email. He takes the email, puts it into AI: what are the key points she's trying to highlight? And it condenses 3,000 words down to three sentences. And so everyone is fluffing everything up, and then everyone's taking that fluff and compressing it back down again. (57:14) And you're just like, what is happening? AI slop. I keep hearing about AI slop. And it's real. The AI slop is real. Yeah, I want to just highlight this real fast. So after the comment from Jensen about China potentially beating the US to AGI because of its energy infrastructure, this article came out, I want to say on the same day. (57:38) This article is from the 19th of November of this year, 2025, from Bloomberg: "US to Own Nuclear Reactors Stemming From Japan's $550 Billion Pledge." Check this out, Seb. As I'm scrolling down, there are key takeaways by Bloomberg AI. You don't even have to read all of it, which is probably AI slop beneath it; you can read the AI summary. (58:09) And it says, "The US government plans to buy and own as many as 10 new large nuclear reactors that could be paid for using Japan's $550 billion funding pledge. The funding pledge is part of a push to meet surging demand for electricity, including for energy-hungry data centers that power artificial intelligence. The Trump administration has set a target to get 10 large conventional reactors under construction by 2030." (58:29) So it seems like the US understands the limitation, which is energy infrastructure, and it's trying to do things from a policy standpoint to reinvigorate some of these plants. I saw that Three Mile Island, they're going to bring that back online. And I think this is the thing I really want to talk about. (58:48) The years and years of "ESG, energy equals bad" are over. This whole climate-change, energy-is-bad thing, where if you consume any sort of energy, it's bad.
All of those talking points are just going by the wayside, because the key players and string-pullers of the world have figured out that if they're going to win this next race, the race for intelligence, it requires more energy, not less, and the old narrative just seems to be dead on the vine. (59:22) What are your thoughts, Seb? I could not agree more. And I just think that we have this society that seems to have this idea that consuming energy, as you're saying, is bad, when in reality, life consumes energy. If you look at any chart out there, there is something like a 99% correlation between GDP per capita and energy consumption. There are no low-energy-consuming, high-GDP countries. They just don't exist. (59:46) And so I think that life naturally requires energy. However, there is a discussion to be had about the difference between consuming energy and environmental destruction. There are obviously ways in which you can decimate the environment, whether it's a lot of these lithium mines and whatnot

### [1:00:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=3600s) Segment 13 (60:00 - 65:00)

trying to obtain heavy metals, or even some of the various fossil fuel approaches. (1:00:10) And I don't necessarily want to have an opinion on that, but I think it's really interesting seeing the nuclear narrative starting to shift, because it's unbelievably important to me. I read a book a few years ago called Atomic Awakening, and it dove into the world of nuclear energy. One of the stats that stood out to me, and I just went and found the information, is about how we tend to think that nuclear is unbelievably dangerous, and that the reason we don't use it is that it's killed so many people throughout history. That (1:00:39) could not be further from the truth, and I think it's because we see things like Chernobyl and Fukushima and we hear about radiation poisoning. In reality, one of the stats the book looks at is deaths per terawatt-hour of energy produced. For coal, there are around 25 deaths per terawatt-hour, because of the pollution in the air, the people working in the plants, the coal mining. In the oil industry, it's around 18 deaths per terawatt-hour. (1:01:04) The gas industry is 3 deaths per terawatt-hour. Hydropower is 1.3 deaths per terawatt-hour. Nuclear is 0.03 deaths per terawatt-hour. We're talking about a minuscule amount in comparison to every other type of energy source. And so I think it's awesome to see the narrative shifting. (1:01:26) I think the biggest thing now is seeing the policy and legal side of things shift, because I think nuclear is being kneecapped by all of the legislation that has been rammed through the legislative system. Nothing more to add. Couldn't agree more. Did you have a final topic that you wanted to discuss, Seb? I have a couple more topics, but we can always leave those for another time. But there is one topic I'm curious to hear your thoughts on.
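To put those deaths-per-terawatt-hour figures side by side (the numbers are the ones cited in the conversation from Atomic Awakening; other published estimates differ somewhat), the ratios relative to nuclear can be computed directly:

```python
# Deaths per terawatt-hour by energy source, as cited in the episode.
deaths_per_twh = {
    "coal": 25.0,
    "oil": 18.0,
    "gas": 3.0,
    "hydro": 1.3,
    "nuclear": 0.03,
}

nuclear = deaths_per_twh["nuclear"]
# Print each source from deadliest to safest, with its multiple of nuclear.
for source, rate in sorted(deaths_per_twh.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{source:>8}: {rate:6.2f} deaths/TWh  ({rate / nuclear:7.1f}x nuclear)")
```

On these figures, coal comes out at roughly 833 times the death rate of nuclear per unit of energy, which is the scale of the gap the speakers are pointing at.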
(1:01:50) And it goes back to AI again. It's this idea of wisdom and diversity of thought. In my mind, wisdom has never really come from everyone thinking the same way. It emerges from contrast: hearing radically different positions, holding them together, and discovering new insights in the space between them. (1:02:14) Throughout history, we've seen breakthroughs in the various sciences always come from the fringe. It is not consensus; it is not from the center. It's from various individuals who thought outside the box and noticed something that others overlooked. (1:02:34) And I think what is interesting is that AI is different, in that we're feeding all of these models the same information. On top of that, AI, from the way I understand it, is built on weights, and if an idea carries a low weight, even if the idea is brilliant, AI doesn't necessarily reproduce it or talk about it in its output. And so if children are growing up learning from these centralized models, I think they're also inheriting this same baseline worldview, instead of learning from tens of thousands of unique teachers, all with unique life experiences, all with a different (1:03:08) intellectual starting point, sharing that with their students. I think that's what creates wisdom and curiosity, as opposed to this uniformity of all these kids learning from the exact same models. So I'm curious: if we fast forward 10, 20, 30 years, what if these kids are all being taught by AI, and they're all being fed the same information? What happens to innovation? What happens to wisdom and knowledge? I'm curious to hear your thoughts on this.
(1:03:36) Yeah, I mean, my conversations with my wife on anything AI almost always come back to this discussion point that you're bringing up. It really comes down to: are we training the AI, or is the AI starting to train us? And then the question is, what would it be trying to train you on, if it was trying to train you? I think the answer is that it wants more novel insights about what it doesn't know. (1:04:06) It's going to try to lead you into those domains. Um, which is scary, that it would be leading you that way. But in more general terms, I just think the challenge you're really facing is the one we brought up before, where everybody's using AI to write their papers or do their research, and then they're handing that in, and it's just a bunch of AI slop replacing deep thought. (1:04:35) And I think the other concern you get, Seb, is that as the world fills with AI, it becomes harder and harder to compete, to stand out, or to provide novel insights, because the competition is so fierce when anybody is armed with AI. I don't know what this does from just a human motivation standpoint, because I think you're going to have a lot of people that just feel it's not even worth their time or effort to try, because somebody armed

### [1:05:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=3900s) Segment 14 (65:00 - 70:00)

with AI is just going to kick my butt, or I just can't stand out, or if I can stand out, it's only going to last for three days before somebody else comes into the market with more competition and (1:05:12) erodes away whatever competitive advantage I had. Where I would push back is that there are some industries out there where you can still provide value if you're servicing human beings. And where they aren't, or at least where they appear not to be, is in soft services, digital services. That space seems crazy competitive. (1:05:46) But in providing service from a physical standpoint, for example, if you want your yard mowed, if you want work done around the house, if you need your plumbing fixed, a lot of these skills are ones that I think people in the United States have really veered away from, looking at them and saying, "Oh, that's not going to pay me a lot, so I'm not going to go work in those industries." (1:06:11) I think that is ripe for disruption and an opportunity for a lot of people to actually make quite a bit of money, especially if they do it with high-quality work. But it involves physical labor. It involves people getting out in the physical space and doing things, not sitting behind a computer clacking on keys. (1:06:35) But I would love to hear from the audience: if you're listening to this and you've got comments on this particular topic, I would love to hear what you've got. But sorry, Seb, to interrupt you. I want to hear what you have to say too.
No, and you make a really interesting point, and I'm curious to hear your reflection on this. I've spoken to many individuals in the Bitcoin space who have come from traditional finance: they used to work in consulting, in the banking sector, for CPAs, and in (1:07:00) various other financial industries. And what I find really interesting is that they're actually stepping back from that sector, because the white-collar worker, the knowledge worker, is being completely disrupted by AI. (1:07:18) They're stepping back and looking at where they can direct their time and energy into something that's not going to be replaced immediately or in the foreseeable future. And one of my good friends in the Bitcoin space, who I speak to bi-weekly, is saying, you know what, I'm actually looking to buy a painting company with a whole bunch of painters, or a storage company, (1:07:37) things that we're not going to see overtaken anytime soon. And so if you have a handful of painters or a handful of plumbers, or you've got a trade company, I think those companies can provide a reasonable lifestyle. You don't need to be worth 50 million or 100 million. (1:07:55) It's like, what do you want? To be able to provide for your family, afford a house, and live comfortably? And I think sometimes the financial world and social media say we need more. In reality, I think you can live a relatively comfortable life on a decent income of low-to-mid six figures through one of these more manual, physical trades. Yeah.
(1:08:14) I mean, the counterargument that somebody from tech is going to immediately bring up is humanoid robots, which we didn't even discuss during the show. But at this moment in time, in 2025, in any humanoid robot video I've watched, the robot goes over to empty a dishwasher, it literally takes it five minutes to put a spatula in the dishwasher, and then it fumbles all over the place. (1:08:34) So, that could change very quickly. But I think, if I'm going to hire somebody to do something around the house or whatever it might be, I'm going to a human and not a humanoid robot, at least not anytime soon. So, well, you know, I think we've naturally got this world where there's a lack of connection. (1:08:58) People want to interact with people, and so I'm noticing, and I think it's an awesome swing, that at the majority of companies today, you cannot speak to someone on the phone; you're getting an AI bot through the chat. But the companies that do say, "Hey, you know what? Here's our number. You give us a call and you're actually going to get a person," they're starting to see a lot of success. (1:09:15) And so, it's really cool to recognize that with technology, the pendulum always swings. And I think we've swung to this point where we've almost replaced ourselves in many ways, or tried to. But we're recognizing that, first, AI and a lot of these technologies are not people, and people know that they're not people. And second, we're missing that connection. (1:09:34) And so I'm curious to see, over the next few years, does that pendulum continue to swing back a little bit more towards center, where people recognize the importance of physical connection, spending time with friends, actually having a number to call to talk to someone about any issues. Okay, I've got one final surprise before we wrap this up.
(1:09:56) While we were recording today, I took a screen grab of Seb and me having our conversation, and I fed it to the banana-rama,

### [1:10:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=4200s) Segment 15 (70:00 - 75:00)

whatever the heck it's called, uh, the Gemini model, Nano Banana. Thank you, sir. And I asked it: what would these two podcasters look like if there was a camera behind them and it took a picture while they were having the conversation? Okay, now, the screen grab that I got is probably one of the most flattering pictures of Seb that you will ever see. (1:10:34) This is such a bad picture. Check this out. Okay. So, here you are, mid-blink and looking up, and I'm just stone-cold staring at the camera, and it's just the video feed of him and me on the call. You ready to see what it thinks a picture taken from behind us looks like? Okay. So, for the person who can't see this, it's not bad. (1:11:09) There's a lot right with this picture. Seb, it did not reverse your room, right? Your room is there, and it's showing that you're looking at a computer. The back of your head and all that looks pretty normal, but you're talking to yourself and not me. That's what I do a lot. You're talking to yourself a lot. Oh, this is interesting. Look at your background. (1:11:36) Your background is my background. Look at that. Have you seen that? It's also given me your headphones, but not in the... Oh, that's right. Yeah. Look at that. That's wild. And then my picture is really jacked, because the microphone is literally behind me. And then I'm talking to you, which is correct. (1:12:01) And it's the image of you looking forward. Okay. So, that all looks correct. It's pretty close. Okay. So, not bad, but there are a couple of hiccups. Now, if I went in there and pointed these things out, I think it would actually get it all correct if I went back and forth. (1:12:20) I mean, obviously, I didn't have time to do anything other than quickly type the prompt in there, and that was the first go-round coming back to me. So, pretty wild.
Pretty wild, but not quite right. But it's coming along very fast. Similar to your watch thing, it had my monitor. It has my... Get the heck out of here. No. Really? Hold on. Let me pull this back up. My monitor. (1:12:44) And that's why I'm just like, what? How... my monitor? Get the heck out of here. That's the monitor you have. That's not my monitor. Yeah, dude. That's weird. That is definitely not my monitor. In fact, I have three screens here in front of me. In fact, I get comments online: why is he looking off to the side? Well, I'm looking over at my second or third monitor to pull up all the things on the fly during the show. So, yeah. No, my monitor's way off. (1:13:09) It looks like my monitor's on the floor, too. You're really... You're stacking sats. You don't have a chair. Oh, yeah. That's right. So, it did get that correct. Yeah. Pierre Rochard... the AI knows I don't have a chair. I'm sitting in the chair. This is a little ways off. It will get better. Wow. Uh, Seb, I love this. This was so much fun. (1:13:35) If you guys enjoy this format, and I enjoy this format, but maybe the audience doesn't, please tell us in the comments on X. If you're on X, let us know, because if you like it, we want to keep doing these types of things. And Seb, thank you so much for your comments and what you brought to the show today. (1:13:54) Give people a handoff to anything you want to highlight, Seb, and where they can learn more about you. And thank you so much for joining us today on the show. Absolutely. And I would start by saying, if you enjoyed this kind of discussion when you listen to it, feel free to post a comment with anything you think is happening in the world that's interesting, for the next time we record in this style.
We'd love to bring it up, because I think there's so much happening that a lot of it slips (1:14:19) between the cracks, and the world is a fascinating place with incredible things that people are working on. But, um, no, you can find me at Seb Bunney, and Bunney is b-u-n-n-e-y. I'm @sebbunney on Twitter. I still go by Twitter; X just (1:14:36) doesn't resonate with me. But, no, I think you can find me at sebbunney.com and on Twitter, and my book is The Hidden Cost of Money. And yeah, I just really appreciate you guys listening, and thanks for having me on, Preston. All right, everybody. Thanks for joining us, and until next time. For the first time in human history, we're developing something that can eventually outcompete human labor at all levels. (1:14:55) We're going to be developing a system that can do absolutely everything better than a

### [1:15:00](https://www.youtube.com/watch?v=mMUNkgjwlio&t=4500s) Segment 16 (75:00 - 75:00)

human can. It can be a craftsman at all times. There'll be no sloppy work; it'll do things to perfection every single time. Not only that, it can work probably 22 to 23 hours a day. And over time, not only will it be as capable as a human, it'll probably become more capable.

---
*Source: https://ekstraktznaniy.ru/video/44870*