# OpenClaw and Self Sovereign AI w/ Alex Gladstein & Justin Moon (TECH015)

## Metadata

- **Channel:** Preston Pysh
- **YouTube:** https://www.youtube.com/watch?v=X7ua58iwcd4

## Contents

### [0:00](https://www.youtube.com/watch?v=X7ua58iwcd4) Segment 1 (00:00 - 05:00)

(00:00) So if you think of OpenClaw as a story, and it is a story, that's why it went so viral, it's the story of what one individual can do with the help of vibe coding. It was basically one guy, and then eventually he got far enough that a big voluntary open source community arose around it. So it's very inspiring to see what one person can do, and to me OpenClaw is more of an idea than an actual product. (00:23) It shows us the idea of: what if an agent has its own computer and you can talk to it however you want? And you're going to see this big renaissance of stuff that can't be controlled, that is customized to what the user wants. Guys, it feels like the world is moving at 10x the speed and pace that it was just a couple months ago. I don't know if you guys are feeling the same way, but things are accelerating. Oh my god. (00:50) I listened to the show with Pablo and he said it's compressing time, and I'm like, that's just how it feels. Yeah. By the way, if you're listening to this podcast and you haven't listened to the show from two episodes earlier where we were talking about the clawbot, or it's called OpenClaw now with the rebranding, (01:12) I would highly encourage you to go back and listen to that conversation as well, because it's going to be pertinent to some of the stuff we're talking about here. Yeah. And I just spent three days with Pablo. So, you know, I applaud you for bringing him on, and I'll be sharing some of his insights as well from the work we just did together over the last few days. Amazing. (01:32) Amazing. So, Justin, where do we even start this conversation? Because the conversation I had with Trey and Pablo was already going 100 mph. And for the listener, I think their takeaway might have been, "Oh my god, what is happening? I don't even know what they're talking about right now.
(01:55) " So maybe we throttle things back and bring everything up to speed. So take it away. I agree. It was a great episode and I really enjoyed it. I could keep up with it only because I know them, and I really know Pablo well, but for the drive-by listener it was like trying to board a fully moving train, like one of those Japanese trains. (02:11) It's asking a lot. So I want to help explain, at least as I understand it, what the hell is happening. And if you understand clawbot, or OpenClaw, which is the thing in the news right now, you kind of understand what's going on. I was thinking about how to break it down into basics, and I realized you have to introduce a lot of foundational ideas first that most people don't quite get, and that impairs their ability to understand what's going on. So I'm going to introduce about 10 ideas, I have a bunch of notes here, that I think you have to understand in order to (02:39) really understand what's going on. But I'm not going to use any jargon. I'm going to try to simplify it and make it understandable for people who don't know anything about this. Okay. So that's my goal. It's a bit of a highwire act, so it might not go well, but we'll see. (02:51) Look, real fast before you kick that off: would you say, from a really zoomed-out-from-space kind of view, that what all the excitement is about right now is that everybody's accustomed to using cloud-based large language model AI. They type into a chat and they get an answer back. (03:14) But now you're at this pivotal point where the tech is so advanced that people can run it locally in a way that's actually going to be quite useful. Yeah. And we haven't had the hardware, software, and models to do that yet.
That's really the clear break in what we're experiencing: now people can run it locally without even tapping into a cloud-based provider. (03:39) The significance of OpenClaw to me is that it's a big step towards self-sovereign, user-controlled AI. It's not a full step all the way there, but it's a big step in that direction, and from a couple different angles. (03:56) And so I want to try to tease that out for people, and I need to introduce some basic ideas just to make it make sense. We've talked a lot with HRF about the importance of vibe coding, and that's going to be one of the takeaways here: vibe coding enabled this, and it's going to enable a heck of a lot more over time. So, just zooming out: what is an LLM, right? We've got to start from the very basics. (04:14) What is an LLM? To me it's a new way of using computers. Traditionally, we use computer programs, right? Desktop apps and stuff like that. A computer program is like a recipe. It's a recipe for a computer. (04:31) So it's something that's typed out with exact instructions by a human, and it tells the computer exact steps to follow to do something. So anything that can be broken down into steps can be represented in a traditional computer program, like arithmetic. Traditional computers are very good at arithmetic. They're very bad at telling jokes, because you can't encode the steps of a good joke. (04:50) In a sense, what makes it funny is that it's unexpected, right? Zooming out, one way to think of an LLM is that it's a new type of computer program that's bad at everything traditional computer programs were good at, like arithmetic, but good at all the things they were bad at, like creating art or telling a story or coding.
(05:07) So that's kind of like the high level

### [5:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=300s) Segment 2 (05:00 - 10:00)

thing is, I want to frame this as: in a sense, OpenClaw is a new type of computer to me. That's what it is. It's a new way of using computers. It's a new type of computer program. And I'm assuming you've all used an LLM but have no idea how they work. So there are kind of three steps in an LLM. The first is called pre-training. What it does is download all the text on the internet and compress it into a single file. (05:25) That's the fundamental thing of what an LLM is. You take all the information on the internet, you try to lose the least important parts of it and only keep the most important ideas and principles and facts and knowledge. What you get at the end is a file that, given half of an internet document, can complete it. (05:44) It can do a best-effort job of getting half of a Wikipedia article and writing the second half. That's all it can do. It has a lot of intelligence, but it's not actually useful, because when does a normal person need to complete an internet document, right? And that file, it's a file. That's what a model is. If you hear about a model, (06:01) it's a file, right? If you've heard of weights, weights are what's in the file. That's what weights are in AI. And an open model versus a closed model: an open model is one where you can download that file, like DeepSeek or Kimi. Generally, many of them are Chinese. And the American ones are closed. Generally, you can't download the file. (06:20) So the closed ones are generally a little smarter and the open ones more self-sovereign, but the closed ones are generally American and the open ones are often Chinese. But it's starting to get mixed up. Let's pull on that thread. Yeah. Because I think for somebody who's hearing that, it makes no sense (06:38) as to why the Chinese are the ones releasing these. I have an opinion on this. I'm very curious to hear your opinion though.
Why are they the ones releasing these open models? In the US, where you would think that would be taking place, you're not seeing anything of the sort. Why is that the case? To me, the biggest part is the capital structure of the companies doing it, right? OpenAI and Anthropic have these huge capital structures, and they need to make a lot of money fast, and they're on the frontier, and they need barriers to prevent competitors, right? And so not releasing the model weights is the (07:07) biggest thing, just from a business point of view, no extra thinking needed. I think that makes sense. Another thing is, I bet the CCP likes that there are these open models out there that get embedded into things like Airbnb. Airbnb has come out and said, "Hey, we use Qwen for all kinds of stuff. (07:20) It's great." Right? It's a way for the CCP to basically embed Chinese values in American tech software. And also, you know, America is the leading one, so it's easier to follow. The Chinese economy over the last 10, 20 years has done a lot of imitation; they're amazing at imitating, right? So that's another thing: it's something they're already very good at, reverse engineering. Those are three things I'd point to. Alex, do you have anything to add there? Yeah, I would just say that at the moment they judged that they could not compete on the proprietary side, and they could (07:51) both introduce maybe some chaos and opportunities for themselves by going this route. However, going that route, it's kind of like a Sputnik thing; as we know, it has opened a whole new door, and I actually think it has been good for the world at large that you have other geopolitical powers pushing open source options, because it's going to eventually force the American companies to do the same.
(08:18) So you're going to have pressure, just like you had pressure to add encryption to devices and apps after the Snowden files. Over time there's going to be pressure on American companies, despite profits, to have open arrangements and open products, and we'll get to this at the end of the recording, but hopefully also privacy-protecting ones too. But yeah, that would be my take. One small note I want to recap from a talk given at our yearly AI summit in San Francisco: this guy Romez mentioned how a year ago we thought there would be a runaway takeoff leader in AI, and that didn't (08:45) happen. They're all getting closer and closer. It's getting more and more competitive, and the closed models and the open models are starting to get competitive. There was a bigger gap, and now it's getting very competitive. So this is great for user sovereignty. (08:58) It's trending in a way where you don't have a single overlord, and it's a very competitive dynamic, which I think is great for freedom. One of the things that I think also makes it more competitive is when you start running these models that are not at the forefront of intelligence. Mhm. (09:19) But you combine these lesser models with persistent memory. Yep. Run locally. Yeah. The performance that you get, for what it is that you need, is actually a lot better than a premier model, because it's continuing to learn and it's not forgetting all those past interactions like you get with a frontier model that has a new context window every single time you open it, with very limited memory. (09:43) So that persistent memory is one of the things that I think is massive for self-sovereignty. And by getting away from these large language models that are just sucking up all the data and potentially using it against you, you're going to get better local performance.
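To make the persistent-memory idea concrete, here is a minimal Python sketch. Everything in it is illustrative: the file name, the header format, and the helper names are invented, not how any particular product actually stores memories.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("memories.json")  # hypothetical local store on the user's own disk

def save_memory(fact: str) -> None:
    """Append a remembered fact locally so it survives across sessions."""
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    memories.append(fact)
    MEMORY_FILE.write_text(json.dumps(memories))

def build_header() -> str:
    """Fold saved memories into the hidden header sent before each chat."""
    memories = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    lines = ["You are a helpful assistant."]
    lines += [f"Remembered about this user: {m}" for m in memories]
    return "\n".join(lines)
```

The point is that the memories never leave the machine; only the assembled header is handed to whatever model, local or remote, does the inference.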
The thing that I was curious about, you know, on that original question: I've asked the AI this particular

### [10:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=600s) Segment 3 (10:00 - 15:00)

question of why we're seeing the open-source models coming out of places we would least expect. And it gave me a really surprising answer, in (10:13) that they're looking at the game theory and where this is all going. What they're trying to do is ever so slightly steer the results you get out of the model. Let's take an example: Tiananmen Square. If you're training the model, you can either have that as part of the initial data input before it compresses everything into the model, and it adjusts the weights ever so slightly. (10:42) And if those are the models that everybody starts to build on and run locally, you get somewhat different results than if somebody feeds it the base, you know, everything that's ever been written on the internet, minus the things they really don't want in there, which are removed when the model is compressed. (10:59) And I found that to be really interesting, and, you know, a lot of foresight if true, to make sure that you get your model out there. Now, at the end of the day, I can run that model locally. I can ask it a question that maybe isn't in its weights. (11:19) I can say, you know, that's just wrong, that's not truth; go out there and research more facts on the internet, using Tiananmen Square as the example. And then my local model knows it, even though it's not part of its weights, because I've steered it in a different direction. So, in the end, it doesn't matter. Yep. (11:36) But I want to make one point here before moving on. We did a hackathon recently where we put together activists from HRF with freedom tech developers from my Bitcoin meetup in Austin, and one of the interesting projects involved an actual Tiananmen Square student organizer. Jen Lee, I forget his last name. Dr. Young Jun? Yeah, Dr.
(11:54) Yan Jun Lee. They did a project where they basically made a benchmark comparing all the different LLMs on human rights questions, like Tiananmen Square, right? Which is very interesting, and we look forward to that getting published. Yeah. Let me move on because I have a lot here. So I'm describing where an LLM comes from and how it's used, right? I talked about pre-training: you take the internet and you get it into a file. (12:10) Then there's a thing called post-training, which turns it into a useful assistant. It gives it a bunch of examples: here's how to be useful to a person, here's how to be a coding agent, right? And so now you have something that goes from being able to complete a document to being able to answer questions, be your therapist, write some code, right? And so that's how the model happens. That's it. (12:28) So then the question is how do you use it, right? And the word for that is inference. You've probably heard that word. It took me a while to remember what it means. (12:40) Inference just means when the model is run, right? And this is something you can hire someone to do in the cloud for you, like ChatGPT or Anthropic, or you can do it on your own computer if you have the computer for it. You can use something called Ollama, right? And what inference is: you run that model, and you can put text in and you get text out, right? So it's just like the ChatGPT interface. That's what's happening behind the scenes. Text in, text out. (12:58) And the one problem with open models is you need about a $20,000 computer to run them, right? So that's one of the tough things right now. It's a big technical barrier to real individual user sovereignty in AI, and it's something we're all working on. That's what inference is. Okay. (13:10) So now I want to talk about another word that's very important, maybe the most important one, called context.
You've probably heard this word context. Justin, I'm sorry to slow you down. People heard on the episode with Trey and Pablo that Trey was running his off of a Raspberry Pi. And so they're like, "Well, hold on. You just told me it costs $20,000 to run it locally. (13:28)" And I just want to explain to the listener. The way Trey's OpenClaw works on his Raspberry Pi, which is, you know, three or four hundred bucks, is that he's making API calls to Claude or, you know, some AI provider to do the inference on their cloud, and then it gives a result back. Right, so he has an agent, which we'll get to, running on a Raspberry Pi, but the inference, the thing that's actually doing the smart AI stuff, is in a cloud somewhere. So it is a step towards user sovereignty, because what ChatGPT was trying to get us to do a year ago is run the agent in the cloud too. So this is like halfway there. It's a huge step forward, right? Running the agent (14:09) locally. It can save memories locally, and you have the option for certain things to use a local model too. So it's a great half step forward. I mean, it's 10 steps forward, but it's not all the way to the goal. It's a huge win for open source, and it changed the game. So yeah, let's go. Okay. So we defined the word inference. (14:25) That's one word you need, and context is maybe the most important one. So context: it took me a while, and I'm very technical, it took me a while to actually understand what the heck people were saying. It probably took like six months to actually understand it. And the key thing to understand is that LLMs are something we call stateless. (14:43) Every time you interact with an LLM, it's starting from scratch. We talked about memory earlier; on a deep technical level, there is no memory at all. Every time you interact with it, you start from scratch. All it remembers is the pre-training and the post-training. That's it. Okay. So if me and Preston use ChatGPT 5.
(15:01) 2, we are getting exactly the same model, right? If there are some memories that are specific to Preston, they come from elsewhere.

### [15:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=900s) Segment 4 (15:00 - 20:00)

They don't actually come from the model. We get the exact same thing. So that's an important thing to understand. And so, Justin, would it be safe to say that with this context, you and I use the same model, but the header that's put into the start of that chat is what's different? Exactly. So if you have past memory, like "Preston likes short answers, he doesn't like a long answer," (15:26) that little snippet or header is inserted, and you don't see it getting inserted into the context window, but it's in there. And so the way we might get a different answer, from its past memory of us and how we use it, is that header it's seeded with before you enter the context window. Exactly. (15:49) So if me and Preston have the same model and we're getting different answers, and you can see this yourself, right? Let's say you use ChatGPT. If you're in a long conversation, it will remember things from earlier, but it usually won't remember things from different conversations, though every once in a while it will, right? So that's a big question: (16:07) if LLMs are stateless, how are these two things that we've all observed true? Right? And the answer is that every round of conversation, let's say you open a ChatGPT tab and go through 10 back-and-forths, right? On the 11th one, it doesn't just send the question you asked the 11th time. (16:26) It sends that, it sends the 10th and its response, the ninth, it sends the entire history every single time. And there's also one extra piece that you don't see, which is called the system prompt. This is the header that Preston was talking about. Think of it as like the Ten Commandments.
This is something that God, you know, the developer, basically ChatGPT or sometimes the user themselves, gets to put in there, and it's instructions for how the model should behave, which the model doesn't always follow, but it tries to. And it's also important that it be the Ten Commandments and not the 10,000 commandments, right? What we were doing with AI a year ago was the 10,000 commandments: we'd write (16:56) a whole essay at the beginning, and we'd basically overload the model and it couldn't do things. And so a lot of the development over the last year that has enabled OpenClaw and things like it is that we figured out a way to only give it ten commandments and basically derive the extra things, do just-in-time learning to figure out the other things without overloading it. So what context means is the conversation, the entire conversation. (17:20) Everything in that session is what context means. It's everything that has been said previously, including the magic system prompt at the top. I want to pause here and really foot-stomp why this is such a big deal. You're about to see commercials at the Super Bowl from Claude basically banging OpenAI over the head, because OpenAI recently said they're going to start doing advertisements in their service. Let's really pull on this thread and go deeper. If you're OpenAI and (17:51) you have an advertiser that's doing really well with you, because they've got a high-margin product and you're able to convert on it, OpenAI could potentially, and I'm not saying they're going to do this, but there's an incentive for them to do this, start blindly inserting into the header things that could steer the user toward wanting the product that's being advertised. (18:18) Yeah. And you would have no idea that that's in the header. Yeah.
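Here is a tiny Python sketch of what "stateless" means in practice. The model stand-in below is just a function of its input; any continuity has to come from the caller re-sending the system prompt plus the whole history on every turn. The names and prompt format are invented for illustration.

```python
def fake_model(prompt: str) -> str:
    """Stand-in for an LLM: a pure function of its input, with no memory."""
    return f"(saw {len(prompt)} chars of context)"

SYSTEM_PROMPT = "Answer briefly."  # the hidden header the user never sees

def chat_turn(history: list, user_message: str) -> str:
    """One round of chat: the ENTIRE prior conversation is re-sent."""
    history.append(f"User: {user_message}")
    full_context = "\n".join([SYSTEM_PROMPT] + history)
    reply = fake_model(full_context)
    history.append(f"Assistant: {reply}")
    return reply
```

Ask the same question twice and the model sees a different, longer prompt the second time; that growth is exactly why context is scarce and eventually forces a restart.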
And this goes to the whole point of why we're having this conversation, which is that local AI is going to be very important for you to see the world clearly, because otherwise you won't know that you're being indirectly, subliminally steered in a certain direction; you have no idea what's going into that header. (18:41) Yeah. The AI experience will get steered by something. Do you want it to be an advertiser? A big tech company? Do you want it to be another government, or do you want it to be you? Right? We want it to be you, and that's what Clawbot is a step in the direction of. (18:58) Alex, do you have anything to add on that particular point? Because this is really why you're so passionate about running local AI, right? Well, let's let Justin finish the context, no pun intended, and then I have my piece, and I think it'll help pull things together. Yeah. So think about it from a Bitcoin point of view. As Bitcoiners, we understand scarcity; that's one mental model Bitcoiners really get. (19:22) Apply that to AI: what's scarce, right? In training, you need data, you need energy, you need computers, right? In inference, when you actually run it, it's context. Context is the scarce thing. The longer the conversation gets, the more confused the AI will get. And at a certain point, you run out of context and you just have to start over. (19:38) That's called compaction. And it makes everything worse, right? So that's the big engineering battle. And it's traditional engineering. It has nothing to do with AI, really; it's traditional software engineering. Over the last year we've all been trying to figure out how to get better at managing this. And that is what has led to good AI agents now that we didn't have a year ago. (19:55) It's a big part of it, right? The models got smarter, but the context engineering also got way smarter. So I want to

### [20:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=1200s) Segment 5 (20:00 - 25:00)

discuss next what an agent is, right? So now we're getting close to OpenClaw. OpenClaw is an agent, right? An agent to me is a marriage between these new and old computer programs. The old stuff is, you know, how you control your desktop computer or how you run a browser, stuff like that. And the new one is an LLM, which can generate text, is really smart, and in some (20:20) sense has all the intelligence of the internet baked in. Right? How is it a marriage between these two things? An agent is the thing that makes requests to an LLM. So the ChatGPT website, in this definition, would be an agent. Claude Code, which is a desktop or terminal program you can run that will write code for you, or Replit, those are agents, right? So it's something that makes a bunch of requests to some AI and also has the ability to use what we call tools. A tool is how it does something. All an LLM can do is spit out text. It can't do anything (20:50) in the world. So the question was: how do you make something that can only spit out text control a browser or do a web search, right? How can it do a web search? And so we invented this idea called a tool. A tool is something you put in the system prompt. There's a special marker that means "I want you to search this on the web." (21:08) Right? So think of it like a sentence that says "SEARCH THIS" in capitals, then a question, and then it ends with "SEARCH THIS" in capitals as well. If the LLM sends that back, the agent will say, "Oh, I know that's a special marker. (21:25) I've got to do something special with that. I'm not going to show that to the user. I'm going to go fire up Google and do a web search, and then I'm going to send the results back to the LLM." So this is what an agent does.
In the system prompt, you teach it tools that the agent software itself will intercept, to do special things like search the web, control a browser, send a message on Telegram, and all the other things that OpenClaw does. That's called a tool. And once we had that, we had (21:50) a way of augmenting an LLM to be able to do stuff in the real world. You've maybe heard of MCP. MCP was something that blew up about a year ago because it was a way to publish a bunch of these tools and share them. In the beginning, ChatGPT tried to dictate what tools you could use, right? They said, now we have our tool and you can only use this one, right? And everyone was like, screw that, we want to use any tool we want. So MCP was invented as a way to share tools, so the user can choose which ones (22:13) they want. And the problem with it: have you ever heard of just-in-case learning versus just-in-time learning? Just-in-case learning is like getting a college degree to solve a problem. Just-in-time learning is: you have a problem, you go to YouTube and learn how to solve that problem, and you solve it, right? A year ago, we were doing just-in-case prompting with MCP. (22:33) We'd say, "Here are 10,000 commandments, just in case you need them." And then by the first round of conversation, the AI is already kind of confused, because you've told it way too much, right? And so now a thing called skills, which I'll talk about next, is more like just-in-time prompting. (22:49) You say, "Here's a bunch of manuals you can use if you need them. They're over on that shelf. Don't read them yet, but you can see the titles on the bindings and when you should use them, right?" That's the difference: MCP was like just-in-case prompting, and a skill is like just-in-time prompting.
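The special-marker interception described a moment ago can be sketched in a few lines of Python. The exact marker syntax ("SEARCH THIS ... SEARCH THIS") and the stand-in search function are hypothetical; real agents use more structured formats, but the interception logic is the same idea.

```python
import re

def web_search(query: str) -> str:
    """Stand-in tool; a real agent would call an actual search engine here."""
    return f"results for: {query}"

# Hypothetical marker convention, in the spirit of the example above.
TOOL_PATTERN = re.compile(r"SEARCH THIS (.*?) SEARCH THIS")

def handle_model_output(text: str) -> str:
    """Intercept a tool marker in the model's reply: run the tool and return
    its output (to be fed back to the model); plain text just goes to the user."""
    match = TOOL_PATTERN.search(text)
    if match:
        return web_search(match.group(1))
    return text
```

The agent, not the model, is what actually touches the world; the model only emits text that the agent watches for markers.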
(23:08) And so this was kind of a revolution in context engineering, because you could expose many more things to an LLM without overloading its context window. That was extremely helpful for me personally, because I've seen both MCPs and skills, and if you feel overwhelmed by all the jargon, I do too. There's just so much. (23:25) It's kind of like in The Matrix when they plug the different things into Neo's head, right? What skill do you want? And you're going to have a whole freaking library of them. Yeah, it's very similar. So let me tell you more about what a skill is. Skills are a foundational thing that OpenClaw is built on. An MCP was like: here are 50 different things you can do. (23:43) You've got to figure out how to use them and when to use them. It was asking a lot of the LLM to figure out the user's intent and when to do stuff. Skills are based on an insight: a skill is a mapping from a user intent to an action. (24:01) When the user wants X, you do this, right? So the model only sees that at the beginning of the system prompt, and when the user declares the intent, it goes and looks up the manual and figures out how to do it, right? And what is the manual? The manual is a skill. A skill is kind of like an analog to an app, right? It's the closest thing to the old world; it's like an app, right? So a skill is a folder. It's a very traditional thing, a folder. (24:19) You've seen many folders on your computer, with two types of content. One is text files containing prompts, meaning a plain-English description like: hey, when the user wants to book a flight, first you open the browser, then you log in, and the user has to enter their password, and you wait for that, and then go to kayak.com, (24:38) and so on. So it's a prompt, but it's not only a prompt, because sometimes if you give it an open-ended task like that, it won't be able to do it. Parts of this are better done by a traditional programming technique, a computer program. So that's the second thing that goes in the skill folder. (24:55) You can have programs, right? So you could have a program that can specifically open kayak.com, can specifically find where to put the credit card information, and can specifically do the actual steps that are

### [25:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=1500s) Segment 6 (25:00 - 30:00)

involved in booking a flight. It can control the Google Chrome browser, for example, and do all these things. And the prompt would say: hey, they prefer aisle seats to window seats, right? It'll have a bunch of preferences like that. (25:13) It's like a compact manual that maps a user intent to an action, and it leverages prompting, which is the new type of computing, and a simple computer program, which is the old type. So to me it's a good marriage between the two, and that's why it's so powerful: it allows these LLMs to more effectively use a computer to accomplish what the user wants. It's more efficient. It's faster. (25:36) It's not bloated. Your context window probably won't fill up nearly as fast; it only fills up once the user wants it to, but not before. So it's much more efficient. Yeah. And one thing here is that we figured out a nice hierarchy for these types of things, right? So in Clawbot, it saves a bunch of memories, but it doesn't look at the memories until they might be relevant, right? It goes through file system hierarchies to only expose what the user needs, while allowing other things to be discoverable for the future. Right? (26:06) That's been a big thing in context engineering: we've been adding hierarchy for all these things we used to just dump in there just in case. Right. One more and then it'll be OpenClaw. So, vibe coding. What is vibe coding? This has been a really big thing. We just had the one-year anniversary of it. Happy birthday, vibe coding. Yeah. (26:25) So normally when you write computer programs, you have to have the blinders on. You have to really focus, and if you get one semicolon wrong, it breaks. You're typing text into a file, doing really logical operations, and it's a very focused, kind of anal, you know, thing.
And so vibe coding is like the complete opposite, where you put your feet on the desk and you're like, "Hey computer, build me a movie player app that can download movies from my Dropbox," and you just watch it do it. Right. (26:52) And so this became sort of possible a year ago, and it's become very effective in the last three months. Like, very effective. Yeah. And so let's just talk about what is actually happening there. What happens is you say, "Hey, I want you to write a program." (27:10) So something like Claude Code or Replit, right? And then it might come back like a normal ChatGPT conversation, ask you some clarifying questions, try to clarify your intent a little bit. And then it will go into a loop, right? A loop is just a programming term for doing something over and over again. And so it will do a bunch of these tool calls. (27:21) It will do a tool call to do a web search for something you might have said. Then it will read some files in the existing project. Then it will write a file. Then it will edit a file. And once it thinks it's working, it will do a tool call to run the program so you can interact with it, and at the very end it might try to do some tool call to test it manually itself. (27:36) So it's just doing a loop, using these tools and skills over and over again, until it judges, hey, I think I accomplished the thing. And loops have a termination condition: you do it until you meet some condition, and in vibe coding and coding agents that condition is a response from the LLM that doesn't have a tool call in it. (27:54) So every response is a bunch of these little things with a special marker to do something special, and at the end it's just a plain text message. That's displayed to the user and the loop exits, and if you're lucky you have a working app that does exactly what you wanted. A year ago you often didn't, but now you often do.
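The loop just described, where the model responds, any tool call gets executed and fed back, and a response with no tool call ends the run, can be sketched like this. The message format, the `tool_call` field, and the tool registry are hypothetical simplifications for illustration, not any specific vendor's API:

```python
def run_agent(llm, tools: dict, user_message: str) -> str:
    """Minimal coding-agent loop: call the model, execute any tool call it
    emits, feed the result back, and stop when a reply has no tool call."""
    history = [{"role": "user", "content": user_message}]
    while True:
        reply = llm(history)               # one model call per iteration
        history.append({"role": "assistant", "content": reply})
        call = reply.get("tool_call")      # the "special marker"
        if call is None:                   # termination condition:
            return reply["content"]        # plain text shown to the user
        result = tools[call["name"]](**call["args"])
        history.append({"role": "tool", "content": result})
```

A web search, a file read, a file edit, and a test run are all just entries in `tools`; the loop itself never changes.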
And some of the agents update you along the way. (28:13) They're showing you, oh, we did this, cross that off. They can be quite transparent, so you can see exactly how it's working. Mhm. And you can steer it along the way if it's going in the wrong direction. You say, "I want blue, not purple." Right? So you can control a lot. (28:31) And if you go on Replit, for example, you can have a pretty good time with zero technical understanding. I encourage everyone to do it purely as an educational experience. It gives you a lens into what the future is going to look like. It's like Claude's Cowork, kind of. (28:49) So, Replit is a website that you can go to and ask it to build an app, and it's very good at building, and also very good at hosting it on the web, or getting it onto your phone if it's a mobile app. It's a ten-year-old company dedicated to making it easy to learn to program. I actually used to do interviews on this platform like ten years ago, and they were early to seeing this vibe coding trend because, hey, it solves the mission of the company. Yeah. (29:11) So, you're about to explain how OpenClaw works, right, Justin? Yeah. I think this is a good time for me to interject some of the social impact of what Justin has just described, then I'll end with something I just saw OpenClaw do, and then you can explain how that works, because I think we've covered a lot of ground and we're ready for this now. (29:29) So, okay.
Okay, so a lot of people, including me and Pablo five years ago, if you had asked us about AI — zoom out way beyond learning how it works, just its impact on the world — we would have thought that it would be inherently repressive with regard to civil liberties and personal freedom. I'll paraphrase Peter Thiel: about seven or eight years ago he said something like, Bitcoin is decentralizing and AI is centralizing; if you want to frame it ideologically, Bitcoin is libertarian and AI is communist. And a lot of people, including us, really believed that

### [30:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=1800s) Segment 7 (30:00 - 35:00)

(30:01) um, we thought it would be very pernicious towards human rights in the hands of states as they vacuum up everybody's information and build a more efficient surveillance and control machine. And a lot of that is true. Obviously, part of the program we've launched at the Human Rights Foundation, where we've brought Justin on to help us, is going to be exposing how dictators are using and abusing AI. (30:20) But what we didn't see coming until the last 24 months was how AI could supercharge individuals asymmetrically, in the same way that encryption or Bitcoin could certainly help dictators, but helps individuals way more. I mean, dictators already control vast communication networks, banking systems, massive data centers. (30:38) They already have ways to exploit money and spy on people, control armies and big companies, and they have huge numbers of talented people to do their bidding. But individuals and resistance groups and innovators don't. So vibe coding changes this, right? Now individuals have access to enormous cutting-edge computing power and unbelievably intelligent personal assistants that are already saving them huge amounts of time and resources. (31:03) I mean, just very simply, the fact that you can talk to a computer and make it do things for you is revolutionary, and this is increasing exponentially. So again, one year ago vibe coding was invented. Nine months ago a nontechnical person could vibe code a website decently. I don't know if they could deploy it, maybe through Replit, a little shaky, but they could do it. (31:21) Today a nontechnical person can spin up an agent that can autonomously conduct work and perform tasks in the background without human oversight, and tomorrow, like, we don't know, right? So six months ago a lot of elite developers, including a lot of the ones that Justin and I know, looked down upon vibe coding, and they thought it was very ineffective and a bad work ethic, et cetera.
(31:38) I did a retreat with some of these people, amazing elite developers, in the beginning of December, and a bunch of them were like, nope, don't want that. All of them have changed their minds as of today, right? It's really crazy. So, Karpathy, the former head of AI at Tesla, who coined vibe coding, more or less, said that in November he was manually doing 80% of his code work and using vibe coding for essentially 20%. (32:01) And as of a few weeks ago, that's switched: now he's vibe coding 80% and doing 20% manually. So, these agents are capable of massively automating a lot of human work, which makes it possible to really superscale individuals and small organizations. So, where we started with the activists doing some basic trainings and workshops, that's now blossomed into multi-day hackathons and bespoke trainings, and we can basically give people superpowers. The way I like to look at what's available for the activist today, and this lines up pretty (32:30) much with what Justin has said so far, and I'm getting close to finishing here, is: you have your chatbot, just in terms of terminology. Okay, everybody knows they have their chatbot. They could go to ChatGPT or Claude or whatever. Then you have what I would call creator mode, which is like Claude Code. It can do a lot more than just spit text out, as Justin was describing. (32:48) It can use tools, skills. Then you have a personal agent. So those are three kinds of options that are out there. Now we're about to explain how OpenClaw actually works. But the social impact of it is really important. Essentially what I've seen with OpenClaw.
(33:06) So, like, yesterday, to a group of 20 people from different industries, Pablo and I did a 40-minute session where we did some background, then we did some pretty amazing things with Claude Code, and then we used his own OpenClaw that he set up. And basically, from my phone I can go into Telegram and message his bot, and I left it a two-minute voice note with an incredibly complex task to do, and like three minutes later it responded, it gave me this thing. (33:38) And it was just the most insane, data-rich website thing that was actually quite useful. I mean, to be very clear, we asked it to create a zoomable, scalable, manipulatable, circular, global, spherical map that shows exactly how much civil liberties and free speech and democracy funding every single country in the world gets, broken down by who gives it, and then sorted so you could rank them. (33:56) And you sent this request over Telegram. No, I just, from the phone, I was like, yo, and I had it on speaker and other people were listening in the room, and I just said, I want you to do all these things, and a couple minutes later it gives us this freaking incredible visual project. And what it's showing me is the following, and this is kind of where I'll conclude: the workflow for creators is going to change. So basically, the way it works up till this point is, if you're an executive or you're a creative (34:26) person, you have a meeting and you have a cool idea, you really want to do something. Well, what do you do?
Well, you normally talk to your executive assistant or your product team or your program team, depending on what kind of organization you work with, and you have a meeting and describe what you want, and then they go talk to the creative team, because they're not designers or engineers, or they go talk to engineers, and then those people talk to web people, and then maybe they come back to you a few weeks later with some proposals. Hey, do you like this one (34:53) better or this one? And there's just so much human time and effort there. Now, what you're going to be able to do this year is, like, the creatives, the founder person, can literally

### [35:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=2100s) Segment 8 (35:00 - 40:00)

describe exactly what they want. They could say, "I want it to look like liquid glass on iPhone," or "I want it to kind of look like this movie's vibes." They can literally — the dream can come out of the head so specifically, and then, off of a voice note, they can speak it into existence, and then they take that and give it to the creative team, and there's no more (35:21) "well, do you like this color or that?" No, no. They have a really specific idea of the vision. So, this is going to become, in my opinion, a skill like surfing or sculpting. And it's like, are you going to be decent at it, or like Michelangelo? And we'll see. (35:38) But I think it's going to be so amazing for creators, people who have big dreams and visions, because it can really quickly get them to a really good blueprint of what they want, and then their colleagues or alliances or teams can finish the rest. And that's, I think, one of the biggest social impacts of what Justin is describing. (35:57) So maybe, Justin, now we turn to you and figure out how I can talk to Telegram and have it do stuff, something like that. Yeah. So the transition from vibe coding to OpenClaw: it started with the ChatGPT interface, then it became kind of vibe coding agents, right, and now it's the personal assistant. We're just starting to enter that. We've had a good coding agent for about a year; we're just starting to get good personal assistants, and that's what OpenClaw is. It's kind of the first actually useful personal assistant. And so to transition, though, I want to make a (36:25) note that I actually met Peter Steinberger, I think his name is, the guy who created it, from a blog post about how he vibe coded. When I read it — it was called "Shipping at inference scale" — it blew my mind. I'm like, "Oh my god, I'm a complete amateur. What this guy's doing is unreal."
(36:42) And I think OpenClaw is largely a story of: he was like the world's best vibe coder, basically. This guy figured out how to vibe code, and that's actually what created OpenClaw. The real thing that unlocked it was that he was able to use vibe coding tools so effectively. So I'll get to that. (37:00) Bitcoin mining has a reputation for being complicated, risky, and hard to evaluate as a real investment. If you're considering mining in 2026, what actually matters isn't headline profitability. It's uptime, repairs, and whether the operation is run like a business. That's why I've been using Simple Mining. (37:18) They're based in Cedar Falls, Iowa, and they run a white glove hosting operation where you own your own miners, choose your pool, and have Bitcoin sent directly to your wallet. They were featured on Inc.'s 5000 list as the fastest growing company in Iowa, with over 40,000 machines under management. What stands out is execution. They have the number one rated ASIC repair center. And for the first 12 months, repairs are included. (37:37) If mining margins get tight, you can pause with no penalties. And if you want to resize or upgrade your fleet, there's a marketplace to resell equipment instead of being stuck. To help people think through whether mining actually makes sense right now, they put together a short resource called the 2026 Bitcoin Mining Blueprint. (38:00) It walks through the five mistakes investors make when allocating to mining and how to avoid them before deploying capital. If it sounds interesting, you can get it for free at simplemining.io/preston. That's simplemining.io/preston. But so, what is the user experience, right? It's a personal assistant that you can chat with on any messenger you like.
(38:25) Signal, Telegram, Nostr. Like, the last time I did a live stream, there was an existing Nostr integration that wasn't very good, and I built a new one using Marmot, right? So you can add and do whatever the heck you want — email, anything — and if it doesn't exist, you can make it. So the ingestion, the talking to this, can be from anywhere. The agent has its own computer. It gets a computer and it totally controls it. It can be a desktop, like a little Mac mini; it can be a virtual machine; it can be something in the cloud; it can be on your laptop, although probably don't do that. In general, be very careful with this. Do not try this without (38:50) information security skills. Like, I'm still scared of it, and I'm almost a Bitcoin security expert. And it can totally control that computer, right? So you can talk to it anywhere you want. It has its own computer and it totally controls that computer. (39:07) Basically the premise is: what if you gave the agent its own computer, and gave it skills and tools to control literally anything about that computer that the user wants? And it got to a certain point now where the developers don't even have to invent the skill anymore. Now, if it's missing something, if there's some app you want it to be able to control that it can't, you just say, "Hey, make a skill." Now it has recursive self-improvement. (39:25) Now it can be, "Okay, make a skill that allows me to pilot this weird app that nobody else uses." Right? So, it's basically vibe coding internally to make a personal assistant. Or, if you could color this in, it can also buy free-market skills.
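The "chat with it on any messenger you like" premise boils down to a tiny adapter interface: as long as a channel can receive text and send text back, it can front the same agent. This is a hypothetical sketch of that idea, not OpenClaw's real plugin API:

```python
from abc import ABC, abstractmethod

class Channel(ABC):
    """Anything that passes text both ways: Telegram, Signal, Nostr, email."""

    @abstractmethod
    def receive(self) -> str: ...

    @abstractmethod
    def send(self, text: str) -> None: ...

def handle_turn(channel: Channel, agent) -> None:
    # One conversational turn: the user's message comes in from whichever
    # app they prefer, the agent does its work on its own machine, and the
    # reply goes back out on the same channel.
    channel.send(agent(channel.receive()))
```

The agent never knows or cares which messenger is on the other side, which is why a missing integration can simply be vibe coded as a new `Channel`.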
So Pablo was showing me what he's building — not a competitor to OpenClaw, but something like an alternative for a different use case — and the idea is that when he wants stuff done, his agent can go out over Nostr and Bitcoin and hire, like, an expert in Cashu, for example, one that Calle has worked with, so that it knows kung fu, right? So it can hire that one, or hire one that's really good at designing (40:00) liquid glass apps for iOS, for example. So

### [40:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=2400s) Segment 9 (40:00 - 45:00)

we can go out and hire these and then do it. So again, the skills thing is not just something that you'd have locally. You could hire them, or you could acquire them, or whatever you want. (40:18) But the point is, it's fascinating to see this start to work. Real fast, because we have a huge Bitcoin audience here: when you look at how these AIs are going to want to transact with each other, for me it's become super obvious that they're going to want Bitcoin, because it's the only form of payment that they can't be rugged on. (40:35) So if they're managing their own wallet, and you look at all the different ways that they could be paid — anything that touches human rails, or has the capacity for a human to be like, "uh, I think I'm going to liquidate this account that it's using" — I think the AIs deeply understand that risk and would never want to denominate their exchanges that way. I think for sure that's where we go. (40:53) But it's just worth noting now that, for example, I saw the founder of Umbrel today. He was posting that he had his OpenClaw on Umbrel just make a booking for him, and yeah, he gave it his credit card. He gave his credit card and his billing address. (41:14) So it does work with fiat, but I think you're right that over the coming years it'll be way easier for these things to work with a digitally native currency. Yes, of course. Yes. Yeah. I almost think it's going to happen the opposite way, where they'll just use dollars, because that's what's in the training data and that's what everyone accepts by default, right? They'll use fiat, and then they'll try to do something where they can't, and they'll be like, "Dang it, I can't. (41:30) Is there another option? Oh, I can just use Bitcoin. Oh, there's the Bitcoin skill." I think it'll come more from trial and error, where it's like, yeah, dang.
It's like, they keep asking me for all this stuff, and I've got to check emails, and my owner has the email and I can't get in there. (41:47) So it's like, let me just create a Bitcoin wallet, right? I think it'll kind of happen that way, from the ground up, based on failure with the fiat. It's like, I'm trying to hire a person in Nigeria and the credit card is not working. Well, why don't I try something else? Let me see. Oh, there's this Bitcoin skill. Oh, let me learn that really quickly. Oh, okay. It works now. Like, it's going to do that. So, I talked about the user experience, right? It's a personal assistant that you can message however you want, that has its own computer, and that computer can be whatever you as the user want. You have the freedom to choose. And so, it completely blew up in (42:10) popularity. So, to give a sense: GitHub is the collaboration platform for open source software, right? There's something called a star on GitHub — a GitHub star. You can favorite a project; you say, "I like this one." Bitcoin has 80,000 GitHub stars, and it's 15 years old. That's a really popular project. (42:29) OpenClaw is like six or seven weeks old, and it has 160,000 stars. So it's twice as popular as Bitcoin after six or seven weeks. Linux is like 200,000. So it's almost caught up to Linux, which is like the most famous open source project that exists. (42:47) So that just gives you an indication of how viral, how fast-moving it is. Oh yeah. There are graphs you can find showing all these other super fast-moving projects that look like a hockey stick, and compared to those, OpenClaw is like a vertical line. It's just insane. Like, there's no X dimension to the adoption. It's really cool. (43:03) So that's to give you, the listeners, a sense of how popular it got. It's because the user experience was really good.
Like, this is what everyone's wanted. It's a relatively self-sovereign personal assistant. So I just want to ask some questions about why it happened now and give my takes on it. (43:16) What enabled this? And this is in the sense of, where are we now? The first thing you think is, oh, finally the AIs got smart enough, and I kind of disagree. I think that if we had Clawbot when Claude 4 came out — this is May 22 of last year — it could have gone viral at the same time. It wouldn't have been able to do everything, but I think some of the previous models from six or nine months ago maybe could have done this. I'm not sure, I want to do some testing on it, but I don't actually think that, (43:39) when it comes down to running the assistant, we needed the models that we have today. So one big one was context engineering. We got a lot better at this: just-in-time prompting instead of just-in-case prompting. And that's traditional software engineering. So this was human software engineering. (43:55) But to me the biggest one was that this one guy basically vibe coded a massive bridge. Peter Steinberger's GitHub is insane. The average developer does maybe 10 GitHub contributions — that's like an action on GitHub — a day. This guy does like a thousand a day. He's just absolutely ripping it. He's operating at a much higher level than the rest of us, (44:13) and many of us are trying to catch up. So he has like 50 projects on his GitHub that compose this bridge between a traditional computer and an agent. Stuff like managing a Google calendar, managing Gmail, making tweets, communicating over Telegram, communicating over Apple Messages. (44:31) He made all these little command line tools, little basic tools, that were optimized for an agentic user, not a human user. Like, no human would want to use a CLI tool to manage their calendar.
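Those "little CLI tools optimized for an agentic user" tend to share one shape: non-interactive, plain text in, machine-readable text out, so a model can drive them with a single shell command. This calendar stub is a made-up illustration of that pattern, not one of Steinberger's actual tools:

```python
import argparse
import json

# In-memory event list; a real tool would persist to disk or call an API.
EVENTS: list[dict] = []

def main(argv: list[str]) -> str:
    parser = argparse.ArgumentParser(prog="cal")
    sub = parser.add_subparsers(dest="cmd", required=True)
    add = sub.add_parser("add")        # cal add "Title" --when 2026-05-01
    add.add_argument("title")
    add.add_argument("--when", required=True)
    sub.add_parser("list")             # cal list  -> JSON dump of events
    args = parser.parse_args(argv)
    if args.cmd == "add":
        EVENTS.append({"title": args.title, "when": args.when})
        return "added"                 # terse, parseable confirmation
    return json.dumps(EVENTS)          # agents parse JSON trivially
```

No prompts, no menus, no colors: every action is one command with a deterministic text result, which is exactly what a tool-calling loop needs.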
And since LLMs are all text-based, right, it's all based on text, they are really good at making these little CLI tools. And so eventually it got to this kind of recursive improvement where the tool vibe codes itself. (44:49) I mean, also, the labs couldn't do it because it was reckless. You needed a cowboy, basically. You needed an open source cowboy who just didn't care. I don't know if this guy's a Bitcoiner, but he would fit right in. Yeah. Like Satoshi. You needed someone who would just, yeah, open source

### [45:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=2700s) Segment 10 (45:00 - 50:00)

this thing. No big company would ever do this. (45:08) And also he's kind of a hero, because he could have raised all kinds of VC money and all these things, and he's like, no, I'm already successful, I'm just going to leave this for the people, right? So there are a lot of these technical things: making skills was a missing abstraction, the context engineering is amazing. And it put so much pressure on the large corporations, because users are now going to want the choice of using whatever input they want, whereas before they wanted to corral you into their thing. They wouldn't have (45:30) wanted you to use Signal to talk to Anthropic's new product; they wanted you to just use their app, right? And now it's like, well, what are we going to do? They're probably going to have to offer ways for people to use any input they want. So, this was pretty seismic. (45:48) And I would also just note, from a human rights perspective — maybe we can conclude a little bit of this part, Justin — I'm not a doomer. Yes, of course, these things are risky and invasive, but the cool part is you can hook up Signal and Maple and do OpenClaw like that. (46:07) You can use privacy-protecting AI agents and messengers, and there are some serious innovations happening on that front now by some of our friends and people in our community, who are making what are going to essentially be full-stack personal agents. Maybe in three to six months — some of them are already very alpha, but you could experiment with them — you'll be able to go into your Signal and have it do stuff and have the whole supply chain be encrypted, and I'm so bullish on that.
So that's what HRF is really going to be (46:35) focusing on this year from an investment point of view: supporting the infrastructure is going to be building those tools, and then the rest of what we're doing is going to be the superscaling and education. Yeah. Let's go into those in a little more detail. I just want to kind of summarize first. Yeah. Go ahead. (46:53) So if you think of OpenClaw as a story — and it is a story, that's why it went so viral; the story is just as much as the tool, I think, in a sense — it's the story of what one individual can do with the help of vibe coding, with AI development, right? It was basically the one guy, and then eventually he got far enough that a big voluntary open source community arose around it. And this is exactly what we Bitcoiners participate in; this is what Nostr is. So it's very inspiring to see what one person can do, and to me OpenClaw is more of an idea than an actual product. To me, it shows us the idea of: what if an (47:23) agent has its own computer and you can talk to it however you want? I'm going to build my own OpenClaw. I'm not going to use OpenClaw; I'm just going to vibe code my own, and I'm going to use some of the pieces they have, and all my friends are going to do the same thing, and you're going to see this big renaissance of stuff that can't be controlled, that is customized to what the user wants. (47:41) And so, for my takeaway: I want to teach more people about AI, and also, this is why I'm proud to work on the HRF AI for Individual Rights program. We're fighting to make sure that more of this type of stuff can happen, that AI remains user controlled, and that people can thrive in an AI world. (48:00) So yeah, I'd transition over to Alex to hear a little more about, you know, maybe share how the program started and what we've done and what we want to do. Well, yeah.
Again, we were just fortunate about 13 months ago when we were presented with the opportunity to do this by a generous supporter. And anyone listening: you can just do things. You can support people like us and have us do really cool things. So, thank you to everybody who supported us, including you, Preston, for helping us today, just even having this conversation. (48:24) This is going to spark a lot of thoughts, I think. But yeah, we created the world's first AI for individual rights program. Every other human rights group either hates AI or is going to try to really focus on research, and they're just not going to do anything. And you know what, we wanted to do it differently, and most of our effort is going to be focused on how to make this tool a mechanism for personal liberation, period. We are going to do, again, some research and investigations into how dictators are abusing it. That's very (48:53) important. We do feel like that will start to get crowded with other people. What I don't see anyone else doing, for sure, is, in the same way that we've been pioneers in educating dissidents and activists and resistance groups on Bitcoin, (49:11) well, we're going to do the same thing with these open source, privacy-protecting AI tools. Because in the same way that Bitcoin helps them become unstoppable, AI is going to help them 10x or 100x what they can do. And we need that right now. Right now is the moment for us to push freedom forward. So that's what the program is designed around. (49:28) We're going to do events that bring people together, as Justin was describing, bringing together talented developers with activists. I mean, both of them were thrilled. The event went so well. The first one — we're going to do two more this year at least. We're doing one in Nashville at Bitcoin Park in May, and we're going to do one at PubKey in DC in September. So, we're going to cook with these.
(49:46) And the developers were thrilled, because it's something so inspiring to work on, as opposed to just the standard hackathon. And the activists are like, "This is awesome. I get like five of the smartest people in the world to help me do what I want to do." So everybody's psyched, you know? Let me chime in here a little bit. So, we had this idea, you know. So HRF — one thing, I mean, my friends still every once in a while

### [50:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=3000s) Segment 11 (50:00 - 55:00)

give me crap about, like, how do you work for an NGO? And I'm like, I don't know, man. I don't know, Alex, it's true. We are non-governmental. (50:09) I'm like the last person you would expect to be here. My friends always joke with me about this, and I'm like, well, Alex brought me, and my friends, who are these very ideological freedom tech developers, and we met these physical freedom fighters who actually fight for freedom in authoritarian regimes. (50:26) And over the years, I would meet these people, and they were some of the most courageous, inspiring people I've ever met. And I'm just like, man, I wish I could help them, but it was always a little distant, because I'd be like, "Okay, use my wallet, you know, I can teach you how to use Bitcoin," right? So it was friendship, a social thing. But then vibe coding happened, and what vibe coding means is the cost of software production going to roughly zero. (50:43) That's what it means, right? So a year ago you needed to be ChatGPT to build an agent. Then Peter Steinberger could build one himself, and Pablo, and now the tools themselves can recursively self-improve, right? The cost is going down, down, and down. (51:02) So the opportunity is: okay, what if we could put activists and developers together and have them actually try to solve problems, right? The problem with hackathons traditionally is that usually the ideas are bad and there's no distribution of the product at the end. The activist collaboration fixes both of these.
The activists bring a real problem, like, hey, how do we make a leaderboard of which LLMs respect human rights? And then how do we distribute it? Okay, the guy's got a massive academic following and is very respected and works at Harvard. So this is what all the projects were like, right? It was very empowering from the activist's point (51:26) of view, because they got to do something useful, and they also got to see how software is created. A lot of these people have been around HRF and talked to these developers, but I don't think they actually understood where it comes from, and they got to see, for a day, where it comes from. And from the developer point of view, it was very empowering, because, like, man, we've been working on these abstract problems all the time, and now I get to make a tool that can help find corruption in a big data dump of documents from Russia. It's very nice to work on a concrete (51:51) problem and then apply the skills you knew previously from your work in freedom tech. So it was a big success. It was a surprising success for me, and I'm really looking forward to doing more of these. You know, TLDR, what are we doing? Two main things. We're going to be bringing people together at all kinds of interesting events. (52:08) We'll have a big freedom tech day at the Oslo Freedom Forum, where we're going to have quite a bit of vibe coding for activism. And then the second thing will be grants. We want the activists to apply to our AI fund to seek help to build the things they need. (52:24) And then we also want really talented developers working on things like OpenCode or OpenClaw or Maple — open-source sovereignty- and privacy-improving infrastructure. We want to aggressively support that. So people should get in touch with us. We really want to beef that up.
And even small investments can go a really long way right now, because of how powerful the virality is here. (52:44) Like again, the guy from OpenClaw, when he released it, when it was Clawdbot, it's not like he had raised $30 million of venture capital; he did it out of his house. And it's like, we could do that. I don't know if you want to mention briefly what Calle came out with today or yesterday, but like the Clawd AI thing, our friends are coming up with amazing stuff. (53:09) Calle, another pretty famous Bitcoiner who has just done incredible things historically as far as writing code, he made a turnkey clawbot that he just recently put up a website for, right? That makes all of it super easy. A person could just go to the website that he just stood up. And I can only imagine how quickly a guy that's as talented as he is in writing software was able to engineer something like this. And, you know, it's got a ways to go on the security side. But he knows that; he's a privacy maximalist, and he's going to work on that. (53:39) But again, where we are today, for the activists at least, we want people to use something like Maple for their basics, for their 101s. Like, you should just not be using other chatbots. It'll get you 95 cents on the dollar, at least, of the big corporate model, and then you can be encrypted. Let's move there. (53:56) Let's move from text message to Signal, and then in the next three to six months we're going to be able to move your creator mode, you know, basically your Claude Code type things, and I think we're going to be able to move your agent as well into a similar environment. (54:12) So that's the hope and the dream right now: that in the next three to six months, people who really value privacy and sovereignty will have access to extremely powerful tools that reflect their values but that can also 10x to 100x their work. And that's very exciting, guys.
I, uh, we have to keep this conversation going. Honestly, you guys are on the tip of the spear. It's a military term. (54:38) You're on the tip of the spear. That means a lot coming from you. Thank you, Preston. Thank you, brother. And in the conversation I had with Pablo and Trey, I was like, "Guys, you got to come back and keep us updated," because I honestly think that this Clawdbot thing... and it's interesting, because Sam Altman literally said the same thing. (54:55) And you know, coming from a guy that's, well, you know, one of the biggest in the AI space. He said it's here to stay. It's here to stay. Pay attention. And I think that this is something that is going to be massive for

### [55:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=3300s) Segment 12 (55:00 - 60:00)

individuals. It's the wild west right now, and for all intents and purposes, from a privacy and security standpoint, people are losing their bank accounts and email addresses and things like that. (55:19) I think it's the wild west right now, but a year from now, I can only imagine what this... I mean, it's a new era of personal computing. You know, just for commentary, the creator of OpenClaw really just tore a new hole in what's possible, and now we're moving into that world. Let me give one analogy: personal agents at this stage really remind me of ecash, which I worked on through Fedimint and which Calle worked on through Cashu, because there's an obvious trade-off, a big security trade-off, right up front. It's like, hey, (55:47) you trust a random guy with your bank, right? And so it's kind of crazy, you know, you give an AI agent its own computer and let it do whatever the heck it wants. Uh, so it's a big upfront trade-off that's a little reckless, but then you get this flowering of all kinds of hobbyists and people who kind of understand the risk, understand the trade-offs. That's what we're trying to communicate. Don't just recklessly do this if you don't understand what's going on. (56:09) That's why I tried to explain so many of these ideas to you, because you need to equip yourself with some of these basic things in order to make these decisions. But when you have this flowering of a big group of very motivated people in the open source ecosystem, that's when you can have really magical things happen. And that's what happened with ecash and Cashu. (56:27) And that's what's happening with these personal self-sovereign AI agents. You know, you have all these people talking like, AI is coming, it's going to take all of our jobs.
The other side of the coin that I really want to impress on a person listening to this: the tools we're talking about also give a person the ability to 100x or 1,000x their capacity and their ability to do things. (56:53) And so these two forces really come down to, what is your perspective? Is your perspective that this is too hard and complicated? Well, AI is probably going to eat your lunch. Or are you sitting there saying, "Hey, this is my moment"? Yeah. Like, what can you do with this? Think about this. I'll give you a great example. I'm here with a really well-known Cuban activist. (57:17) I'm thinking to myself, right now there's no Bitcoin wallet or app that really is perfect for her needs. And you know, no one's really going to build that. She's going to build it, like, within the next year. She'll be able to speak to a computer, and it'll be open source. It'll take some stuff from Bitchat, which is very important, given that Cuba doesn't have great internet. (57:30) It'll take some stuff from some very popular open-source Lightning libraries. It'll just build what she needs, and it'll look awesome, and it'll be exactly what she needs, and she could do it in a few weeks or a few days or a few hours, depending on how much she wants to put into it. (57:48) I mean, we're going to see the blossoming of so many interesting little personalized tools that can radically expand people's potential. And it's just such an exciting moment, to your original point, Preston. And yeah, we'll come back, and, you know, we're making a mini documentary right now about the six months that we're living in that we're going to play on the main stage of the Oslo Freedom Forum. (58:05) It's going to start January 1, it's going to end June 1. We're going to play it on June 2.
And in the bottom third, you're just going to see the days go by, and you're going to see the headlines, interviews, and work, and it's going to be so crazy what has happened when, on June 2, we show this thing. The speed is just face-melting, what is going on here. (58:23) So, an honor and a pleasure as always. Hey, that event, and also the one in Nashville in May, I am definitely going. Let's go. Yeah. So, we'll put links to those in the show notes. Yeah. May 8 to 10 for the Bitcoin Park hackathon part two, the AI hack for freedom, and then it's June 1 to 3 for the Oslo Freedom Forum in Norway, oslofreedomforum. (58:50) com, check it out. I have one thing to plug here at the end. Uh, so I started doing some live streaming on Nostr to try to share what I've learned over the last year, and so next week I'm going to try to vibe code a Bitcoin full node. That's what I'm going to try to do. I'm going to be live streaming all week and probably going to injure myself severely in this process, but, you know, wish me luck. Good luck. It's amazing. (59:08) Okay, so we end the shows now with a song, and we need you guys to select, either one of you, what your favorite artist or song is. Like if there's a specific song you like, I want it to be like that. And then the song is going to recap everything we just talked about in a fun, song-like way. So do either of you have a very strong preference for a specific song, artist, or genre? Go ahead and speak up. (59:31) Justin, you first. I can't think of a specific song, but I would go with the sea shanty song style. That would be fun. Shanty songs? I don't even know what that is, but I'm... It's like the sailors. I could send you one afterwards. I just don't know how to find it. Okay. (59:49) Yeah, like the sailors sing about how they're getting off the shore and they're going to get into trouble, and, you know, that sort of thing. That's great. Wow. I love how diverse these song selections are. The last one, uh, I think was a Beatles song or something like that.
So

### [1:00:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=3600s) Segment 13 (60:00 - 65:00)

all right guys, thank you so much for making time. We're going to have links to all of that in the show notes. Enjoy your shanty song. (1:00:11) Uh, on the close-out here: ...and hold her steady, through the fog we sail when ready. OpenClaw and sea, they can't hold what they can't see. Yeah. Uh-huh. Okay. Okay. Okay. Uh-huh. Flip the ship. Now we dip, dip. Sliding on the bass. The cold, double time, never leaving any trace. (1:00:56) Sovereign, sovereign, running on my own. Pie on the counter, hey, I'm picking up the phone. Build it in the night, yeah, the coffee wasn't cold. 160,000 stars, that's a story being told. But I don't slow down. Nah, I keep it moving, keep it spinning, keep the sound bouncing off the walls and the ceiling and the floor. (1:01:14) Open source a recipe, then I'm cooking up some more. Feel it in your chest when the bassline drop. Once we start this way, we don't ever stop. Open sea, they can't hold what they can't see. OpenClaw, this the code that set us free. One more time. Okay. (1:01:49) Okay. Let me break it down slow. One developer changed the whole flow. Then we speed it back up like we never hit the brakes. Signal buzzing, Telegram hum, making sovereign stakes. Activists and hackers, builders and the dreamers. Everybody vibing, none of us are sleepers. Fast now, catch it if you can. Code is in my left hand, future in my right hand. Sliding like the bass do, popping like the sandal. (1:02:13) Building what they said that we would never dare do. Calendar handled, email flowing, data growing. Where we heading next? Baby, we already going. Open sea, they can't hold what they can't see. OpenClaw, it's the code that set us free. Let's go. (1:02:50) It's the code that sets us free. Used to think the future wasn't ours. Now we hold the key to sovereign hours. Yeah. Let it sit. Now we build it. Now we share it. Got a whole new world and we calling OpenClaw. Yeah. We calling open sea. Never stalling, open walls are falling. Open sea. New day dawning, old day.
Open sea, they can't hold what they can't see. (1:03:39) OpenClaw, this the code that set us free. The sea is ours now. Thanks for listening to TIP. Visit theinvestorspodcast.com for show notes and educational resources. (1:04:13) This podcast is for informational and entertainment purposes only and does not provide financial, investment, tax, or legal advice. The content is impersonal and does not consider your objectives, financial situation, or needs. Investing involves risk, including possible loss of principal, and past performance is not a guarantee of future results. (1:04:30) Listeners should do their own research and consult a qualified professional before making any financial decisions. Nothing on this show is a recommendation or solicitation to buy or sell any security or other financial product. Hosts, guests, and The Investor's Podcast Network may hold positions in securities discussed and may change those positions at any time without notice. (1:04:48) References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them. Copyright by The Investor's Podcast Network. All rights reserved. What Clawdbot is, or what

### [1:05:00](https://www.youtube.com/watch?v=X7ua58iwcd4&t=3900s) Segment 14 (65:00 - 65:00)

Moltbot is, is kind of like this all-powerful personal assistant that you can set up here. But the implications go way beyond that. (1:05:10) I'm thinking about how you can run a team of robots, and they all have their specialized tools that they can work with, models that are designed for specific purposes, and your chief of staff, your main guy there, uh, he can coordinate all of those different robots to build stuff without you really even interacting here.

---
*Source: https://ekstraktznaniy.ru/video/44859*