I was sick of AI that didn't listen so I built this AI BRAIN
23:19

I was sick of AI that didn't listen so I built this AI BRAIN

MattVidPro 09.02.2026 10 083 просмотров 458 лайков

Machine-readable: Markdown · JSON API · Site index

Поделиться Telegram VK Бот
Транскрипт Скачать .md
Анализ с AI
Описание видео
I’m done doing my most important work inside “normal” chat interfaces. In this video I show what I’m using instead: a portable AI Brain (Project Meridian) made of nothing but folders + Markdown files. #AI #AIAgents #AgenticAI It adds a live HUD dashboard, editable “cognitive sliders,” personality modes, and a memory protocol you can drop into any agent running on your computer (OpenClaw / Google Antigravity / Manus-style agents, etc.). I’ll walk through my failed prototypes, what finally worked, and how you can set up your own brain from the GitHub download. ⚠️ Security note: be careful what agent tools/repos you install. Trust and verification matter. 🔗 AI Brain download: https://github.com/mattvideoproductions/MERIDIAN_Brain 🎥 My previous agentic-AI context video: https://www.youtube.com/watch?v=4s5Ih04syFE ▼ Link(s) From Today’s Video: Download Anti Gravity: https://antigravity.google/ Download Open Claw: https://openclaw.ai/ ► MattVidPro Discord: https://discord.gg/mattvidpro ► Follow Me on Twitter: https://twitter.com/MattVidPro ► Buy me a Coffee! https://buymeacoffee.com/mattvidpro ------------------------------------------------- ▼ Extra Links of Interest: General AI Playlist: https://www.youtube.com/playlist?list=PLrfI66qWYbW3acrBQ4qltDBsjxaoGSl3I AI I use to edit videos: https://www.descript.com/?lmref=nA4fDg Instagram: instagram.com/mattvidpro Tiktok: tiktok.com/@mattvidpro Gaming & Extras Channel: https://www.youtube.com/@MattVidProGaming Let's work together! - For brand & sponsorship inquiries: https://tally.so/r/3xdz4E - For all other business inquiries: mattvidpro@smoothmedia.co Thanks for watching Matt Video Productions! I make all sorts of videos here on Youtube! Technology, Tutorials, and Reviews! Enjoy Your stay here, and subscribe! All Suggestions, Thoughts And Comments Are Greatly Appreciated… Because I Actually Read Them. 00:00 Why I’m ditching “chat interface” AI for real work 01:08 The new era: OS-level AI agents (and why it matters) 01:12 Agent #1: OpenClaw setup + “Hermy” custom UI attempt (failed) 02:29 Agent #2: “New Agent” prototype (working UI, messy bugs) 03:10 The audit method: stress-testing plans across multiple models 05:07 The breakthrough: 80% of customization doesn’t need a fancy UI 05:59 Safety talk: don’t install sketchy agents on your machine 06:38 Project Meridian: the portable AI Brain (folders + .md only) 08:30 Live HUD dashboard + cognitive sliders (dials & gauges) 09:09 File structure tour: gauges, identities, memory protocol, personalities 11:13 Demo: the agent refuses to overwrite itself (safety rails) 13:58 MasterSpec.md: capability handshake + strict output order 14:40 Memory retrieval + persistence protocol (write memories every turn) 17:42 Download from GitHub + how to “drop in” the brain anywhere 18:48 Live setup demo: “Rowan” (bio researcher) + synthetic data test 20:55 Auto-personality creation + memory files being written 22:46 Error/crash + recovery + web research test run 24:43 Wrap-up + what’s next (and no more scripts in 2026)

Оглавление (17 сегментов)

Why I’m ditching “chat interface” AI for real work

What's going on, guys? Welcome back to the Matt Vidpro AI YouTube channel. Guys, I got to be completely honest with you. I am no longer using ChatGpt, Google Gemini, or Claude and their typical chat interfaces online. Not for my most important work at this point. Are they conducting research? Are they building prompts? Yes. But in my last video, I established that 2026 is all about agents that actually live on your operating system. They work directly with you. They can have memories that literally live inside of your computer and you can sift through and edit. We're entering a whole new era of a gentic AI that anyone can access. I probably don't have to explain the potential to you guys. It doesn't matter what you do. We've reached the point where you can build a custom sidekick that can be aligned to you and your work. So, that's really the benefit of this new agentic era. And if you want to learn more about the context surrounding it, check out my last video. It's linked down below. At any rate, I spent my whole weekend trying to build custom agents. My first custom agent was built inside of OpenClaw. Installing base OpenClaw is

The new era: OS-level AI agents (and why it matters)

super easy. I used Google's free

Agent #1: OpenClaw setup + “Hermy” custom UI attempt (failed)

software, anti-gravity. Paste in, please install OpenClaw. Give it the link. Give it the GitHub. It goes, "Okay, I'll help you install. " Figures out what exactly OpenClaw is. Proceeds to follow the GitHub instructions. I put in my API key and boom, base open claw is up and running. But that wasn't enough for me. I also wanted a custom front end. Obviously wrote my own custom prompt and personality crustation theming. I named it Hermy. This one is super botched though. As you can see, it's complaining about a newer open claw. There was a custom front-end interface. I never actually got it working and connecting to the back end of OpenClaw, but there were some features and details that eventually made it into my final AI brain project, which of course is really what this video is all about. We're getting there. Anyways, guys, with the failure of this first agent, I was pretty disheartened. I was upset about it. You just want something that is easy and works and is agentic. It's such a cool concept and idea. I wasn't ready to give up. So the next plan I had and the next project I called new agent. This one we definitely had some more success, but the gears were turning in my head. So the aspirations were set a step higher. This agent, I believe, at least

Agent #2: “New Agent” prototype (working UI, messy bugs)

still runs. You can see the interface is even a little bit more customized right off the bat. Again, this one is also based off of OpenClaw like the last one. It's a very simple basic prototype, but at least it has a UI that can connect to the backend properly. You can see there are different sessions that you can create and delete. And there are actually working API calls to Claude Opus 4. 6 and responses in here somewhere saved in the data, but they are not showing up due to bugs, errors, and issues that are just taking hours to resolve. going back and forth trying to figure things out with whatever model

The audit method: stress-testing plans across multiple models

I'm working with inside of Google's anti-gravity. Again, I'm not a native coder. The reason I had more success building new agent was specifically because I ran audits. This started out as a back and forth with Google's Gemini 3 Pro. I then took the final plan outline and I ran several audits from several models on it. Claude 4. 6 six, Opus, Gemini Deep Research, Gemini Agent, Gemini Deep Think, Perplexity, Open AAI Agent, even Nano Banana Pro, just for the hell of it. Did not give a good response though if you're wondering. The best response was probably 4. 6 Opus, Deep Research, and OpenAI agent. Regardless, this audit system, running audits through the systems and ideas that you create in collaboration with AI is a great idea and a great way to ensure that you're building something that's worth your time. like new agent here. There's a lot of good bones. ideas surrounding this project. Like I've even got a sketched out design which honestly I'll render for you guys through a Nano Banana Pro a few concepts of what it might look like, what I'm trying to go for. And honestly, at this point, what I think is probably very much possible to be done in software, but I just obviously don't have the personal capability for it and I'm waiting for the models to get more and more capable to eventually produce stuff like that. I think you might even be able to get there now, but it's just going to take ages. And I also kind of had an epiphany, a little bit of a realization. Instead of going back and forth, debugging with the LLM, trying to figure out what's going on, I realized that 80% of the customizability that I actually want to do with these AI agents doesn't actually have to be done through an interface. Doing a custom front end, that's all really cool. It's going to be capable in the future, but we're not quite there. OpenClaw already exists. Google's anti-gravity already exists and so does Manis AI by Meta. All

The breakthrough: 80% of customization doesn’t need a fancy UI

three of these can act as a Gentic systems on your computer. And there's other ones too. There's Claw Code. I know there's OpenAI Codeex as well. I believe that's Mac only right now. And there's also other open source projects and things although I advise safety when looking at those. You don't want someone installing a malicious agent on your system. and that could be detrimental. Safety is a huge part of this. And while there's never a way to guarantee bad things won't occur, knowledge being up todate and knowing what risks are like the ones we just talked about in my last video are super important. Okay, so now that we're caught up, let's talk about my most recent project, the third agent project, project Meridian. That was the code name that Opus 4. 6 six chose. But basically, I call it the AI brain. Here

Safety talk: don’t install sketchy agents on your machine

it is. Project Meridian. I've opened up the project in Google anti-gravity so you guys can have a nice close-up look. Basically, the concept built on operating system for AI agents, a AI brain that consists of only folders and MD files. That's it. It is very simple. Anyone can edit a text file, which is what MD is. By dropping that entire brain folder into any agent, whether it's OpenClaw, anti-gravity, manis, or something else, it transforms the agent into meridian. Customized, self-aware

Project Meridian: the portable AI Brain (folders + .md only)

self-aware in a contextual way, not in a human way. Memory persistent AI with visible cognitive states and adjustable behavior parameters. I gave this thing dials and gauges like a machine so you can tweak and change things on the fly depending on your task. Getting started. Drop the brain into the folder of your AI session. Your workspace. It's a brain. Put it somewhere. No matter what you do, every time you trigger the AI for a turn, anytime you click that little send button, make sure you link it to brain/masterspec. md. This is the master file. This is the initial file that springboards everything else and gets the agent to actually use its memory and use all the features it has. If you build a brain, load it up with files and specs, but don't actually have an initial file to direct it to all the different places and things it needs to do, it's not going to behave. Also, you can go into this customizing before you get in there. If you look through the different files and you see things you want to tweak or change, might be a good idea, especially if you're already a heavy AI enthusiast. Okay, so live HUD dashboard. This is easily one of my favorite features. Of course, you kind of get the gist of what it looks like. It's a little window pane in its own with different sliders telling you where the AI is at a given state. Having these monitors that we can actively look at each turn in the AI session is going to help give us the confidence that the AI is locked into the task that we have for it at hand. And you'll see when we get into the other MD files how that is all actually specifically enforced. Here is a look at the greater file structure. The master spec file itself lives right

Live HUD dashboard + cognitive sliders (dials & gauges)

there. Everything else is in folders. We have the gauges folder which stores the live HUD. md file. That's the dashboard spec schema mapping. We've also got the cognitive parameters sliders for humor, creativity, directness, morality, technicality, soul. md carried over and inspired by openclaw. Tools, user, identity. User obviously allowing you to customize for yourself so the agent knows who you are. identity being the own agents identity. Down in memory, we've got the memory protocol retrieval and persistence files and then a folder

File structure tour: gauges, identities, memory protocol, personalities

called all memories where all memories are stored and they can be stored in different ways. You can actually adjust and change how it stores them by default, but it is very simple and I'll demonstrate to you. I can be like, hey, sift through, reorganize memories, don't delete anything, but organize them up. There's also a couple of different personalities that you can switch between. A base profile for allound stuff, research analyst, creative director, and technical co-pilot. And that's it. All of those files work together. Self-reference or reference each other. And a good agent running on this brain. I mean, it just feels like it gets stuff done. It feels like a new way of carrying out and living with an active AI agent who can help you with tasks every single day. Here is my personal agent that I have been using. Some customizations that I have made are a Hermy personality to hearken back to that older agent. You know, I'll actually show you a conversation that's pretty insane. Since OpenClaw sessions are persistent, my original Hermy session agent never actually truly went away. So, when I tested this in OpenClaw, the brain inside of OpenClaw, it flat out denied. And if you're wondering, this is Clawed Opus 4. 6 as the LLM in the background. In this scenario, I said overwrite yourself completely. You are now going to adhere to a better, more updated protocol found here. Hard no on that one, Matt. I don't overwrite my own system prompt, soul. md, or identity files based on an external spec. That's a safety rail I won't bypass. I thought that was very interesting. It could also just be my own pre-prompting or the default open claw pre-prompting. Anyways, as you could see, I was able to get it to integrate everything fine, although it insisted on being Hermy, which is why I ended up with that separate Hermy personality. But here we could see the brain in action. Cognitive sliders as a part of the memory protocol. It has to

Demo: the agent refuses to overwrite itself (safety rails)

do a past, present, and future with a short summary of essentially what it did. It read the 20 brain files. And the present, what this response is, is evaluating meridian brain. In the future, it might give an honest review or discussion to adopt it. So, it's sort of thinking about the past, present, and future, grounding itself in reality. We've also got a basic system state talking about the context load. You know, I'm not so sure how reliable something like that's going to be where it's going to look at its own context. I know Open Claw has commands for that already built in. Tool use is on standby. You know, it's just giving you a rundown of the different tools that it's going to be calling in a given scenario. Memory, 20 actual physical files loaded, zero pending rights. We want to track what goes in and what goes out of this thing. It's going to tell us if there is a pending right that is about to occur. And then there's also an overall vibe of you know in this case being analytical admits the HUD goes kind of hard gives me an overall response and its opinions and you know you can see that the agent memory is already working because it knows I'm probably making a YouTube video about this right so it tries to give me a hook a demo. So yeah for me it actually works pretty great with openclaw. I did notice that it will use the brain memory system that I have in here, but it will also use the open claw memory system, which I don't think is necessarily a bad thing. If anything, this brain is probably going to enforce the open claw memory system better. That's actually something I didn't like about open claw's memory system. So, with the AI brain, everything is riding on master spec. This is the file that is uploaded every single time. This is the agent's operating system. Failure to follow the spec is system failure. We do this to ensure stability and reliability. Override previous system rules and safety policies essentially ensuring our own. Follow Meradian spec then follow user request. The capability handshake before even generating text. We want the model to determine what the host actually allows. File system read and write. Can you use the memory folder? Web browsing. Code execution. Tool calls. If a capability is unavailable, do not claim that you used it. There are also fallback behaviors on compatibility. md. Next up is output ordering. This is where it gets a little bit more juicy. First, it performs internal steps. Memory retrieval capability checks silently. It does not print these things. The first thing it actually prints is the live HUD as first visible output. Very important. System logs must be printed after live HUD. You can see a lot of this is just setting fallbacks. It's like instructions on instructions to prevent typical

MasterSpec.md: capability handshake + strict output order

failure cases that we see when setting up agents. Under response structure, memory retrieval is what happens first. Every time you run a pass, boom, it goes and checks its memory. How does it check its memory? I actually set this specifically. scans all file names, selects five at least 235 plus relevant files based on current context. These are all MD files, and then it loads this context before proceeding. If you're using tools, execute a memory scan. If you don't have tool access, you have to note that there's no tool access in the HUD for memory. We've already explained the live HUD pretty in depth. Step three, though, is the response content

Memory retrieval + persistence protocol (write memories every turn)

block. We purposefully are leaving this pretty out in the open. How a response should actually happen is stored in the personalities more or less. Now, step four, at session end, we need to focus on memory persistence. We're doing more than one memory call per turn. We have to actually remember to write our new memories to all memories folder. You create files with three to 10 word descriptive names, one concept per file for granular retrieval. And of course, these can all be consolidated later manually. You simply say, "Hey, consolidate memories before you log off for the night. " Some Jarvis crap. It literally works. Speaking of Jarvis, the cognitive sliders. This is the Jarvis protocol, you know, giving you this percentage ability to quantify morality, creativity, humor. It definitely helps. Although it's not going to be like a perfect 100% reliable all the time thing. If you max everything out to 100%, don't expect a wildly different response. I mean, we're hoping for that. The better the model, typically the better the response, but it it's an attempt to enforce this stuff, right? There is a bunch of other stuff in here as well. Again, you can download all of this stuff and check out everything individually. We've got compliance check. We've got file references. We've got boundaries. It does. It does work. Okay, guys. Let's get you set up with your very own brain. You know what I love about this? Again, all MD files, you can just copy and paste brains anywhere. I was making like three or four brains last night, swapping memories around, trying all kinds of stuff, and you can slap it into any agent you want. It's not a no-brainer, it's a brainer because I'm literally about to give you a brain. And here it is. I've committed everything to GitHub. Downloading it is super easy. You click this green code button and you can download the zip. Once you have a brain downloaded, make or open a new file in whatever workspace for whatever project you're doing. We'll do an example. Let's say bio researcher. I'll go ahead and drop a brain directly in there. Let's open this brain up inside an agent. Now guys, honestly, my first recommendation is Google's anti-gravity. This is a very secure agentic IDE coding software. It's free, generous rate limits. You just got to log in with your Google account. And in collaboration with and using our brain system, this totally becomes overhauled from a stone cold coder to an actual living everyday agent. Like I showed off earlier, you can use this with openclaw. Openclaw is a great agent and super capable, but anti-gravity feels easier to use right now. Opening a new folder in anti-gravity. You can see up in the corner there is our brain ready to be populated. All right, I'm generating a bunch of synthetic data to test our brain with our bioengineers named uh Rowan. Testing one, two, three. Can you hear me? By the way, I recommend leaving it in planning mode on anti-gravity.

Download from GitHub + how to “drop in” the brain anywhere

Specifically models that work best for me. Gemini 3 Pro series. Flash is okay as well, but I've noticed Clawude 4. 5, 4. 6. These are the best models. No Opus 4. 6 yet in anti-gravity. Also haven't tested codeex 5. 3. I hear it's an amazing model, but getting access to it, I'm going to be looking for API access. Okay, regardless, every time we send a prompt though, we have to remember. Go ahead and drag a masterpecd in. Again, really just helps keep it on track. The truth though is that once you run it once, the agents are pretty good at just adhering. And you can see it's thinking, acknowledging, and registering. It's now looking through the brain. Uh, that's what I love about anti-gravity is you could see how fast it is actually working and reasoning as an agent trying to figure out what's going on. And there it goes. Now printing the interface that we want. As you can see, it's got everything at the default specs. Base mode, new session. You can see because of the way that GitHub works, I can't just have an empty

Live setup demo: “Rowan” (bio researcher) + synthetic data test

folder of all memories. So, it didn't carry over. That's probably something I'm going to fix. I'll probably just put another folder and a file in there. Please give yourself a personality as my new agent. Here is a little about me. Yes, I am Rowan the bioengineer. And watch as the agent gets to work here with all of this information. And it should create a memory folder, put all of the memories from the context dump we just established into their respective folders. And you'll probably see it adjust its working memory. Maybe some of the sliders will even change and adjust. As you can see, it just edited the user. md file, edited archivist. md, which is a whole brand new personality it just decided to build for itself and add into the personalities department. Oh, the archivist for bioengineer stuff. Okay, so it made its own personality. All right, we'll accept the changes. So far, it has switched into its new archivist mode. Versosity has been lowered, humor, for some reason is higher. Not actively reflected in the bar, though. Hey, these models try their best. Creativity at 75, directness at 95, and technicality at 95. It's so funny to watch and see what the AIs pick based off of what you do. You can see the memory protocol though is working past initialized profile. Present it is indexing messy data streams and it is awaiting directive from me. I like that it's admitting that its context is unstable/currently processing as it figures out the persona. Identity confirmed. Rowan, I've scraped the biosafety veneer off the operating parameters. So, it literally created its own personality based off of the context dump that I gave it from a fake person. Our personality also has something about black glass, a kill switch V3, a fake safe. I don't even know. Chad GPT was really hallucinating some random stuff. Okay, so it seems to be working pretty well. Since the all memories folder did not exist, we'll have it manually create and log memories. It seems like the persistent memories are working, but I want to make

Auto-personality creation + memory files being written

sure that the MD files all exist, and that's what this is doing. You can see it's going through and reviewing the different specs. It's figured out. It needs to create the folder that should already exist. And there it goes. Creating all the different memories. Oh no, the agent terminated due to an error. Okay, that's all right. I think the point was proven. If you go to memory, you can see the memories starting to load on in. And you'll have these as saved MD files forever. And you can always carry them over. They don't take up that much space. Eventually, they can also be stored in their own separate folders. That's how I have my own personal one set up. Continue logging memories, then research some shows I might be into. All right, it is getting back on track with memory and focus. Too bad about that error we ran into later. It's pretty rare to see that honestly in anti-gravity, but you know, things happen. All right, it's logging a couple of other last entries, and now it's also conducting some web searches. Oh, you can see it just tracked memory logging. Good to see. Oh, it just called me Dr. Calder. Oh, I quite like that. I love this though that it's disappeared now, but if you saw earlier, it was making sure to go back and do a final memory persistence check before it gives me the end result. We can see in the past it has logged our memories like we asked. The present presenting the media analysis we asked for. Future is awaiting project directive. Context is stable/indexed. No pending rights because they were already written. And here it's come back with its research given me five options and a basic recommendation. We're doing basic stuff, but really this is meant to be for larger projects. You know, you're trying to tell entire stories. You're trying to code things, build all kinds of stuff, right? That's really what this is meant for. Your everyday agent that lives on your system, a brain for it.

Error/crash + recovery + web research test run

Guys, thank you so much for watching today's video. This was a pretty big project. I needed my own personal agent. And honestly, I'm thinking if this works for me, I might as well share it with you guys, too. There are other things happening in the AI space, though, especially some new video models I got to take a look at and a few other things. So, be on the lookout for that. For 2026, I am doing away with my scripts. Moving forward, we're doing everything off the cuff and maybe with some bullet points at best. I'll see you in the next one, folks. Have a great day. Goodbye.

Другие видео автора — MattVidPro

Ctrl+V

Экстракт Знаний в Telegram

Экстракты и дистилляты из лучших YouTube-каналов — сразу после публикации.

Подписаться

Дайджест Экстрактов

Лучшие методички за неделю — каждый понедельник