How to Build Agentic Prompts Better Than 99.75% of People


Corey McClain · 11.02.2026 · 8,030 views · 441 likes · updated 18.02.2026


Video description
Everything I make for this channel runs through the same AI workflows. Not a course. Not a prompt pack. You download them, plug your business in, and run them. One hour, one session and a month of content that builds your audience: https://www.incomecreator.pro/ltao1 Mastering AI: Automating Workflows & Boosting Productivity Stop writing new prompts every time you use AI. In this video, I'm breaking down the exact system I use to solve a problem once, standardize it, and never deal with it again — saving roughly 10 hours a week. If you're still chatting back and forth with AI like it's a conversation, you're losing hours you don't even realize. Most people treat ChatGPT, Claude, and Gemini like an intern they have to babysit — constantly re-explaining their business, their process, and their constraints. In this video, I show you how to move from being a "chatter" to becoming an architect by building two things: a library and a logic system. 00:00 Introduction 00:51 The Library and Logic 00:58 Time Traps 01:57 Creating a Standardized Workflow 06:20 The Router Prompt 11:07 Building a Comprehensive AI Library 16:56 Building Your Own AI Systems 17:53 Final Thoughts and Next Steps

Table of contents (8 segments)

Introduction

If you look at my side panel, you can see how many different GPTs I was creating, trying to solve this problem. I was working on this for like two days straight. And now, whenever I need a great-looking PDF, all I do is open that GPT, drop in my content, press enter, and let it create the PDF for me. No more headaches, because I had a problem, I solved it, and then I standardized it. And I'm going to show you how to do that in this video. If you write a new prompt every time you use AI, then you're using it wrong. I'm Corey McClain, and I help professional creators, operators, and solopreneurs turn AI into reliable infrastructure. This video is part of my series titled Systems Over Luck, a new series of deep-dive videos into how we can use AI better so we actually use it less. And if you want to fast-track this process, I just launched my 1-hour content strategy kit. The link will be in the

The Library and Logic

description. But today, I'm showing you the only two things that actually matter when it comes to building these systems, and that's the library and the logic.

Time Traps

The reason I say you're using AI wrong if you're starting with a new prompt every time you use it is because you don't realize how much time you're actually losing by doing that. Every time you have a conversation, you have to explain your business, the different challenges and constraints in your life. Every time you start the conversation, you have to think, okay, what am I doing here? How do I do it? And it's this repetitive process that you don't realize is adding up over the weeks, the months, and the years, and is stealing time away from you. And I know that you feel it. You sit down at the laptop, you start working on a project, you're chatting back and forth with AI, and before you know it, you've spent hours just to accomplish maybe a small amount of work. And so instead of being more productive with it, you feel like it's an intern that you have to babysit, that's always asking questions or getting things wrong, and then you have to correct it. And it's like, how did you not catch this an hour ago? And

Creating a Standardized Workflow

so, what I want to do is introduce you to my system for working with AI, for using AI to be more productive, that will automatically save you so much time and keep you from falling into those time traps that just steal all your time away. You can probably save around 10 hours a week just by doing this one thing. Whenever you solve a problem, whenever you've been in one of these conversations with AI, you've gone back and forth, you've looked at different solutions, and it finally came to the right answer, and you're filled with anxiety and frustration because you're sitting there thinking, "Why didn't you come up with this 3 hours ago? We talked about it, but you didn't mention it. You didn't bring it up." Just pause. Let the frustration and anxiety go. And this is what you want to tell the AI, in so many words: Let's turn this conversation into a standardized workflow. Map out the steps, what we got wrong, and what we got right. And that's the logic of the problem that you just solved. Because once you solve that problem once and you standardize it, you never have to deal with that problem again. I'll give you a perfect example. I was trying to find a new way to design some great-looking PDFs, and I did find one. And in that process, I ran into a lot of problems. I learned a lot of new things about printing and browsers and different platforms where you can print things. And at the end of the conversation, I simply had the AI turn it into a standardized process. So that now, whenever I get ready to create beautiful, professional-looking PDFs that match my brand, it's as simple as copying and pasting a simple prompt. I could even set it up as a custom GPT where all I have to do is drop in my content and it automatically gives me a beautiful, well-designed PDF. So, not only am I saving countless hours every week as I'm creating this content and these assets to share with my audience, but I'm also saving money because I don't have to go to Fiverr and pay someone to create it for me.
So instead of saying, "ChatGPT, write a blog post for me," the next time you sit down to actually write a blog post, think about your process. Where does your research start? What do you do after that? How do you decide what tone to write it in? What length? What I like to do is use the Voice Memos app on my iPhone. And after I get through recording, there's three dots you can tap, and you'll see a tab that says Copy Transcript. And the reason I do this is because you can do an 8 to 10 minute brain dump and just talk about your process. You can go backwards and forwards and lateral with your thinking, and you can just think the whole thing out without trying to sit down, get everything in the right order, erase, and move things around. You don't have to do all of that, right? Just do a brain dump on your phone and copy the transcript. You won't have to wait 90 seconds like you will if you do the dictation inside ChatGPT, Claude, or Gemini. And then you just paste it, and the AI is going to organize your process. You look at that process, you add notes, anything else, and it takes about 5 minutes tops. You're going to have your entire process laid out in front of you. And as soon as you do this with the first problem you solve, you will have taken the first step in moving away from being a chatter to becoming an architect. And this is one of the biggest problems I see with ChatGPT right now. It has remained a vertical platform from its inception to this moment, whereas platforms like Gemini and Claude are beginning to fan out laterally and create other features and platforms, like Cowork, Skills, NotebookLM, and more, that allow you to experiment or be more productive and more creative with AI. Whereas ChatGPT is still basically a back-and-forth conversation that is eating up a lot of your time.
This step-by-step system that you've laid out is going to become your logic, but it's not everything that goes into logic, because there are a lot of problems that you can run into with AI. And a lot of people think that they have to keep giving it these prompts, because if I'm not constantly prompting the AI, then who's driving the car? And if I just let this probabilistic large language model do what it wants to do, then how do I keep it from hallucinating? From drifting? How do I keep it from just going AWOL? And those are all valid questions, all valid concerns. And

The Router Prompt

so I want to introduce you to the idea of a router prompt. I've talked about this several times on the channel, but I really want to focus on it now, because a router prompt is one of the most important prompts that you will ever write. And actually, it's the last prompt you should ever write, and I'll explain why in a moment, but not right now. So the router prompt is the prompt that controls everything. It is the place where your step-by-step logic will live, if it can live there, and I say "if" because of different platform constraints and platform preferences. If you prefer ChatGPT as your platform, then you have to understand that with your custom instructions, in projects, and in custom GPTs, you are limited to 8,000 characters. I don't know why we're still limited to 8,000 characters there, but that's the state of affairs. If you're using Claude or Gemini, then you don't have any problems; your instructions can be as long as you want them to be. I've had instructions that are 13 to 14,000 characters, and they fit inside of the custom instructions just fine, and the AI had no problem following them at all. So those platforms are good. But essentially, the first component of your router prompt is going to be your governance layer. You might call this your constitution or your rule book, but it's a list of things that tells the AI what the boundaries are. It sets the stage for the game and controls the way it behaves. And so I like to write this in machine language and in a deterministic tone, so the AI understands that these rules, these laws, are immutable. There's one system I created where, if there wasn't enough data, it should immediately stop and not go forward, because the results would be subpar and you really couldn't put any trust in them. And so I was using this system the other day, and I was trying to get it to go past that point, not realizing that I'd put a governor on it.
I put a lock on it: if there's not enough data, it's not going to work. And so I had to go inside of it and rewrite the system instructions, the constitution, and then it would actually let me go by. The second component that I always place inside of my router prompt is a registry. The registry is either going to be a link to the file if it's hosted on a virtual private server, because with ChatGPT I've had to do that in the past, or it's going to be a list of files that I have uploaded to the library. And this registry just gives the model context on what it's going to be working with, so that when I reference these files later on, it has something to tie them back to. And the final component of every router prompt is conditional logic. So, we take that step-by-step scenario that we created earlier, for, say, writing a blog post, and then we convert that into conditional logic. Now, I've been doing this for a while, so a lot of times I get it right the first time around. But what I like to do is take that step-by-step procedure that the AI has laid out for me from my brain dump, open it up on my monitor screen, and then take my phone and do a voice memo. I'll just look at the screen, review everything that's written line by line, and talk about how I make decisions at these different points, and the places where I have to stop, or the system should stop, and get feedback from the operator. I include everything that I possibly can, and I leave all the notes for the conditional logic so the system understands: at this point, if it's this topic, then it needs to be titled this way, and if it's that, then it needs to be this, and so forth. And then I drop that voice recording into the conversation on my iPhone and refresh my browser, so I can keep working on my monitor or on my desktop.
And just like that, I have a new set of step-by-step instructions that have conditional logic embedded within them, so that the AI is prepared to deal with any situation in the same way that I would deal with it. Now, these steps can become quite lengthy, and the way that I number them is with an alphanumeric system. So we'll have step A, which might be "title the blog post," and within step A, it might be A.1, A.2, and so on. We get as granular as we need to make sure that the full decision tree and conditional logic is captured one time and one time only, and the AI knows exactly what to do, and it knows not to skip any steps. And this is how we make certain that we can just hand the keys over to the AI and trust that it's going to give us a high-quality output, or in most cases an output comparable to our own, because if we're building the system, it can only do what we give it.
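The three router-prompt components described above (a governance layer, a file registry, and conditional logic with alphanumeric step numbering) can be sketched as plain text assembled in code. This is a minimal illustration, not the video's actual prompt: all section wording, file names, and step labels are hypothetical, and the length check mirrors the 8,000-character ChatGPT custom-instructions limit mentioned earlier.

```python
# Hypothetical governance layer: immutable rules, written deterministically.
GOVERNANCE = """RULES (immutable):
1. If input data is insufficient, STOP and ask the operator. Do not proceed.
2. Never skip a step. Execute steps in the order given.
3. Final output must match the artifact template exactly."""

# Hypothetical registry: tells the model which library files exist and why.
REGISTRY = """FILES:
- brand_voice.md    # tone and style guide
- blog_template.md  # artifact template for the final output
- title_prompt.md   # sub-prompt, used only at the titling step"""

# Conditional logic with alphanumeric numbering: step A, then A.1, A.2, ...
LOGIC = """WORKFLOW:
A. Title the blog post
   A.1 If the topic is a tutorial, use a "How to ..." pattern.
   A.2 Otherwise, load title_prompt.md and follow it.
B. Draft the body using brand_voice.md.
C. Format the draft against blog_template.md, then stop for operator review."""

def build_router_prompt() -> str:
    """Join the three components and enforce the platform character limit."""
    prompt = "\n\n".join([GOVERNANCE, REGISTRY, LOGIC])
    # ChatGPT custom instructions / custom GPTs cap out around 8,000
    # characters; Claude and Gemini accept much longer instructions.
    if len(prompt) > 8000:
        raise ValueError(f"Router prompt is {len(prompt)} chars; trim it or "
                         "move sub-prompts into the library.")
    return prompt

router = build_router_prompt()
print(len(router) <= 8000)  # True: fits inside ChatGPT's limit
```

Keeping the components as separate named strings makes it easy to see which part is over budget when the limit check fires, and to move an oversized piece into a library file instead.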

Building a Comprehensive AI Library

But now that we have the logic in place, the next thing that I want to talk to you about is the library. The AI knows the boundaries. It knows what it should and shouldn't do. It knows the step-by-step procedure. It has the conditional logic. But what the AI needs now to really excel and create that high-quality output or consistent activity is a library. And for the library, you might think of Claude Skills. It's going to be all of the different files, folders, PDFs, how-to knowledge, frameworks, data, just everything you could possibly think the model will need to perform the task at hand. So let's just say that you have an accounting workflow that you're building, and there are a lot of equations that the model needs to use, very specific equations for amortization or different things like that from Excel. Then you might have a database with the 15 or 20 formulas that you use, and these are the only ones it should ever use. You might have them labeled, and you might tell the model when to use each one within the conditional logic, et cetera. And so you would upload that, so that the model knows the exact equation that you want it to use and isn't trying to figure it out. The more you tell the model what to do, and the more you give it the resources it needs to do the things you told it to do, the less hallucination I personally see with all of these models across the board. So if you're someone who just sits back and wants to ask the AI to do it, you definitely need to check your work. But if you're someone who's building the system based on solving the problem one time and then giving the model everything it possibly needs to carry out this task in the future, then you still need to check the outputs, but you can have a lot more confidence that those outputs are going to be right every single time.
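To make the accounting example concrete, here is one way a labeled formula library could look: each entry has an ID the conditional logic can reference, the Excel form, an executable form, and a note on when to use it. The IDs, labels, and step references are invented for illustration; the amortization formula is the standard fixed-payment equation behind Excel's PMT function.

```python
# Hypothetical formula library: the model is told to use these verbatim,
# never to derive its own. IDs like "AMORT-01" are referenced from the
# router prompt's conditional logic.
FORMULAS = {
    "AMORT-01": {
        "name": "Fixed-rate amortization payment",
        "excel": "=PMT(rate, nper, -pv)",
        # payment = pv * r / (1 - (1 + r)^-n), r = periodic rate, n = periods
        "python": lambda rate, nper, pv: pv * rate / (1 - (1 + rate) ** -nper),
        "use_when": "Monthly payment on a fixed-rate loan (step C.2, say).",
    },
    "INT-01": {
        "name": "Simple interest",
        "excel": "=pv*rate*t",
        "python": lambda rate, t, pv: pv * rate * t,
        "use_when": "Quick interest-only estimates (step C.3, say).",
    },
}

# Example: $10,000 at 0.5% per month over 36 months.
payment = FORMULAS["AMORT-01"]["python"](0.005, 36, 10_000)
print(round(payment, 2))  # 304.22
```

Uploaded as a file in the library, a table like this gives the model the exact equations and the lookup key the conditional logic uses, which is the whole point: the model retrieves, it doesn't improvise.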
There might be some edge cases where it hallucinates after a very lengthy conversation, but for the most part, you can have a lot of trust in the outputs that you receive. So when it comes to the components of a library, one of the most important is synthetic data. Synthetic data is basically data that's not real, but it's formatted like it's real, and you use it to give the AI expert knowledge. Another component of the library is artifact templates. So for instance, if you have proposals that you send out on a regular basis and you like your proposals to be formatted a certain way, then I would create a template of a proposal, upload it to the knowledge base, and inside the instructions I would tell it to pull such-and-such file when it gets ready to create the final proposal. And the third component of a healthy library is going to be prompts. Sometimes the custom instructions are going to be so detailed that you need other prompts to carry out the smallest tasks. So, for instance, and I know you guys would like to see examples outside of content creation, but I'll just use myself first, then I'll try to come up with one from outside of content creation. Let's go back to the blog post writing and titling. You might have a prompt that is specifically activated when you're at the title-writing stage, and it has an entire way of writing titles and coming up with ideas for titles that is unique to your workflow and the way that you think. And then after it writes the title, it immediately reverts back to the custom instructions inside your router prompt. And in this way, instead of trying to cram all 15 or 20 prompts into a single prompt, you can break them up into smaller prompts that focus on individual tasks. And each of those prompts can focus on being an expert at that particular thing.
And if you like, you can either create separate knowledge packs that you upload to assist those particular prompts, or you can embed the knowledge within the prompts. And this way the router prompt can say, okay, let's run prompt number one, then prompt number two, then prompt number three. So instead of trying to place your conditional logic inside the custom instructions alongside the prompt that actually captures your workflow, especially if you're using ChatGPT, you can simply take that prompt, upload it to your library, and inside of your router prompt, tell the AI to start with that prompt and use it as the overarching prompt for the entire workflow every time a conversation is started. And it works like magic. And that is exactly why your router prompt should be written last. After you've laid out the workflow, you have the conditional logic, and you have an idea of the rules, you need to build your library. Once you build your library, you know the outputs that you want and what you need to get those outputs. You've created your synthetic data if you need expert knowledge. You've created your artifact templates if you need to teach it how to do something, or to give it a document that lets it do the job in a standardized, high-quality way every single time. Or you have several different prompts that it needs to complete, and you've written all of those prompts, and everything is ready. Then you write your router prompt, because now you have the view of the full landscape, and that's the only way you can write an effective router prompt: knowing everything that needs to happen, every file, how every file needs to be used, and under what circumstances. But once you write that router prompt, you have a system that will give you back the hours you'd otherwise spend sitting in front of your laptop, staring and wondering how to get this done, because you chose to solve the problem once and then document it by turning it into a standard workflow.
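The "run prompt number one, then prompt number two, then prompt number three" dispatch pattern reads naturally as a loop: the router names the sub-prompts in order, and each step's output becomes context for the next. A hedged sketch, where the sub-prompt names are invented and `call_model` is a stub standing in for whatever real model API you use:

```python
# In-memory dict standing in for sub-prompt files uploaded to the library.
# File names and contents are hypothetical.
SUB_PROMPTS = {
    "01_research.md": "Research step: gather five sources on the topic.",
    "02_draft.md": "Draft step: write the post using the research.",
    "03_title.md": "Title step: apply the titling rules, then stop.",
}

def call_model(prompt: str, context: str) -> str:
    # Stub for a real API call (OpenAI, Anthropic, Gemini, ...). Here it
    # just echoes which step ran, appended to the running context.
    step = prompt.split(":")[0]
    return (context + " -> " if context else "") + step

def run_workflow(order: list[str]) -> str:
    """Run each sub-prompt in sequence, feeding forward the prior output."""
    context = ""
    for name in order:
        context = call_model(SUB_PROMPTS[name], context)
    return context

trace = run_workflow(["01_research.md", "02_draft.md", "03_title.md"])
print(trace)  # Research step -> Draft step -> Title step
```

Inside ChatGPT or Claude the "loop" is the router prompt's own instructions rather than Python, but the shape is the same: an ordered registry of small expert prompts, each consuming the previous step's output.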
And this is especially helpful when it's something that you know you're going to be doing on a consistent basis. And so if you

Building Your Own AI Systems

want to, you can absolutely keep giving the AI prompts, keep giving it messages, and keep trying to get the best response from it. But in my opinion, it's best if you begin to build your own architecture, build your own systems, so that you can have a smoother ride and experience with AI and get high-quality outputs every single time. And so the next time you have a problem or a difficult conversation with AI, don't be discouraged. Think about it as an opportunity. Ask yourself: is this something that I'm probably going to be doing again tomorrow or next week, or is it something that I know I do consistently, on a regular basis, or that I use AI for on a regular basis? And if the answer is yes, then come back to this video, watch it again if you need to, or drop the transcript into ChatGPT and ask for instructions on how to do this, and then create an asset, a tool, that you can use on a repeatable

Final Thoughts and Next Steps

basis. If you got value out of this video, make sure you hit the like button, hype the video, subscribe to the channel, and as always, take care, have a great day, and be on the lookout for the next video in the series. It'll be right here.
