How to Build NotebookLM Workflows That Save Hours of Work
Duration: 41:33

Corey McClain · 12.01.2026 · 15,699 views · 717 likes · updated 18.02.2026
Video description
👉 Free download: NotebookLM Compound Intelligence Prompt Pack: https://forms.gle/FwPXp2utCFaVxvkC6

Mastering AI Efficiency: Optimize Your Workflow with ChatGPT, Claude, and Gemini. In this video, the speaker addresses the common issue of spending too much time using AI tools like ChatGPT, Claude, and Gemini, and provides strategies to optimize their usage. By creating efficient workflows and systems, users can let AI work in the background, thus using it less while achieving better results. The speaker shares personal insights, demonstrates how to automate workflows, create content, and combine all these strategies for maximum efficiency. Key topics include the importance of context, setting up persistent memory, and avoiding common mistakes. The video also covers the integration of NotebookLM with Gemini for enhanced AI capabilities and the importance of exporting and utilizing ChatGPT data.

00:00 Introduction: Using AI Efficiently
01:22 Standardizing AI Workflows
02:30 Strategies for Effective AI Usage
05:27 Automation: The First Lego Block
06:10 Breaking Down Processes into Phases
10:55 Creating Effective Prompts
15:03 Avoiding Common Mistakes
17:22 Importance of Context in AI
20:10 Using NotebookLM for Enhanced AI
35:47 Comparing AI Models Using Gemini 3 Pro

#AI #NotebookLM #GeminiAI

Table of contents (10 segments)

  1. 0:00 Introduction: Using AI Efficiently (252 words)
  2. 1:22 Standardizing AI Workflows (257 words)
  3. 2:30 Strategies for Effective AI Usage (561 words)
  4. 5:27 Automation: The First Lego Block (136 words)
  5. 6:10 Breaking Down Processes into Phases (959 words)
  6. 10:55 Creating Effective Prompts (782 words)
  7. 15:03 Avoiding Common Mistakes (444 words)
  8. 17:22 Importance of Context in AI (533 words)
  9. 20:10 Using NotebookLM for Enhanced AI (2,915 words)
  10. 35:47 Comparing AI Models Using Gemini 3 Pro (942 words)
0:00

Introduction: Using AI Efficiently

If you use ChatGPT, Claude, Gemini, or NotebookLM every single day, and you use it for several hours every day, then you're most likely using it wrong. And the reason why is because nobody is teaching us how to use AI better and therefore use it less. We shouldn't be spending hours chatting with a large language model. Instead, we should have workflows and systems where the models are working in the background for us, or completing the tasks that we require with exceptional quality and at lightning speed, so that the final outcome is that we actually use AI less because we're using it better. This is a problem that I've identified in my own life. If I show you my year with ChatGPT and we start working through it, what you're going to see is that they label me as a top 1% of users, and a top 10% of that 1% for messages sent. This means that I can confidently say that I use ChatGPT just as much as you, if not more. And while that may give me some insight into the model, it's also a problem, because if I was using AI better, I would actually be using it less. A large portion of that time is research and building assets. But even then, I should have processes and workflows that allow me to perform those same tasks with the same level of quality as I would manually, at a faster
1:22

Standardizing AI Workflows

speed. Every restaurant chain tries to standardize their recipes so that no matter where you get a latte in their chain, it always tastes the same. And we should be standardizing the things that we're doing with AI so that no matter when we run the workflow, we get the same level of quality output that we're expecting, and we're not caught off guard and we're not spending hours in front of our screen. Before you jump into the video, I want to let you know that I'm going to put timestamps at the bottom. At the beginning of the video, I'm going to be explaining my thought process behind a lot of the things that I'm going to be demoing later in the video. So, if that portion isn't what's interesting to you and you want to skip it and just get to actually watching me do this stuff, then skip ahead and watch that. But if at any time you're watching me do something and you don't understand why I'm doing it that way, then feel free to jump back to the first half of the video and you should find an explanation. Either way, use this video as you see fit to get value out of it. And if you do, do me a huge solid: hit the like button, subscribe to the channel, and if you want to download this NotebookLM compound intelligence prompt pack, I'm going to put a link in the description. Now, back to the video. And so, what I
2:30

Strategies for Effective AI Usage

want to do in this video is take the time to actually break down the way that I think about AI and some of the strategies that you can use to start building out these different workflows, strategies, assets, and so forth, so that you're still getting the same results from using ChatGPT as you've always gotten, but now you're getting them faster, and it's freeing up your time to do other things that you want to do, and you're not spending all day in front of your laptop screen. And so the idea is that we would actually use AI less. If you look right here on the screen, you can see that this is our chat on the left-hand side. This is our value. This is the effort that we're putting in. It's all the way at the top. I want this to actually go down for you. I want the effort that you put into AI to go down, but I want the value to actually go up. And this happens as we mature with our usage of AI and we create a compounding system where we begin to create different prompts that we save, workflows, etc. Claude Skills touches on this a little bit. And even if you don't use Claude or Claude Skills, the idea that you should have standardized prompts for different things in your life and in your business is an idea you need to lock in right now. And this is how we all started using ChatGPT: we just open it up and we start asking for answers. And a lot of us still use it like that to this day. And I have no problem with that, until we start sitting down for hours just chatting with no end in sight. Instead, what we should be doing is building small little Lego blocks. And you can build almost anything with Legos. We've seen Lego sets where you can rebuild the Death Star from Star Wars, and just so many other cool things that you can do with it. So, when we start using it this way, we begin to create assets that compound over time, and that give us the ability to actually build bigger and better things, so to speak,
because once we add a Lego block to the box, it's something that we can use repeatedly. And so for this video, we're going to follow a very simple road map. Number one, we're going to talk about automating workflows, because it's very important to understand the process of automation: to speed up different processes and tasks that you perform on a daily basis to make a living, to get healthier, to make your relationships healthier, or whatever your personal use is for AI. The second thing we're going to be doing is content, because we have a new method for creating far more permanent content inside of AI than we've ever explored before on this channel. And finally, we want to talk about how to combine all of this to create the perfect storm. And I'm telling you, this workflow is so beautiful. I absolutely love it. It's my new favorite workflow for 2026, and you're going to absolutely love it. So, make sure you stay for the full video.
5:27

Automation: The First Lego Block

So, let's get started with the first Lego block, which is automation. The first thing that you should understand about automation and AI in general is that you're going to have to learn new skills and how to embrace new ideas if you really want to become a better user. And one of those ideas is the idea of architecting. You have to learn how to build things. Even if you're not going to build an app or a website, you still have to learn how to build processes and workflows. And so people think, well, I could just have one prompt and tell the AI to do everything. And that's a huge mistake. We've known for a very long time now that AI performs best when you give it one task at a time. And so
6:10

Breaking Down Processes into Phases

it's important that when you have a very large process, you learn how to deconstruct it: see the whole process, but then slow it down so that you can identify the different steps, the different outcomes that need to happen in your own thought process and your own work, so that you can share that with the AI and it can replace you. Because when the system only has one task to work on, it's far more reliable, and you have more confidence in the outcome in general. And so essentially, you have your process for whatever it is. Now, mine might be creating content, but let's just say that yours is your daily routine, or your workout habits, or something you do at your job. The first thing you want to do is understand what that process is. And then you want to slow it down, like I said already, and identify the phases. And so the phases are going to be these large chunks. Maybe the first phase is importing all of the client's data into an Excel spreadsheet or a program that you use at work, or taking a holistic look at a person's body mass index, their vitals, and other items, if you're a personal trainer or a dietician or something similar and you're trying to put together a wellness and fitness plan for them. And for most people who do this on a regular basis, you're going to know what those phases are. This is not going to be difficult. This is not rocket science. This is something you're going to know, like, hey, I have to do this first, this second, this third, this fourth. But what I want you to do that's not shown right here is, before you go from the phase to the prompt, I want you to slow down, take this phase, and break it down into steps. So I'm just going to use me creating a piece of content as an example. One of the first things I do is come up with an idea. And so that's an entire phase and a very long process for creating a video like this.
And so in that process, I realized, hey, not only do I have to come up with an idea, but I need to make certain that this idea is relevant for the news cycle right now. It's relevant to my audience. It's valuable. It's something that I can talk about for a long time without a problem. It's something that's going to actually deliver some level of transformation to the people who watch this video. There's a lot that actually goes into trying to make the video as good as possible, and I'm improving in that area, in my process and in this phase, right? And so within that, there are going to be different steps. So first we need to come up with the idea. This is my idea. Okay. Now I need to find an angle, because I've had ideas before that I posted on my channel, somebody else saw my idea, they took it, put a different angle on it, did the exact same video, and it performed like 10 times better. And so I understand that the angle is something that's very important with YouTube. Then after the angle, I have to think about the hook. How do I get them to stay for the first three seconds? Then I have to think about, okay, after I get them to stay for the first three seconds, because 10 to 20% of people automatically drop within the first 30 seconds, how do I get them to stay for the full 30 seconds? So now I've got to work on the next 27-second portion. And I have to make sure it's tight and efficient. And that's a lot, because people are watching YouTube kind of like TikTok now, and it's difficult. So if you're still here, congratulations. I really do appreciate you. But you need to understand what your process is. And then you need to break it down into phases, and those phases down into micro steps. And I love to do this with a traditional Roman numeral outline, where I just have my sub-points, bullets, etc., right?
So I can see the hierarchy of the ideas: I've got my process, my phases, then my steps, and then my smaller tasks beneath each step, so that I can kind of see how the whole system goes together. And markdown is great for this, because the AI reads it very well and it understands the structure. But then after I do that, I have this process, these different blocks or phases, and these are going to be long-chain prompts, one for each of these phases. Then I'm going to use a router prompt to bring it all together. This router prompt simply says: listen, after you perform this first phase prompt and you've walked through all of the 15 steps here one by one, you're going to move to this next one and then walk through the 10 steps here one by one. Then you're going to move to the next one and walk through the final five steps one by one, and then we're done. Right? So that's what the router prompt does. It organizes all of these, because instead of trying to write one prompt that conveys this entire pipeline, it's much easier if you just have one prompt that organizes the other prompts. And so a very simple way to do
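The router-prompt structure described above can be sketched in plain Python. This is a minimal illustration, not anything from the video's prompt pack: the phase names, step lists, and the `send` callback are all hypothetical, and `send` stands in for a real model call that this sketch does not make.

```python
# Illustrative phases and steps, nested the way the Roman-numeral
# outline nests process -> phase -> step.
PHASES = [
    ("Idea",  ["brainstorm topics", "check news relevance", "pick one idea"]),
    ("Angle", ["list possible angles", "pick the strongest angle"]),
    ("Hook",  ["draft the first 3 seconds", "tighten the first 30 seconds"]),
]

def run_phase(name, steps, send):
    """Walk one phase's steps one at a time: one task per call."""
    return [send(f"[{name}] {step}") for step in steps]

def run_router(phases, send):
    """The router: finish every step of one phase before starting the next."""
    results = {}
    for name, steps in phases:
        results[name] = run_phase(name, steps, send)
    return results

if __name__ == "__main__":
    fake_model = lambda prompt: f"done: {prompt}"  # stand-in for the LLM
    out = run_router(PHASES, fake_model)
    print(len(out))        # 3
    print(out["Idea"][0])  # done: [Idea] brainstorm topics
```

The point of the design is the same one the transcript makes: no single mega-prompt tries to hold the whole pipeline; the router only sequences, and each phase only does its own steps.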
10:55

Creating Effective Prompts

this is to simply do a brain dump. If you have a process, you could just say, "Hey, this is what I'm trying to do," etc., etc. Just do a brain dump and tell the AI everything that you want to have happen. And then the AI is going to organize your ideas into a logical sequence automatically. Now, if you want to, and this is experimental, you can ask the AI to organize it emotionally or thematically. There are different ways to organize data, and I don't want to get ahead of myself, but I truly believe that one of the most underutilized ways to use AI is cross-pollination: looking at different industries and different ways that people do things that have nothing to do with AI, and then taking those concepts and ideas and using them for your business or your creative process. And AI is great at doing that. But after your brain dump and converting it to phases, you want to write the phase prompts. And again, there should be another step here, but those phase prompts are going to be minute steps, minute tasks: do this, and also, this is how you do it. And what I also like to do is embed examples or embed knowledge so that it can look and see, these are good examples of how it's done. And each of our small little prompts is going to look like that inside of this one mega prompt, which is probably too big to upload to a project or something like that, and we'll get into that later. And then we use our router prompt for our custom instructions, our custom GPTs, our projects, and our notebooks in NotebookLM. And then you just install it and you run it, and it works like magic. But we don't just write the small prompts and the small steps. We also make sure that we have certain checks and balances and gates placed along the way. So for instance, you want to make sure that you have different places within a phase where you stop after each little step and you verify what the AI is doing. So I have an AI that will create an offer, right?
And so it's going to create a full offer system for you. And so instead of just adding the steps, you want to make sure that you add gates as well. And these aren't going to be phase one inside of this, but inside of a phase, at the end of step one, you might have a verification that says: is this legit? Do you agree with this? Do you approve this output? And it doesn't go forward until you approve it, and so forth. And this is how you write your prompts, your mega prompts that capture your workflows, in a way where all you have to do is sit back and watch the AI do your work the way you would do it. Now, it takes some time to build these the first time around. It might take you a weekend, maybe five days at the most, but once you get it done, after that, all you're doing is making small tweaks to this prompt, and it's going to be something that you're absolutely happy with. I promise you. And so, what we're doing is we're going from this place where we're asking the AI, well, what do I do and how do I do this, and we're creating a standardized way to do certain things. And we're telling the AI: hey, this is the stuff I want done, this is how I want you to do it, and these are some examples of how you do each of these small little things in a good way. And this is what creates these repeatable workflows where you can automate a lot of the things that you're doing with AI right now. And so, yes, this is going to save you time with whatever your processes are. Yes, you're going to get consistent outputs. Yes, you can have modular refinement, where all you have to do is tweak one section of the prompt instead of the full prompt. Yes, this workflow is going to be an asset that you can reuse over and over for a very long time to come, and all you have to do is update small little portions as things in your field or expertise change. So, if you haven't been doing this, this is something you absolutely need to start doing immediately. So, here are some things
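The gate idea can be sketched the same way. This is an illustrative Python sketch, not the video's actual workflow: `execute` and `approve` are hypothetical stand-ins for the model call and your manual "do you approve this output?" review.

```python
def run_with_gates(steps, execute, approve, max_retries=2):
    """Run steps in order; redo a step until its output passes the gate."""
    approved = []
    for step in steps:
        for _attempt in range(max_retries + 1):
            output = execute(step)
            if approve(step, output):   # the gate: "Do you approve this output?"
                approved.append(output)
                break
        else:
            # The gate was never passed: stop instead of drifting forward
            # on a hallucinated or bad intermediate result.
            raise RuntimeError(f"step never approved: {step!r}")
    return approved

if __name__ == "__main__":
    steps = ["draft the offer", "price the offer", "write the guarantee"]
    execute = lambda s: s.upper()          # fake model output
    approve = lambda s, out: len(out) > 0  # fake always-yes reviewer
    print(run_with_gates(steps, execute, approve))
```

The design choice mirrors the transcript: the failure mode isn't a bad step, it's a bad step that the next step silently builds on, so the loop halts rather than continue unapproved.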
15:03

Avoiding Common Mistakes

that you want to avoid so that your first time doing this can be as smooth as possible, your second time can be better, your third almost perfect, and it can be something that you enjoy. Don't try to overcomplicate it. Just start small. So, for a YouTube video, you might say: okay, step one, I need an idea. Step two, I need a title. I need a thumbnail. I need some talking points. You don't get into any of the other technical stuff about scripting, because there's a lot to learn. You just start small. Okay, first I need to find a product. Second, I need to source the product. Third, I need to find product-market fit. Whatever your steps are for what you're doing, just keep it small and then start building from that. Another mistake that people make is vague deliverables. You want to tell the AI exactly what you want and the format that you want it in. If you want it in markdown, which is going to be like titles, subtitles, paragraphs, then tell it that. If you want it in JSON, then tell it that. Be very particular with what you want and how you want it. And so a lot of times, when I'm actually writing my prompts, I will also create templates, and I will upload a template with the prompt and say, "Hey, when you get through doing this, use this template to give me the answer." And it will format the data and the response according to the template every single time. Works like a charm. Beautiful process. Another problem, and this is very similar to overengineering, is excessive scope: trying to automate a massive, complex process on day one. So start with a contained, well-understood task. Start small. Don't try to automate something in one day and think you're going to get it right. But if you do it the way that I'm showing you, you can probably get it done in three, to where you're happy with it and you can use it on a regular basis and make improvements and updates in the future.
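The "be particular about the format" advice can be made concrete with a small sketch: ask for JSON with named fields, then validate the reply against the template before using it. The key names below are hypothetical, invented for illustration, not from the video.

```python
import json

# Illustrative template: the fields we told the model to return.
REQUIRED_KEYS = {"title", "thumbnail_text", "talking_points"}

def validate_reply(raw_reply):
    """Parse the model's reply as JSON and check it matches the template."""
    data = json.loads(raw_reply)  # raises ValueError if it isn't valid JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply is missing keys: {sorted(missing)}")
    return data

if __name__ == "__main__":
    reply = '{"title": "My Video", "thumbnail_text": "WOW", "talking_points": ["a", "b"]}'
    print(validate_reply(reply)["title"])  # My Video
```

A check like this is the programmatic version of uploading a template with the prompt: the deliverable either matches the agreed shape or is rejected before it flows downstream.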
And the last one is something we just covered. People might think that having no checkpoints isn't a problem, but it's a major problem. Sometimes the AI is going to make assumptions. Sometimes it's going to hallucinate, and you need to be able to stop it when it does and pull it back on track. So learning the steps of your
17:22

Importance of Context in AI

process, standardizing those steps in a natural language prompt or code, standardizing how it's done well, is very important, but it's useless if the AI keeps forgetting your rules and the details about your workflow and your life. And this is where context comes in. Now, this is nothing new to me, and this is probably nothing new to you if you've been watching the channel for a while. You understand that this is something that I'm very interested in when it comes to AI, because I want AI that feels personalized, that understands the conversation. When I start talking to it, I don't need it to start asking me questions about things that we've already discussed. I want it to feel like I'm talking to a human as much as I possibly can, even though I know it's not, because nobody likes starting a conversation over from zero and repeating themselves every day. And the reason that context is so important is because a lot of times you have a lot of great outputs, but they get trapped in chat history. You don't have those assets. You can't carry them around with you. They're somewhere in your ChatGPT account. The same explanations are repeated every session. So instead of the AI learning, like, this is how we solved this problem in the past, and basically saying, hey, we already talked about this, you're supposed to be doing this, and just putting you back on track, it's having the same conversation with you again. You have native memory, all platforms have it now, but it's limited. So many times the model isn't the problem. It's not Claude. It's not Gemini. It's not ChatGPT. It's just that you don't have that context. And so the best way to build a persistent memory for your AI is not to wait for OpenAI, Claude, and other platforms to build it for you. You're going to have to find ways to do it yourself.
Now, if you still prefer using ChatGPT and Claude as your primary platforms, and this is where you want your AI to have the most memory, there are a lot of different ways to do it. Depending on the platform that you're using, you might use GitHub or your repo to connect your code, so it can always know your code and your database or whatever. You might use Model Context Protocol (MCP) for ChatGPT or Claude, if you don't mind turning on developer mode in ChatGPT. Or you can use a virtual private server with ChatGPT and Claude, if you don't want to go into developer mode and use Model Context Protocol for safety reasons, and you want your stuff stored somewhere privately where people can't see it. But essentially, this is going to be a library with your rules, your templates, your protocols. This is going to be a place where you store everything that's important to you. And this is the data that the AI needs on a regular basis to always know what's going on. It can quickly go through it and respond in an up-to-date fashion. And one of the
20:10

Using NotebookLM for Enhanced AI

best platforms right now for this is NotebookLM, because of the integration with Gemini. I can add a notebook, or more than one notebook, to a single chat thread with Gemini 3 Pro. It's crazy. I'm going to show you later in the video. So, I can create a notebook around my personal health, personal finances, business finances. I can create notebooks around all types of different topics, and then just drop them in the chat and start chatting with Gemini. And now Gemini instantly understands four to five different areas of my life without missing a beat and helps me connect the dots. Because sometimes when we use AI, we compartmentalize a lot to organize. But then sometimes we want, or in my opinion, we need, a synergistic or holistic view that takes everything into account all at once. You can do that now with Gemini. You can't do it with the other models, not in as much detail as you can with Gemini, because of NotebookLM. So, Gemini does not have projects. Over here in Gemini, you do not get projects or folders. You get Google Gems, you can build Gems, you can even build AI apps, but you do not have projects. And the reason why I think you don't have projects is because you have NotebookLM. You have a place where you can upload sources. And if you have NotebookLM Plus, you can upload 300 sources in a single notebook. And one of the beautiful things about this, which I'm going to show you later on, is that it kind of creates a safeguard for you. Because let's just say that something does happen to your ChatGPT account. It gets hacked or accidentally deleted or whatever, and you lose all of your data. Well, guess what? You just lost all of your data: three years of conversations, three years of ideas with ChatGPT, completely gone forever. But if you download your data like I'm going to show you later in the video, then you always have it. It's always yours. You can never lose it.
You paid $20 a month, $500 a year, so let's just say $1,500 for the last three years. And that $1,500, and not even that, but the ideas, which are more valuable, it's all gone. But not if you download it and repurpose it. And so no matter whether you're using ChatGPT with a virtual private server, Claude with Model Context Protocol (which is going to be a public community server, I believe, or maybe a private server, I'm not sure about that), or you're using Gemini 3 with NotebookLM. Whichever one you're using, when the AI has the context that it needs, you're going to be faster in your process. This is one thing that you can do that you don't necessarily have to get good at. You just have to figure out your platform and then set it up. And you're automatically going to use AI a little bit less, because you're going to spend less time re-explaining yourself, because the AI is going to have the context it needs to build on top of your last conversation and keep going forward instead of going backwards. And in addition to the faster outputs, they're going to be more consistent outputs. And the decisions that you make are going to be better, because now you can track where you went wrong in the past and you can update those items. And so now we have the automation for reliable steps. We have the context to fortify or reinforce our confidence in this automation and the quality of the outputs. But now we need to talk about the brain and the hands. Now, when it comes to the brain and the hands of this entire operation, one of the platforms that I'm going to recommend to you over any other platform is Google Gemini, because of the NotebookLM integration.
It is just such a powerful integration. When it comes to my way of thinking about AI, which started with ChatGPT and ChatGPT Projects (Claude Projects are better than ChatGPT Projects for this workflow, in my opinion, just more expensive), NotebookLM plus Gemini is just the ultimate. It's the best place right now for the way I see using AI going forward in the future. And so this is a place that's kind of like your library, where you hold everything and you bring everything together very quickly and easily. And in this example, Gemini is going to be the hands, but your notebooks are the brains. And so NotebookLM is where you store vast amounts of knowledge and information that you want to repurpose in your daily workflows and in whatever you're doing. And it's kind of hard to come up with these examples from different industries off the top of your head, because I really don't know about the different security measures and different accounts you can use. So let's just say that you're trying to learn how to write Python manually, even though the AI can write a lot of this code for you. So you upload a lot of free content about writing Python code and how to write it, so that it's like a repository to troubleshoot your code with, or to debug it based on these known principles for how to actually do it. And so the way this runs, once you get it set up, is kind of like this. Instead of just dropping single prompts in the chat, you're going to identify the task. You're going to attach the relevant context. You're going to run the execution. You're going to review the output. You're going to save the best output back to the vault, which updates the context. And then you're going to repeat. Gemini with NotebookLM is the easiest platform for you to actually do this and compound what you're learning from the AI and save it. But as fun as this new integration is, there are some things to be aware of with this as well.
The first one is noisy context. You shouldn't add notebooks to the conversation just for the sake of adding them. There should be a definitive purpose, and I've made this mistake before myself. Number two, vague requests. We talked about this already. We need to be very specific with what we want from the AI, the format we want it in, etc. And number three, a broken loop: not saving that output back to NotebookLM and then converting it into a source that you can then download, which is critical to this entire operation. Because what a lot of people don't understand is that your history is an asset. This is your data. And so a lot of people just think that their chats with AI are just that. They just think about them as chats. But what you have to realize is that in some of these sessions, you've had critical breakthroughs for problems that you've been trying to solve, situations in your life, and you can't remember and hold all of those things in the forefront of your memory. They're buried in your subconscious. And so with AI, we can resurface our best ideas, our best memories, easier. And so when you start viewing your conversations as a data set, you realize how valuable they are. You've had three years to converse with ChatGPT about everything under the sun, and it's chatted back, and sometimes it said some crazy stuff, and sometimes you've been like, wow, that was a good response, and you've implemented what it said and it was great advice. It leveled the playing field for a lot of people, but it's all buried, because you don't have your data. And so your strategies, your ideas, your decisions, your blind spots, your experiments, your frameworks, everything, it's buried. And you need a way to actually make it useful, rather than just giving it a prompt, because ChatGPT is not going to dig through the millions of words to find those little nuggets for you. It's not going to do it. The "reference past chats" feature isn't going to do it for you. Sorry, bud.
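The identify, attach context, run, review, save-back loop described earlier can be sketched in a few lines, with the "vault" as a plain list of saved outputs. Everything here is illustrative: `execute` and `review` are hypothetical stand-ins for the model call and your own judgment. The key line is the last step, appending approved output back to the vault, which is exactly the "broken loop" pitfall when it's skipped.

```python
def vault_iteration(task, vault, execute, review):
    """One pass of the loop: run the task with context, save if approved."""
    context = "\n".join(vault)               # attach the relevant context
    output = execute(f"{context}\n\nTASK: {task}")
    if review(output):                       # your manual review step
        vault.append(output)                 # save it back: closes the loop
    return output

if __name__ == "__main__":
    vault = ["Rule: always answer in bullet points."]
    execute = lambda prompt: "some model output"
    vault_iteration("summarize my week", vault, execute, lambda o: True)
    print(len(vault))  # 2: the approved output was saved back
```

Each pass leaves the vault a little richer, so the next pass starts with more context instead of from zero, which is the compounding the transcript keeps coming back to.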
And so you need to export your data set, because when you do, your thinking, the intelligence that you give to other AI systems, is going to compound. Your thoughts become better thoughts. Better thoughts become frameworks. Frameworks become systems, and systems become outcomes. But it starts off with better thinking. And so that's the whole process that we're in. We want to download our data, and then we want to turn it into something that we can compound and get an unfair advantage and leverage from. And so when you own your history, you can recover your best prompts. You can extract hidden SOPs, because you have processes that you don't even realize. You can stop repeated patterns that you can't see, by uncovering them and identifying them, and then you can even give your AI a system prompt that teaches it how to identify when you're doing it and to call you out. I've done that. I have a video about it somewhere on my channel. You can build a plan using years of real context and not just generic advice. That is probably one of the single most valuable things you could do. Think about the sheer volume of data that you've shared with AI and the number of problems that you've solved, and then think about the fact that you're able to download that and have it on your computer, and then upload it and reuse it, or modify it, or clean that data, or convert that data into different frameworks. If we go back to the first step, convert that data into different automations. Use that data as a canvas to create your new 2026 on. Oh my goodness, the possibilities are endless. You really have to understand how valuable it is for you to export your data, because this is going to create your unfair advantage. A lot of people are making plans for 2026, but they're doing it without their exported data.
So the question is: do you want to build your plans for 2026 without any context, starting from scratch with only what you can tell Gemini in a single conversation, or would you rather start a conversation with ChatGPT, Claude, or Gemini with the full context of your past conversations? Now I want to walk you through the things we've been discussing in this video. Hopefully what I was able to do in the first part was explain these concepts in a way that you can understand, take, and repurpose however you see fit. If you did get value out of that section and you're still watching, do me a huge favor and hit the like button and subscribe to the channel if you haven't already. The quick start is a simple two-step process. Number one: download or export your ChatGPT data, and use a bash command to process it. One way to quickly tell whether it worked is the file size. I can look at these and tell: okay, four megabytes, three, four; they're mostly around four megabytes, so I know it's most likely accurate. Once you get over here, just add sources and upload your documents. Let's walk through this prompt pack with an example. The first prompt is a prompt library extractor, and we already ran a version of it here. I'm going to run it again, though. This time I'll go to Gemini, start a new chat, click the plus button, add a notebook, and add ChatGPT as a source. Then I'll tell Gemini 3 Pro to do the same thing. Ah, and Gemini pulled the actual prompt word for word: more context in the window, bigger model.
So I pasted the prompts from NotebookLM into Gemini and told it to create a table that names, labels, and compares these prompts as two lists, because I want to see the ones NotebookLM pulled alongside the ones Gemini pulled. You can run every prompt I'm going to show you inside NotebookLM; you can absolutely do that, but Gemini is more robust. That's the power of adding your notebook to the chat: it's almost like you keep the same body of the car but drop this massive Gemini engine inside it. All right, here they are side by side. This is the provided list, what 3 Flash produced from NotebookLM, and this is from Gemini. The way it's looking, column A, the list I provided, appears to have more prompts. Which list included the most prompts? And secondly, which one do you believe pulled the most relevant prompts, or performed the task at hand best? I'm just going to see what Gemini says. Gemini loves to acquiesce, though, so I wouldn't be surprised if it says NotebookLM did a better job; it'll hardly ever stand up for itself. "Your list A wins: 15 prompts. Gemini excavated nine prompts." But I will give Gemini this: it wrote the prompts out word for word. Which list performed best? "Your list performed best. Here's the honest assessment of why your list was superior to my excavation. The personal insight blind spot: I completely missed the personal insight category. Naming versus mechanics: your list uses strategic names; my list added value while smaller, listing the forensic linguist and viral playbook prompts." All right, let me ask this: which list wrote the prompts out word for word? "My excavated list is the one I wrote out word for word as full copy-paste templates in the first response. To clarify the status of the text: column B, my findings.
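For what it's worth, the overlap part of that comparison ("which prompts did both runs find?") doesn't need a model at all once you have the two lists; a minimal sketch comparing them by normalized name (the prompt names below are hypothetical stand-ins, not the actual lists from the video):

```python
def compare_prompt_lists(list_a, list_b):
    """Compare two extracted prompt libraries by normalized name:
    which prompts both runs found, and which each run missed."""
    norm = lambda s: s.strip().lower()
    a = {norm(p): p for p in list_a}
    b = {norm(p): p for p in list_b}
    return {
        "both":   sorted(a[k] for k in a.keys() & b.keys()),
        "only_a": sorted(a[k] for k in a.keys() - b.keys()),
        "only_b": sorted(b[k] for k in b.keys() - a.keys()),
    }

# Hypothetical stand-ins for the NotebookLM and Gemini outputs.
notebooklm_run = ["Forensic Linguist", "Viral Playbook", "Shadow Self"]
gemini_run = ["forensic linguist", "Voice Fingerprint"]
print(compare_prompt_lists(notebooklm_run, gemini_run))
```

The model is still better at judging relevance and quality; this only settles the counting question deterministically.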
I expanded these into full master templates with verbatim content steps. Column A, your list, provided these as prompt kernels." So hear me out: this is a workflow. If I want to identify a list of the prompts, I'll use NotebookLM, but then I'll turn around, and, let's do this in real time, copy this list and come over to Gemini. Before pasting the list in, we tell Gemini: "I want you to extract each of these prompts word for word and provide me with a template from the available notebook. You will not be able to place all of these prompts in a single response, so I will prompt you 'continue' and you will give me the next prompt or next set of prompts." Paste in the list, and hit return. That's how I would use these two tools together for this particular workflow, now that I understand it works like this. Once it gives me the prompts, I'd export the sheets, download them as a Markdown document, and upload them wherever I'm going to be using these assets. Very briefly before we move forward: you can see that it's pulling out the prompts, but it did summarize some of them because they're very lengthy. Now we're going to start over. Upload a new notebook, this time with both ChatGPT and Claude as sources, and click add. Then we ask: act as a behavior analyst. Analyze my conversation pattern to identify topics I return to repeatedly, questions I ask in multiple variations, concepts I reference frequently, and what I'm trying to solve that I have not solved yet. Output: five patterns, five root causes, five recommended experiments. We can see the first behavior pattern is the meta-system recursive loop: you repeatedly attempt to build a master system or agent that can do the work for you, rather than using the AI to simply help you do the work.
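If you ever drive this "extract in batches, then say continue" loop through an API instead of the chat UI, the batching side is easy to script. A minimal sketch; the header wording paraphrases the instruction above, and the batch size is an arbitrary assumption:

```python
def build_extraction_requests(prompt_names, batch_size=5):
    """Split a long list of prompt names into follow-up requests so
    no single turn asks for more than the model can return verbatim."""
    header = ("Extract each of these prompts word for word from the "
              "available notebook, as full templates:\n")
    requests = []
    for i in range(0, len(prompt_names), batch_size):
        chunk = prompt_names[i:i + batch_size]
        requests.append(header + "\n".join(f"- {n}" for n in chunk))
    return requests
```

Each returned string is one turn; send them in order and concatenate the responses.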
If we look at the root causes, we can see that cognitive overload and executive dysfunction are number one, then perfectionism as procrastination, and the expert trap. After it goes through the behaviors and the analysis, it gives me ideas for breaking out of the problems and patterns it has identified. One of them is essentially: hey, don't use your YouTube Made Simple workflow to make your next couple of YouTube videos; just make them raw instead of trying to make them perfect, and post a raw video for seven days straight. And so now we're going to ask the AI: act as a fact checker. I have asked similar questions
35:47

Comparing AI Models Using Gemini 3 Pro

to both ChatGPT and Claude AI. Compare their responses. Where do they agree? Where do they conflict? When they conflict, which is more likely to be correct, and why? And so here's the breakdown of how ChatGPT and Claude both approach business with the questions that I ask. Both models agree on the fundamental economic theory, but they conflict significantly on implementation. ChatGPT acts as a COO, a chief operating officer: it focuses on building durable systems, workflows, and long-term operating systems. Claude acts as a CRO, a chief revenue officer: it focuses on sales psychology, immediate cash flow, and market positioning. Here is the detailed breakdown. Business departments and structure: conflict level high. Fact-check verdict: ChatGPT is more correct for business stability; standard departments are universal necessities. Claude's model is specific to a content production agency: if you stop creating content, Claude's business model halts. ChatGPT's model, which includes sales ops, implies a broader foundation that can survive without constant content. Offer strategy: conflict level medium. Launch strategy: conflict level critical. Philosophy and mindset: low. And here's a summary table. That was a very good exercise. I can tell Gemini something like: based on your assessment of each platform and which model performed best in each particular area, give me a final output for my business from each of those models. So maybe ChatGPT was better at one thing; use that notebook to create that documentation or process. Maybe Claude was better at another thing; then use Claude to create that document or process. And so now I have the benefit of still using Opus 4.5 even though I cancelled both of my Claude accounts, just because the usage limits are so brutal. If the usage were like ChatGPT's, and I don't know if they'll ever get to a place where they can handle that much compute and have it still be profitable or make sense for them as a business,
I don't know, but if they ever did, they would probably become the number two contender in the space. Claude is a really well-trained model. ChatGPT 5.2 is smart, it's intelligent, but the way it expresses it is almost computer-like smart. With Claude, you feel more like you're talking to a really smart person. Both of them are intelligent; Claude just communicates it better. The next thing we're able to do is really get to understand ourselves. We could run prompts like: act as an urban planner for the mind. Review these chat logs as if they were traffic patterns. Identify my desire paths, the specific topics or workflows where I consistently go off road, spiral, or have to correct the AI. Map these friction points and suggest a paved road, a new master prompt that follows my natural behavior rather than fighting it. And so now I have a list of different items where I constantly get derailed with AI, and I have a prompt, the construct engine, that I can use to keep me from falling into this again. I can copy this prompt, return to ChatGPT, go to Settings, Personalization, and right down where my custom instructions are, paste in that new prompt. It's a little long, so maybe I need to trim some off, or have Gemini rewrite it until it's under 1,500 characters, because I think that's the limit. But you get the idea. In addition to learning about the ways you've been going off track with AI, there's the shadow self prompt, where we ask the AI to act as a Jungian psychologist and analyze your messages to build a psychological profile. Where are your blind spots? What cognitive biases do you repeatedly demonstrate in these conversations? Where do I sound confident but lack evidence? This is a good one, because sometimes we don't know what we don't know. We have the voice fingerprint: act as a linguist and ghostwriter. Analyze my writing style in the user messages.
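The trimming step ("have Gemini rewrite it until it fits") can also be approximated locally. A minimal sketch; the 1,500-character limit is my assumption about the custom-instructions box, so check the UI for the current value:

```python
def fit_custom_instructions(prompt: str, limit: int = 1500) -> str:
    """Trim a prompt to fit a fixed-size custom-instructions box.
    Cuts at the last sentence boundary inside the limit, falling
    back to a hard cut if no boundary is found."""
    if len(prompt) <= limit:
        return prompt
    cut = prompt[:limit]
    # Prefer ending on a sentence boundary rather than mid-word.
    boundary = max(cut.rfind(sep) for sep in (". ", ".\n", "! ", "? "))
    if boundary > 0:
        return cut[: boundary + 1].rstrip()
    return cut.rstrip()
```

A hard character cut loses meaning, so for anything substantially over the limit, having the model rewrite for concision (as described above) is still the better move; this just guarantees the result pastes in.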
Decode my voice fingerprint: sentence length variation, common vocabulary. This is one of the easiest ways to create a style guide so NotebookLM or any other AI can write in your voice. We have the hostile witness. And this one acts as a memetic epidemiologist: trace the infection vector of my best ideas. Find patient zero, the first vague mention of my top three successful concepts, and trace their mutation through the logs. What environmental conditions (time, context, emotion) allow these ideas to thrive, and I would say even grow? There are so many different prompts in here that you can run: the blue ocean radar, the project premortem, the skill gap bridge. And then finally, you create a full synthesis that produces your 2026 blueprint. Hopefully you have put something together that lets you create and use AI in a way that completely changes your life and the way you use it, and helps you reach your goals, so that you're achieving more at work, at home, in business, in your content, and at play, while using AI less but getting more out of it. If you did get value out of this video and you're watching this right now, I really do appreciate you. If you could do me a solid and hit the like button, that would be great. If not, still cool, because you're here. But anyways, take care. Have a great day.
