95% of People STILL Prompt ChatGPT-5 Wrong

Jeff Su · 23.09.2025 · 622,927 views · 17,976 likes · updated 18.02.2026
Video description
🦾 HubSpot’s #ChatGPT Playbook: https://clickhubspot.com/65d274

If your ChatGPT-5 results seem worse, it's not just you. #OpenAI fundamentally changed GPT-5’s architecture, which is why old prompting techniques are now less effective. After a month of testing, I've found 5 simple tips that dramatically improve your outputs. This video covers everything from easy "nudge phrases" that force deeper reasoning to advanced "perfection loops" for complex tasks, helping you master the new model.

*TIMESTAMPS*
00:00 Why ChatGPT-5 is WORSE than before
00:34 Update #1 from OpenAI
01:29 Update #2 from OpenAI
02:10 Tip 1: Router Nudge Phrases
04:32 Tip 2: Verbosity Control
06:18 Tip 3: OpenAI’s Prompt Optimizer
08:12 Tip 4: Create an XML Sandwich
09:55 Tip 5: The Perfection Loop
11:26 ChatGPT-5 Updates Recap

*RESOURCES MENTIONED*
My free AI Toolkit: https://academy.jeffsu.org/ai-toolkit?utm_source=youtube&utm_medium=video&utm_campaign=189
Prompts & Phrases from the video: https://www.jeffsu.org/chatgpt-5-prompting-best-practices

*BUILD A POWERFUL WORKFLOW*
📈 The Workspace Academy - https://academy.jeffsu.org/workspace-academy?utm_source=youtube&utm_medium=video&utm_campaign=189
✍️ My Notion Command Center - https://www.pressplay.cc/link/s/DE1C4C50

*BE MY FRIEND:*
📧 Subscribe to my newsletter - https://www.jeffsu.org/newsletter/?utm_source=youtube&utm_medium=video&utm_campaign=description
📸 Instagram - https://instagram.com/j.sushie
🤝 LinkedIn - https://www.linkedin.com/in/jsu05/

*MY FAVORITE GEAR*
🎬 My YouTube Gear - https://www.jeffsu.org/yt-gear/
🎒 Everyday Carry - https://www.jeffsu.org/my-edc/

#ai

Table of Contents (9 segments)

  1. 0:00 Why ChatGPT-5 is WORSE than before (92 words)
  2. 0:34 Update #1 from OpenAI (170 words)
  3. 1:29 Update #2 from OpenAI (128 words)
  4. 2:10 Tip 1: Router Nudge Phrases (448 words)
  5. 4:32 Tip 2: Verbosity Control (329 words)
  6. 6:18 Tip 3: OpenAI’s Prompt Optimizer (373 words)
  7. 8:12 Tip 4: Create an XML Sandwich (323 words)
  8. 9:55 Tip 5: The Perfection Loop (293 words)
  9. 11:26 ChatGPT-5 Updates Recap (79 words)
0:00

Why ChatGPT-5 is WORSE than before

After ChatGPT-5 launched, millions of users reported getting worse results even though they hadn't changed how they prompt. But that's precisely the problem. To be clear, ChatGPT-5 is a more powerful model, so the same prompts should work better, not worse. But 95% of users don't know that OpenAI made a fundamental change to GPT-5's architecture, meaning our old prompting techniques now perform worse in the new model. I spent a month testing GPT-5 with OpenAI's official guides and discovered five simple tips that drastically improve our outputs. Let's get started.
0:34

Update #1 from OpenAI

First, let's quickly cover GPT-5's two biggest changes for some important context. Update number one: model consolidation. Previously, Plus users had access to all these different models. Now there are just three: GPT-5, GPT-5 Thinking mini, and GPT-5 Thinking. On the surface, this is great. Fewer options should mean a better user experience, right? But the problem is that because of this consolidation, OpenAI had to add an invisible router that picks which model handles your request. It's like calling customer service: you explain your problem once and they should theoretically route you to the right department. Except in ChatGPT-5's case, the router does not work very well. In simple terms, if you just type a prompt and hit enter like before, you sometimes end up with the best models and sometimes with the worst. And since the more powerful models are more expensive to run, it's actually in OpenAI's best interest to let you use the fastest but dumbest option whenever possible. Update number two: GPT-5 is
1:29

Update #2 from OpenAI

much better at following instructions. Again, this sounds great, but it's actually a double-edged sword. You see, OpenAI specifically trained GPT-5 with AI agents in mind, and AI agents need to be very good at following instructions. For example, if you tell an agent to insert a new row within a specific spreadsheet, there's no room for error. So the good news is that GPT-5 now adheres to our prompts with extreme precision. The bad news is, unlike previous models, it's a lot worse at guessing what we want when our prompts are vague and poorly constructed. Put simply, if we stick with our old prompting techniques, we get worse results. As promised, though, here are five tips to fix this, sorted from easiest to hardest. Starting with
2:10

Tip 1: Router Nudge Phrases

tip number one: router nudge phrases. Effort: low. In a nutshell, by adding just four words to the end of our prompts, "think hard about this," we're able to force the invisible router I mentioned earlier to select a higher reasoning model. Jumping into a real-world example, I asked ChatGPT the same question twice: what are the pros and cons of putting my cash in a low-cost index fund versus a money market account? First without the nudge phrase, and then with the nudge phrase. Comparing the outputs, you'll see a few key differences. First, you'll know that deeper reasoning was triggered when you see this thinking indicator showing that ChatGPT thought for a set amount of time. Second, and this is extremely nuanced and important, the deeper reasoning output will almost always include second-order effects we hadn't considered. In this example, GPT-5 gave a much better answer with more thinking time. Right off the bat, it tells me what each one is, pros and cons at a glance, and most importantly, how to choose between the two, right? None of that was actually available in the original output. So, as a rule of thumb, always trigger more reasoning for high-stakes tasks where missing second-order effects could really hurt you. Pro tip: I found three phrases that reliably trigger deeper reasoning: "think hard about this," "think deeply about this," and "think carefully." In contrast, "this is critical" and "this is very important" are phrases others have mentioned but which didn't work in my testing. And I think the reason why is because GPT-5 follows instructions literally, right? The word "important" is vague, whereas "think hard" is very explicit. Pro tip number two: Plus users with access to the Thinking model will still benefit from this tip, though free users will see the biggest improvements. Quick side note: if you're serious about mastering these techniques, then I highly recommend checking out HubSpot's free playbook on prompt engineering.
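The nudge phrase is just a suffix appended to your prompt, so it's easy to automate. A minimal sketch in Python; the helper name is mine, and the phrase list is the one from the video, not anything official from OpenAI:

```python
# The three nudge phrases that reliably triggered deeper reasoning in the video's testing.
NUDGE_PHRASES = (
    "Think hard about this.",
    "Think deeply about this.",
    "Think carefully.",
)

def add_router_nudge(prompt: str, phrase: str = NUDGE_PHRASES[0]) -> str:
    """Append a nudge phrase so GPT-5's router selects a higher-reasoning model."""
    return f"{prompt.rstrip()} {phrase}"

nudged = add_router_nudge(
    "What are the pros and cons of putting my cash in a low-cost "
    "index fund versus a money market account?"
)
```

Paired with a text expander, a helper like this means the nudge is never forgotten on high-stakes prompts.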
It's full of evergreen techniques that complement the tips from today. For instance, I love their focus on building systems and not just one-off prompts because no single prompt will get you from 0 to 100, but chaining multiple prompts into a workflow will. For example, if the objective is to accurately categorize customer feedback, you'd first have AI analyze past examples to extract the patterns, then test those patterns on a small batch to verify accuracy, and only after that validation do you unleash it on the full data set. If you want to learn more practical strategies, I'll leave a link to the free playbook down below. Thank you HubSpot again for sponsoring a portion of this video.
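The categorize-verify-run workflow described above can be sketched as a three-step prompt chain. Here `ask_model` is a hypothetical stand-in for whatever chat call you use; the step wording is my paraphrase of the example, not a quoted prompt:

```python
def categorize_feedback(ask_model, past_examples, sample_batch, full_dataset):
    """Chain three prompts: extract patterns, validate on a sample, then run in full."""
    # Step 1: have the model extract categorization patterns from labeled past examples.
    patterns = ask_model(
        f"Analyze these labeled examples and extract the patterns:\n{past_examples}"
    )
    # Step 2: test those patterns on a small batch to verify accuracy.
    report = ask_model(
        f"Using these patterns:\n{patterns}\n"
        f"Categorize this sample batch and report your accuracy:\n{sample_batch}"
    )
    # Step 3: only after validation, unleash it on the full dataset.
    return ask_model(
        f"Using these validated patterns:\n{patterns}\n"
        f"Categorize all of the following feedback:\n{full_dataset}"
    )
```

The point of the chain is that each prompt's output feeds the next, instead of asking one giant prompt to do everything at once.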
4:32

Tip 2: Verbosity Control

Moving on to tip number two: verbosity control. Effort: low. Going back to ChatGPT's invisible router, in addition to determining reasoning depth, it also has a separate verbosity setting that controls output length. Put simply, just like nudging the router toward deeper reasoning, we can use specific phrases to control exactly how long or short GPT-5's outputs are. And after extensive testing, I found three power phrases we can use daily. First up, low verbosity outputs work best when we need only critical information. For example, I asked ChatGPT to draft a Slack message with an update to our CMO. Without a control phrase, the output is fine, but too long for a Slack message to a C-level executive. But when I added "give me the bottom line in 100 words or less; use markdown for clarity and structure," the result was much better. Moving on, medium verbosity works best when we need key takeaways plus context. For example, I need to explain in a team meeting why click-through rates dropped when conversion rates are up by 30%. The team needs more than just the numbers; they need to understand what's happening and what to do about it. The phrase "aim for a concise three to five paragraph explanation" gives enough detail without losing anyone's attention. Finally, high verbosity outputs are great for comprehensive documents like project briefs, research summaries, or reference materials for multiple teams. For example, I asked ChatGPT to generate a project brief for an internal team kickoff. The project is a complete overhaul of Apple's non-existent AI strategy. And here's the key phrase: "provide a comprehensive and detailed breakdown, 600 to 800 words." The word count is optional, but I found GPT-5 handles specific word counts much better than previous models. I'll throw the three reusable phrases down below for you to copy. And if you watched my AI habits video, you'll also know to add them to your text expander app for easy
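The three verbosity phrases can live in a small lookup so you append the right one per task. A sketch under the assumption that you want exactly the video's three phrases; the helper name and dictionary are mine:

```python
# The three reusable verbosity control phrases from the video, keyed by output length.
VERBOSITY_PHRASES = {
    "low": ("Give me the bottom line in 100 words or less. "
            "Use markdown for clarity and structure."),
    "medium": "Aim for a concise 3 to 5 paragraph explanation.",
    "high": "Provide a comprehensive and detailed breakdown, 600 to 800 words.",
}

def set_verbosity(prompt: str, level: str) -> str:
    """Append the verbosity control phrase for the requested output length."""
    return f"{prompt.rstrip()} {VERBOSITY_PHRASES[level]}"

msg = set_verbosity("Draft a Slack message updating our CMO on the campaign.", "low")
```

This mirrors the text expander habit: one keyword per verbosity level, always phrased the same way.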
6:18

Tip 3: OpenAI’s Prompt Optimizer

access. Now, let's take it up a notch with our first medium effort tip. 95% of users don't know OpenAI has an official prompt optimizer tool that rewrites our prompts for GPT-5. Here's how it works. I paste a prompt designed to help me prepare for my next performance review, click Optimize, wait for a minute or so (and you can speed this up), and not only do I get an optimized prompt, but I can also click here to review the rationale behind each change, which teaches me to write better GPT-5 prompts myself. After running hundreds of prompts through this tool, I've noticed it makes three consistent improvements. First, it adds structure. If the original prompt is just a bunch of text, the optimizer breaks it down into logically distinct sections. Second, it eliminates vagueness. For example, the reason for the change here was to explicitly state that all arguments should be based on provided achievements. Third, it adds error handling. When your prompt has contradictions or missing information, the optimizer adds reminders to ask for clarification. This is a medium effort tip because you need to create a separate developer account with OpenAI that's different from your ChatGPT account, and you need to add a payment method. Don't worry though, I've got a free workaround that works just as well. All you've got to do is use this meta prompt with ChatGPT-5: "You are an expert prompt engineer specializing in creating prompts for AI language models, particularly..." (and we can change this to the ChatGPT-5 Thinking model) "...you're tasked to take my prompt and make it better..." and so on. Then "here's my initial prompt," and you paste your initial prompt. This works really well because, according to OpenAI, GPT-5 is extremely good at critiquing and improving its own instructions, which is a theme we'll see again later on. Pro tip: add this meta prompt to your text expander app so you can quickly access it like this. So you basically have OpenAI's prompt optimizer tool available for free.
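The free workaround is just a meta prompt with a slot for your rough prompt. A minimal sketch; the wording below is paraphrased from the video's description of the meta prompt and the optimizer's three improvements, not OpenAI's official text:

```python
# Reusable meta prompt: asks GPT-5 to act as its own prompt optimizer.
# Wording is a paraphrase of the video's meta prompt, not an official OpenAI prompt.
META_PROMPT = (
    "You are an expert prompt engineer specializing in creating prompts for AI "
    "language models, particularly the ChatGPT-5 Thinking model. You are tasked "
    "with taking my prompt and making it better: add structure with logically "
    "distinct sections, eliminate vagueness, and add error handling so the model "
    "asks for clarification on contradictions or missing information.\n\n"
    "Here is my initial prompt:\n{initial_prompt}"
)

def optimize_prompt_request(initial_prompt: str) -> str:
    """Wrap a rough prompt in the meta prompt so GPT-5 critiques and rewrites it."""
    return META_PROMPT.format(initial_prompt=initial_prompt)
```

Saved in a text expander, this gives you a one-keystroke version of the optimizer without a developer account.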
By the way, I have a free AI toolkit that cuts through the noise and helps you master essential AI tools and workflows. I'll leave a link to that down below.
8:12

Tip 4: Create an XML Sandwich

Tip number four: create an XML sandwich. Effort: medium. If you end up using the meta prompt from just now, you'll notice that sometimes the optimized prompt will have these weird angle brackets wrapping different sections of the text. These are called XML tags, and OpenAI themselves recommend organizing your instructions this way. Taking a step back, if you've ever typed something like "context: here's the background; task: do this thing," then you're already familiar with the concept of structuring your prompts. But with GPT-5's surgical precision at following instructions, this structure is now critical. Think of XML tags as labeled boxes. Instead of dumping everything into one paragraph and hoping GPT-5 figures it out, you're explicitly labeling each component: this is background information, this is the task, this is the output format. For example, instead of saying, "Help me prepare for a product manager interview. Here's all the background information," and then dumping a wall of text, we should do something like this. The task is: act as a hiring manager and, based on my resume and job description, ask me three questions I'm likely to face. Right? And then here I'm just going to paste my resume, and under job description, I'm simply going to paste the job description. The output quality improves dramatically because, at the end of the day, a well-structured prompt helps ChatGPT-5 better comprehend its task, which in turn leads to a better outcome. Pro tip: save a template in your text expander with defaults so you don't have to type it out every time. For example, I have a user-friendly and conversational tone of voice under the tone tag. Pro tip number two: this structure works especially well for custom instructions in custom GPTs and ChatGPT Projects. And since these are recurring use cases, it's worth investing the time to create the XML sandwich. Moving on to our final
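The "labeled boxes" idea is mechanical enough to generate: each section gets wrapped in a matching open/close tag pair. A sketch of the interview example; the helper name and tag names are illustrative choices, not a prescribed schema:

```python
def xml_sandwich(**sections: str) -> str:
    """Wrap each prompt component in matching XML-style tags,
    e.g. xml_sandwich(task="...", resume="...", job_description="...")."""
    return "\n".join(
        f"<{tag}>\n{body.strip()}\n</{tag}>" for tag, body in sections.items()
    )

prompt = xml_sandwich(
    task=("Act as a hiring manager. Based on my resume and the job description, "
          "ask me three questions I'm likely to face."),
    resume="[paste resume here]",
    job_description="[paste job description here]",
    tone="User-friendly and conversational.",
)
```

The output is the XML sandwich itself: a `<task>` block, a `<resume>` block, a `<job_description>` block, and a default `<tone>` block, ready to paste into ChatGPT or a custom GPT's instructions.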
9:55

Tip 5: The Perfection Loop

tip: the perfection loop. Effort: high. Remember how GPT-5 excels at critiquing itself? Well, OpenAI actually recommends we exploit this. In practice, this means instead of accepting GPT-5's first response and manually asking for improvements, you tell it upfront to create its own definition of excellence, grade its own work, and keep iterating internally until it achieves the best result. Imagine hiring a designer who grades their initial draft against their own quality checklist, decides it's only a five out of 10, redesigns it, but version two is still only a 7 out of 10, and they keep going until they score a 10 out of 10. This is essentially what we're making GPT-5 do. Two examples to illustrate how this works. Example one: write a market analysis report on the enterprise AI industry. Before you begin, develop an internal rubric for what constitutes a world-class market analysis report. Internally iterate and refine the draft until it scores top marks against your rubric. Example two: draft an outline for my QBR (quarterly business review) presentation. Before you begin, create an internal rubric with five criteria for a perfect QBR. Then use that rubric to internally iterate the outline until your response scores 10 out of 10. Pro tip: don't worry, you don't have to write custom iteration instructions every single time. I have a universal perfection loop you can paste at the end of any prompt, and I'll share it down below. Now, some of you might be wondering, "Jeff, we went through a lot today. When should we use this versus the other tips?" And the rule of thumb is: the perfection loop works best for complex zero-to-one tasks like creating finished documents from scratch or writing production-ready code. And to
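A universal perfection loop is just a fixed suffix that generalizes the two examples above. The wording below is my paraphrase built from those examples, not the author's actual downloadable prompt:

```python
# A generic "perfection loop" suffix generalizing the video's two examples.
# This wording is a paraphrase, not the author's universal prompt from the link below.
PERFECTION_LOOP = (
    "Before you begin, develop an internal rubric with five criteria for a "
    "world-class result. Grade your draft against that rubric, then internally "
    "iterate and refine it until it scores 10 out of 10. Only show me the "
    "final version."
)

def with_perfection_loop(prompt: str) -> str:
    """Append the self-grading iteration instructions to any zero-to-one prompt."""
    return f"{prompt.rstrip()}\n\n{PERFECTION_LOOP}"
```

Because it's task-agnostic, the same suffix works for the market analysis report, the QBR outline, or any other from-scratch deliverable.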
11:26

ChatGPT-5 Updates Recap

be clear, the tips I went through today are not mutually exclusive. They can stack on top of each other. You can use nudge phrases with verbosity control, with XML sandwiches, with the perfection loop, right? It's not an either-or, it's a yes-and. If you enjoyed these tips, you might want to check out my deep dive on when to use each ChatGPT feature. See you in the next video. In the meantime, have a great one.
