#sponsored 🚀 Become an AI Master – All-in-one AI Learning https://aimaster.me/pro
📹Get a Custom Promo Video From AI Master https://collab.aimaster.me/
In this video, I explore Sam Altman’s hints about GPT-5 and a lifetime ChatGPT subscription available for a one-time payment. I also cover the new GitHub integration that lets ChatGPT help developers quickly understand complex codebases. Next, I break down Google’s Gemini 2.5 and its standout capabilities. Finally, I explain how Apple is poised to challenge Google’s dominance on iOS with AI-powered search features in Safari.
Chapters:
00:00 - Intro
00:25 - GitHub Codebases now in ChatGPT
02:35 - The Hallucination Dilemma
07:54 - OpenAI’s Next Moves: LIFETIME Subscription
13:22 - Google’s Gemini 2.5: The Next Leap
14:46 - Apple Joins the AI Search Race
GPT-5 is coming, and it might be the last subscription you ever buy. OpenAI just dropped clues about a lifetime ChatGPT plan, and it's not just pricing that's changing. From hallucination fixes to multimodal upgrades, deep research on GitHub, and a major strategy shift, GPT-5 could be a turning point. But that's just the start: Google's Gemini is punching back, and Apple might ditch Google search. Let's dive into what's really happening and why the AI landscape is about to change forever. OpenAI just
leveled up ChatGPT's capabilities again, this time targeting developers. The company announced its first official connector for ChatGPT's deep research feature, integrating directly with GitHub. This means ChatGPT can now dive into GitHub repositories, helping developers quickly understand complex codebases and engineering documentation. It's available initially for ChatGPT Plus, Pro, and Team subscribers. The integration allows developers to ask detailed questions directly about their code. The AI can break down product specs into clear technical tasks, outline dependencies, summarize code structures, and even provide examples of implementing APIs. This new feature continues OpenAI's push into developer tooling, following recent launches like the Codex CLI terminal tool and enhanced coding support within the ChatGPT desktop app. Programming remains a core focus for the company, underscored by OpenAI's reported $3 billion acquisition deal for the coding assistant Windsurf. Additionally, OpenAI rolled out fine-tuning capabilities for its newer models, including the powerful o4-mini and GPT-4.1 nano, offering developers deeper customization options through reinforcement fine-tuning. Currently, o4-mini fine-tuning is limited to verified organizations, while GPT-4.1 nano fine-tuning is open to all paying users. These developments highlight OpenAI's commitment to evolving ChatGPT from a generalist assistant into a specialized powerhouse for software developers. But here's the real story: everything OpenAI is launching now feels like prep work. The GitHub connector, the O-series reasoning models, the fine-tuning tools: it's all laying the foundation for GPT-5. Sources suggest the next model will unify these features into one smarter and more reliable system. That matters because even in the released versions there is a high risk of ChatGPT hallucinations, where the model confidently but incorrectly invents details.
Developers emphasize this tool is intended to assist but not replace human expertise.
By the way, ChatGPT's hallucinations have become a big problem for many OpenAI users. Imagine an AI that can ace a math exam but then confidently claims George Washington was the first astronaut. Welcome to the perplexing world of AI hallucinations. ChatGPT and other AI chatbots have had this quirk since their inception, generating imaginary facts and straight-up lies alongside accurate answers. One would hope that smarter models would hallucinate less, but the opposite might be happening. OpenAI's latest reasoning-focused models, codenamed o3 and o4-mini, are indeed brilliant. They reason through problems step by step, yet they're also spouting nonsense at an alarming rate. In OpenAI's own tests, o3 hallucinated answers for 33% of questions about public figures, roughly double the rate of its predecessor. The more compact o4-mini did even worse, making things up in 48% of such queries. And when tested on open-ended trivia, the issue mushroomed to a 79% hallucination rate for o4-mini, a full-blown identity crisis, as one report puts it. In other words, nearly four out of five answers it gave were lucid gibberish. But hallucinations aren't exclusive to ChatGPT. All models suffer equally, even smaller ones like Grok. What I love about Grok is that the team making it is pretty small; it's almost like motivation for small businesses. If the words token and transformer still sound like Marvel characters, check out the brand-new Generative AI 101 crash course inside Geek Academy, your fast track to mastering how models like ChatGPT actually read prompts, why they hallucinate, and how to make any LLM, image, music, or video generator answer exactly the way you want, in bite-sized, no-fluff lessons with fresh drops every week. We move from core concepts to real-world hacks, prompt engineering, and automations, plus ready-to-use templates. One subscription unlocks everything, and right now six-month access is 50% off.
So, hit the link, level up, and let's dive back into the news. Why are these advanced AIs so prone to veering off the rails? Ironically, the very feature that makes them smarter, their reasoning ability, might be the culprit. These new GPT models don't just regurgitate training data. They think out loud, exploring multiple solution paths. But with more freedom to improvise comes more chances to stray from reality. One theory making the rounds in AI research is that the more reasoning a model tries to do, the more chances it has to go off the rails. Unlike earlier chatbots that played it safe with high-confidence predictions, the reasoning-oriented models venture further into the unknown, sometimes blurring the line between a logical inference and a fabricated fact. OpenAI itself admits it's puzzled. In a technical report, the company noted that more research is needed to understand why scaling up reasoning models worsens hallucinations. One hint: because o3 and o4-mini make more claims overall, they also end up making more wrong claims in absolute terms, even if many of their complex answers are correct. Essentially, a more verbose and adventurous AI has more opportunities to stumble. The implications of this hallucination problem are serious. It's almost dark comedy to see an AI rattle off a perfectly formatted but utterly fictitious citation to a court case, unless you're the lawyer who just submitted it to a judge. Yes, that actually happened. The more we integrate AI into critical tasks like drafting legal documents, answering medical questions, and writing code, the higher the stakes when it makes a mistake. OpenAI and others know this, of course, and they're racing to tame hallucinations so that AI can be trusted in real-world applications. One promising approach is to give models access to tools like web search, effectively letting them fact-check themselves. For instance, GPT-4 with browsing tools enabled hit 90% accuracy on OpenAI's internal factual benchmark.
A dramatic improvement, but using external tools comes with trade-offs: more complexity, slower responses, and potential privacy concerns when every query pings a search engine. OpenAI is treating this hallucination issue as an ongoing area of research, and the entire industry is watching closely. There's a paradox at play: models that think more, and thus can solve harder problems, also tend to make up more stuff. It's a high-wire act to find the balance between fluency (sounding convincing) and fidelity (being correct). For now, the takeaway is clear. Today's chatbots, even top-of-the-line ones, cannot be fully trusted to tell truth from fiction. They're extremely knowledgeable and extremely error-prone at the same time. As users, we have to keep our guard up. And as AI creators, OpenAI and its peers know that fixing hallucinations is key to the next leap forward. In fact, that next leap may be arriving soon in the form of a major ChatGPT upgrade, one that OpenAI hopes will be smarter, more reliable, and more user-friendly. Let's talk about the upcoming GPT-5, what changes it will bring, and how OpenAI's strategy, including some new pricing plans, aims to address both the technical flaws and the business challenges facing AI today.
OpenAI has never been shy about pushing the envelope, and rumor has it the GPT-5 release is just around the corner. OpenAI CEO Sam Altman has put updates on hold for a short while to integrate everything they've learned from recent models, like the o3 reasoning models, and to ensure they have the infrastructure to handle what he predicts will be unprecedented demand. Most likely, this delay is necessary to ensure that GPT-5 turns out significantly better than we initially expected when it's finally released. Behind the scenes, OpenAI's plan for the new GPT version is to unify the best of all their models: the creativity and fluency of earlier GPT-4-style systems with the step-by-step reasoning of the new O-series. Once GPT-5 rolls out, you won't have to pick between different modes or model versions for each task. The confusing model lineup is set to be simplified into one cohesive ChatGPT experience that just works. We expect that when GPT-5 arrives, the standalone O-series models like o3 will be rolled into the new system entirely, operating behind the scenes as part of the new model's brain. The aim is a smoother user experience: one chat interface, one unified AI capable of handling simple and complex queries without mode switching. It's also expected to be smarter in a general sense. Intriguingly, OpenAI's plan may include making GPT-5 available even to free users, at least at a basic level of capability. The idea is that the free tier would get GPT-5 at a standard intelligence while paying subscribers get to unlock higher performance modes, essentially tiered AI smarts. This hasn't been officially confirmed, but it aligns with OpenAI's broader strategy: keep ChatGPT's core accessible to maintain its vast user base and data stream while monetizing power users who need the maximum capabilities. Speaking of monetization, another big development is the discovery of new subscription tiers hidden in the ChatGPT app code.
Right now, users either pay $20 per month for ChatGPT Plus, a hefty $200 for Pro, or stick with free basic access. But OpenAI appears to be readying weekly, annual, and even lifetime subscription options. These were spotted by a developer digging through an app update and later confirmed by multiple sources. There's no official pricing yet, and these plans aren't live, but the very idea is fascinating. A weekly pass could attract users who only need ChatGPT's full abilities for short-term projects or bursts of usage. Instead of shelling out for a full month, you might pay a few bucks to binge an AI coding assistant for a week to finish a hackathon or research paper, then cancel. An annual plan would presumably come at a slight discount for long-term commitment, perhaps around $200 per year for Plus if they follow typical discount rates. And then there's the lifetime subscription, a rarity in the software world and almost unheard of in a cloud AI service. What does lifetime even mean for an AI that's rapidly evolving? Industry observers note that many so-called lifetime deals essentially mean around 10 years of access. Digital Trends ran the numbers: for ChatGPT Plus, 10 years at $20 per month is about $2,000 to $3,000 in today's money. If someone wanted a lifetime of the Pro tier, assuming Pro stays near $200 per month, we're talking on the order of $20,000 to $30,000 upfront. Not exactly consumer-friendly. After all, running these models is extremely expensive, and despite multi-billion-dollar investments, OpenAI spent more money operating ChatGPT in its first year than it earned. More subscription revenue is one obvious way to close that gap. From a market-strategy perspective, these new tiers hint at OpenAI's balancing act between accessibility and profitability. They've drawn in hundreds of millions of users with a free service, and now they need to monetize without driving people away. More flexible plans could lower the barrier for newcomers to pay.
For example: I only need it for a school project this month, so I'll buy one week of Plus. And a lifetime option could signal confidence that ChatGPT will remain a staple tool for years, a bold marketing message in itself. It also somewhat mirrors moves by competitors. We've seen Microsoft bundle Copilot with Office subscriptions and others experiment with pricing. OpenAI expanding its offerings could be, in part, an effort to capture different segments before someone else does. It's also possibly a response to user feedback: many have asked for annual plans for convenience or complained that $20 per month is too steep for occasional use. We should note that none of these new plans are confirmed publicly; they've only been found in the app's code, so OpenAI could still change course. But given multiple reports, and even Sam Altman's own teasing of a shocking new payment structure on social media, it's likely we'll see pricing shakeups alongside the GPT-5 rollout. In summary, OpenAI appears to be gearing up not just for a technical upgrade that addresses things like reasoning and multimodality, but also a business-model upgrade to solidify its lead in the market. And it might need every advantage it can get, because the competition isn't sitting still. While
OpenAI wrestles with hallucinations, Google is advancing fast with Gemini 2.5, its most powerful AI model yet, built by DeepMind. The Pro variant leads leaderboards in coding, math, and science. Gemini 2.5 Pro scored 63.8% on a significant coding benchmark, surpassing its predecessor and likely GPT-4. It demonstrated capabilities like building full web apps from one-line prompts and outperformed GPT-4.5 and GPT-4.1 in code editing and multilingual tasks. Gemini 2.5 is fully multimodal, handling text, images, audio, and video, and the Pro model supports a 1M-token context window. Google also launched Gemini 2.5 Flash, a faster, cheaper version that maintains reasoning capabilities via a thinking budget. Alongside Gemini 2.5, Google introduced Veo 2, a text-to-video model generating 720p clips up to 8 seconds long, already used in game development. In benchmarks, Gemini 2.5 shows superior performance over GPT-4.5 in reasoning and long-context tasks. Gemini features an integrated image editor in its chat interface for layered photo modifications, applying visible and SynthID watermarks for safety alongside human-feedback filters. It supports batch uploads of up to 10 images or PDFs and integrates Imagen 3 for native
image generation. Google's $20 billion per year deal to be Safari's default search engine is under pressure due to regulatory concerns and changing user habits, with Safari searches declining for the first time in 2024 as users turn to AI tools. Apple plans to integrate AI search engines like Perplexity AI, OpenAI's ChatGPT, and Anthropic's Claude into Safari as selectable options, aiming to reduce its reliance on Google. Apple is developing its own large language model, Ajax, and an internal tool, Apple GPT. Strategically, it's partnering with Anthropic to integrate a custom Claude Sonnet model into Xcode for AI code assistance. This hybrid approach allows Apple to advance its AI capabilities rapidly while maintaining control. The move directly threatens Google's search dominance on iOS, and Google has responded by rolling out its AI-powered AI Overviews search experience. For smaller AI companies like Perplexity, a partnership with Apple could provide access to hundreds of millions of users; users, in turn, would get more choice in Safari. Apple is now significantly invested in the AI race, embedding AI across its ecosystem. New Safari AI options and deeper AI integrations are anticipated as Apple prepares to challenge Google in the search domain. The AI race is no longer just about models. It's about who controls your tools, your data, and your attention. Whether it's Apple shifting the balance or ChatGPT rewriting the rules, this is just the beginning. Subscribe if you want to stay ahead of what's coming next, because the next wave of AI won't wait.