🔗 Subscribe to our Newsletter!
Get the latest AI updates, tips, and insights straight to your inbox:
https://dub.link/staying-ahead
The AI war just went nuclear 🚨
This week was INSANE—OpenAI fixed their biggest mistake with GPT-5.1, China dropped TWO open-source AI models that are beating GPT-5 (and they're FREE), and there's a courtroom battle happening RIGHT NOW that will decide how you shop online forever.
Over 50 AI updates dropped in 7 days. I filtered out the noise and found the 10 that actually matter. If you're not paying attention to these, you're already behind.
--------
To Know More,
Follow Vaibhav Sisinty On ⤵︎
Instagram @VaibhavSisinty
https://www.instagram.com/vaibhavsisinty
Twitter @VaibhavSisinty
https://twitter.com/VaibhavSisinty
Facebook @VaibhavSisinty
https://www.facebook.com/vaibhavsisinty/
LinkedIn - Vaibhav Sisinty
https://www.linkedin.com/in/vaibhavsisinty
OpenAI just fixed its biggest mistake. China dropped two open-source models that beat GPT-5. And the battle that decides how you'll shop online is happening right now in a courtroom. The AI war just entered a new phase, and it's nuclear. Over 50 AI updates dropped in the last 7 days. Here are the top 10 that actually matter. I do this every week, breaking down the AI news that changes everything. If you want to stay ahead of 99% of people, subscribe now.

Quick thing though: I've got a weekly newsletter where I break down all the latest AI updates, the tools worth trying, and the ones you should skip. It's completely free and honestly, it's the best way to stay ahead with all this AI stuff. Link's in the description below.

OpenAI just dropped GPT-5.1. Here's what happened. Three months ago, OpenAI launched GPT-5. It was supposed to be their biggest leap yet, but users immediately took to Reddit and X complaining that GPT-5 felt stiff, robotic, and weirdly formal. Sam Altman himself admitted they messed up. The model was smart, but it wasn't enjoyable to talk to anymore. So OpenAI went back and worked on it. Now they're back with GPT-5.1.

So what's actually different? GPT-5.1 has two modes: Instant, made for quick responses to everyday questions, and Thinking, which is built for deep reasoning and complex problems. Instant mode is interesting. It's now warmer and more conversational, and they added three new personality presets, professional, candid, and quirky, so you can literally tell ChatGPT how to talk to you. But the real breakthrough is adaptive reasoning: Instant mode figures out when to pause and think before responding. It answers simple questions instantly but reasons through complex problems. Then there's Thinking mode. It's now twice as fast on simple tasks and spends twice as long on complex ones, literally adapting how much time it spends based on what you ask. And OpenAI fixed a massive problem.
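As an aside, that adaptive-reasoning idea, answer fast when the question is easy and slow down when it isn't, is essentially a routing decision. Here's a toy sketch of what such a router could look like. To be clear, this is purely my own illustration; OpenAI hasn't published how GPT-5.1 actually decides, and these heuristics and keywords are invented for the example:

```python
# Toy sketch of "adaptive reasoning" routing: send a prompt down a fast
# path or a slow "thinking" path based on a crude complexity estimate.
# Illustrative only -- not OpenAI's actual mechanism.

def estimate_complexity(prompt: str) -> int:
    """Score a prompt with naive heuristics: length plus 'hard' keywords."""
    hard_keywords = ("prove", "derive", "optimize", "step by step", "debug")
    score = len(prompt.split()) // 20          # long prompts score higher
    score += sum(2 for kw in hard_keywords if kw in prompt.lower())
    return score

def route(prompt: str, threshold: int = 2) -> str:
    """Return 'instant' for easy prompts, 'thinking' for complex ones."""
    return "thinking" if estimate_complexity(prompt) >= threshold else "instant"

print(route("What's the capital of France?"))                    # instant
print(route("Prove the algorithm is O(n log n), step by step"))  # thinking
```

A real system would presumably use a learned classifier rather than keyword heuristics, but the shape of the decision, cheap estimate first, then allocate compute, is the same.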
You see, people used to get confused by the jargon and technical terms, so they fixed that too. The language is now simple even for complex topics. GPT-5.1 is rolling out right now to Plus, Pro, Business, basically all the paid tiers. Free users will also get access this week.

Two massive AI updates just dropped from China, and they're proving China is serious about winning the AI race. First, an open-source AI model that beats GPT-5. Moonshot AI released Kimi K2 Thinking under a modified MIT license. Within hours of launch, their servers crashed from traffic. Here's what K2 does: it plans multi-step workflows, searches the web, writes code, and executes hundreds of actions autonomously without stopping to ask permission. On Humanity's Last Exam, the hardest AI benchmark in existence, it outscored GPT-5. While the other big models are locked behind a paywall, Kimi K2 is an open-source model that is just as good. K2 matches or exceeds closed models across agentic reasoning, web search, and complex coding tasks. Developers are calling it a turning point.

But China didn't stop there. Baidu had a crazy week. They dropped ERNIE 5.0, and this thing is wild. It's an AI that combines text, images, audio, and video in one model, which means it can understand and create across all these formats at the same time. This is amazing considering they just released ERNIE 4.5 a few days back. ERNIE 4.5 is an AI that sees and thinks with images. It zooms into photos, reads tiny signs you can barely make out, and solves math problems just by looking at them. The best part: ERNIE 4.5 is completely free for commercial use. No licenses, no restrictions. Any company can download it and start using it today. And you can try ERNIE 5.0 right now through ERNIE Bot or via Baidu's cloud platform. This is China's way of saying they're not playing around in the AI race. While China is giving away AI for free, India is slowly catching up.
IIT Madras just built something that might be India's first real move in the global AI race. Here's the problem: every AI benchmark out there checks if models work in perfect English. But when you're texting in Hindi or mixing Tamil and English, those tests have no clue if the AI got you right. AI4Bharat at IIT Madras saw this and dropped the Indic LLM Arena, a leaderboard that tests AI on how Indians actually talk. Here's how it works: you go to the website and type or speak your question in any Indian language. Two different AIs answer you. You don't know which AI is which; you just pick whichever answer is better. When thousands of people vote like this, we figure out which AI actually works best for Indians.

Now, here's why this matters. China is crushing it in AI because they built their own models and made them free. Their entire population became AI-literate fast, and their data stayed local. That one move gave them a massive edge. India did the opposite: we became the biggest user base for ChatGPT, Claude, and Gemini, building their empires for them. That's why Anthropic, Google, and OpenAI are all rushing to open offices here. But through the Indic LLM Arena, India is deciding what good AI means for India, and the timing is perfect. India is building its own AI models right now under the IndiaAI Mission. This leaderboard becomes a test to see if those models actually work for us.

Google just announced it's launching AI into space. It's called Project Suncatcher, and they're sending the first satellites in early 2027. Here's what they're doing: Google is testing whether its TPU chips can handle space conditions. Solar-powered satellites will carry four TPUs that will run AI workloads directly from orbit. This is early-stage research to see whether their TPUs are ready for an actual AI data center in space. And it makes total sense, because you get unlimited, unfiltered sunlight in space; space-based solar panels capture eight times more energy than ground panels. But there's one challenge: in space, the sun's radiation hits directly, since there's no atmosphere blocking it. So Google tested their chips in a simulation, and the chips survived 15 times more radiation than required. They're good to go. Google is teaming up with a company called Planet to launch two test satellites with eight AI chips total. This is huge: if it works, Google will have access to unlimited clean energy for running AI. That's a big leap in the AI race.

While Google was planning a space mission, OpenAI dropped three features in the last two weeks. First, ChatGPT interrupt. Remember how, once you hit send on a prompt, you couldn't stop ChatGPT midway if you forgot to add something? They just solved that. You can now pause a lengthy query and add new context without restarting or losing progress. Just hit update and ChatGPT adjusts. This is a major usability improvement, crucial for deep research and efficient outcomes.
Second, Sora character cameos. You can now upload a video of your pet or your brand logo, and Sora turns it into a character you can reuse across videos. You can build an entire library of AI characters and use them whenever you want. And lastly, IndQA. It's a new benchmark OpenAI made to see if their AI actually understands India: not just the languages, but the culture, the traditions, and the way people think. This was a calculated move from Sam Altman, because India is OpenAI's second-largest user base.

But OpenAI isn't the only one with multiple updates this week. Google just dropped four AI bombs that change everything. First up, a tool that lets you doodle on images and AI instantly gets it. Watch this: you're planning a birthday party in Mixboard. You've got your banner, balloons, the whole setup. Grab the pen tool, draw where you want something added, hit save, then just tell it, "Add Sophie to the happy birthday banner where the red circle is." Boom. AI processes it and generates a new version with Sophie's name exactly where you marked. This just launched, and Mixboard is already available in 180 countries. The crazy part: you can mark multiple spots at once, and it understands composition, foreground, background, layering. It just knows. No more typing out long instructions hoping the AI gets the position right. Just draw and describe. Perfect for designers, event planners, anyone creating visual content. It's free at labs.google/mixboard. What do you think?

Next, Gemini Live just got voice superpowers. Google rolled out native audio on November 12th. Now it talks like an actual person, not a robot reading text. "Hi, Gemini. Can you remind me what a VRIO framework is?" "Sure. A VRIO framework is a tool used to analyze..." You can tell it to speed up mid-conversation and it adjusts on the fly. "Yes, so this assesses whether a resource is valuable, rare, inimitable, and organized." "Cool, thanks." Ask it to tell you about Julius Caesar and it'll do a full accent. Want a cowboy accent for party planning? Done. It's powered by the Gemini 2.5 Flash Live API with native audio.

But here's the kicker: Google also launched two AI agents that manage your entire marketing. Ads Advisor and Analytics Advisor drop in December for all English-language accounts worldwide. Ads Advisor lives inside Google Ads and doesn't just suggest changes, it applies them. Analytics Advisor is basically a data scientist in your analytics dashboard.

Now, the plot twist nobody saw coming: Google launched Private AI Compute, and even Google can't see your data. This is massive. They built a secure cloud environment that processes your info with Gemini models but isolates everything in Titanium Intelligence Enclaves. It runs on custom Tensor Processing Units with hardware-level encryption. Your personal data, browsing patterns, everything is locked down in what they call a trusted boundary. It's already live on Pixel 10 devices.

Amazon just took Perplexity to court. Here's what happened. Perplexity built Comet, an AI browser where you can simply say, "Find the cheapest dog food on Amazon," and Comet logs into your Amazon account, compares products, and buys it. You skip the ads and sponsored results and get to the product you want in seconds. Users absolutely love this. So what's Amazon's problem? Well, this skips all the ads and ignores sponsored products. That's a direct threat to their $56 billion ad business. That's why, on October 31st, Amazon sent a legal notice claiming Comet pretends to be a regular Chrome browser instead of disclosing that it's an AI. Three days later, they filed a lawsuit in federal court.

Perplexity fired back. CEO Aravind Srinivas called it a bully tactic, and then they pulled an absolutely savage move: Perplexity published a blog post titled "Bullying is not innovation." This is what they had to say about Amazon in it: Amazon wants to kill user rights so it can sell more ads. Absolutely savage. But here's where it gets interesting. Amazon built its own AI shopping assistant called Rufus, and it's blocking everyone else's: OpenAI's, Google's, Meta's. Yet their CEO just told investors they want to partner with other AI companies. So the plan is simple: sue the competition, then team up with whoever survives. And here's why this matters. If Amazon wins, websites everywhere can start blocking AI assistants from shopping for you. But if Perplexity wins, AI shopping agents can keep helping people shop and find the best deals. The future of online shopping is being decided in court right now.

But Apple just secured its future in AI by paying Google a billion dollars a year. Let me tell you what happened. Apple has been promising a smarter Siri for two years, but they keep delaying it. Turns out, catching up to ChatGPT while it keeps getting smarter is nearly impossible.
So Apple tested OpenAI, Anthropic, and Google, and picked Google, not because it's better, but because it's cheaper. Now Apple is getting a custom Gemini model, and it's going to be eight times bigger than the model Siri currently runs on. Gemini will handle Siri's summarizer and planner functions, the parts that actually need to be smart. It's set to launch in spring 2026. But there's a catch: Apple is running all of this on its own servers, Private Cloud Compute, not Google's infrastructure. Your data stays with Apple. And Gemini is temporary; Apple is building its own replacement. Classic move: lean on others until you're ready.

ElevenLabs launched something called the Iconic Marketplace this week. It's basically a platform where companies can legally use celebrity voices. Here's why this matters: every voice on the platform is approved by the actual person or their estate, so companies license the voices with the celebrity's permission. British legend Michael Caine has already signed with them. Many popular faces, or should I say voices, like Maya Angelou, Alan Turing, Judy Garland, John Wayne, and Burt Reynolds are now available. Over 25 legendary voices are now ethically licensed, and companies can submit requests and deal directly with each celebrity's team. It looks like ElevenLabs learned its lesson after last year's robocall incident, where its tech was used to clone Joe Biden's voice without his permission. Now they're doing it the right way: getting permission from the celebrities and signing contracts before cloning their voices.

But let me give you a big plot twist. Remember Matthew McConaughey from Interstellar and How to Lose a Guy in 10 Days? His newsletter, Lyrics of Livin', is getting a Spanish version narrated entirely in his AI-cloned voice. So yeah, Hollywood just went AI, but this time the actors are actually getting paid. So here's my question: in 20 years, will we even know if we're talking to the real person? I would love to hear your thoughts on