Here's all the AI News you probably missed this week. Learn more about how Box AI can unlock key insights for your business here: https://www.box.com/ai?utm_source=youtube&utm_medium=paidinfluencer&utm_theme=icm&utm_campaign=FY27_Q1_MattWolfe_March6
DGX Spark Giveaway Details: https://www.linkedin.com/posts/matt-wolfe-30841712_nvidiapartner-activity-7432424153711816704-AC21
Discover More:
🛠️ Explore AI Tools & News: https://futuretools.io/
📰 Weekly Newsletter: https://futuretools.io/newsletter
🎙️ The Next Wave Podcast: https://youtube.com/@TheNextWavePod
Socials:
❌ Twitter/X: https://x.com/mreflow
🖼️ Instagram: https://instagram.com/mr.eflow
🧵 Threads: https://www.threads.net/@mr.eflow
🟦 LinkedIn: https://www.linkedin.com/in/matt-wolfe-30841712/
👍 Facebook: https://www.facebook.com/mattrwolfe
Resources From Today's Video:
GPT-5.3 Instant: https://openai.com/index/gpt-5-3-instant/
GPT-5.4 Introduced: https://openai.com/index/introducing-gpt-5-4/
Gemini 3.1 Flash Lite: https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-flash-lite/
NotebookLM Video Overviews: https://blog.google/innovation-and-ai/products/notebooklm/generate-your-own-cinematic-video-overviews-in-notebooklm/
Canvas AI Mode: https://blog.google/products-and-platforms/products/search/ai-mode-canvas-writing-coding/
Anthropic Hegseth Statement: https://www.anthropic.com/news/statement-comments-secretary-war
OpenAI DoW Agreement: https://openai.com/index/our-agreement-with-the-department-of-war/
Claude Import Memory: https://claude.com/import-memory
Claude Free Memory: https://x.com/claudeai/status/2028559427167834314
Ditching ChatGPT for Claude: https://techcrunch.com/2026/03/02/users-are-ditching-chatgpt-for-claude-heres-how-to-make-the-switch/
Claude Switching Thread: https://x.com/boringmarketer/status/2029273109627334791
ChatGPT Uninstalls Surged: https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/
Anthropic $20B Revenue: https://www.bloomberg.com/news/articles/2026-03-03/anthropic-nears-20-billion-revenue-run-rate-amid-pentagon-feud
Anthropic Revenue Chart: https://x.com/signulll/status/2028972745627975827
Altman Defends Pentagon Work: https://www.wsj.com/tech/ai/openai-ceo-altman-defends-pentagon-work-to-staff-calls-backlash-really-painful-76d769ec
Altman Pentagon No Say: https://www.bloomberg.com/news/articles/2026-03-04/altman-tells-staff-openai-has-no-say-over-pentagon-decisions
OpenAI Amends DoD Deal: https://www.engadget.com/ai/openai-will-amend-defense-department-deal-to-prevent-mass-surveillance-in-the-us-050637400.html
Anthropic Trump Criticism: https://www.theinformation.com/articles/anthropic-ceo-told-employees-openai-pentagon-deal-safety-theater
FT AI Article: https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b148b
Anthropic Supply-Chain Risk: https://www.bloomberg.com/news/articles/2026-03-05/pentagon-says-it-s-told-anthropic-the-firm-is-supply-chain-risk
Qwen 3.5 Small Models: https://x.com/alibaba_qwen/status/2028460046510965160
Grok 4.20 Beta 2: https://x.com/grok/status/2028714422462448041
Phi-4 Reasoning Vision: https://www.microsoft.com/en-us/research/blog/phi-4-reasoning-vision-and-the-lessons-of-training-a-multimodal-reasoning-model/
Codex Windows App: https://x.com/OpenAIDevs/status/2029252453246595301
Meta Glasses Privacy: https://www.svd.se/a/K8nrV4/metas-ai-smart-glasses-and-data-privacy-concerns-workers-say-we-see-everything
ICO Meta Glasses: https://www.bbc.com/news/articles/c0q33nvj0qpo
Meta Glasses Lawsuit: https://techcrunch.com/2026/03/05/meta-sued-over-ai-smartglasses-privacy-concerns-after-workers-reviewed-nudity-sex-and-other-footage/
Spectre I Anti-Recording: https://x.com/aidaxbaradari/status/2028864606568067491
Daniel Eckler Profile: https://x.com/daniel_eckler
Let’s work together!
- Brand, sponsorship & business inquiries: mattwolfe@smoothmedia.co
#AINews #AITools #ArtificialIntelligence
Time Stamps:
0:00 Intro
0:14 GPT-5.3 Instant
1:40 GPT-5.4
9:40 Box AI
10:56 Gemini 3.1 Flash-Lite
12:37 NotebookLM Cinematic Video Overviews
16:00 Google Canvas in AI Mode
16:48 Anthropic/OpenAI/Pentagon Drama
29:02 Qwen 3.5
29:33 Grok 4.20 Beta 2
29:50 Microsoft Phi-4-reasoning-vision
30:25 OpenAI Codex app in Windows
30:57 Meta AI Glasses Exposed
33:20 Be Inaudible Spectre I
34:45 NVIDIA GTC 2026 + Giveaway
Intro
All right, so we got a bunch of new models this week and a whole bunch of news to break down. More government/Anthropic drama. I'm not going to waste your time. Let's get straight into it.
GPT-5.3 Instant
Let's start with the OpenAI models, because we got two of them this week. On March 3rd, we got GPT-5.3 Instant. Now, for the most part, this is kind of a vibes update. It isn't a brand new model. I think they just did some additional fine-tuning or additional reinforcement learning on their existing GPT-5.3 model and essentially got it to respond in a way that OpenAI thought was better. It's a very minor, marginal update. We can see here: "This update focuses on the parts of the ChatGPT experience people feel every day: tone, relevance, and conversational flow. These are nuanced problems that don't always show up in benchmarks but shape whether ChatGPT feels helpful or frustrating. GPT-5.3 Instant directly reflects user feedback in these areas." So we didn't see a leap in capabilities or smartness. Instead, it reduces unnecessary refusals and tones down overly defensive or moralizing preambles before answering the question. When a useful answer is appropriate, the model should now provide one directly, staying focused on your question without unnecessary caveats. So it's basically getting more direct with you, or as OpenAI puts it, less cringe. This model should be available to everybody now inside of ChatGPT, and it's available in the API as GPT-5.3-chat-latest. If you jump into ChatGPT, you should see it here under Instant 5.3. And if you leave it on Auto, in most scenarios, this is probably what
GPT-5.4
it's going to give you. But then, two days after that model came out, OpenAI released another model. This one is GPT-5.4. Up until this week, we didn't have access to GPT-5.3 inside of ChatGPT at all; it was more of an API and Codex model. But this week alone, they released both GPT-5.3 and GPT-5.4 into ChatGPT. Now, this GPT-5.4 model is quite a significant upgrade. 5.3 was more vibes and less cringe; this one actually has real, legit updates. Now, if I'm being totally honest here, I'll say the thing that I feel like a lot of people probably aren't saying: for most people's day-to-day use, you're probably not going to notice much of a difference. If you're just using these tools for brainstorming ideas, or asking what this rash is, or trying to understand how an engine works, the types of things most people are using these chatbots for, I promise you're probably not going to see a huge difference. For most people, these 5.2, 5.3, 5.4 updates are going to feel pretty dang marginal. But here's what you need to know about it. It's a little bit better at coding than 5.3 Codex, like very slightly better. It's better at using a computer on your behalf, better at using tools, and it's actually better at searching the internet for you. They put focus on GPT-5.4's ability to create and edit spreadsheets, presentations, and documents. And this is their first model that has native computer-use capabilities, so it actually has the ability to navigate a desktop environment through screenshots and keyboard/mouse actions. And apparently it does it pretty well. Before, you had to go use a separate computer-use model; now it's just built right into this model. That got better because they improved the model's visual perception, so it has much better visual understanding and reasoning. And again, it got better at coding.
We can see their previous state-of-the-art coding model right here, GPT-5.3 Codex, and the new GPT-5.4 is a very slight improvement, and also very slightly faster. The model got better at tool use, and they added a new feature called tool search. Here's how they described it: previously, when a model was given tools, all tool definitions were included in the prompt up front. For systems with many tools, this could add thousands or even tens of thousands of tokens to every request, increasing cost, slowing responses, and crowding the context with information the model might never use. With this new tool search feature, GPT-5.4 instead receives a lightweight list of available tools along with the tool search capability. When the model needs to use a tool, it can look up the tool's definition and append it to the conversation at that moment. So it basically makes tool use within your chat a lot faster and a lot more token-efficient. That's something you probably don't care a whole lot about if you're using ChatGPT, but if you're using this in the API, it's going to be really helpful, because it's going to cost you a lot less whenever the model is doing tool calls. They improved web search with it, too. They say that Thinking is stronger at answering questions that require pulling together information from many sources on the web; it can more persistently search across multiple rounds to identify the most relevant sources, particularly for needle-in-a-haystack questions, and synthesize them into a clear, well-reasoned answer. And also, if you are using the API, one of the biggest upgrades, the thing most API users probably care about, is this new 1 million token context window here.
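The idea behind tool search, loading a tool's full definition only when the model actually reaches for it instead of stuffing every definition into every prompt, can be sketched roughly like this. This is a toy illustration of the pattern, not OpenAI's actual implementation; the tool names and function names here are made up:

```python
# Toy sketch of on-demand tool loading. Old approach: every tool's full
# JSON-schema definition rides along with every request. Tool-search
# approach: only a lightweight index goes up front, and a definition is
# appended to the context the moment the model decides it needs that tool.

TOOL_DEFINITIONS = {
    "get_weather": {
        "description": "Look up the current weather for a city.",
        "parameters": {"city": {"type": "string"}},
    },
    "send_email": {
        "description": "Send an email to a recipient.",
        "parameters": {"to": {"type": "string"}, "body": {"type": "string"}},
    },
    # ...imagine hundreds more, each costing tokens if sent up front
}

def build_lightweight_prompt():
    """Tool-search style: only the tool names go up front."""
    return {"tool_index": sorted(TOOL_DEFINITIONS)}

def tool_search(name):
    """Called mid-conversation when the model decides it needs a tool."""
    return {name: TOOL_DEFINITIONS[name]}

context = build_lightweight_prompt()
# Later, the model decides it needs to send an email and looks the tool up:
context.update(tool_search("send_email"))

print(context["tool_index"])     # the cheap index: ['get_weather', 'send_email']
print("get_weather" in context)  # False: never loaded, costs no tokens
print("send_email" in context)   # True: appended on demand
```

The token savings come from the unused entries: `get_weather` never enters the context at all, which is exactly the "information the model might never use" the announcement describes.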
So when you're actually coding, you can get a lot more of your codebase and previous conversation about the code you're writing into the conversation, because you have this massive token window. If you want to use this new model in ChatGPT, it's not available on the free plan or the $8-a-month Go plan, but starting today, March 5th, it is available on the Plus, Team, and Pro plans. It's going to replace the GPT-5.2 Thinking model. And again, for most people it will probably feel like a fairly marginal update. Power users who use it for code or in the API will probably notice a pretty big difference. If you're just using it in ChatGPT to answer questions, it'll probably feel very slightly different and better at searching the web for you. So, if you are on one of those paid plans, under the selector here you should see Thinking 5.4 and Pro 5.4. These are the brand new models you can use. We've got some examples on the website of stuff they've done with this new model. You can see 5.4 on the left, 5.2 on the right here, and this spreadsheet looks a little more in-depth, a little better designed. We've got some document examples here: 5.4, again, a little better designed. 5.4 is on the left, 5.2 on the right. Again, a little more depth. It's got two columns with images in both, instead of just one column with an image and then text on the right. So, you know, some different, newer, slightly better design. We've got an example here of it actually using Gmail on somebody's behalf. You can see the little red circle on the screen there, and it's actually going and starring emails and adding labels to them on the user's behalf. It doesn't seem like the fastest thing in the world, but it is going and handling it for them. We've got another demo here of bulk data entry. There's a whole bunch of JSON here on the right, and it appears to be entering it all into the form on the left.
These are some of the cooler-looking demos, in my opinion. They have a little mock theme park game where it's actually drawing on the screen and adding paths, and you can see the little characters animated. I mean, the characters are just dots, but it's a theme park game where you can see the funds, how many guests, the guest happiness, cleanliness, and park rating. It looks pretty cool. They said this was from a single, lightly specified prompt, so it appears to have built this little mini-game from just one prompt, which is pretty impressive. They've also got this RPG example here, which looks like some sort of turn-based RPG where you can only move your character a certain number of spots and attack each other, kind of like a real basic Civilization-type game. They have this Golden Gate Bridge flyover simulation that they coded, which looks pretty decent as well, some Three.js-style graphics. I mean, it looks pretty good, honestly; the textures here are decent. So it's making some pretty impressive-looking stuff. Obviously, I always think these demos are most likely cherry-picked. They probably have some demos that look like complete crap, and these are the ones that came out the best, so these are the ones they share on their website. But, you know, everybody's going to do that. You're obviously going to share the best of the best when you have examples like this. Overall, to me, this model feels like it was made more for agents than anything else. Now, it is better at coding, and it seems like it's probably going to replace 5.3 Codex: slightly better at coding and slightly faster. If you are using the Pro models and the API, it got quite a bit more expensive. But again, I feel like they're trying to build something that agents can use.
I feel like they're going more and more in that direction. ChatGPT users, just the people using the chatbots, are going to see marginal improvements from this: it's going to think a little longer and give them a little more depth in responses. But all of this added tool use, where it can operate your browser for you, click things on your behalf, send emails, look up tools for you, and do better web searches, feels to me like it's being designed for agents, which makes sense. OpenAI just brought on Peter Steinberger, the guy who essentially made OpenClaw. It seems like that is kind of what this was probably built for. I've been seeing a lot of videos saying that this is the most insane, best model ever. And, you know, it probably is. But just because it's the new state-of-the-art, the new best model ever, that doesn't mean that normal, everyday ChatGPT users are going to notice a huge, major difference. The improvements we're seeing now are ones coders are going to notice, researchers and data scientists are going to notice, engineers might notice, but for the everyday ChatGPT user, it's going to be kind of marginal. Another cool AI product that I want to show you is from
Box AI
Box. If you don't know what Box is, they're an intelligent content management platform that has integrated AI into their system to help surface a ton of data and insights from your files. This is actually pretty cool, because a lot of enterprise content is all over the place. I run basically two businesses, my YouTube channel and the Future Tools website and newsletter, and I've definitely experienced the problem where all of my files somehow end up scattered across different folders, file types, and systems, and it's all just a big annoying mess. So what Box AI does is organize all those scattered files and unlock the content stuck inside of them, turning it into structured, usable insights for your business. It's not just storage anymore: your content can now be analyzed, summarized, compared, extracted from, and actually put to work, so it's useful to you. And Box is also model-agnostic, so you can use whichever model your business favors. I can see this being really powerful for industries like financial services, insurance, healthcare, government, and media, basically any company where massive amounts of sensitive content need to be analyzed securely and intelligently. So, if you want to try Box AI for yourself, click the link in the description to learn more. And thank you to Box for making these AI news videos possible by sponsoring this portion of today's video. Google rolled
Gemini 3.1 Flash-Lite
out a model this week as well, called Gemini 3.1 Flash-Lite. This is designed to be a very fast, very efficient AI model. It's not going to be that genius thinking-level model; it's designed to do things really well, really fast. It's also meant to be cost-efficient, so if you're using it in an API, if you're building tools with it, it's designed to be a faster model. I actually built a little app for myself called my YouTube thumbnail swipe file: whenever I find YouTube thumbnails that I think are really impressive, that I like and want to remember for later, I drag and drop them into this little tool that I built, and it looks at the image and describes what's in that thumbnail. And I actually use this Flash-Lite model for that because it's really fast. I drop the image in, and almost instantly it gives me a description of that thumbnail. But also, it's really cheap to use. Again, if you're just using Gemini as your everyday chatbot, you're probably not going to notice huge differences. It might be a little bit faster for you now. But where a lot of people are going to notice a lot of these model upgrades, again, is on the coding front and the API usage front. I kind of don't want to be in the game of trying to hype up every new model that comes out as the absolute best, most amazing, coolest, most impressive model you've ever seen that's practically a step away from AGI, because I see too many videos like that. And the reality is most people are not going to feel any different about these models. The everyday person using AI is just going to think it feels like it got a little bit smarter. And this is a good model. It's just a model designed to be fast and cheap, and when you're building little apps like the one I described, it's great for that kind of thing. But I think Google had something even cooler that
NotebookLM Cinematic Video Overviews
they released this week. Because while it's always awesome to see new large language models improve and get better constantly, actually seeing practical use cases and putting these tools to use is what really gets me lit up. Well, Google rolled out a new feature for NotebookLM: cinematic video overviews. Now, we've had a video overview feature inside of NotebookLM for a while now, but this one actually uses Gemini 3, Nano Banana Pro, and Veo 3 to make much more impressive videos. The old ones just kind of looked like slideshows: you'd watch a slide, hear the voices talking over it, and it would go from slide to slide. This new one actually integrates real animations into the videos. So, of course, I had to test this out. Now, one caveat I do need to shout out real quick: most people aren't going to have access to this yet. You have to be on their Ultra plan to use it, which, if I remember correctly, is $250 a month right now. So it's just rolled out to their highest tier for now. Eventually it's going to work its way down the tiers, I'm sure, but right now you do have to be on an Ultra plan. Luckily, I am on an Ultra plan, so we can demo it. And of course, I had to try it on my Birds Aren't Real notebook. You know, the silly conspiracy theory that birds are actually little flying spy satellites that are spying on everybody. So, to use this, you go into your NotebookLM account, go into the specific notebook you want to test this on, and right here you can see a cinematic overviews option. You can click "try it," or you can click on "video overview" and you'll have a new option here that says "cinematic." If you're not on the Ultra plan, you won't see this option. But here's what it created for my Birds Aren't Real video. — If you look up into the sky right now, you might see a pigeon.
But hundreds of thousands of people across the country will tell you that you are actually looking at a highly advanced, government-issued surveillance drone. This is the Birds Aren't Real movement. They fund massive billboards in major cities, and they march through the streets in coordinated protests. This chart shows the movement's foundational timeline. According to their official lore, the US government systematically eradicated roughly 12 billion living, breathing birds between 1959 and 2001 and replaced them with exact robotic replicas. They have an explanation for everything. When birds perch on power lines, they are actually recharging their internal batteries. And when a bird drops waste on your windshield, that is a liquid tracking device designed to monitor your vehicle. — I mean, you get the idea. It's a 6-minute video, so I'm not going to play the whole thing, but this beginning animation when it first starts off, that's obviously Veo 3.1. The images throughout are Nano Banana, and they all look really cool. But the fact that we're getting motion-graphic-style animations baked into this, that's impressive to me. Here's another example of the type of animations that are in here, some more motion-graphic-style stuff. This is kind of something that could replace After Effects. I can actually see using this in YouTube videos: if I need just a quick animation, I could go make the full 5-minute video that it's going to generate, but then just pull out this one motion graphic, so I don't need to go make something really quick in After Effects. That's pretty sweet. Honestly, this is one of my favorite features NotebookLM has added. Like, look at these animations. This is something you would have normally done in After Effects, and now we're getting it straight out of NotebookLM. Granted, you do have to pay 250 bucks a month to get this feature right now, but it's pretty dang
Google Canvas in AI Mode
cool. And we did also get one other update out of Google this week. You can now use Canvas in AI Mode to get things done and bring your ideas to life in Search. This is available to everyone in the US in English right now. Basically, it appears to be the same kind of feature you get in Claude or ChatGPT, where if you ask it to build something for you, you can actually see what it built inside the canvas. So, if I go to Google, go into AI Mode here, and hit the plus, I can now select Canvas and say, "write me a simple tic-tac-toe game in HTML and JavaScript code." And we'll see, over on the canvas side, just like you would expect inside Anthropic's Claude or ChatGPT, it writes the code, and then we can actually play the game right there inside Google's AI Mode.
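To give a sense of why that's an easy smoke-test prompt for these canvas features: the core of tic-tac-toe is just a nine-cell board plus a win check. Here's a minimal sketch of that logic (written in Python for illustration, whereas the Canvas demo generates HTML and JavaScript; this is not Google's output, just the shape of the problem):

```python
# Minimal tic-tac-toe logic: a 9-cell board (indices 0-8, row-major)
# and a winner check over the 8 possible winning lines.

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

board = ["X", "X", "X",
         "O", "O", " ",
         " ", " ", " "]
print(winner(board))  # X
```

A generated HTML/JavaScript version wraps exactly this kind of check in click handlers and a 3x3 grid of buttons, which is why it's a good quick test of a model's code generation.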
Anthropic/OpenAI/Pentagon Drama
Okay, so last week I shared the whole Anthropic-versus-the-Pentagon saga. And as you probably know by now, I record these on Thursdays and publish them on Fridays, so the whole story hadn't quite unfolded yet. Well, there have been some updates. So let's jump over to the AI drama section and let me break down what's happened. Last week I really broke down the whole Anthropic-versus-the-Pentagon thing, and it's escalated quite a bit more since then. Now, if you didn't see my video last week, or the one I made about it the week before that, let me give you the TL;DR of everything up through last week. Anthropic signed a deal with the Pentagon for $200 million. Anthropic had access to classified information; they were the only model provider that was actually able to work with classified information. Anthropic was actually used in the Maduro raid to get the president of Venezuela. Somebody at Anthropic reached out to somebody in the Department of Defense and asked if it was actually true that they used it. The Pentagon got pissed off that Anthropic was even asking and started calling them woke. The whole thing escalated. Anthropic basically told the Pentagon, "You can continue to use our models for whatever you want. There are only two things: you can't use them for surveillance of US citizens, or for fully autonomous weapons without humans in the loop." Those were the two red lines. The Pentagon said, "No, that's not up to you. We should be able to use it for all lawful purposes. You guys don't get to decide what we can and can't use it for." Anthropic pushed back and said again: use it for whatever you want, but we won't allow our tools to be used for those two things. The Pentagon said, "Okay, well, if you don't drop it and let us use it for all lawful purposes, we're going to designate you a supply chain risk." Meaning, not only will the US government blacklist Anthropic from being used in the government, but any of the companies that work with the Pentagon won't be able to use Anthropic downstream in their Pentagon-related projects either. Where it stood last week, when I made my news video, was that Anthropic came out and made a statement saying, "We're going to hold our ground. Those two red lines, no US surveillance and no fully autonomous weapons, we're sticking to. You can't use it for those two things." And that's essentially where I left off in that video. Well, on Friday things escalated. Trump put out a statement basically saying that Anthropic is blacklisted, and the Secretary of War basically said, "We're going to designate you as a supply chain risk." According to Anthropic, they tried in good faith to reach an agreement with the Department of War, making clear that they support all lawful uses of AI for national security, aside from the two narrow exceptions. Sam Altman himself actually took to X and told his own employees that he agreed with Anthropic's red lines and that OpenAI would draw similar ones. So on February 28th, OpenAI announced an agreement with the Department of War. In that agreement, OpenAI drew the same lines; in fact, they even added a third red line. No use of OpenAI technology for mass domestic surveillance: the same red line Anthropic had. No use of OpenAI technology to direct autonomous weapon systems: same red line as Anthropic. And then a new, third red line: no use of OpenAI technology for high-stakes automated decisions. In their words: "In our agreement, we protect our red lines through a more expansive, multi-layered approach. We retain full discretion over our safety stack. We deploy via cloud. Cleared OpenAI personnel are in the loop. And we have strong contractual protections. This is all in addition to the strong existing protections in US law."
They even posted the relevant language from their contract on their website. And from everything outside observers could tell: Anthropic said, "You can't do these two things with our technology," and the Department of War said, "You're a supply chain risk. You're out. You're blacklisted." OpenAI stepped in almost immediately, the same day all this was going down, and said, "We'll take on the contract. We have these three red lines." And the Department of War said, "Okay, you're in." This isn't making a lot of sense to a lot of people. So most people started to assume: okay, OpenAI was very opportunistic. They saw the chance to get in. They're publicly telling people they have the same red lines, the same safety standards as Anthropic, but they're just better negotiators or something, right? That's sort of the perception. But a lot of people wondered: are they really telling the truth? Are they just telling us outwardly that they have the same protections while, behind the scenes, telling the Department of War, "Don't worry about it, you can do whatever you want with it"? That's kind of what people started to think. And, well, the fallout from that was pretty quick for OpenAI. Claude capitalized on it and released an easy way to switch to Claude without starting over: you can bring your preferences and context from other AI providers to Claude. Now, they don't name OpenAI specifically, but all of the memory you ever had inside of OpenAI, you can import over to Claude, and Claude will just pick up where you left off with OpenAI. This was a very opportunistic launch on Claude's part. Not only that, but in the past week, Claude also made their memory features, which were previously only available on paid plans, available to free members as well. They put out this X post: memory is now available on the free plan, and we also made it easier to import saved memories into Claude.
So, if you're on the ChatGPT free plan and you had memories and stuff baked into your account, come on over to Claude for free, bring all of those memories with you, and they'll keep your memories on the free plan too. So they were seemingly making it very easy to get rid of your OpenAI account and move over to Claude instead. And, well, it worked. Earlier this week, TechCrunch put out this article: "Users are ditching ChatGPT for Claude," and they even gave instructions on how to make the switch. So TechCrunch was getting behind the leave-OpenAI-and-jump-over-to-Claude movement. Guess what happened next? Claude jumped to be the number one most downloaded app in the App Store. Now, prior to this, they'd been way down. They weren't even in the top 10; I think they were in the top 100, but way down the list. And seemingly overnight, they jumped up and became the most downloaded app in the App Store. And between the Friday that Anthropic got blacklisted (the same day OpenAI jumped in and said, "We'll do a contract with you") and Monday, ChatGPT uninstalls surged by 295%. People were like, "OpenAI, we don't trust you anymore. We're out." Anthropic's revenue has been surging. "Anthropic nears $20 billion revenue run rate amid Pentagon feud": Anthropic is on track to generate annual revenue of almost $20 billion, more than doubling its run rate from last year. A clash with the Pentagon over AI safeguards has led to the Defense Secretary declaring Anthropic a supply chain risk, which may impact its business. But sales on the consumer side have continued to skyrocket, because everybody was jumping ship from OpenAI over how this whole situation was handled, with OpenAI swooping in at the last second to make a deal with the Pentagon. And then check this out. This is a graph put out by Ramp, where the blue here is all OpenAI.
The orange is Anthropic. The right side is February. If we look back to mid last year, OpenAI was dominating when it came to chat for business; Anthropic was barely a blip. Fast forward to February, and it's flipped: this is now OpenAI's chat for business, and this is all Anthropic here now. So businesses have been ditching OpenAI and moving more and more to Anthropic. Now, I don't think this is entirely because of the whole Pentagon thing. I think it's also because Anthropic has had the dominant coding model recently with Opus 4.5 and Opus 4.6. The Wall Street Journal put out an article this week about how Sam Altman defended the Pentagon work to staff and called all the backlash "really painful." A lot of OpenAI employees have not been happy about OpenAI jumping on the Pentagon deal. And obviously they're all unhappy that they're losing customers at an insane rate: uninstalls surged 295% over a single weekend. That's a mind-blowing number of people deciding to stop using OpenAI. Oh, and then, according to this article from Bloomberg that came out on March 3rd, Altman told staff OpenAI has no say over Pentagon decisions. So while OpenAI is telling us outwardly, "hey, we're drawing the same red lines as Anthropic," they're telling their staff internally, "yeah, it's not really up to us anymore." According to the Bloomberg article, the Defense Department will listen to OpenAI's expertise about the technology's applications, but does not want the company to express opinions about whether certain military actions were good or bad ideas. Now, to be fair to OpenAI, Sam Altman is still actively and publicly pushing for the Defense Department to abandon its designation of Anthropic as a supply chain risk. Even Altman knows that's not a good idea. Oh, but this story just keeps on going.
On March 4th, The Information put out an article based on leaked internal memos from Anthropic, where Anthropic isn't being too nice toward the administration: the real reasons the Department of War and the Trump admin don't like us are that we haven't donated to Trump while OpenAI's Greg Brockman has donated a lot; we haven't given dictator-style praise to Trump while Sam has; we have supported AI regulation, which is against their agenda; we've told the truth about a number of AI policy issues like job displacement; and we've actually held our red lines with integrity rather than colluding with them to produce safety theater for the benefit of employees. Now, on March 5th, the day I'm actually recording this, Dario did put out a statement on the Anthropic blog saying that they regret those notes got out, and that those messages were written on the day this was all going down, when things were very heated in the office. It wasn't a memo that came out later; it was written in the heat of the moment. And according to the Financial Times, Anthropic's chief is now back in talks with the Pentagon about an AI deal. Okay, this is just crazy. But also on March 5th, the Pentagon officially notified Anthropic that it's deemed the firm a supply chain risk. So although Anthropic is apparently in talks with the Pentagon to try to work things out and make amends, the Pentagon officially moved forward with the supply chain risk designation. Jumping back to the blog post that Anthropic put out after the designation, Anthropic does say, "We do not believe this action is legally sound and we see no choice but to challenge it in court." Now, I don't know much about the law and all the legalities here, so I can't really speak to this too much, but my gut tells me the supply chain risk designation will probably not stick. But I don't know; I've been surprised by a lot of this stuff before.
It just doesn't feel like it will stick. I'd say in all likelihood, this is one giant, crazy, overblown negotiation tactic between the Pentagon and Anthropic, and the two will eventually make a deal to use Anthropic in some way, shape, or form, and that whole supply chain designation will just fade away. That's my guess of how it's going to play out. In this post from Anthropic, they do say, "I would like to reiterate that we had been having productive conversations with the Department of War over the last several days, both about ways we can serve the department that adhere to our two narrow exceptions and ways for us to ensure a smooth transition if not possible." So that does sound like confirmation: even though they're designated a supply chain risk, even though they have statements floating around about how they're not willing to bend the knee, they are still working behind the scenes to try to figure this all out. So again, my gut feeling is the supply chain risk designation will likely fade away, Anthropic will figure out some way to work with the government, and some sort of compromise will be had. But again, I've been wrong a lot before. So that's my breakdown of what's going on with that story right now. It's a wild one. Okay, there's been a bit more news this week that I haven't gotten into yet, and this video is already running long because I spent a lot of time on the Anthropic-Pentagon thing, so everything else I'm going to give you in a quick rapid fire.
Qwen 3.5
We did get even more large language models than what I've mentioned already. We have a new open-weight model from Alibaba, Qwen 3.5. It comes in four flavors: an 800 million parameter model, a 2 billion parameter model, a 4 billion, and a 9 billion. I actually did a video earlier this week that shows you how to run this model on your iPhone for free without connecting to the internet. It's good enough and small enough to actually run on a phone without any internet or cloud connection, and it's a pretty decent model. Check out that video if you
Grok 4.20 Beta 2
haven't already. xAI made an update to their Grok model this week, Grok 4.20 Beta 2, with improvements to instruction following, fewer hallucinations, better scientific text, image search trigger precision, and multiple image render reliability. So, if you use Grok,
Microsoft Phi-4-reasoning-vision
you've got those. Now, Microsoft put out another model this week, their Phi-4-reasoning-vision model. This is a compact, smart, open-weight multimodal reasoning model that balances reasoning power, efficiency, and training data needs. It's a broadly capable model that allows for natural interaction across a wide array of vision-language tasks, and it excels at math and science reasoning and at understanding user interfaces. So this is an open-weight model, probably one that goes fairly head-to-head with that Qwen 3.5 we just looked at, but it's a 15 billion parameter model. So probably still not small enough to run on a phone, but probably small enough that you can run it on a decent GPU. I
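To get a feel for why a 15 billion parameter model lands on a desktop GPU while the smaller Qwen 3.5 variants can squeeze onto a phone, here's a rough back-of-envelope memory estimate. The parameter counts come from the video; the quantization math is a generic rule of thumb, not anything Microsoft or Alibaba published:

```python
# Rough back-of-envelope memory estimate for running an LLM locally.
# Rule of thumb (an assumption, not an official spec): weight memory
# ≈ parameter_count × bits_per_parameter / 8, plus extra headroom for
# the KV cache, activations, and runtime buffers.

def weights_gb(params: float, bits_per_param: int) -> float:
    """Approximate size of the model weights alone, in gigabytes."""
    return params * bits_per_param / 8 / 1e9

# Parameter counts mentioned above (the model names are from the video).
models = {
    "Qwen 3.5 9B (phone-class)": 9e9,
    "Phi-4-reasoning-vision 15B": 15e9,
}

for name, params in models.items():
    fp16 = weights_gb(params, 16)  # full half-precision
    q4 = weights_gb(params, 4)     # common 4-bit quantization
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{q4:.1f} GB at 4-bit")
```

At 4-bit, the 9B model comes out around 4.5 GB, which is why it's plausible on a recent iPhone, while the 15B model is roughly 7.5 GB quantized (about 30 GB at fp16), so it wants a GPU with real VRAM.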
OpenAI Codex app in Windows
have another update out of OpenAI if you're a developer. The Codex app, which has so far only been available on Mac, is now available on Windows. It's a good, simple sort of IDE, probably one of the simplest IDEs to use. If you try Visual Studio Code, Cursor, Windsurf, or one of those tools and find it complicated and overwhelming, well, the Codex app is much simpler. You get in and it's just a ChatGPT-like interface, very basic and simple to use. And now you can use it on Windows, but you can only use it with the OpenAI models.
Meta AI Glasses Exposed
Okay, this next one is kind of a crazy story, and it comes out of Meta. Svenska Dagbladet (I have no idea how to pronounce that) did a deep-dive investigation into Meta's glasses. Now, it's a pretty long, in-depth article, but here's the TL;DR: they found that if you don't have your privacy settings set up right on your Meta AI glasses, the video data coming in can be used for AI training. Well, that training data is going to a company in Africa where humans are actually reviewing the footage. And as you can imagine, there's a lot of footage coming in and getting viewed by humans: people in the bathroom, people getting changed, people looking at their credit cards with the card details in full view. All of this footage recorded by the glasses, if your privacy settings aren't set right, goes to human annotators who look through it. And as you can imagine, that causes some problems, and now there are more legal battles looming for Meta. A UK data watchdog is writing to Meta following a concerning report claiming outsourced workers were able to view sensitive content filmed by the company's AI smart glasses. In Meta's UK AI terms of service, the company says that in some cases Meta will review your interactions with AIs, and this review may be automated or manual, i.e., by a human. Now, according to Meta, unless users choose to share media they've captured with Meta or others, that media stays on the user's device. But from my understanding of that original article, you actually have to turn off the feature that shares with Meta; otherwise it's kind of on by default. That may have changed recently; they may have made it off by default. I'm not 100% sure, but it's still weird and creepy nonetheless. And well, this whole thing reached even more people, and now Meta's being sued over its AI smart glasses privacy concerns.
The news prompted the UK regulator, the Information Commissioner's Office, to investigate the matter. Now the tech giant is facing a lawsuit in the United States as well. In the newly filed complaint, plaintiffs from New Jersey and California allege that Meta violated privacy laws and engaged in false advertising. So they're getting in trouble with the UK, New Jersey, and California, and as more of this story comes out, I don't think the drama is going to slow down. But I'll keep an eye on it and share updates as they unfold. And here's a tech gadget that
Be Inaudible Spectre I
might be kind of interesting. This company, I think they're called Be Inaudible, introduced a product called Spectre I, and it's billed as the first smart device to stop unwanted audio recordings. Basically, you put this device in front of you, you turn it on, and anybody with any sort of recording device, like this little Plaud thing I've got here, gets garbled audio; the recorder can't pick up the conversation. And as more and more people have listening devices in their glasses and these little pins and things like that, more and more people are going to want this kind of technology. But I do have questions. If it's garbling the audio all around it so nothing can record it, is that going to mess with our phones? Am I not going to be able to use my AirPods and talk to somebody? It seems like it might cause a lot of downstream issues with other devices that legitimately do need to be listening. It's an interesting device: in an AI world where everything is eventually going to be listening, it helps block out the conversations. But how much will it break the stuff that does need to hear those conversations, like me just having a phone call with somebody? I don't know. It's going to be interesting to watch how that plays out. Right now, though, it's not very economical; I think they're trying to sell it for like a thousand bucks, and who's going to pay that much for something like this? But it's an interesting concept nonetheless. We also have
NVIDIA GTC 2026 + Giveaway
Nvidia's GTC conference coming up from March 16th to 19th. It's actually in San Jose, but they're doing a virtual version of the event where you can watch Jensen's keynote, where he's expected to show off a new chip they've been working on. There are all sorts of sessions with AI experts and robotics experts, and it's going to be a really cool event where you can learn a lot about what's going on in the world of AI right now. Nvidia also gave me one of these DGX Spark supercomputer-type things, and then they gave me a second one and said, "Hey, why don't you give this second one away?" So I actually have another one of these DGX Sparks; it's right here, still in the box, and I want to send it to somebody. Now, in order to win this DGX Spark, all you have to do is register for and attend one of the sessions at GTC. This is a deal I made with Nvidia. They said, "Hey, shout out GTC for me, tell people to register and attend one of the events, and you can pick one of the people that registers at random and send them this DGX Spark." So that's exactly what I'm doing. All you've got to do is go to the GTC website and register for the virtual event, or attend in person if you want; either way, it doesn't matter, but register. After the event's over, I'm going to pick one person at random who registered and attended one of the sessions and send them that DGX Spark. You can get all the details you need in a LinkedIn post that I made, which has the link to where you can register, and I'll link it up in the description below so you can go register and be entered to win one of these DGX Sparks. Thank you to Nvidia for doing that; pretty cool giveaway. I'll be up in San Jose in person for the event, so I'm super excited to see what gets unveiled. Also, I want to show you something really cool and thank the person who made it.
This is a custom Magic: The Gathering card that somebody made for me with my likeness on it. It was actually created by Daniel Eckler over on X, and it's really, really sweet. I just wanted to shout him out because I got this in the mail today and was like, "Oh, that is sick. I need to show this off to people." So thank you so much to Daniel for sending that to me; I really appreciate it. I'm going to put it somewhere in my background. I really dig it. And that's what I've got for you today: a big, crazy, hectic, drama-filled week with new models and government AI drama, and I'm here for it all. I do shoot these videos on Thursdays and publish them on Fridays, so if any news came out on Friday that I missed, it'll be in next week's breakdown video. And on that note, if you want to receive breakdowns of all the AI news that came out in a week, like this video and subscribe to this channel, because that's what I do. There is so much AI news coming out every single day; you're probably getting bombarded by it and seeing a thousand AI videos on YouTube about every news drop. Well, if you just want a once-a-week digest of everything you'll probably want to know about AI, that's what I'm trying to do for you in these Friday news recap videos. So again, if you like that kind of thing, like this video and subscribe to this channel, and I'll keep you updated in a once-a-week video so you're fully looped in on all the most important stuff in the world of AI. I'm getting closer and closer to a million subscribers, and I would love it if you could help me get there. I want to get there so bad; that's the new goalpost I'm shooting for. So if you haven't already, click that subscribe button. It really helps my channel. And again, I really appreciate you hanging out and nerding out with me today. Hopefully I'll see you in the next video. Thanks again. I really appreciate you. Bye-bye.