# AI News: The First "AGI-Capable" Model, Prompting Changes Forever, Automated AI Lab and More

## Metadata

- **Channel:** TheAIGRID
- **YouTube:** https://www.youtube.com/watch?v=S0f0QiDu2Dc
- **Date:** 17.12.2025
- **Duration:** 35:11
- **Views:** 23,188
- **Source:** https://ekstraktznaniy.ru/video/12522

## Description

Check out my newsletter: https://aigrid.beehiiv.com/subscribe
🐤 Follow me on Twitter: https://twitter.com/TheAiGrid
🌐 Learn AI with me: https://www.skool.com/postagiprepardness/about

Links From Today's Video:
https://x.com/Zai_org/status/1998003287216517345 
https://x.com/emollick/status/1998063517681799418 
https://x.com/godofprompt/status/1997651526635184410 
https://x.com/kimmonismus/status/1998119101785620814 
https://x.com/WesRothMoney/status/1998044901615800571 
https://x.com/xai/status/1997875236415676619 
https://x.com/WesRothMoney/status/1998037350702571909 
https://x.com/Xianbao_QIAN/status/1997997620355289335 
https://x.com/wallstengine/status/1998095148044189947 
https://x.com/rohanpaul_ai/status/1998109345931026762 
https://x.com/Starlab_Space/status/1998090293216743820 
https://x.com/CyberRobooo/status/1998008951829561734 
https://x.com/godofprompt/status/1998046130245120

## Transcript

### Segment 1 (00:00 - 05:00)

So, let's take a look at the 20 AI news stories you most likely missed. Let's get into it.

Coming in at number one is GLM-4.6V, Z.ai's latest open-source multimodal vision model. I think this model from China is actually really good, and it's pretty different, because oftentimes we only get vision models from state-of-the-art companies like Google or, I guess you could say, OpenAI. In fact, most companies aren't producing vision models at all, because they're very token-heavy and very expensive. So the fact that we now have an open-source multimodal model from China that enables high-fidelity visual understanding plus long-context reasoning is super good. The model can take screenshots, images, and document pages as input alongside text and reason over them jointly. It's currently state-of-the-art among visual language models at similar parameter scales for tasks like document understanding, screenshot analysis, diagrams, math plots, and visual QA. There are two versions: GLM-4.6V at about 106B parameters for cloud use, and GLM-4.6 Flash at 9 billion parameters, optimized for low latency and local deployment. For me, this is actually quite good, and I may start using it, because the one bad thing I will say about frontier models is that video analysis (not so much image analysis, but video analysis) takes quite a lot of tokens. So if you want to use models like this for video analysis, that's probably going to be a decent opportunity for you guys.

Now, talking again about model releases: NVIDIA released Nemotron 3, a family of open-weights, 30-billion-parameter mixture-of-experts language models designed to be fast but runnable, and it outperforms other 30B models like GPT-OSS and Qwen3 30B on various benchmarks while being even more efficient. This is incredible, because somehow (I honestly don't know how they do it) NVIDIA just finds a way to make models much more efficient and streamlined when running them on device. So if you ever wanted to run a super-small LLM on device for privacy reasons, NVIDIA's models and GPT-OSS are the ones where you're most likely to get a decent trade-off between intelligence and output speed. As for the mixture of experts: you get 31.6 billion total parameters and 128 experts, with six active per token. Overall, it's not some crazy model; if you look at the benchmarks, it's not going to completely crush everything. But models like this, which you can run locally on device, are the interesting ones; a rough sketch of how that expert routing works is below.
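To make "128 experts, six active per token" concrete, here is a minimal toy sketch of top-k mixture-of-experts routing. This is purely illustrative, not NVIDIA's Nemotron code, and the dimensions are made up; the point is that only the k selected experts run for each token, which is why a 31.6B-parameter MoE can be cheap at inference:

```python
# Toy top-k mixture-of-experts routing (illustrative; NOT Nemotron's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=128, k=6):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)     # scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts))

    def forward(self, x):                     # x: (num_tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # gate weights for chosen experts
        out = torch.zeros_like(x)
        for t in range(x.size(0)):            # naive loops: clarity over speed
            for slot in range(self.k):        # only k of n_experts ever run
                expert = self.experts[int(idx[t, slot])]
                out[t] += weights[t, slot] * expert(x[t])
        return out

print(TopKMoE()(torch.randn(4, 64)).shape)    # torch.Size([4, 64])
```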
Once again, remember what I said in previous videos: privacy is probably going to be a big thing in the future, and some individuals will have conversations they want to keep private, which of course makes 100% sense. If that's you, consider running LLMs locally and privately, offline, not sending data to and from the cloud (there's a minimal local-inference sketch at the end of this segment). Just remember that there will be some functional limits: maybe no latest image model, latest updates, or latest APIs. But for those of you who care about privacy, or even small businesses where privacy is simply not negotiable, these are the models you would likely use in many cases.

Now, in other model releases, we also had GPT-5.2. The most interesting thing about GPT-5.2 was that it produced probably the most mixed bag of results I've ever seen: on one side, people were stating that this model was amazing, and on the other side, a huge consensus of people actively using the model were essentially stating that it was terrible. I'm pretty sure I understand what happened here. Most people don't realize that GPT-5.2 isn't the model you think it is. When OpenAI updated GPT-5.1 to GPT-5.2 and showed us these benchmarks, a lot of people instantly assumed this was just another frontier model. That is far from the case. OpenAI specifically wanted to focus on one area they could dominate, and that area was GDPval, the benchmark for knowledge-work tasks and human-like reasoning. If you look at the GDPval results (highlighted in the bottom area of the slide), you'll see the score basically doubled, which is arguably one of the largest benchmark jumps, and GDPval measures exactly those economically valuable tasks. This is where OpenAI is moving, because I truly believe that in terms of consumer products, OpenAI may have saturated what you're able to do.
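On the local-and-private point above, here is a minimal sketch of a fully offline chat call. It assumes you run an Ollama server locally on its default port and have already pulled a small model; the model name is just an example:

```python
# Minimal local chat call, assuming a default Ollama server and a model
# already pulled (e.g. `ollama pull llama3.2`). No data leaves the machine.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",   # example name; use whatever model you pulled
        "messages": [{"role": "user", "content": "Why run LLMs locally?"}],
        "stream": False,       # return a single JSON object, not a stream
    },
    timeout=120,
)
print(resp.json()["message"]["content"])
```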

### Segment 2 (05:00 - 10:00)

I mean, besides creating images and videos, I think we're already at human-level realism for absolutely everything. Other than that, I'm not sure what OpenAI is going to be doing. Of course, there are still different products and services you can make, and there's still a lot you can do with AI. But in terms of company focus, GDPval is essentially what this model is about. The model can now make PowerPoint presentations and do Excel work, which I've got tutorials on; we'll leave a link to that in a moment. The point I'm trying to make is that GPT-5.2 was the first step into economically valuable work, which of course means the tasks that power the economy. It's going to be interesting to see where things go, because the moment OpenAI decided to lock in, after Google had a crazy run with Gemini 3, we saw a huge uptick in GPT-5.2 Thinking. Remember, if you're using this model, it tends to think for quite a long time and consumes a lot of tokens, so be prepared if you're using it in the API; it might mean a big API bill. I actually had the model work for literally 24 hours and it just got stuck in a loop. I'm really hoping I don't get some kind of big API bill, though I was just in the chat interface.

Next, we have ARC-AGI being smashed. Most of you may not have realized it, but ARC-AGI was smashed. It kind of was already smashed by GPT-5.2 Thinking, if you saw the last slide, but what we have here is Poetiq. Poetiq smashed the ARC-AGI benchmark by building a reasoning layer on top of frontier LLMs: Gemini 3, GPT-5.1, and Grok 4. Using test-time reasoning, code generation, and self-auditing, it orchestrates many small, targeted calls into a meta-system that solves ARC-AGI tasks more efficiently than the previous state of the art. They achieved a verified accuracy of 54% on the ARC-AGI-2 private set, the first system to cross 50%, setting the new leaderboard at number one. What's crazy is that it did this at about $30 per problem, less than half the cost of the previous state of the art, Gemini 3 Deep Think, at $77 per problem, redrawing the cost-for-performance frontier.

I think the biggest takeaway is not how it was done, but the fact that frontier advances in AI often don't come just from the base LLMs themselves; oftentimes they come from how you structure the LLMs to get to the final output. That reminds me of a quote I heard a while back from, I think, Logan Kilpatrick, the lead of Google AI Studio: AGI is quite likely to come out of product design, how you design the system, rather than a single LLM that knows everything and does all the reasoning on its own. This is really cool, because it shows that if other companies can structure the models in a way that ekes out extra performance, then the release of a base LLM is never the end of it; there's always more you can do with agentic scaffolding, as the sketch below shows. Really cool that they were able to do that. Of course, links in the description.
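Poetiq hasn't published its exact system here, but the generate-and-self-audit pattern the video describes can be sketched roughly like this. This is a toy pattern, not Poetiq's implementation, and `call_llm` is a placeholder for whatever chat-completions client you already use:

```python
# Toy generate / audit / retry scaffolding (NOT Poetiq's actual system).
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own chat-completions client")

def solve_with_self_audit(task: str, max_rounds: int = 5) -> str:
    # One small call drafts a candidate solution.
    candidate = call_llm(f"Write a Python function that solves:\n{task}")
    for _ in range(max_rounds):
        # A second, targeted call audits the first call's output.
        audit = call_llm(
            f"Task:\n{task}\n\nCandidate solution:\n{candidate}\n\n"
            "List concrete errors, or reply exactly PASS."
        )
        if audit.strip() == "PASS":
            return candidate              # auditor found no faults
        # Feed the critique back in and revise with another cheap call.
        candidate = call_llm(
            f"Task:\n{task}\n\nPrevious attempt:\n{candidate}\n\n"
            f"Fix these issues:\n{audit}"
        )
    return candidate                      # best effort after max_rounds
```

The point of scaffolding like this is that many small, targeted calls with verification can beat one big call from the same base model.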
Now we have Manus 1.6, Manus's latest iteration of their agents. I don't think people realize how good AI agents from companies like Manus and Genspark are. People forget that ChatGPT is basically just a chat, while Manus agents are actually out there doing work. Remember how we spoke about GPT-5.2? I think OpenAI could really take some advice from these companies, because when I need actual work done, those are the platforms I'm using. You can see the benchmarks here: information retrieval, spreadsheets, web development, data analysis, and parallel processing on Manus 1.6 Max. You probably did hear about Manus; it did kind of shake up the industry. So I wouldn't be surprised if we see some Manus-style projects from frontier labs in the future, because these products are genuinely super useful, and if I need a complex task done, I'll just hop over to Manus or Genspark to get it done.

Now, staying in the AI releases area, we need to talk about the world's first "AGI-capable" model. There's a company called Integral AI, and they're claiming they have the first AGI-capable model. I think this is one of those things that's super interesting, and also super weird. The reason it's weird is that usually when companies claim this, it's just some kind of marketing effort, a PR stunt to get people to look at what they're claiming. However, this company was founded by an ex-Google veteran, the guy who was, I think, pioneering generative AI at Google. I actually watched the interview where he talks about the architecture needed for AGI and the limitations of current large language models.

### Segment 3 (10:00 - 15:00)

I'm not saying I'm the keeper of that information, but everything he said genuinely makes so much sense: the way he described reasoning, the limitations of current models, how much information they have to see, and how that reasoning compares to humans. It was truly fascinating. I made a video on this; it's on the channel from a few days ago if you search for it. They're basically saying the model isn't AGI itself, but it is AGI-capable, stating that it can learn new tasks autonomously without pre-existing datasets, labels, or human intervention, especially in robotics settings. And yeah, it's kind of interesting. The weird thing is that they don't have a lot of funding, and the demos they provided weren't that great. So I guess we'll have to see, but I'm not going to completely write this company off, because the CEO, Jad Tarifi, does seem like his head is in the right place and he knows exactly what he's talking about. Most people haven't seen the interview, so while this might look like a performative headline, maybe they just don't know how to truly showcase the product they built. That's my read. Then again, they didn't even compare the model against benchmarks; maybe the model is so different that the usual benchmarks don't apply. That's just my take.

Now, moving on to cool demos, let's look at world models. There's a company called Spatial, and this is their new system, Echo. It turns a simple text prompt into a fully explorable, 3D-consistent world instead of disconnected views; the result is a single coherent spatial representation you can move through freely. This is one of the biggest shifts in AI: as most of you know, world models are coming, and instead of generating pixels and tokens, we're moving toward generating spaces. It predicts a geometry-grounded 3D scene at metric scale, meaning everything you generate is something you can actually move through. Once you generate the world, it's interactive in real time: you can control the camera, explore from any angle, and render it instantly, even on low-end hardware, which is important. That's crazy, because high-quality 3D world exploration is no longer gated by expensive equipment. Under the hood, it infers a physically grounded 3D representation and converts that into a renderable format. They have a web demo you can actually use, which is pretty cool. I'm genuinely wondering what world models will be like in 2026, because we've seen the iterations of Genie 2 and Genie 3, and of SIMA and SIMA 2. Agents and explorable worlds are finally starting to have their moment; this is the third company I've seen pushing on that frontier. It's going to be super interesting to see where we land once that happens. So yep, once again, world models are a thing; let me know what you think about that.

Now we're going to move on to the research paper section of the video.
This is where we get into research that actually changes how AI moves forward, and this is one of the most interesting papers, because it changes how prompting moves forward. I didn't want to include every single paper, since 20 to 30 papers hit arXiv every single day, but this one stood out to me. Ethan Mollick shared that researchers tested one of the most common prompting techniques, and what they found was surprising. When people prompt large language models to try to get better results, most will say things like "You are a physicist" or "You are a doctor." Apparently, that doesn't actually give the LLM a more accurate answer to your physics question or your legal question. The summary is basically that expert personas don't improve factual accuracy: telling the model to act like an expert or act like a marketer mostly doesn't give you the quality boost you hope for, because you're just reformatting the request. The researchers tested GPQA Diamond (PhD-level science questions) and MMLU-Pro with GPT-4o, o3-mini, o4-mini, Gemini 2.5 Flash, and other models, using four prompt types:

- no persona (the baseline);
- an in-domain expert ("You are a world-class physics expert");
- a domain-mismatched expert (a physics expert answering law questions);
- low-knowledge personas (a young child or a toddler).

They ran 25 independent runs per question to avoid randomness; a minimal sketch of this kind of test is shown below.
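As a concrete illustration, here is a minimal sketch of that kind of persona A/B test. It assumes the official `openai` Python client with an API key in your environment; the model name and the one-question "benchmark" are placeholders, and the real paper used full benchmarks with proper statistics:

```python
# Tiny persona A/B test sketch (placeholder model and question set).
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
PERSONAS = {
    "none": "",                                            # baseline
    "in_domain": "You are a world-class physics expert. ", # matched expert
    "mismatched": "You are a world-class lawyer. ",        # mismatched expert
    "low_knowledge": "You are a young child. ",            # low-knowledge
}
QUESTIONS = [("Which particle mediates the electromagnetic force?", "photon")]
RUNS = 25  # repeated runs per question, as in the paper, to smooth randomness

scores = defaultdict(int)
for name, persona in PERSONAS.items():
    for question, answer in QUESTIONS:
        for _ in range(RUNS):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder; swap in any chat model
                messages=[{"role": "user",
                           "content": f"{persona}Answer in one word. {question}"}],
            ).choices[0].message.content
            scores[name] += answer.lower() in reply.lower()  # crude grading
for name, correct in scores.items():
    print(f"{name}: {correct}/{RUNS * len(QUESTIONS)} correct")
```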

### Segment 4 (15:00 - 20:00)

The key finding: expert personas almost never improved accuracy across models and benchmarks; performance was usually statistically identical to no persona, and matching the persona to the domain didn't reliably help either. Let me show you this slide. It's pretty hard to see unless you're on desktop, but if you zoom in, you can see that the differences aren't statistically significant. So we have to start to wonder how much value personas still have.

The one takeaway for prompting is that personas do have value, but not in the way you think. Personas don't grant the LLM new information it wouldn't otherwise have; their value is in tone, framing, priorities, and perspective. You could argue that's really important, and in some cases it probably is, but personas only help you think more clearly about what you're asking; they don't reliably make the answers more correct. Telling the model to act like an expert doesn't meaningfully improve factual accuracy on hard questions, and in some cases it can even hurt performance. If you want the best results, the paper basically says: improve your task instructions, use better examples, and improve your workflows rather than leaning on personas. LLMs don't reason as experts; they just condition on text, and pretending otherwise doesn't unlock some hidden capability.

Once again in the research paper area, this one's important: if you want to know how to actually work with AI, this paper gives us a new takeaway. I saw it on the timeline and thought, yep, I need to add this one to the video. The paper shows that working well with an AI is a different skill from being smart on your own, and the biggest driver of real-world performance in human-AI collaboration is not raw intelligence; it's theory of mind. Theory of mind, which I think is one of the key skills all humans should have, is the ability to reason about what the agent knows, believes, and needs. The researchers introduced a rigorous Bayesian framework to measure human-AI synergy, showing that AI meaningfully boosts human performance on harder tasks, but theory of mind is what you need to focus on: with a strong theory of mind, you get much higher-quality output from the AI, because you can anticipate its limitations, clarify your goals, and adapt your prompts dynamically. What this means is that, yes, you can use AI to learn things, but experts are probably going to get the most out of it: if you're a physicist, you already know what the AI is likely to struggle with. If you're just a normal person trying to learn advanced physics, you don't know the AI's limitations or the things it may or may not consider, and you can't probe it for information it doesn't have.
Those real edge cases are what I talked about in a paper ages ago on knowledge collapse: the real edge cases of intelligence, the key things that move you forward, get lost over time if people just keep learning from LLMs, because LLMs always output the most statistically consistent information from the middle of the distribution, and that isn't always the most relevant information for your specific question. So the researchers are basically saying that being a genius who can solve problems in your own head is completely different from solving problems with an AI partner. You need to understand what the AI is good at and what it's not good at, and you need to collaborate with it; your job is to be the bridge for your human mind. When you're working with AI, try to understand its limitations, anticipate its drawbacks, and prompt it in a way that leans into collaboration. I think this skill only improves the more you use AI and the more you study the subject. Whatever subject you're trying to learn, I don't think AI is the be-all and end-all; definitely make sure you're also consuming videos, books, and other media so you can really grasp the entire concept.

Then there was one more paper for this AI news video, and it came out very recently: Apple produced a new paper on something somewhat better than RAG. Most people already know what RAG is: a simple way for the AI to search documents and read a little text. The problem with RAG is that it sometimes reads too much and gets confused, and there's a search brain and an answer brain that don't talk to each other, so the system never learns what kind of information actually helps the answers. That's what this new Apple paper addresses: it teaches the AI to squish each document into tiny memory blocks that keep only the important meaning, and then it uses those same blocks for both searching and answering.
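To make that concrete, here is a toy illustration of the shared-representation idea, not the paper's actual method: each document is compressed once into one small block; the same block is used for retrieval and would also be what the answer model consumes. The hash-based "compressor" is a stand-in for a learned encoder:

```python
# Toy shared-block retrieval (illustration only; NOT Apple's method).
import numpy as np

def compress(text: str, dim: int = 64) -> np.ndarray:
    """Squish text into one small block (hashed bag-of-words stand-in)."""
    block = np.zeros(dim)
    for word in text.lower().split():
        block[hash(word) % dim] += 1.0
    return block / (np.linalg.norm(block) + 1e-9)

docs = ["GLM-4.6V is an open-source vision-language model.",
        "Nemotron 3 is a 30B mixture-of-experts model from NVIDIA."]
blocks = np.stack([compress(d) for d in docs])  # compressed once, reused

query = "Which model is a mixture of experts?"
scores = blocks @ compress(query)               # search uses the blocks...
best = int(np.argmax(scores))
# ...and the answer model would consume the SAME block, not raw text.
print(f"retrieved doc {best}: {docs[best]}")
```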

### Segment 5 (20:00 - 25:00)

And then you let the search brain and the answer brain work together: they learn from what worked and what didn't. The result is basically a much faster, much smarter pipeline, and you just get much better answers. Another takeaway is that Apple is still firmly in the AI race: they still have top AI researchers and have been doing a lot of novel research (I've seen it countless times and made several videos on it). So it's interesting to know that Apple is still there.

Once again, the last paper in the paper section of the video comes from Meta, and I think it's probably the best lens on the future of ASI and human intelligence. The paper argues that instead of chasing fully self-improving AI (recursive self-improvement, where an AGI improves itself all the way up to an artificial superintelligence so smart it can keep improving itself), we should build AI that teams up with humans so both get smarter together, in a safer way. Think of AI not as a robot trying to replace scientists, but as the best calculator-scientist helper: one that helps humans do research and also learns to work better with people over time. They call this human-AI co-improvement, and they say it's the path to co-superintelligence: easier to reach and less dangerous than letting the AI rewrite itself alone. They don't like pure self-improving AI, pointing out that most past self-improvement work was just models tweaking their own weights with more data, not rewriting their own code and goals. Truly autonomous systems could basically become world-conquering, and we don't really want that. So the ideal future is co-improvement: humans stay in the loop, notice problems, correct mistakes, and steer the research goals, which lowers the chance of dangerous, uncontrolled AI behavior. They say this can reach the highest level of intelligence in the safest way, and I think Meta is correct.

Now, for the robotics section of the video, we have to look at the LimX TRON 2. This is an upgraded version of the earlier TRON bipedal platform, evolving it into a more complete humanoid and shifting away from mostly leg-focused research toward a general-purpose humanoid form factor. It's the new humanoid evolution of LimX's TRON line, adding an upper body and head to the earlier leg-centric TRON 1 platform, and it's aimed at complex whole-body humanoid tasks rather than the prior locomotion experiments. You get a humanoid upper body with two arms, two dexterous hands, and a head on top of the existing bipedal base, giving it a traditional humanoid appearance and enabling manipulation tasks the TRON 1 could not handle natively. Super interesting.

We also have AgiBot, the first manufacturer to produce 5,000 humanoid robots. They rolled their 5,000th humanoid robot, the Lingxi X2, off the line today, a huge milestone for the company and for the humanoid robot industry as a whole, marking a significant step toward large-scale deployment in the real world.
The AgiBot founder stated in a livestream that these 5,000 units include the Yuanzheng series, the Lingxi series, and the Genie series. These humanoid robots will be deployed in various settings, including scientific research, entertainment, shopping malls, factories, and warehouses, which shows the company actually has the capacity for mass production and is ready to meet surges in future customer demand.

Now, in strange robotics, this video was going super viral: Yamaha's Lambda, an experimental motorcycle concept that can balance itself, twist its frame, and use AI to learn and adapt to its rider. This is kind of crazy. I do apologize for the video freezing here, but the clip has been going around as a demonstration of how flexible robotics has become. I remember the video I made the other day where I said robotics is moving in ways we've never seen before. This prototype uses a lightweight, durable exoskeleton frame engineered to handle repeated impacts during AI trial-and-error while staying flexible enough to pull off this crazy look. I don't know about you guys, but I feel like 2026 is going to be the year we get super weird robots that move in ways we haven't seen before. And it's kind of scary, because you'd swear it's CGI, but it's not; it's real.

Talking of CGI-strange robots, we've got the Hobbs 1. The Hobbs 1 is described by Noetix and robotics observers as a general-purpose, professional-grade service robot aimed at real-world interaction in varied environments. It's positioned as a professional-scene all-rounder, meaning it's intended to handle tasks across customer service, education, events, and other public or semi-public spaces rather than a single narrow use case. The platform mounts Noetix's Hobbs bionic robot head, which can mimic human facial expressions and maintain immersive eye contact, on a wheeled mobile humanoid base.

### Segment 6 (25:00 - 30:00)

It also has a display embedded in its chest area that provides visual information and UI branding, letting the robot speak and show content during an interaction. I think this AI-driven connection is super interesting, because there are many different humanoid robots out there, but personally I'm not sure what people are going to make of this one. I think robots should just play robots: some robots enter the uncanny valley when they look human but don't quite get there, so your human brain knows it's not human and has this weird caveman freak-out. To be honest, this robot is one of the rare cases where it actually does look like an actual human. But I think robots, for the most part, should stay looking robotic, because this looks a little bit strange to me. At the same time, the human-AI-robot relationship is evolving, and it will be interesting to see how these things move forward.

Now, in AI news, here are some of the headlines you actually missed. The New York Times has sued Perplexity. Honestly, the New York Times has been suing everyone, and fair enough: if you make work and AI companies train on it, go after them. They filed a lawsuit against Perplexity AI for copying their journalism and delivering it to Perplexity's customers without compensation, and they've repeatedly asked Perplexity to end the unauthorized use of their content, but Perplexity allegedly continues the unlawful use of the Times's copyrighted material. They also sued OpenAI, by the way. So they're one of the only companies really going after these firms for compensation. Super interesting.

Next: who's using Copilot? Are you using Copilot? I'm personally not, but Microsoft is scaling back its AI goals because apparently almost nobody is. According to the article, Microsoft has reduced its sales targets for Copilot and other agentic products by as much as 50% in some cases after struggling to find customers. Despite being the early AI leader through its investment in OpenAI, Microsoft has failed to turn that investment into widespread adoption or strong revenue, with reports suggesting that most users don't find Copilot particularly useful. Independent testing has shown that AI agents fail to complete tasks up to 70% of the time, limiting their value as replacements for human workers; at best, they offer minor productivity gains for skilled employees, often duplicating tasks already handled by junior staff. Meanwhile, competitors are outperforming Microsoft: ChatGPT dominates with over 60% market share, and Google Gemini is growing rapidly, matching Copilot's market position. Microsoft disputes the claim that it has lowered overall AI sales quotas, but the broader picture suggests Copilot is lagging behind its rivals. And that's not surprising. Do you actually use Copilot on a day-to-day basis? I personally cannot find a reason to. However, I will say this.
Since Microsoft's Copilot is embedded into those products, I do think that in the next iteration cycle, maybe the GPT-6 or GPT-7 era, we'll probably get really useful agents designed into Microsoft's lineup. I don't think it's that hard to design actually useful products. Genuinely, sometimes I think there's a disconnect at these companies: maybe they just don't use the tools themselves and don't see what's happening in the AI tool space. Realistically, they should just acquire some of the AI companies people are actually using, because it's so hard to predict how people will co-evolve with these technologies. Big companies should look at what's actually working, look at which companies are exploding, buy those companies, and integrate those products into Microsoft's existing lineup. It's really hard to predict what will land, and you're going to lose billions of dollars trying to guess what people are going to like.

Now, we also had DeepMind's research lab. Google's AI unit DeepMind has announced the first automated research lab in the UK, planned for 2026 as part of a new government AI partnership. It's described as fully integrated with AI and robotics, meaning AI systems and advanced robotics will run experiments autonomously rather than relying on humans for each step. The lab will focus on materials science, and its initial mission is to discover new materials, especially superconductors that work at practical temperatures. That has huge implications for electricity transmission, medical tech, quantum computing, next-generation semiconductors for faster, more efficient chips, and advanced battery and energy materials. Instead of piecemeal human cycles of experiment, analysis, and redesign, the robotic AI systems can run hundreds of experiments per day and identify promising candidates much faster than a traditional lab.

### Segment 7 (30:00 - 35:00)

This is going to be interesting, because British scientists are going to get first dibs on it, and I think this is exactly the kind of research OpenAI promised: the kind those companies with billions of dollars say AI will do, actual scientific research, and here we have Google actually about to pull it off. So this will be super interesting, and I hope something spawns out of it that genuinely helps society.

Now, in more Google news, and this is fake news, by the way: the next story claimed Google is telling advertisers it will bring ads to Gemini in 2026. This is fake news. I know a lot of people saw it and went, "Oh, Gemini, screw Gemini." Nope, fake news. That story was going around, and it was unreal.

We also had Gemini Live, and this is insane. Google is just absolutely crushing it, so you might want to watch this demo:

— Wait, wait. Just give me one second. Okay, as I was saying, the Gemini model now does translation live. It works in the Google Translate app with any headphones. As you can hear, I am being translated in real time. Let's see what it's like when I ask my colleagues for advice on my next trip. Let's go.
— I need a vacation, but I don't know where to go. Do you have any ideas?
— You should go to Korea. There's lots of food and things to see in Seoul. If you go there and you love food, you can try fish-shaped pancakes, hotteok, and soy-marinated crabs. I think you'd like Seoul. Actually, China has a lot, too. You could go to Beijing for Peking duck, or to Chongqing for hot pot. That would be nice. Or go to Shanghai for braised pork and pan-fried buns. I think you'll really like it.
— Well, to change things up, we could definitely go to Germany, especially at this time of year, when it's dark and cold. We have to go to a Christmas market. There's mulled wine, you can eat steamed dumplings, and it's really beautiful. So, out of these, where do you want to go?
— I don't know yet, but I have some good ideas. Thank you.

It works well in over 70 languages, over 2,000 language pairs, and even when we're on the move. You can try it now. We hope this will be useful during the holiday season, whether for traveling or talking with your in-laws. By the way, it can whisper, too. Check it out on Google Translate.

And then we had the internet's outrage over this ad. People already hate AI, and they've just been given another reason to hate it even more. Essentially, xAI had a hackathon, and I'm going to show you why it caused such outrage. People already hate advertisements; I know, I get it, I hate ads too. But the thing here is that at the hackathon, they essentially used generative AI to implant ads into a section of a TV show. I don't know why this video isn't playing here, but you can see what the ads look like. What do you guys think about this? You've got a TV show, just a standard scene, and these ads are what you'd call smooth, or less intrusive, ads. If we look right here (and the only reason I'm dragging the timeline along is so I don't get a copyright strike from the company that made this show)...
All I'm saying is: as soon as you see him pull this up, you know this isn't part of the show, because I've watched this show, and it isn't. You get generative AI to take the first frame and insert your product, and then you can click that "learn more" button over there if you want to know more about the product placement. I don't know about you guys, but I'm indifferent to this. If it's genuinely a good product that makes sense for the show, I don't think it's that bad; I just think the way this was marketed and delivered wasn't necessarily the best. And this was only a hackathon, so it's only an idea; if people reject it, it probably won't go anywhere. But I do think it would probably be better than traditional ads,

### Segment 8 (35:00 - 35:11)

which just stop your focus and break your immersion, versus something that keeps you inside the show and is actually part of the moment. I don't know; just my opinion. Let me know what you guys think. If you enjoyed the video, I will see you in the next one.
