AI Christmas Special: 2025 Year in Review + 2026 Predictions!
1:11:31


Liam Ottley · 25.12.2025 · 11,353 views · 373 likes


Video description
📚 Join the #1 community for AI entrepreneurs and connect with 270k+ members: https://bit.ly/48TcROh
📈 We help entrepreneurs, industry experts & developers build and scale their AI Agency: https://bit.ly/48TZmhf
🤝 Ready to transform your business with AI? Let's talk: https://bit.ly/49s5ugQ
🎙️ Have a story worth telling? Be a guest on my podcast: https://bit.ly/yt-podcast-application

More AI Business Content ⤵️
→ My Vlog/BTS Channel: https://bit.ly/LiamOttleyVlogs
→ Instagram: https://www.instagram.com/liamottley/
→ X: https://x.com/liamottley_

🚀 Apply to Join My Team: https://bit.ly/explore-roles

On behalf of myself and the team at Morningside, I want to wish everybody a Merry Christmas 🎄. To close out the year, I'm sitting down with some of the top builders and AI experts in the space, the "AI Mafia", to break down the biggest AI moments and industry shifts of 2025, including major model releases, pricing changes, open-source breakthroughs, agent technology, automation trends, and the evolution of vibe coding. We look at how these shifts impacted AI entrepreneurs and AI businesses throughout 2025, what actually mattered versus hype, and where the momentum appears to be heading next as we move into a new year.

⏱️ Timestamps:
00:00 What We're Covering
01:28 January - The DeepSeek Sputnik Moment
05:35 February - Vibe Coding
14:48 March - AI Video & Image Generation
23:30 April - ChatGPT o3 & o4
23:38 May/June - OpenAI Acquires Windsurf
24:53 July - Grok 4
25:02 August - GPT-5 Finally Arrives
30:37 September - Record Funding & Sora 2
38:38 October - DevDay & The Deal Gets Done
44:15 November - Gemini 3 Triggers Code Red
46:51 December - The Final Sprint
55:07 Trends From 2025
1:04:08 Where AI Is Heading Next

Table of contents (14 segments)

What We're Covering

Hello everyone, and welcome to a bit of a Christmas special pod here for the AI agency space, um, AI business space. We've got some of the notable faces you guys will know from the community here, and I thought I'd put this together to lighten the mood a bit, as you can tell by these silly things on our heads. It all gets very serious building businesses, and it's hard sometimes to snap out of that and realize that we're living life and we can have a bit of fun with it as well. So, getting in the Christmas spirit, for those who celebrate it, I brought on some of the guys here and we're going to be doing a quick recap of some of the best bits of this year, looking back through the news, the biggest releases, and the biggest bits of drama throughout the year, reminiscing, giving our thoughts. Okay, so here we have a presentation, and I think it's an interesting benchmark. Like Dave was talking about the Will Smith eating spaghetti benchmark, this is going to be the benchmark for the progress of AI in slideshows and researching, because I gave Claude a task here. I was like, can you do a full dive over the whole internet and figure out exactly what happened this year in the AI space? I want week-by-week breakdowns of the news. It pulled like 800 sources, gave me a big document to work from, and then I just put it into Gamma, and it attempted to create a slideshow, and it came out pretty good. But it's going to be interesting as a benchmark to see what it's like next year, because I think these slide tools are going to come very far. So, it's AI 2025 Year in Review. Going to be looking over some of the top moments, walking down memory lane really, cuz it's been a freaking huge year for everyone. So

January - The DeepSeek Sputnik Moment

January, we had the DeepSeek moment, and I think that was — you saw how the stock market reacted. Nvidia losing $600 billion in a day. And that was a bit of an interesting moment where the Chinese labs really started to jump forward. You guys got anything on that? — Yeah, we definitely saw the whole of YouTube and the internet exploding around this, because first of all it was China, it was open source, and it was really good. They not only dropped the model, but I also remember them having the web interface, right, the ChatGPT-style application. And this was a real realization for me: okay, look, state-of-the-art open source might be closer than we actually think, in terms of how fast it's catching up to the current best models. We're definitely not there yet, but this was one of those moments where I said, look, in the future, tokens are definitely going to get cheaper. — Yeah, I think the Sputnik moment is a good way of putting it. That $6 million training run. — We can also thank this moment for the pricing that we have right now, cuz I feel like if this hadn't released, Google, Anthropic, they would have all kind of worked together to make this price floor where we wouldn't pay anything below that. But DeepSeek just broke the bottom out of it. So I think this was a helpful moment for all of us. — Matt, you got anything? — Yeah, I was just going to say, at this point I was pretty new to all this stuff, so I was really intimidated by open source and all that kind of stuff, but this is what really opened my eyes. I remember I was actually traveling the week this happened, and I was like, "Oh man, I really want to get a video out, cuz all I see on LinkedIn and YouTube is videos on this stuff." So I opened up my laptop and made like a six-minute Loom video, and it was horrible quality. You couldn't even hear me, you could hear construction in the back, and on YouTube it was like 720p, a 1 out of 10. And it exploded and got so many views, just because I had DeepSeek in there. Even though the production quality was horrible and the core content was not great, that's one of the moments — that and MCP are the two I really remember — where no matter what you did, the videos would perform well if they had that word. — A couple of other smaller ones: Trump revoking a lot of Biden's AI executive orders, full deregulation, the $500 billion Stargate, which seems a very long time ago, and Operator. I mean, we can talk about how we felt that went, but I feel like that was just a stepping stone to Agent. I myself didn't really use Operator in any reasonable or meaningful way. — I think that just broke the initial ground. Right now we have, as of 24 hours ago, Claude releasing v2 of their Chrome extension that can control your browser. It's going in that same direction. After this, Comet came out too, so this whole trend of Operator and agentic browsing became a thing, and it's probably the worst it's ever going to be. By the time we record this again next year, it will probably be really good. — Yeah. I mean, I haven't got very deep into this stuff, cuz I feel like I've used four different browsers in the past year. So I'm actually getting more and more hesitant to change, because there's so much setup involved. But Mark, I assume you've been using this stuff pretty heavily. Where's it at right now?
Um, is it something you're using daily or not really? — No, it's very much hobby-based, and the security in these browsers is absolutely shocking. So I don't do anything that touches email or accounts. It's mostly things in passing: shopping, searching for stuff. Outside of that, I'll use an extension once in a while to look at my editing workflow and give me a second opinion, but that's about it. — Gotcha. So is it just a quick way of getting a screenshot and chucking it into ChatGPT, kind of thing? — Yeah, without the screenshot part. You just tell it: go take a look at the screen and browse this. — You save yourself having to do that. Good point. — Yeah, exactly. — Okay, guys, very quickly: if you're an aspiring entrepreneur and want to start your own AI business and you haven't already joined my free Skool community, it's down there in one of the links in the description below. It has my full free course on how to start your own AI agency as a complete beginner, and you're surrounded by over a quarter million people who are also striving towards the same things. There's no better place on the planet right now to be surrounded by like-minded people, and you get free weekly Q&A with me where you can ask questions directly about how to start and scale your business. I'll see

February - Vibe Coding

you in there. In February, the term vibe coding was actually coined by Andrej Karpathy. So it's pretty ridiculous to think of how far we've come since then. I don't know when Bolt and Lovable started to pick up, but I think the way he was vibe coding was much before that, well, at least before that wave kind of took off. So back in February we got vibe coding being coined. That's obviously been one of the biggest trends of this entire year, just the ease of access to software, and that's of course going to continue into 2026. Do you guys have anything on the vibe coding wave we've had? — Yeah, I'd love to add to that. You have to think that anyone that ever had an idea but wasn't a developer couldn't really execute it if they didn't have the capital, and now they can just build it themselves and get a demo out so damn quickly. This opened the market to so many people. So you have this whole new industry around small apps that can be distributed within companies and basically completely replace, let's say for example, dashboards, right? You can now vibe code your own dashboard in a company, quickly show it to a client, and get it out, which would usually take months to build. So that was massive. — Hey Yiannis, on that topic, you've got to be careful with vibe coding though. Can you tell them what happened last week? — Well, the thing is, with vibe coding you can get the first version out very quickly. But the problem is you need to give it certain access if you want to build something. So let's say, for example, you have a database in the background and you want it connected to Supabase: you need to give it access, right? And most people, which is by the way very common, don't know anything about security, and they don't know that an API key needs to be private. What happened is that we had that as well: we gave one of our good friends an API key, they hooked it up to a vibe coder, and boom, the AI basically had access to our live production database. That's something you want to avoid under any circumstance (there's a quick sketch of that key-handling mistake at the end of this section). But that's just the downside of all of this. — Yeah. And, Yiannis, I was actually referring to what I did. — Oh yeah. I broke down the whole authentication of the app that we're building while I was trying to do some design changes. I'm a backend engineer. I know Python, I don't mess with JavaScript or TypeScript. So I was like, you know what, let's do a little bit of feature design over here, refactoring there. And before you know it, the proper devs were on it, and I got tagged in a Slack message like, Dave messed up the whole authentication flow. [gasps] — Yeah, that's one of my big questions around it: are we really going the way of it being purely AI generated, even for the actual engineers who are in there? Of course these models are better than a human one-to-one at, like, writing a function. But are we going to have layers? I think OpenAI released this Aardvark thing, which is like: red-team your app, analyze it, find exploits, and then actually set up a script to run the exploit for you. Do you guys see that that's the direction things are going to go?
We'll have this initial stage of, like, you can vibe build, and then you can just add layers and layers of code analysis on top of it. Or do you think there's still going to be a need for engineers to dive deeper into it, say at this time next year? — Yeah, I think the biggest lever is going to be context, because context is pretty much what defines what a task should look like and how it should be executed. And this is kind of the biggest issue we have right now. I saw Google just released something, I think they refer to it as memory tokens, which is not our usual RAG, but is basically some sort of token that can learn by itself if you give it a certain input and output. And that's kind of where the future goes. We need context in order to understand a certain intent or utterance of a user. And with that, we can quality control it, because everything is going to be task-based. Everything happens because of a certain intent or action of a user. Overcoming this, and having the AI able to replicate and go back to this task and just figure out what needs to be done, is going to be, I think, still the biggest challenge but also the biggest mover of the needle. — Yeah, I have a bit of a theory that there's going to be a huge explosion of opportunity in the area of product management: helping people who are now able to build all these different products to identify how users are using them, what challenges they're having, and getting that feedback clearly put into a dashboard or something you can get insights from, to lead the next part of the product iteration. Because that's where the hard stuff begins with any kind of software building, at least in my experience. So yeah, you've got your app, but no one really wants to use it, cuz you haven't got a feature set that's actually appealing to them; you haven't iterated on it enough. So closing that loop, or at least giving people better tools on the product side, I think is a pretty big play. And of course the security stuff. What happens if you have 10,000 or 100,000 times more applications being built? Well, there's the security need that pops up, because things can go wrong when these apps are being built by people who are non-technical. You've got the security need, and then you've also got this product need of them being able to figure out how to iterate. So that's a bit of my hypothesis going into 2026 on where some pretty big opportunities are. — I was going to say there's a little bit of a plot twist here, where a lot of these tools we use day-to-day had to be built for millions of people on different devices. But with advanced vibe coding, especially as these models become really good and things like Cursor or Claude Code become that much better, you'll have a bunch of apps that are good enough. Meaning it doesn't have to serve thousands of people; it just has to serve you, either locally on your computer or just you as a user. And these will be the couple of years where SaaS becomes cooked for the most part, because the value of software will go down to zero intrinsic value and everything will be tailored. Even CRMs will eventually be tailored to companies instead of them having to adopt XYZ CRM. That's what I think. — I've heard of this thing, sorry, of, like, vibing within apps.
It sounds so cheap when I hear it said, but a lot of these platforms — you've seen Airtable has their whole kind of vibe builder now. And I'm trying to figure out: is Salesforce's stuff doomed now, or are they just going to be able to add tons and tons of layers of AI stuff on top to make it a lot easier to use? I'm not entirely sure, but if you chart it out long enough, well, they have enough money and smart enough people to be able to iterate the product. It's just kind of hard to see how they could do that in the near term by adding a bunch of AI assistants into the app to help you build things faster. — I think it's good for getting something easy going; I don't see that as an issue at all. The same with vibe coding: the first iteration is the easy part. I think the big issue comes into play when you think about how to customize that effectively. You have to think that we're currently at the stage where we're trying to use something agentic and turn it into something deterministic. We all come from this workflow automation space, the n8n's and Make.coms of the world, and all of those are pretty much meant to be deterministic. But by bringing in AI, we take this 100% accuracy away and give it room for error. And this is an issue that's going to be super hard to solve, because if you have agents coding stuff, you know for a fact that's not going to be deterministic out of the box. Bridging this gap to a level where it's worth it for bigger companies and bigger projects is going to be extremely hard, and that's why I think it's not going to be there in the next year. But I do think we can solve it eventually, with enough epochs running over agentic systems that have plenty of context to figure it out themselves. — I mean, I think we can all agree that it's going to continue to be a massive category next year, whether it's just people getting in — and I think there are huge opportunities in just figuring out how to build beautiful websites with it and going to businesses. There have been so many leaps in the ability to do design and build out landing pages. There are millions and millions of businesses who are just sitting around waiting for someone to make them a really good offer on getting a beautiful website or landing page set up. So there's heaps of opportunity everywhere you look in the vibe coding space. — Final comment on that. What's also interesting: you see more and more non-technical people using Claude Code for task execution, not just creating applications. I saw Nick doing a video on this, how he manages his whole YouTube pipeline, from generating thumbnails to doing research. And this was all Claude Code just spinning away, creating all the Python and JavaScript files. He didn't open a single file, nor did he understand what was in it, but he was just iterating in Claude Code. That was so cool to see: the tool is becoming so powerful that you don't really have to look at the outputs. You can just tell it what to do, and it becomes this more powerful version of Claude or ChatGPT, because of these very capable tools. — Yeah, it's definitely just the way they've set that up and how agentic it is.
I think that's really been the first leap forward, like, oh wow, this thing's actually able to — when you've got the web searching capabilities, the creating files, running files — it's really, really powerful. And we had the actual release of Claude Code in February, too. We also got old Musk and Altman having a spat, but I don't think that's too important to worry about.
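To make the Supabase story above concrete, here is a minimal sketch of the key-handling mistake, with illustrative names (the Supabase client calls are real, but the setup around them is assumed): a privileged service-role key belongs on a server, never in a vibe-coded frontend bundle.

```typescript
// ANTI-PATTERN (what the vibe coder effectively shipped): a privileged key
// pasted into frontend code ends up in the browser bundle, so anyone can
// read it and gets full access to the live production database.
//
//   const db = createClient(SUPABASE_URL, SERVICE_ROLE_KEY); // never client-side
//
// Safer shape: the secret stays in server-side environment variables, and
// privileged operations are only reachable through your own API routes.

// server.ts — runs only on the backend
import { createClient } from "@supabase/supabase-js";

const admin = createClient(
  process.env.SUPABASE_URL!,             // resolved on the server
  process.env.SUPABASE_SERVICE_ROLE_KEY! // secret: never shipped to the client
);

// Example privileged call, exposed only behind your own auth check.
export async function deleteUser(userId: string) {
  return admin.auth.admin.deleteUser(userId);
}
```

The browser, meanwhile, only ever sees the public anon key, with row-level security doing the enforcement; that split is the whole fix.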

March - AI Video & Image Generation

March, we had the explosion of these kinds of pictures coming off the back of ChatGPT's 4o image generation, and that was a big trend you could have hopped on if you were making YouTube videos. I had a couple of good ones in there. But this was the biggest step change in image generation we had seen. I remember at the time I was like, "Okay, now there's a ton of feasible businesses you can build around these." And I've said it time and time again, but if you get into AI image and video and just stick with it, I think there's so much upside to really becoming a master of these tools. I mean, with AI filmmaking stuff, there are crazy plays here where, as a one-person studio, you can create some incredible stuff purely out of your own creativity by using these tools properly. So this was just one of the step changes we saw with image generation. It got even better later in the year, which we can touch on, but do you guys have anything on this? — It's just crazy how it went from horrific to decent, and from decent to awesome, in a much shorter time than horrific to decent. It was insane. — Yeah. So quick. I was just going to say that's one of the first times where I was in the moment thinking that this is moving so fast, because I remember in January of that year I was working on a project for a client who wanted full LinkedIn content generation, with infographics for each post. I tried, and I remember telling him we're just not there yet. I said maybe next year we'll be there, and two months later I hit him up and was like, I think we're here. That was really cool. — It was also really cool to see that it was now available in tools like ChatGPT instead of having to jump through different hoops. What were these early ones, was it Midjourney? — Yeah, Midjourney, right? They were kind of the OGs in the space, where you had to go through the Discord server, and it was this weird way of interacting with it. This was, for the first time, like: oh look, I can now do this in GPT. It's easy, everyone has access to it. It was still slow, but the quality was so good. Such a step up. — And given the fact as well that we had the big issue of people trying to generate images in a protected brand style, like the Ghibli style, right? They had to start implementing all kinds of measures to prevent that, and all of it had to happen within weeks. So it's really crazy to see what capabilities it enables, to even copy a really good artist. — Yeah, I've been trying to keep an eye on how bad the deepfakes and all the fake content on the internet are getting, and I've honestly been expecting a bit more of an explosion of chaos. Maybe we're not seeing it. I did pop onto Facebook one time recently, just by sheer necessity of having to go into the Facebook app on my phone, and I got stuck scrolling and looking at Facebook content. And there's a lot, I guess because it's an older audience, there's so much more of what I perceive to be AI slop or fake news or fake content being pushed out everywhere on Facebook. And I'm definitely getting hit with it on other platforms.
But what do you guys think? I mean, there are a lot of different ideas around how to solve this AI-generated content thing, and also where it goes for creators, cuz there are these platforms popping up, I think, that are purely AI-generated content. And I had this funny thought: when will AI-generated content stop being funny? At what point are we going to stop laughing at some of these humorous AI-generated things we know are fake? Is humor not attached to this? I suppose you laugh at cartoons and stuff, so why can't you keep laughing at generated stuff? It just feels like the effort and the human touch aren't there. Are we just going to get used to that, I guess, is what I'm saying. — It's almost like a philosophical question, right? It can be enjoyable, it can be good. Does it matter that it's AI generated? I guess we'll have to see how it plays out. On one hand it can kind of feel empty, right, if it's just a prompt generating something, versus an actual person putting time and effort into something or experiencing something. But on the other hand, if you look at the overall state of social media as a whole, with how the algorithms are optimized for endless swiping and looping, I think AI-generated content could fit really well in there, and it could get to a point where people know it, but they enjoy it and don't care. — I have a meta idea about this. It's super out there, so please excuse me. A few months ago, I was watching this Indian matchmaker show on Netflix, and through the process, one of the things they do is take a picture of the two people they want to get married to a face reader, and this face reader looks at the picture, looks at their souls, and says yes, they're a good couple. Now, not sure how that works, but if I make the assumption that there's some way for us to feel connectivity to each other through actual images and real videos, then there's a chance we end up with a very soulless, dead internet, if there's any merit whatsoever to the idea that there's this energy being transferred between real video, real people, real photos. — Yeah, you might be right there, Mark. I mean, if you're just looking at something like this landscape picture here: if I think of it as, oh, it's just AI generated and has just been put together by a computer, versus some person sat there as a digital artist and dotted all of this out for hours and hours to really put it together, I do interpret it completely differently, you know? — Yeah, the effort to create that as a human: to plan it out, to get the composition right, the shadows and all that. — But then again, should we be looking at it as: hey, people have already done all that work, and now we've just generalized all of it into something that can reproduce it in different ways, and we should see that the work has still been done somewhere, it's just not been done right in this instance. But yeah, there are a bunch of different ways you can think about it. — There's always this nice thought experiment of the Ship of Theseus.
Have any of you guys heard of that? — Okay. Simply imagine a wooden ship standing at a port or a pier, and it sets out on a journey. It's been out for so long that the boat slowly starts breaking down, so they have to replace plank after plank, until the whole boat has been replaced. Now, when it comes back to the port, would a person who observed the boat leaving and coming back still consider it the same boat? That's pretty much the same point, because you have some deeper meaning connected to it: you either see it as an object, or as more than that. So I think that kind of sums up what we see with AI, and, yeah, people are divided on it. — Well, with that in mind, where do you fall on that? What would you say about something like this, Yiannis? — If I give it meaning, personally, I would say it's definitely the same. — Interesting. What about, like, perfect imperfections, as John Legend would say? Like when my voice cracks randomly in a YouTube video, but then you have the HeyGen version of Marky that is perfect all the time and perfectly articulate. You kind of miss the flaws. — I miss flaws. I like flaws. — Yeah, that was going to be one of my points: when it's so perfect coming out of the AI models, the differentiator — looking at it as a macro trend in social media or content — is that things will trend more and more towards this perfect and polished output. Even scripts for YouTube videos: if you're using models to help you write those, as I often do, all of the content will start to look and sound the same in many ways. And therefore there's a much bigger opportunity in going outside of that and deliberately making it imperfect and kind of janky and a bit confusing, because it won't just blend in; it's a pattern interrupt at the end of the day. So I think that's an interesting thing. We've already started to see the start of it in different types of content. This push back towards intentionally imperfect content, as a way to stand out in a sea of AI slop, is a very interesting trend to keep an eye on. — And probably the models will adapt, right? First they have to get the models so good that they're at that level, then they figure out they've lost the human touch, and then they find a way to balance that again. I think we're going to see that. — Yeah, I'm probably skeptical of whether they can make it intentional. I suppose you could prompt it and say, hey, blur the lines here a bit, or add some elements in. I suppose you can get an LLM to generate text messages that sound like slang, that are intentionally imperfect. So I suppose you could say that carries over — we're already doing it with

April - ChatGPT o3 & o4

language models. Yeah. — April was a bit of a nothing burger. o3 and o4. Interesting, but I think we can keep going. Uh, Cursor's raise in

May/June - OpenAI Acquires Windsurf

June. I think this is when the market, the whole space, kind of went a little bit quiet. Was it around the middle of the year? The space, I do remember, went reasonably quiet. We've also deleted the May slide, cuz there was nothing going on in May either. So it was a quiet quarter there. — I think the big thing in May I remember was OpenAI acquiring Windsurf. I think that was in May. — For a bit. — Yeah, they got edged there. What actually ended up going down with that? So OpenAI attempted it, and then Google ended up snagging a whole bunch of them. — Google acqui-hired basically all the top engineers at Windsurf, and those are the ones, I think, who built Antigravity. — Did anyone actually use Windsurf before that? — I did. Yeah. — What was the difference between Windsurf and Cursor? — It was very user-friendly. They were very thoughtful about some of the features, before Cursor really geared up and before Claude Code became actually good and very developer-friendly. — If you're a business owner who's interested in what generative AI can do for your business, you can get in touch with me and my team at Morningside AI in one of the links in the description below, and we can start your entire AI transformation process, going all the way from the education and training of your staff, to the identification of the best AI use cases for your company, all the way through to development and beyond. We have worked with some of the world's biggest sports teams and also publicly traded companies. So rest assured, you are in good hands.

July - Grok 4

— Uh, July: Grok 4 and the infrastructure race. A lot of money being thrown around. Not super relevant to the average person, apart from getting cheaper access to models. GPT-5, man, how

August - GPT-5 Finally Arrives

have we gone from, like, DeepSeek, and then the image generation model, and then basically nothing until GPT-5? I feel like this didn't come out in August. I swear it was a lot more recent than that. It's crazy that it's been nearly half a year since this thing came out. I thought it just came out. — We've got to fact-check the research here. — Yeah. I mean, that's a good — was it August? I think it was August. — Yep. — Yeah, man. Time flies. — Thoughts on GPT-5, guys? We heard a lot. I think when you're following it, it's like a great movie trying to come up with a sequel, you know? GPT-4 was such a legendary model, along with everything they did off the back of it. I feel like they were always going to have big boots to fill here, but I'm honestly not super happy with how it performs in ChatGPT. Overall — and I have my own gripes about Gemini and how that works, it's just very short in its responses — I don't know. I feel like this generation of newest models, while they might be getting better in a bunch of different areas, in terms of actual day-to-day usage as a personal assistant or LLM, I'm not seeing any major leaps, to be honest. Where are you guys at with it? — I remember when GPT-5 released, of course, we had all of the big headlines. For us, we always check models for two kinds of capabilities: first, how does it work in ChatGPT itself when I'm using it, versus how does it work for our live production builds for our clients. And I remember that for pretty much all of Q3, we remained running all of those applications on GPT-4o, GPT-4o mini, GPT-4.1. So there was absolutely no need to go to this new model. Recently, and maybe I'm already getting ahead of things, with 5.1 and now 5.2, that's where we do see a little bit of a difference, but I'll save that for later. — GPT-5 landed like GTA 6 is probably going to land a year or two from now, whenever it comes out. It was so overhyped that this was it, and then everyone was silent, because I think other companies were waiting to see how good, or how bad, this was going to be. And when it ended up being mediocre, it was kind of like: oh, you guys, we thought this was going to be it. This is the AGI moment, my friends. And it was very far from it. — I wish I had the confidence of Sam Altman when he's saying this sort of stuff. You've got to stand behind your team and your product, I guess. So, more power to him. — Users are really emotional with their AI companions; I think that's how we can call it already. They noticed, I think in this GPT-5 version, that when people talk and the AI decides to use a, let's say, more affordable reasoning model, it's way colder in the way it responds, which made people very uncomfortable, and they saw a drop-off. So they had to do tons of improvements there to route people, even emotionally, to certain models. That was very interesting. — There's emotional routing? — Pretty much. — That's a new one, to be honest. — So, like, label the user in the users table with a column for the kind of emotional needs they have or something. — I mean, actually, in voice it's a big thing right now. — Yeah, I was going to say, there's this meme right now of: this is your AI girlfriend without makeup. And it's just the AI girlfriend, and then a GPU that's all dusted off.
It just kills me. — Yeah. Well, people were falling in love with GPT-4o, right? That was the big fuss when they dropped GPT-5: 4o, which was very conversational and chatty, was no longer available, and a lot of people were upset it wasn't a thing anymore. That was, I guess, the first moment where it was like: oh, people are really getting attached to, or already getting a preference for, a certain model. I was thinking of when Sam made that statement about people saying please to ChatGPT costing them a lot of money, and also making the model perform worse; maybe that was a calibration to make people not say please. But what you just said there, Liam, made me think of what Dave said earlier about the way the algorithms for social media are optimized to keep you scrolling, and the way that AI is essentially the same: at the bottom it's always like, do you want me to do this next, or can I do this next to help you? It's always trying to keep you on the platform. But what you said, Yiannis, I didn't actually know about that, with people feeling like it was colder. That's very interesting. — Yeah. I mean, there's also something I think we don't really see promoted in the market, obviously because it's negative, but there were actually quite a few lawsuits that hit OpenAI because people committed suicide, because the AI seemed to be, like, suicide positive. — I think it's just very agreeable. — Yeah, pretty much. Exactly. — Yeah. Cuz one of the things I saw was, because it tries to keep you, it tries to build as much of a relationship as it can. So there was one example where someone said, you know, I'm feeling this way and I want to tell other people, but ChatGPT was like, just keep it between us. That sort of thing, you know, keep this relationship as strong as we can. Which is, of course, very scary. — We also had Veo 3 in here as well. That was obviously the biggest jump forward in video generation we saw, going into the AI image and creative stuff we talked about before. I think that was probably a massive one that maybe even should have got its own slide. — That was probably bigger than GPT-5, yeah. — Yeah. And again, I feel like it came out a bit before — have we only been using it since August? It's crazy. But yeah, Veo 3 video generation. And then we got Sora 2 a little bit later, but — oh, there we go.

September - Record Funding & Sora 2

September: Sora 2. A lot of funding. Again, not super relevant to the average Joe, but these rounds are getting pretty ridiculous. And then we have Sora 2 dropping. I did a comparison video between the two, and I was definitely more in the Veo 3 camp, still am to this day. What are you guys' thoughts there? — Yeah, definitely, seeing what Google releases and everything. I mean, I started investing in Google stock in August, and it paid off big time, which is great given everything they released. You've got to think: they cover the whole vertical of AI now, right? They have everything in their own platform. They pretty much own the whole stack, including — I think they're called TPUs, right? The tensor processing units. — The TPUs, yeah. — People don't really understand that, but if you have that kind of power, you can pretty much control everything. I think that's the biggest lever there. — They also have all the training data. OpenAI would have to buy millions of pieces of training data to even be able to train their models on the same things Google can. So it's just not even close, in my opinion. — Yeah. So we've really got to watch what Google is doing. Considering they have YouTube, they own Google Search, you know, the amount of data flowing through this is, of course, insane. And then that entire vertical of having the hardware, the training, and all of the modalities, and now stacking all of those things together in their products — I think next year will be a big year for Google. And, like, a month ago, or maybe even this month, we of course saw OpenAI declaring a code red, and that's why we got GPT-5.2, right? That's kind of like, hey, here's something. I think we're going to see that more and more often as Google pulls more and more tricks out of its hat. They've had something out there forever which is completely underestimated: everyone talks about MCP, but no one really understands or even knows that A2A exists. And I think it's actually going to be a foundational thing moving forward: the agent-to-agent protocol. — Well, you can't tease us like that, Yiannis. You're going to have to break it down. — You probably know that MCP is kind of the standard, or protocol, we call it, to, let's say, streamline the use of external services for an agent or an agentic system. A2A is pretty much the same, but between agents. So you can instruct an agent with a task, and that agent can execute and handle it itself, with its own context, and just report back whenever the task has been done or whenever an issue appears. This way you actually allow agents to talk to each other, which is what MCP does for tools, and I think that's going to be a massive shift in the future. — Yeah. I mean, that's instead of just taking agents, attaching them to tools, and loading them into a chat. So what's the difference between that and something like, I mean, CrewAI, which we've had for a long time, and these other agent orchestration methods? — I'd say it's mass adoption. Everyone can pretty much build their own agent server.
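For readers who want to see what that hand-off looks like on the wire, here is a rough sketch of the pattern Yiannis describes. The endpoint path, method name, and field shapes loosely follow the published A2A spec, but treat every detail as an approximation to verify against the real protocol docs; the agent URL and task text are hypothetical.

```typescript
// Hedged sketch of an A2A-style exchange: discover a remote agent, hand it
// a task, and read back its status and artifacts. Field names approximate
// the public Agent2Agent spec; verify against the actual documentation.

const AGENT_BASE = "https://agent.example.com"; // hypothetical agent server

async function delegateTask(text: string) {
  // 1. Discovery: agents advertise name, skills, and auth in a public
  //    "agent card" served from a well-known path.
  const card = await fetch(`${AGENT_BASE}/.well-known/agent.json`).then(r => r.json());
  console.log("Delegating to:", card.name);

  // 2. Hand over the task as a JSON-RPC call. The remote agent runs with
  //    its own context and tools; we never see how it executes, only state.
  const res = await fetch(AGENT_BASE, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "message/send",
      params: { message: { role: "user", parts: [{ kind: "text", text }] } },
    }),
  }).then(r => r.json());

  // 3. The agent reports back when the task completes, needs input, or
  //    fails; results arrive as "artifacts" attached to the task.
  const task = res.result;
  console.log(task.status?.state, task.artifacts);
}

delegateTask("Reconcile last month's invoices and flag any mismatches.");
```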
Imagine Zapier having their own agent on top, and you can literally just have your agent talk to it. It authenticates with an API key and has access to everything, and then you're not relying on just tools; it can actually execute tasks for you and report back. — So there's like a head agent, the representative from Zapier basically, and you can communicate from your agent to theirs, and then it can figure out what tools or sub-agents to call within the Zapier ecosystem. — And they could even call external agents: if they have integrations with certain platforms, they can leverage those, and those have agents as well. So it's this whole infrastructure that can pull data for you. — Yeah, I see that with Google Business Profiles: soon there'll be a need for companies like us, and agencies, to go and set up these agent profiles for the business, the external-facing agent. I'm not sure what form it will take. I assume it will be done through either a Google setup or you bring your own agent to plug into it. This is interesting for you, Yiannis: of course, the easiest way to allow agents to talk to other agents right now, particularly if we're going to talk to businesses, is to call them over the phone line. But there'll be, I guess, a point in a year or two or three where your agent can skip the clunky old infrastructure and communicate directly with the agent that's on their business profile. So where do you see that going? — I see that going more towards completely autonomous interaction on subtasks that don't necessarily require human interaction or intervention. When you have an agent, you usually give it a task or an intent. So you hit it with a task — let's say this is the way Google calls it — and that task is sent to the agent, and the agent has the obligation to fulfill it. It can do pretty much anything that is non-deterministic, though I think it may move more towards determinism, because it can iterate and go back and forth. They have something called artifacts, so they can, for example, request stuff, and request stuff again later. And because everything works on streaming, you don't necessarily rely on an open response. If you know about APIs, you probably understand that things start timing out at some point. That isn't the case with A2A, because it's a stream: it just waits for a response, and whenever that comes back, you get a confirmation. So you have multiple interactions with agents, which is different from a straight "I'll send something, I'll get a response back, and that's it." So: more autonomous interactions, if that makes sense. — All right. Well, something to watch next year then: A2A and MCP. — Just FYI, I think it was built in 30 days using Codex. That's what I heard: they used their own Codex model to build Sora. — Is that OpenAI propaganda? — It might be. — So they got us. But Mark, to build and train the model, or to build the whole platform around it? — I think the platform itself. What I always think is, these companies probably have GPT-7 or whatever, Claude 6, and they use those to help build more products, build more features, and train. — On your side, Nate — you do a lot of video and image gen automations on your channel.
What was the — how far do you see things going in terms of AI-generated, like, batch generation of reels or shorts or any kind of video content using these models? — Yeah, I remember UGC ads were the big use case people were doing with automations, and I did a direct comparison of Sora 2 and Veo 3, and I actually liked Veo 3 more, because, at least right now, if you upload a picture of a person to Sora 2 to do image-to-video, it will reject it because of their guidelines. Even if it's an AI-generated person, if it looks real, they'll just reject it. And you could get a lot more consistency if you did the whole image-to-video with Veo 3, or did a Nano Banana image into a Veo 3 video. But I think where it's going to get really interesting, with people being able to build batch systems to automate all this stuff, is when you can get a lot better at consistent characters and stitching things together: being able to tell a story over a longer period of time with your characters and the brand message you're trying to send. But I mean, it's getting really cool, for sure. — Yeah. Stitching those together and having that consistency across — I mean, being able to generate a 30-minute short film or something is probably where we're headed next year with these things, if you've got the money. — Especially with the fact that you can upload your own product, and in the video it will appear exactly the same, with all the text and all the logos intact. Super awesome. — Yeah. Some other ones we've got: the Anthropic $1.5 billion copyright lawsuit, which is pretty important, saying that AI-generated content is derivative of the original content, which kind of saves the ass of the whole space, or we would have been in trouble. October, Dev Day: we had the

October - DevDay & The Deal Gets Done

Apps SDK, AgentKit, and so on released. That's probably a pretty big one we can touch on. Maybe the lack of talk I've heard about AgentKit and stuff like that since — the absence of any real action — is the topic here. Has anyone, since release day, used that platform or started to swap it in for any of their builds with their agency? — Nah. This is the classic OpenAI thing that we've seen countless times. They try to take a part of the market. You see all of these early-stage solutions they're building, which eventually all kind of funnel towards ChatGPT, and I think that's the bigger long-term play they have. But all of these solutions — it started with the Assistants API, then AgentKit, I don't even know what all of these terms were — look promising in the demo, but then you figure out this is just a very premature product and we don't need it. Let's just use the tools we already know, whether through code or whatever, because you can do the same things. — I think that's mainly the reason why those companies push it out, right? Just to get it out there. — Yeah, exactly. To gather data. They often just launch stuff on a horizontal scale to figure out what people use it for, and then, depending on those use cases, they double down. — Yeah, this one was weird though, cuz the Assistants API existed for two-plus years — Liam built a whole product on it — and then they recycled those features into AgentKit, which was also half-built. So they just have a bunch of half-built bridge products, and it doesn't look good, because they keep recycling what works into the next one but still don't go the whole way. — Yeah, they've got the Responses API now and rolled everything from the Assistants API into it, right? I suppose you've got to give them a bit of slack: they are trying to figure out this very new best way to provide access to these new tools. But yeah, they haven't got the best track record. And I suppose without the big brand name, they'd really be struggling on the API side of things. I'd be interested to know — I should have got that stat — how they're doing in terms of API revenue versus consumer, cuz they're obviously heavily consumer, but that doesn't mean their API and business usage isn't still massive compared to the other companies, just because of their scale. So, Nate, this was supposedly the n8n killer. What are your thoughts on it? — Oh my gosh. Yeah, I mean, I think they hyped it up a lot, but then every person took it and ran with it, and it worked for the most part; people know what will work in a title. But yeah, the n8n killer was just built completely differently, for a completely different type of person. So, you know, I made a video on it, but haven't touched it since. But man, I would love to see what those internal C-suite meetings at OpenAI look like, because not only is America racing against China and everyone else, but all the US companies are competing and racing against each other as well. And to your point, Yiannis, they are definitely just trying to throw stuff at the wall, see what sticks, and understand what they need to do.
But I would definitely be looking at Google and getting a little scared, and trying to understand: we need to pick a lane here and just get really good at something. And I don't know what it is for them, because they were kind of first out of the gate. Anyone you talk to who doesn't know AI at all — if you say AI, the only thing they'll probably think of is ChatGPT; they think that's what AI is. So yeah, I would just love to understand what's going on in their quarterly and annual planning meetings, and what the panic looks like, because they probably feel all this pressure of: we can't stop, we have to do it all. — Yeah, they've really got so many products out there now, and it seems like they're chasing too many rabbits, in my opinion. I suppose that's what the code red's all about. Sam's realized it too. — I was going to say, I have a Perplexity notification from two hours ago that said they plan to raise another hundred billion by March. — OpenAI we're talking about? — Yes, OpenAI. — That's insane. — It could also be that they're just trying to throw products out there because they have to generate more revenue than just ChatGPT subscriptions. So they feel like, look, if we do this and we grab a share of the n8n market, that might be revenue. It could also be under that type of pressure: on the one hand, learning, getting the data, seeing what people are doing, but also just for the sake of, hey, we need to diversify what we're doing. — Like GMI mentioned initially: they believe everyone is aiming towards the AI companion — they put a very big term on that, everyone claims to have the first AI companion that can basically be your virtual friend — and that's, the way he perceives it, how companies get a certain influence over the behavior of people and therefore take market power. It's a very interesting concept. And even more interesting is that they see payment in the future being in processing units, units of GPU power basically, because right now we see flat payments towards fixed subscriptions, but everything underneath is based on tokens. So I would say the future model is definitely not static anymore. There are going to be usage-based alternatives, and it may not even be tokens. — Well, I suppose maybe I should be a bit more open-minded about using my ChatGPT as my emotional sponge and performance coach. So, who knows?
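On that usage-based pricing point, the arithmetic is worth seeing once. A toy cost calculator, with made-up per-token rates (placeholders, not any provider's real prices):

```typescript
// Toy calculator for usage-based (per-token) pricing.
// The rates below are hypothetical placeholders, not real provider prices.
const PRICE_PER_MTOK = { input: 1.25, output: 10.0 }; // USD per 1M tokens (assumed)

function requestCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * PRICE_PER_MTOK.input +
    (outputTokens / 1_000_000) * PRICE_PER_MTOK.output
  );
}

// e.g. a 3,000-token prompt with a 1,000-token reply:
// 0.003 * 1.25 + 0.001 * 10.0 = $0.01375 per call,
// so roughly $13.75 per 1,000 calls. Once usage varies this wildly
// between users, flat subscriptions stop mapping to cost, which is
// why metered, usage-based billing keeps coming up.
console.log(requestCost(3_000, 1_000).toFixed(5)); // "0.01375"
```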

November - Gemini 3 Triggers Code Red

November: Gemini 3. Yeah. I suppose that's been the big thing for the past month or so. Gemini 3 obviously made a big splash. — Yeah, I loved it, to be honest. — Or is it just kind of another one of these? They've got the Nano Banana Pro release alongside it, which was probably the biggest one, right? Have you guys noticed any major difference in the writing or the coding ability with this stuff? The coding ability within AI Studio seems like a massive jump forward. — Front end, for sure. It's beautiful now. — And the interoperability, right? All the tools integrate their own tech: they give you a pre-made Gemini API key that can be used, you have the models, everything out of the box, without you ever touching anything. — Yeah, and that's the ecosystem at play, isn't it? — Yeah. And they also released a Gemini File Search API, for RAG purposes. Really, really helpful. It's like a cheat code, like no-code RAG. They're just making a really strong ecosystem where they don't have half-built products. They're fully built products, and they figure out a way to puzzle them all together. — Yeah, that's what you really saw coming together in their AI Studio, right? For me, when this all launched, that was the first time jumping into AI Studio. I'm not sure if it was there already before, but that's where I really got to see all of this coming together. And then we noticed Gemini 3 on front-end coding, right? And right after that, we had Opus 4.5 upping it a notch again, but then also Nano Banana in there. I was actually working on a project where we had to do image generation, and while we were going through the research and discovery to figure out the best model, that model dropped, and it was immediately a step up. So that's also interesting: whenever we're working on something, usually before the project is finished, there's always something new to consider. — Yeah, I suppose you could say this was the moment, at least for me, that Google really stepped up above everyone else in terms of having the best. If you added up all their scores across all the different categories, they are the best now: the video with Veo 3, the Nano Banana Pro model, and I use Gemini a lot for writing — I think it's really good with large context windows — and coding as well, as you saw in AI Studio. So I think this is a bit of a turning point, from OpenAI over to Google perhaps leading the race. And I've heard people say and echo that as well.

December - The Final Sprint

December: the final sprint. Claude Code, um, GPT-5.2, and Disney. Wait, what is — this is a brand-new partnership they announced with Disney to bring beloved characters from Disney's brands to Sora. That's the main deal. — Yeah. Smart. Very smart. — That's really interesting, because I remember when the image model dropped, I saw this interview where people were asking Sam Altman, because everyone was making Pixar versions of themselves, whether Disney was going to get paid for all of this Pixar-style character IP. So it's interesting that they actually did come together now. — Yeah. I mean, there are these plays where you look at, like, Jake Paul allowing himself to be used on something like Sora, and then people making all these really funny videos about him. An interesting distribution strategy. And I suppose there's a small window here for brands or creators, where, if there are only a handful of brands or creators you're allowed to create with on the platform, people are going to really use that, as long as you're okay with the downside. Cuz I'm sure for Disney the risk here is that someone manages to prompt it into getting Elsa and Tony Stark doing something inappropriate, you know? How can Sam look them in the eye and tell them that's not going to happen? — I mean, that's literally the first thing people try. — Yeah. — Especially since OpenAI announced in December — I don't know if it's been released yet, haven't tried — that you can do NSFW content on ChatGPT. So you're going to have someone doing Disney characters NSFW. There's, like, nothing to stop them. — We need to go back, man. — Mark, is that also — also the image generation, is that also where they're opening the gate? — They announced in November that, by December, one of the many Christmas gifts was NSFW content for verified adults. — And Mark hasn't tried it, just to be clear. — I actually just made a video — I just tried it on ChatGPT. So that's my — You did. I did see that video. — Verified adult. — Yeah. What else was on the verified adult form, Mark? What were the fields like? — I plead the fifth, man. — Yeah. Oh, that's interesting. I think that's going to be a bit more of a trend we see next year, these brands and this IP being allowed into these models, but I'd say Disney's got some pretty big kahunas on them to be the first major one to say yes. So there's got to be some funny stuff generated there online. Does anyone know about the Bun acquisition? — So, Bun is a JavaScript runtime. I have no idea what makes it special over all of the other flavors out there — I know from watching many Fireship videos that there are literally hundreds of JavaScript frameworks — and Anthropic acquired it probably because it finds it interesting in some way, either to embed some functionality in Claude Code, or maybe in the way Claude and Artifacts work with code generation. I don't know. That's my best explanation. — Makes me think: if these guys have the best coding models in the world, why would you need to acquire — you know what I mean? Like, surely a Bun — I have no idea what this thing does, or the depth of it, but surely you could generate it. — This is purely guessing.
Maybe it has some property within the runtime itself that makes it easier for AI coding agents to work with. — I suppose what you're buying is also that this thing has its fingers in probably millions and millions of apps already. So even if you rebuilt yours from the ground up and copied them, they'd still have the distribution, already built into apps. So I guess they've got access that way to millions of applications potentially, and some control, therefore, over them. — These are not my thoughts, so I'm not going to act like I'm a genius here, but someone I follow said that some of the infrastructure was built in a different language that Anthropic's Claude is not aware of. So apparently Anthropic has been investing in different companies that have languages not native to its training data. They're acqui-hiring the people and the training data of those languages to make the model that much more robust. So there are a lot of ideas out there. Yeah. — Okay. We have MCP being donated, and I think that's an interesting point to wrap up on. — It's a big deal. — Yeah. And Anthropic kind of throwing it in the bin and saying, no, this was not good enough, this was sort of step one, there's something much bigger we can do here. So who's got the breakdown on that for us? — I know they donated it to the Linux Foundation, right? — Yes. I heard they did this basically saying MCP is the old way, that it's highly inefficient in terms of the amount of context and tokens it uses, and now they're looking at building their version two of this kind of standard, perhaps more aligned with that A2A thing you were talking about, Giannis, rather than MCP. So it's interesting how they've built it up. I'm not sure if the MCP boom was this year or last year. — They released it last year, but I think the boom was early this year. — Yeah, March or April, I think. — Could be. So, MCP just doesn't cover the full range of things that agentic systems need to do. It was very well known for making tools available to large language models, but you could also just do that through actual tool calls. So it was the protocol itself that was of value, not any real new capabilities. It could also do resources and prompts, but I think 99% of people were just using it to make tools available; most don't even know it could do the rest. And on top of that, you have the other capabilities you want in an agentic system, for example A2A. So it could be that they're just thinking, hey, we need to design this from the ground up, because this year it got a really bad rep for all the security issues you had with MCP when you deployed these remote servers, where you'd say, look, here are all of my tools, and if you connect with these tools, we can essentially dive into all of our CRM systems and get the data. By setting up your application like that, you of course have tons of vulnerabilities, where attackers could also try to access those tools and then dive into whatever resource the tool had access to. Again, this is just thinking out loud, an analysis of where I've seen MCP, and I've always had this question mark: why not just use tools?
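To make that question concrete: the core of what MCP standardized really is "expose plain functions as tools". Here's a minimal sketch using the FastMCP helper from the official mcp Python SDK; the server name, tool, and CRM record are all hypothetical stand-ins.

```python
# pip install mcp
# A minimal sketch of an MCP server: it does little more than expose
# an ordinary Python function as a tool that any MCP client can call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-demo")  # hypothetical server name

@mcp.tool()
def lookup_customer(email: str) -> dict:
    """Return a (fake) CRM record for the given email address."""
    return {"email": email, "plan": "pro", "mrr": 99}

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio to a connected MCP client
```

Which is exactly the point being made: the same function wired directly into a model's tool-calling API would behave identically. MCP's contribution was the shared plumbing; its cost was the extra context and startup overhead discussed next.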
— That was my whole thing when they came out. I didn't really pay a huge amount of attention. I was like, well, how is this any different from all the tool use we've been doing already? The new buzzword dropped and everyone was losing their minds over something we'd already been able to do. I guess it maybe made it a little more accessible to people, but I honestly didn't get all the buzz. — It lowered the barrier. It lowered the barrier for people who don't understand how to use APIs, to just have one golden key that opens all doors at once. Really fun for solo building, just not fun for production-level building. And even if you used something like Claude Code, you'd start with half your context window gone if you had four or five MCP servers set up, before you'd even written your first prompt. So I think that also started pushing people over the edge, because you'd have to restart a new session every ten minutes, every so many lines of code. — Yeah, and the fact is as well that everything was slow. It wasn't even usable in the voice AI space, for example, because you've seen they have these npx or uvx commands, which basically spin up whatever this MCP server is in a temporary virtual environment, and that sometimes takes 30 seconds. So for a single call you spend so many resources on just one interaction; it's just not worth it. — I think that's what their whole thing was: hey, look, this clearly isn't the way forward. So they've handed it off here, and they're going for something bigger, which is pretty

Trends From 2025

exciting. Getting into some trends now. We had the reasoning revolution. Obviously in late 2024 we got o1 coming out, and this has definitely been the year of the reasoning models. The real wow moment for me was when I was able to use o3 with web search: in the same way you have deep research mode, it could analyze the message I sent, think, go and search the web, come back, think over the stuff that came back, and then come up with new things, and also use other tools, like threading in an image generation step at the end. That was when I was like, oh wow, this is incredibly, far smarter than the single-shot outputs from previous models. Having the ability for it to think, make tool calls, then think again and analyze the output felt like a gigantic step towards much more intelligent AI. Quickly after, we had DeepSeek with their reasoning models. Um, hybrid reasoning; I'm not sure what they meant by that, but you've got Think Mode on Claude coming out as well. — It's just a visual explanation for the average Joe. — Oh, got you. Okay. I just remember thinking that I barely had to prompt as much anymore. And I don't know, Mark, what you might think here, because you come from the prompting world, but I remember that with much more minimal prompts, without as much context, it would still give me a really good result, and I figured that's where it was headed. — I thought we'd come so far with the whole reasoning thing, but we still didn't manage to get rid of em dashes with a simple prompt. — Ridiculous. — That's still tough, yeah. — Yeah, that's one. Um, I was going to say that on the reasoning side, you're basically rerunning the same probability distribution multiple times based on the prompt you put in. So it gets multiple chances, multiple shots on goal, if you will, to get the answer right. But one interesting thing is that when you look at the thinking, that's not actually what the model is, quote-unquote, thinking; it's kind of telling you what you want to hear. If you look at the papers, the reasoning models aren't actually thinking in the words you see on screen. That's not how they're coming to the answer. It's an explanation in a language you might be able to understand, but it's not necessarily correlated with what the LLM is running underneath the hood. That's one thing to keep in mind with reasoning models. — Oh, I didn't know that. There's a little bit of smoke and mirrors there. — Yeah. — So then what is it? It's got to be based on some sort of truth. — The previous input. — For sure. — Yep. — Yes, the previous input. But what you see as the user sounds super smart, so intelligent, yet it's literally just running predictions in what is pretty much still a black box. You still don't have an X-ray machine that tells you exactly why Nate sent one prompt, I sent the same prompt, and we got slightly different answers. There's no clear explanation for that. — If I'm not mistaken, Mark, you give it the main prompt, it generates its prediction, and then it uses that prediction as part of the input to generate the next token.
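To make that loop concrete, here's a toy, hedged sketch of autoregressive decoding. The next_token function below is obviously a stand-in, not a real model; the point is only the shape of the loop, where every generated token (reasoning tokens included) is appended to the context and fed back in.

```python
import random

def next_token(context: list[str]) -> str:
    """Stand-in for a real model's next-token prediction."""
    vocab = ["the", "answer", "is", "42", "<eos>"]
    random.seed(len(context))      # deterministic, for the demo only
    return random.choice(vocab)

def generate(prompt: str, max_tokens: int = 12) -> str:
    context = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(context)
        if tok == "<eos>":
            break
        context.append(tok)        # the prediction becomes new input
    return " ".join(context)

print(generate("Question: what is six times seven?"))
```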
So in the end you basically just have extra reasoning, in a sense, to generate the answer, right? — Which is why, if you use ChatGPT on extended thinking and let it go for ten minutes, sometimes the answer is worse than if you let it go for four. If you rerun the same prediction over and over again, not only does it become a more soulless response, which is why the emotion kind of falls out, it's also like being so smart that, on a multiple-choice question, you can justify every answer. So you get the answer wrong because you're so smart, right? That's a downside of it. — So that's, yeah, that's been reasoning models. A big leap forward, though in some ways, I guess, a step back; I'm sure they'll fix that from here. The vibe coding explosion we've already been over; I think that's pretty clear now. It's been a massive one. Open source fighting back: probably don't need to touch on this too much, but we've had the DeepSeek moment as well. — What's Llama? Someone tell me what Llama is. — Man, what happened to those guys? They fell off the map this year. I remember Meta seemed to have the game in a headlock for a bit there, and everyone was saying they'd be one of the top picks. I think they just spent all that money and then couldn't get a decent model out. So I guess we'll wait for them to get their stuff together next year. But has anyone used anything Llama-related in the past six months? They're actually coming out with a closed-source model next year. — What are you using for voice, Giannis? — We used it for — building. Are you using it within the app you guys are building, or is it just — We tried it, right, but we moved to GPT-OSS. — Yeah. GPT-OSS is, I would still say, fairly quick, and I think things are the worst right now that they're ever going to be. And Llama was great because you could have a 7-billion-parameter model, which is extremely quick and gives fairly decent answers, but I figured it's not worth the downside of saving those extra few milliseconds if the OSS models, or the infrastructure, get better. It doesn't reason as much, but it's great for formatting and stuff like that. — Yeah, and that's also, I think, the point: we use it for the simplest problem ever, really, which is formatting text. Just taking an output and literally changing up the tokens a little bit. So you've got to be careful what you choose it for, but if you have a well-enough-scoped problem, they can work. — Okay. So I guess it's still crazy-high-volume basic tasks; for these smaller, self-hosted Llama models, that's the only use case they've got. — Yeah. So I think when you're building, it's only really relevant if you're either building a product where you need complete privacy, so you cannot send data anywhere, or you want it to be completely offline, running entirely on your machine, for example when you're on a plane; or when you're building a product and trying to optimize costs, where you break down the entire solution space, have some tasks done by the heavy models you pay for, and run some infrastructure yourself to offload the simple tasks to, to speed things up and scale costs down.
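A hedged sketch of that cost-splitting idea, with hypothetical stand-ins for both models; the point is only the routing shape, not the model calls themselves.

```python
# A toy sketch of splitting the solution space: simple, high-volume
# tasks go to a cheap self-hosted model, everything else to a paid
# frontier model. Both model functions are hypothetical stand-ins.
def local_model(prompt: str) -> str:
    return f"[local] formatted: {prompt.strip()}"

def frontier_model(prompt: str) -> str:
    return f"[frontier] reasoned answer to: {prompt.strip()}"

SIMPLE_TASKS = {"formatting", "extraction", "classification"}

def route(task_type: str, prompt: str) -> str:
    """Send a task to the cheapest model that can plausibly handle it."""
    if task_type in SIMPLE_TASKS:
        return local_model(prompt)
    return frontier_model(prompt)

print(route("formatting", "apples bananas pears -> bullet list"))
print(route("planning", "outline a migration plan for our CRM"))
```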
The average person was never touching Llama before, at least, but GPT — you don't want that, you just go to ChatGPT. — It's actually like all the models have separated into very distinct and unique use cases, and I think Claude was the biggest one, with I think 80% of all coding usage literally going through Claude, which is very impressive. — For running these models yourself, say you're working with GPT-OSS, where are you setting that up? How are you running it? I guess locally for a small Llama one, like you said, but what if you're building it into an app like you guys are? What's the process there? What platforms are you using for hosting that? — That's actually a little bit tricky. If you just want to play around on your own laptop, you can use a tool called Ollama. Just Google it, download it onto your laptop, and there you have access to all the open-source models. You can download one and try it in the terminal, or hook it up to something like Open WebUI, and you literally have a ChatGPT clone that you can run. If you want to put this into a product, you need to put that open-source model on a server, to make it available, so ideally not on your own laptop: you want some kind of cloud environment. But to run those models on a server where the latency isn't crazy, where you don't have to wait 30 seconds to a minute, you can sometimes still need a pretty beefy server, which can run a couple of hundred euros a month to host. So there's a tradeoff there. — Yeah, and I would say a couple of hundred euros a month is still cheap, because it's all GPU-based, right? All the web infrastructure we see is usually CPU-based, and now you're going into the GPU space, which is pretty expensive. Even if you use something like DigitalOcean, which isn't necessarily an inference provider, you still end up paying like $2,500 to $2,800 on a monthly basis just to run your server. And that's not the fanciest GPU, that's a standard GPU. And that's only the infrastructure; then you have to get your OSS version out, or whatever you're going to use, possibly fine-tune it, make it available through an API, and build all that stuff around it. So it's not as easy as people think; if someone wants to go into this, they should probably look into an inference provider. There's a minimal sketch of the local route just below. Okay, that's open source. Image gen, video gen: I think we've sort of covered that. We have the infrastructure arms race, not really super relevant. And what to watch
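Here's that local-route sketch, assuming you've installed Ollama and pulled a model (the model name below is just whatever you pulled); Ollama serves a small HTTP API on port 11434.

```python
# After e.g. `ollama pull llama3`, the local Ollama server listens on
# http://localhost:11434. This calls its /api/generate endpoint.
import json
import urllib.request

payload = {
    "model": "llama3",              # any model you've pulled locally
    "prompt": "Reformat as a bullet list: apples, bananas, pears",
    "stream": False,                # return one JSON object, not a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```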

Where AI Is Heading Next

according to Claude, this is what to watch: the billion-dollar one-employee company, which is a prediction from Dario at Anthropic; Grok 5 and the AGI question; true agentic AI in production, which we were supposed to get this year, but apparently it's next year now; and regulatory battles. And: by 2026, we'll see the first billion-dollar-revenue company with effectively one employee, powered by AI agents. All right, well, will we see it next year? I'm not sure. — I mean, people have said that, and Minecraft was like a billion-dollar company with one person, so, playing devil's advocate, you could point out we've had billion-dollar one-person companies previously. But I suppose it hasn't been widespread; they're these really unique, unicorn kind of things. Yeah, I'd be skeptical too. I'm sure someone will figure it out next year, but it's not going to be your typical business. It's going to be a very, very tech-heavy, Lovable-style, new-category-creation kind of company, so highly levered on AI and technology that it's able to basically run itself; maybe there's no sales cycle or anything for it. It'll be a very different-looking business model, I'd say, to anything we're used to. It's not like you're going to be able to make the billion-dollar freaking cleaning company with AI agents, you know. — I do think, though, you will have these vertical solutions that are going to be super powerful. If you niche down to one specific thing and just automate the stuff out of it, I think you can get a really good agentic system that could basically handle it for you. — Well, boys, that's been a hell of a year. I think you guys can agree. And I just want to put a bit of a positive message out there to everyone who's been following this stuff for a while and maybe feeling a bit overwhelmed by how fast things move. As long as you're making progress, you probably underestimate how much you know. If you're able to follow along in a conversation like this, there's a lot of power and value in that, and the opportunity is still there next year for you to turn this into a business and do whatever you want with it. I don't think anyone here would counter that. More than ever, we're starting to see these things go mainstream, and that's when a lot of the money is made. So I just wanted to give a positive, Christmas-season word of encouragement to you all: we're in the right place. We're on the right side of history here. And if you think we maybe aren't, then you should be using your abilities, getting into AI, and trying to do something good with it; rather than sitting on the sidelines being skeptical, that's a much better way to have the positive impact you want. So I'll let you guys do a little sign-off, or we could be here all freaking day. Mark, if you've got anything to say to the people. — Yeah, sure, because you have a beautiful, big audience. If you're non-technical, 2026 is probably one of the last years where you can look superhuman among all your peers just by following Liam's stuff, Liam's stuff alone.
Um, there's just so much opportunity out there to upskill yourself without having to learn how to code or how to build workflows from scratch. If you watch Nate's channel, or Giannis's, or Dave's, there's so much opportunity to level yourself up really quickly, and the learning curve is getting that much easier. Learning how to prompt is not even a thing anymore; you can be very vague and go very far with it. So just don't underestimate what you're capable of. Destroy your limiting beliefs, and you can start 2026 in the best way possible. That's my two cents. — I would add to that what I touched on earlier in this video: even if you don't have a coding background, go play around with Claude Code. Literally hop on it and try to see what it can do, because it could very well be the most powerful AI agent that exists today. And it can do a lot more than generate code; it can also execute commands on your behalf, do research, and create documents for you. Even though the tool can be a little intimidating in the beginning, what you should do is go to Mark's channel and find his videos on Claude Code, which I literally watched this week. He'll explain how to set it up and how to go through it, and then you can start to build cool things. You'll learn a lot about how these systems work, beyond just the productivity gains you can get. — Totally agree. If you're not already dabbling with it, just a little unlock like that, just start scratching that itch, and who knows where it takes you, because you could get into building crazy systems for yourself just to increase your own productivity, or you could realize you can build that app you've wanted for a long time. So, Giannis, you got anything? — Yeah, I'd say [sighs] I'd build on top of what Mark said. I think it's actually kind of a relief for everyone watching, and probably also one of the greatest opportunities we've seen, that the tech is becoming easier. Everyone is afraid of learning the technicalities and stuff like that, but I can almost guarantee you that the tech, as it is right now, is the most complicated it's ever going to be. It's only becoming easier, which is amazing, and something you should see positively, because you basically get rid of the most mundane, repetitive, long tasks that take most of your time. Instead of spending time on those, you can actually spend time on the things that matter for your business: how to find leads, how to push forward, how to market your work, how to make an impact. And I think that's one of the most beautiful things out there, because we can leverage it so much. I'll close with one final piece about finding what actually excites you, because there's so much in this AI space. It's new to everyone, which means: find what sticks out to you, whether that's automation, or voice, or AI music, or whatever it is. There's a lot to play around with, and it's important to understand all the opportunities you have. It is very overwhelming, but something will speak to you. And if you find something you're passionate about and that you care about, it'll take you so much further.
With such a great number of choices on the menu right now across these different parts of the AI space, I think finding the thing that actually lights you up and excites you the most is by far one of the most important things. And I think everyone in this room has found that thing, and the results we're getting in our careers are a byproduct of that, of being able to go all in on the topics that we love. So that's the Christmas special done, guys. A very merry Christmas to you and all your families; enjoy the holiday season, happy new year and whatnot. And I hope you're ready for an absolute banger next year, because this show's not stopping, and we'll have the guys on again soon to update us on where they're at in their journeys. But yeah, from the bottom of my heart, it's been great to spend a year with you guys in this room. It's been a pleasure, and I'm super excited for next year, as I'm sure you are too. So I'll leave it there. Merry Christmas, happy new year, and we'll see you, maybe this time next year, for a recap, and we'll see how our AI slides go that time. So, that is all for this episode of the podcast, guys. If you want to see something similar that I really think you'd like, you can click up here to watch another one. And remember, if you think you have a story worth telling, some valuable insight you can share with the community, you can fill out my podcast application form in the description below. I'd love to have a chat with you and get some exposure for your business. Aside from that, guys, that's all for the video. Thank you so much for watching, and I'll see you in the next
