How A.I Changes In 2026 - Major Predictions

TheAIGRID · 31.12.2025 · 45,118 views · 1,098 likes

Video description
Check out my newsletter: https://aigrid.beehiiv.com/subscribe 🐤 Follow me on Twitter: https://twitter.com/TheAiGrid 🌐 Learn AI with me: https://www.skool.com/postagiprepardness/about Links from today's video: Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos. Was there anything I missed? (For business enquiries) contact@theaigrid.com Music used: LEMMiNO - Cipher https://www.youtube.com/watch?v=b0q5PR1xpA0 CC BY-SA 4.0 LEMMiNO - Encounters https://www.youtube.com/watch?v=xdwWCl_5x2s #LLM #Largelanguagemodel #chatgpt #AI #ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #Robotics #DataScience

Table of Contents (6 segments)

Segment 1 (00:00 - 05:00)

So AI in 2026 is going to be a lot different from how it is in 2025, and these are my top eight predictions for the future of AI. So let's get into it. Now, one of the first things I really want to talk about is AI backlash. This is what I think is going to take up the majority of the AI headspace for the normal, average person. I believe that individuals like yourselves, watching videos like mine, are in what I'd call the minority of users who are engaged with AI updates. I've recently started to realize that the AI backlash has been building toward a tipping point. And although this isn't directly an AI model prediction, I think it will be very hard to ignore considering just how bad it's going to get. So let's dive into exactly why. You can see a tweet here from someone that says, "It's pretty obvious to anyone paying attention that AI backlash in 2026 will be completely off the charts," and they highlight three different articles that are completely negative toward AI. The consistent theme I see is that AI doesn't actually help the mass user base. For the average person, beyond the average chatbot, there isn't much use for AI. They're getting all of the second- and third-order consequences of AI, and it subsequently makes their life worse. So let's get into this tweet I found from Grums, which sums up a lot of the AI backlash perfectly. It says, "I think the backlash is going to peak in 2026. The feeling that AI is costing us: hardware prices, shortages, the sums of money being invested. People are worrying about electricity prices. What if it all crashes? There's also overpromising by AI companies. How many people are tired of hearing AI will solve all of our problems, and then it just fails half the time? AI being forced into everything. Nobody wants Copilot being everywhere all at once in Windows."
It goes on: many companies are trying to shove AI down your throats. And then it talks about the fact that when AI starts to take people's jobs, there's going to be no perceived benefit ready to offset this. There's probably going to be literal civil unrest in the streets. AI companies need to start toning down their messaging and fitting it to people's needs: not their distant utopia, but what will be great for them now and help them with their daily struggles. And I agree with a lot of these points. For the average person, AI is simply making their life worse, not better. And of course, you can see right here there was another viral tweet — and trust me guys, I've probably already made a video, and if it's not out already it will be out tomorrow, on why the AI backlash is probably only going to get worse. You can see this tweet here where it says, "It's amazing that in the last six-ish months, almost every single outspoken person on the topic of AI who is not an owner or employee of an AI company has turned against AI." You can see here Linus Tech Tips is saying "go away, Copilot." Now, we can also see that someone highlighted that the CEO of Khan Academy believes AI will displace workers at a scale most people don't yet realize. He says his prescription for companies that benefit from AI is to devote 1% of their profits to retraining workers, or else there will be tremendous public backlash. Two thoughts. First, this is a problem that needs to be solved by the government, not the private sector. And second, the public backlash needs to get going ASAP. This is a tweet from someone on Twitter, and it talks about pretty much every sector of AI. If we even look beyond large language models, think about autonomous vehicles dominating ride-sharing, or delivery routes run by AI trucks doing the long-haul driving.
There's going to be large economic displacement, fueled by a wave of automation that's going to hit faster and cut deeper. And of course, there's going to be a lot of backlash from individuals. We can also see this tweet here, which is just someone tweeting, "My hot take is that I actually do not desire to see technology advance any further." That one got overwhelming support. And honestly guys, since I'm already going to make a video on just how bad the backlash is getting, I'm not going to keep talking about this for too long, because I could genuinely talk about it for an hour; I've seen quite a lot. This is a clip from Brad Gerstner, an investor, on the All-In podcast, talking about why AI backlash is going to be really important: AI backlash will essentially determine politics. If politicians want to get voted in, they're of course going to start running with narratives that are popular with people today, and AI backlash is one of those things. So this is what he talks about, and I think this is really important for 2026 in terms of how policies are going to shape the world moving forward. — One of the major concerns I have is that AI is becoming deeply unpopular in America. Silicon Valley is losing the battle around AI. Doomers are now scaring people about jobs. They think all these job cuts that are going on in America are the result of AI. And number two, they're seeing their electric bills go up, and they think that's also the result of AI. I've talked with a lot of

Segment 2 (05:00 - 10:00)

Republican senators and House members who say they are afraid to mention the words AI because their popularity ratings go down. We need to get on the other side of that, because that is a losing proposition for America. If what takes hold here is that it's politically popular to push back against AI, then, David, I think that's what you're seeing at these state levels with Republican governors as well. I think both of those are false narratives, but we need to get on the other side. In China, they're not going to slow down. So, if we do an own goal here and slow down because we think somehow that this is the path to greater economic growth, it's going to be a real problem for both national security and economic security. — And so the last point I will make here, and this is the main point I want to make about this, is that the AI backlash is getting so bad that I truly do wonder, and this is one thing I don't know: is AI backlash going to reach the point where it financially makes sense for companies to stop investing in AI? What I mean by that is: is the backlash going to reach a point where companies realize that it actually hurts them to use AI for certain things? For example, McDonald's actually pulled a 45-second AI-generated holiday ad that viewers deemed cringey. Now, this completely hurts McDonald's brand. The worst thing you can do for a brand is hurt your brand image, because brand recognition takes a very long time to build up. Think about some of the brands that you know and love. Maybe you wear Nike. Maybe you have a watch brand that you love, or a favorite shirt brand. Usually, it takes quite some time to build that up.
Now, if companies are rushing into AI, using it, and then figuring out, wait a minute, this is actually diminishing our brand, then of course there is an incentive to stop using it, because if their brand gets worse, their sales drop, and once that is reflected in the data, it becomes something they just steer away from. Now, this leads on to my second point for 2026, and I know this isn't a direct AI prediction. It's more of a second-order consequence of AI, but I still think it's relatively important, because you probably want to know this if you're watching this video, so that you don't get completely consumed by the entire AI bubble. It's important to understand the many different perspectives that exist, and I think this is one of them: human-made is going to be premium. Essentially, what I mean by this is that if something is made by humans, it is likely going to be perceived as higher value, considering that a human takes more time and more effort to create said piece of content, even if it is fundamentally worse. So, Porsche released this car ad which was completely done by humans. The way this was marketed, at least from my understanding, was that it was done with a studio, and they explicitly said that no AI was used in the making of this advert. And you can see from the responses in the comments, someone says it's crazy that we've reached a point where not using AI is a flex. Someone also said, "Everyone liked that" — I'm pretty sure that's a quote from Fallout. And someone else mentioned that "not AI" is going to be the biggest selling point for a company right now. And you can see right here that this one had 430,000 likes.
And it says, "I remember seeing a post about how normalized AI in advertisements, making the company look cheap, will mean that real art will start becoming the standard for brands to seem more luxurious." And it's already happening. I wanted to include this in the video because for some of you guys, in many of your industries, it may actually be useful to realize that, okay, rather than rushing to AI in a specific scenario, maybe I could use a human. As my own example, on this channel I could choose to use an AI agent, a digital avatar, and I'm not completely against it if I'm completely unable to appear on the channel for whatever reason. But I think there is fundamental value in me actually using my own voice, so that you guys can understand it's a real person. There's more authenticity, and of course, there's more trust. Now, here is something that is a second- or third-order consequence of AI: I do believe there is going to be a sort of blue collar revival. I think this one's going to be super interesting. And full disclosure, there will be directly AI-related stuff later on in the video; this one is related to AI, but only indirectly. I think the reason this is largely surprising is because most people are simply overlooking how AI will fundamentally change the labor force. And I do believe that there's going to be a large blue collar revival, because, as I'll predict later on in the video, companies will start focusing on economically valuable work. So you can see here a tweet with a prediction for 2026 that says blue collar will have a renaissance. College ROI is being questioned. AI has everyone rethinking job security. The stigma of blue collar work is fading, and corporate culture has us rethinking success.
The demand for data centers will make electricians, plumbers, and construction workers some of the most sought-after talent in the industry. So I think this one is pretty true. If

Segment 3 (10:00 - 15:00)

you've been paying attention to what the AI companies are actually doing, they are really going after economically valuable work. We already know that if you've spent any amount of time with Opus 4.5, that model is genuinely incredible in its capabilities. So most people are already starting to realize that they somewhat live in a parallel world. And once those companies really start to automate real-world tasks, I think this blue collar revival is going to shift into place. Now, I did come across a video that basically says that although there will be a blue collar revival, it's a revival that might be powered by AI. Take a look at this video that I found on Twitter, because it was a perspective I hadn't considered. — "AI isn't coming for my job. I'm an HVAC technician." No, it is. You're all screwed. You better get rich quick, because I was literally working on a unit and I had not a damn clue. It would've been better off speaking Chinese to me. I had no idea what was going on with it. Zero, zip. I couldn't understand it. See these? I took a picture, took a video, explained what was happening, gave the symptom. Within 35 seconds, it told me exactly what was wrong with the unit and how to fix it. AI is coming for your job, buddy. It is. — Now, another prediction I have is that Google is probably going to take over. Everyone knows that Google is really, really good; there's no other way to put it. I split-test AI models constantly to ensure that I'm not missing out on any of the nuances in models, and I would say 80% of my time is spent with Google. In fact, let me be honest: probably around 50% of my time is spent with Gemini 3, 30% with Claude, and the remaining 20% is spent amongst different AI tools like ChatGPT. But ChatGPT is no longer my daily driver, and I'm not just being performative or saying that for the sake of it.
It's that in many split tests in my daily life, in work and professionally, I just find that Gemini 3 is incredible at what it's able to do. And this has taken everyone by surprise. Now, it's not just Gemini 3 that makes me think Google is going to take over. Google's complete ecosystem is incredible. Nano Banana and Veo 3 are already better product offerings, and Google already has better, cheaper pricing in terms of what they offer. Think about the entire vertical integration that Google already has. I'm not saying that OpenAI is going to be completely done; I just think that Google gains a lot more ground than OpenAI and everyone else thinks. And I don't think this is a bad thing. Competition is good for the end user, because when companies compete, they have to improve their products, and that means end users like you and me essentially get better products at probably similar prices. Now, on the software front, I think Google is going to take the cake. Nano Banana Pro is absolutely incredible; I use it literally every day. You've also got Veo, which I also use, a pretty crazy video model that's really good and only going to get better. They've also got world models and things like that which are really good. And of course, there's the AI stack most people don't even know exists in terms of Google's actual AI tools beyond image and video: things like NotebookLM, Google Flow, Google Whisk. All of those tools are really, really good. And then, of course, you have the vertical integration from Google that most people aren't considering. Google has some of the best researchers. They've got Waymo, literally self-driving cars. There is so much to talk about regarding how far ahead Google is, pound-for-pound, in what they are able to do compared to the other companies.
They own every single layer of the AI stack, from the atoms in the chips to the pixels on your phone screen. And when we compare that to OpenAI and Microsoft, who rely on a partnership model, Google is just a self-contained AI superpower, which allows them to optimize speed, cost, and capability in ways that a fragmented stack cannot. So when we think about it, they've got the foundation infrastructure: other companies are primarily relying on Nvidia GPUs, where demand is a bottleneck, but Google's got their own TPUs. And of course they've got Gemini, Gemini Flash, and Gemma, which is open source. Then they've got their own platforms: Vertex AI, Google AI Studio, their own developer layer. This also gives them incredible distribution. One thing Google's been doing is rolling out Gemini for students. You've also got Gemini for Android; Google Assistant is going to become Gemini. You've also got Gemini for Workspace, integrated into Docs, email, Sheets, and Slides. You've also got Search, which has AI Overviews. This is Google's massive data moat. I could go on and on even about the research, DeepMind, and the future: think about AlphaFold 3, AlphaProof, AlphaGeometry, think about world models. It is just game, set, and match, in my genuine opinion. There is no

Segment 4 (15:00 - 20:00)

way that Google really loses this. So it's not that Google beats OpenAI and these other companies because Gemini is always going to be smarter; it's because Google can deploy AI cheaper and faster to more places than OpenAI can. Remember, OpenAI is a product company, and Google is basically that and then some. So I think Google's dominance is going to be really surprising to most people, but I can see it from here. I just don't see how Google doesn't take over with their complete product suite. Maybe OpenAI pulls something completely incredible out of the bag, but I do believe that Google is completely locked in. So I expect incredible things from Google in 2026 and beyond. Now, one of those things is going to be kicking off continual learning. This is more of a direct AI prediction: continual learning is the ability for an AI to not be frozen at the time of release. Currently, the paradigm is that we pre-train a model, do all of the training, fine-tune the model, then release it, and it's finalized. That's the model; that's what it is. We can't change it, we can't update it, and if you try to retrain certain parts of the model, you get catastrophic forgetting, which undermines that section of the model, and now the model is probably going to be worse, or just forgets certain things, which is really frustrating. This is where Google comes in: Jeff Dean tweeted about an exciting new approach for doing continual learning using nested optimization for enhancing long-context processing. Long story short, the big summary here is that Google is working on a way for models to update after they have been trained. And if this works, it's going to be really effective, because this is how humans learn.
Humans don't just download all the internet's files, and then a baby pops out able to reason at a certain level. Humans are constantly evolving, learning, and growing. I'm sure in the last 5 to 10 years you have learned a few different things, as have I. So they're trying to move toward a stage where AI systems are able to update their weights every so often, rather than restarting that entire training process. And remember, guys, that training process costs millions and millions of dollars, so if they do manage to figure out a way to do this, it's likely going to save them a lot of money. Now, here we have someone at Anthropic talking about continual learning being figured out in 2026. — The most striking thing about next year is that the other forms of knowledge work are going to experience what software engineers are feeling right now, where they went from typing most of their lines of code at the beginning of the year to typing barely any of them at the end of the year. I think of this as the Claude Code experience, but for all forms of knowledge work. I also think that continual learning probably gets solved in a satisfying way, that we see the first test deployments of home robots, and that software engineering itself goes utterly wild next year. — There's another tweet here from Mark Crestman saying that 2025 was the year of agents; 2026 will be the year of continual learning. And then here we have Shane Legg from Google DeepMind saying that there aren't many things blocking continual learning from happening. — They're not very good at continual learning, learning new sorts of skills over an extended period of time. I don't think there are fundamental blockers on any of these things, and we have ideas on how to develop systems that can do these things, and we see metrics improving over time in a bunch of these areas.
We can also see a tweet here where someone says continual learning is the next big thing in AI. Today's deep networks show catastrophic forgetting, where new data causes old knowledge to get erased, because backpropagation updates all the weights in the network, causing old, useful information to be overwritten. Contrast this with the brain: it doesn't do backpropagation. No global updates, only local updates with each new data point. The day we have a credible alternative to backpropagation is the day we can solve continual learning. And as I was making this video, there was literally a paper released on continual learning, which is super interesting. Earlier this year, Meta also released a paper presenting sparse memory fine-tuning as a method for continual learning, which was super interesting, and I covered it. In the video I made covering it, Elon Musk actually spoke about continual learning with Grok 5, so it's quite likely that they are banking on the success of Grok 5 with continual learning. And considering that I'm seeing more and more research papers around this area, it's probable that in 2026 we will see some huge progress here. Then there's Demis Hassabis, the CEO of Google DeepMind, also touching on the subject. — So I think that we are maybe, I would say, 5 to 10 years away from having an AGI system that's capable of doing those things. Another thing that's missing is continual learning, this ability to teach the system something new online, or adjust its behavior in some way. A lot of these core capabilities are still missing, and maybe scaling will get us there, but if I was to bet, I think there are probably one or two missing breakthroughs that are still required, and they will come over the next five or so years.
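The catastrophic-forgetting claim above is easy to demonstrate. Below is a minimal, self-contained sketch (a toy regression network I made up for illustration, not any of the systems mentioned in the video): a tiny network is trained on task A, then on task B, and because plain backpropagation performs a global update on every weight, the features that solved task A get overwritten and its error on task A jumps back up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Task A: y = sin(x); Task B: y = cos(x), on the same inputs.
x = np.linspace(-3, 3, 64).reshape(-1, 1)
y_a, y_b = np.sin(x), np.cos(x)

# One hidden layer with tanh activation.
W1, b1 = rng.normal(0, 0.5, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.5, (32, 1)), np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def train(y, steps=4000, lr=0.1):
    """Full-batch gradient descent on 0.5 * MSE."""
    global W1, b1, W2, b2
    for _ in range(steps):
        h, pred = forward(x)
        err = pred - y                      # dL/dpred
        gW2 = h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h**2)      # backprop through tanh
        gW1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        # Global update: EVERY weight moves, including the ones
        # that the previously learned task relied on.
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

def mse(y):
    return float(np.mean((forward(x)[1] - y) ** 2))

train(y_a)
err_a_before = mse(y_a)   # low: the network fits task A
train(y_b)
err_a_after = mse(y_a)    # much higher: task A has been "forgotten"
print(err_a_before, err_a_after)
```

Continual-learning approaches like the sparse memory fine-tuning mentioned above attack exactly this: restrict each update to a small subset of parameters so new data stops overwriting old knowledge.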

Segment 5 (20:00 - 25:00)

— Now, I probably should have included this point earlier in the video, but better late than never. I do believe that AI agents are actually going to improve at work. One of the things I've realized is that Anthropic is largely the first company to really hone in on a specific area of actual task execution, that task being coding. And with that one singular focus, they've managed to nearly monopolize the entire AI coding industry, basically becoming the industry standard. It's one of those things where the product is so good that it just sells itself. Literally everyone I know on Twitter is just saying Claude Code, Claude Code, Claude Code. That's all I ever hear. And the point I'm trying to make here is that companies have realized that, wait a minute, coding is of course a huge industry; there are many different SaaS products that are all using Anthropic's models to power them. But what if these companies actually started to focus on products that automate work that is really meaningful? Now, of course, this isn't good from a job-displacement standpoint, because if they manage to automate work, then people lose jobs, because employers would actually use those pieces of software. But the point I'm trying to make is that I think they've already saturated the use case for the average person. In my personal opinion, the average person doesn't need more than maybe a voice assistant, a chatbot, maybe generating some images here and there, or a funny video. The last thing they would probably need is an AI that's able to actually go off and do things for them; maybe resolve a dispute, like an email agent would. Other than that, the average person isn't going to need AI that much in their daily lives.
So I think companies are now going to shift; I mean, OpenAI is going to shift their focus to things like GDPval, where they measure knowledge-work tasks across 44 occupations. From GPT-5 Thinking to GPT-5.2, you can see there was nearly a doubling in performance. And I think this is where they're going to really focus, because these models are really the backbone for enterprise; those are the customers that are going to be paying for the models. Right now, most of the standard people I know outside of the AI bubble don't have a ChatGPT subscription. They just use ChatGPT for free, and that's reflected in the subscriptions: ChatGPT doesn't really make that much money on the product. In fact, that's why the company loses a lot of money. So I think AI agents are actually going to start working in areas that they previously haven't, and GDPval is going to be one of the measures of that. Of course, you can see GPT-5.2 Thinking was one of the first models that raises the bar for professional work: state-of-the-art long-context reasoning, major improvements in spreadsheet creation and formatting, and early gains in slideshow creation. For me personally, that is what I believe. We also got this tweet from Ethan Mollick, the AI professor, who says, "I keep coming back to GDPval. There is a lot in the paper that sheds light on the coming impact of AI on knowledge work, especially when agentic work starts to become a real thing, replacing the back-and-forth cyborg-style prompting we have used for years." And I really do believe that is true. Now, the next thing I believe is going to really pop off in 2026 is world models. I believe that world models have been the secret sauce behind Google's impressive AI capabilities.
All of us have some sort of world model when we are solving problems, and Google's recent world model, Genie 3, has been absolutely incredible. For me personally, this is really sci-fi: the fact that you're able to just generate an interactive world and then explore that world. The world has a memory. If you look around, it's really interactive; it remembers everything that was there, and it's not just AI-generated nonsense where things don't add up and just mesh together. Of course, I haven't used the demo, but from what I've seen, it looks really cool. And I do believe there's some kind of world memory these models are using. I have no idea what Google is doing behind the scenes, but it's clear they're doing something different from the other companies, and I think it's the world models they are focusing on. I believe this will become a very large part of how reasoning models get more effective, in whatever AGI system they're trying to build. World models are going to get a lot better, in my genuine opinion. And with world models, I believe we will also have agents in virtual worlds. Previously, we had agents in virtual worlds that were just restricted reinforcement-learning environments, but with SIMA 2, we're seeing general AI agents that are able to play and reason in virtual worlds. The reason this is bigger than most people think is that most people don't realize the world is the biggest game you can play. If you just think about the world, there are stats; there are things like the speed of light, which, I guess, you

Segment 6 (25:00 - 27:00)

could say is the frame rate. But the point I'm trying to make here is that if AI agents are able to successfully work in virtual worlds, there's no reason we couldn't eventually transfer those capabilities into the real world, or at least use some of them to power robots in the real world. So I believe AI agents in virtual worlds are going to become a major focus for Google, because they realize how valuable it is to be able to transfer those skills, especially into robotics in the real world, and of course into other reasoning tasks. I think it's going to be the backbone powering them; absolutely incredible. So remember, it's going to be an entire year of AI progress, and I think AI agents in virtual worlds are probably going to surprise people. Maybe the AI agents do something pretty crazy: they find a unique hack, or they do something super interesting. I cannot wait to see what occurs. And I do believe that robotics may have its ChatGPT moment in 2026. The reason I believe this is because there is a company, and not the company you're seeing right now on screen; this is more of a demo showing you guys that robotics demos are pretty incredible. And I'm going to go off track here just for a little bit, but the reason this demo was so crazy to me is that it was so unrealistic that people could not believe it was real. They had to cut open the leg of the robot to reveal, not the exoskeleton, but the actual hard shell, to prove that there was not a human inside. And I think that in 2026, we'll probably get more incredible robotics demos that are fully autonomous in a real-world environment. I believe the company Physical Intelligence is probably going to be pioneering that, because every single update I've seen from them, and yes, right now a Tesla bot is running on screen, but every single update I've personally seen from Physical Intelligence has been incredible.
I'll probably leave a link to them somewhere, but Physical Intelligence, with their Pi 0 and Pi 0.5 models, every update builds on the last one. I mean, they've got billions of dollars; even Amazon's investing in them. There probably isn't a company I'm more bullish on. So I wouldn't be surprised if a robotics demo comes out that goes completely viral, or maybe the ChatGPT moment for robotics arrives. Now, with that being said, let me know what your predictions for AI in 2026 are. If there are any things that I didn't consider, or things that you think are wrong, I'd love to see your comments down in the comment section below. With that being said, I'll see you guys in the
