Is AI making you dumber? New research is starting to show how AI tools actually affect the way our brains think, focus, and remember. In this video, we break down real studies, including an MIT experiment and large-scale education research, to explain why using AI sometimes hurts learning and sometimes helps it a lot.
The difference is not AI itself, but how you use it. We look at what happens when AI does the work for you versus when it helps you think, why vague prompts lead to boring, generic answers, and how relying on AI without real effort weakens understanding. The video also explains how AI can slowly push everyone toward the same average ideas if you do not bring your own context and experience.
We also touch upon the two modes of using tech, consumer mode and creator mode, and how using an AI PC powered by Intel® Core™ Ultra Processors can help you regain your lost creativity.
If you use any AI tool for studying, writing, or work, this video shows how to use it in a way that actually makes you better at thinking, instead of replacing your thinking altogether.
Learn more about Intel® Core™ Ultra Processors here: https://www.intel.co.uk/content/www/uk/en/products/details/processors/core-ultra.html
00:00 - MIT Study - AI's Impact on Brain Activity
03:06 - Context Problem with AI
04:20 - AI is Making You Generic
06:10 - Why Everyone Gets Same Answers
06:55 - Real World Experience First
10:21 - Closing thought
MIT Study - AI's Impact on Brain Activity
Okay, so scientific studies are starting to shed light on how AI assistants are actually influencing our brain activity. An MIT Media Lab experiment had participants write essays under three conditions: one, using only their own brain, no AI; two, using Google Search, the old way of researching essays; or three, using ChatGPT. The ChatGPT-assisted group's essays were very similar to each other, showing that AI surfaced the same generic ideas, and EEG brain scans showed lower executive control and attentional engagement in this group. By the third essay, many of these AI-assisted writers had given up and let ChatGPT do most of the work. In contrast, the brain-only group, no AI, demonstrated the highest neural connectivity, because they were actually doing the work, with activity in the alpha, theta, and delta bands associated with creativity, memory load, and semantic processing. And, of course, they produced more original work. The group allowed to use Google fell somewhere in between, still showing active brain function, and they were also very satisfied with the work they created. But the true magic happened later, when they were asked to rewrite the essay without AI. All those people who had relied on ChatGPT remembered almost nothing of their own essays, and they showed weak alpha-theta brain waves, indicating that they didn't really understand what they were writing. Their brains had bypassed what is generally an effortful encoding process. So basically, studies are now showing that doing any work encodes stuff in the brain. The brain levels up after any successful task, especially a writing task, but when you use ChatGPT or Gemini, that effect is not happening. A 2024 UK study of 669 people found a significant negative correlation between frequent use of AI tools and critical-thinking abilities. But it's a little more complicated than that. A 2024 meta-analysis of 51 studies in education, a big sample size, found that ChatGPT had positive effects on student learning outcomes.
How do you balance these two? Because in this study specifically, ChatGPT was associated with improved learning performance, with a mean effect size (g) of 0.87, which is a large effect, moderately better learning perceptions, and gains in higher-order thinking skills. So you have one set of studies saying you're becoming dumber and another saying you're becoming smarter. But there was an important point that people missed in the second set of studies. What the meta-analysis showed was that when ChatGPT is integrated thoughtfully into teaching, as a helper rather than as the thing that does the final work, it can enhance students' motivation and even their critical thinking. What that means is: if you're brainlessly using an AI tool to give you output, not even caring what's actually written, just copy-pasting something, of course you're not going to learn anything. But if you're putting in effort and using AI as an assistant, to ask you questions, to answer some of your questions, while you deep-dive into the rabbit hole yourself, then you're integrating it into deeper memory. So how you use the tool matters much more than whether you use the tool at all. And I'll tell you what I think is really happening.
Context Problem with AI
See, I've made videos before where I've told you the problem with using AI effectively is feeding it context. By default, AI is like a sly genie. If you ask a sly genie for money, he may put it directly into your bank account, only for the police to show up the next day demanding to know where you got this money from, and then maybe you end up in jail. This type of genie needs to be told: please transfer it from the big company's account, handle the tax issues, and transfer it this way, not that way; make sure I'm legally safe. Basically, context. If you leave the context out, a sly or evil genie fills it in with bad context. An AI tool is sort of the same. Most of you are getting the same generic results from AI because your prompts are one-liners, and AI is not getting enough context from you. So it surfaces the most generic, popular content from inside itself. For example, if you say, "Write a LinkedIn post about productivity," that has no context. So it pulls from the most common, highest-probability patterns it's seen and produces a generic output you've seen 10,000 times. Like, for example: "Productivity isn't about doing more, it's about doing what matters," or "Focus on systems, not goals," "Small habits compound over time." Some rubbish like that. This is not bad AI. It's contextless AI. For example, if you ask
AI is Making You Generic
"How do I grow in my career?" AI doesn't know much about you. It assumes you're a generic office worker in a generic Western context, with generic ambition and a generic time horizon. No context, right? So it says: upskill continuously, network with people in your field, seek feedback and mentorship. This is statistically correct advice. It's also useless. So what's happening here is that you aren't feeding AI any context, and hence AI is feeding its generic, statistically likely outputs to you. But here's where it gets tricky. Just like we prompt the computer and give it context, AI can do that to you as well. And I think this is the fundamental problem with AI, and that's what all these studies are trying to capture. For example, if I ask AI to tell me what books I should read about psychology, it'll basically surface the most generic books: Thinking, Fast and Slow by Daniel Kahneman, Influence: The Psychology of Persuasion, and other very common psychology books. The good thing is that if you go to a bookstore, you're likely to find these books; the problem is that these are the most generic outputs. Everyone who has ever googled "psychology" has probably read these books. You're not really developing any psychological insight. You're developing psychology-flavored opinions. ChatGPT is making you generic. You were supposed to be feeding it context; now it's feeding context to you and manipulating your worldview. For example, if you want to learn about business, it'll give you some LinkedIn slop: start with customer obsession, build your network, think long-term, read biographies. But if you actually ran a business and already had that context, you would know there are so many other things in a business that matter: what power laws are, what timing asymmetry is, what regulatory arbitrage is (the kind all the stock brokers in India have), what narrative control is, or what monopoly design is.
These are all things that you learn after you've run a business for a while. And if you don't know these keywords, how are you going to get this information from Chad GPT?
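To make the contrast above concrete, here is a minimal Python sketch (all names and the example context are hypothetical; no real AI API is called) of the difference between a contextless one-liner and a prompt that carries your own constraints:

```python
def build_prompt(question, context=None):
    """Assemble a prompt. With no context, the model only sees the bare question
    and falls back on its most generic, highest-probability patterns."""
    if not context:
        return question
    # Turn your real-world specifics into explicit lines the model must respect.
    lines = [f"- {key}: {value}" for key, value in context.items()]
    return question + "\n\nMy context:\n" + "\n".join(lines)


# A one-liner: statistically likely to surface the most generic advice.
generic = build_prompt("How do I grow in my career?")

# The same question, anchored in specifics the model cannot guess on its own
# (the details below are invented purely for illustration).
specific = build_prompt(
    "How do I grow in my career?",
    context={
        "role": "embedded-firmware engineer, 4 years in",
        "constraint": "cannot relocate; mid-sized Indian employer",
        "goal": "move into silicon validation within 18 months",
        "ask": "also argue the opposite direction before concluding",
    },
)

print(specific)
```

The point of the sketch is only this: the second prompt forces the model off its statistical mean, while the first invites the same answer the next million people will get.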
Why Everyone Gets Same Answers
So, you become the guy who knows all the buzzwords and is competing with ten million clones. AI is trained on what is most said, most accepted, least offensive, and what survives moderation. It compresses everyone towards the mean. It's prompting you into the same worldview that everybody else has. The problem is that all kinds of genius in our world are anti-consensus. All the smartest people have ideas that are at first ridiculed, execution that at first nobody else believes in. Truth, all sorts of truth, is first unpopular. If you don't put in your constraints, your personal history, your context, and ask it to also think in the opposite direction, it will take you to the same place everybody else is in. That's its job. And my solution
Real World Experience First
to this is very simple. Go out in the real world, follow interesting people, try new things, and then let that be your context. Then you feed all of that into ChatGPT. Don't let ChatGPT feed you the context. With no real-world exposure to something, ChatGPT cannot help you. If we hadn't started making games to get started and learn about the space, and then asked AI, "Well, how do we improve this? How do we make this better?", if we didn't have that context, we would just get the most generic content from ChatGPT. But you can take the context you've learned from real people, in the real world, from real experiments, and use AI as an extension of you. AI is best used to extend you, not to replace your entire thinking. And if you look at the MIT study, it makes it clear that the problem is not AI but how we use it. We need to get into the habit of using AI as an assistant to help us think through problems, using AI to extend our ideas, to refine our ideas. And I'll tell you something, which brings us to today's sponsor. If you look at why I've been so pro-laptop: a mobile phone is a great consumption device. You don't really have to think about what you're doing. But a laptop, you know how you use a laptop, right? You have multiple tabs open. You're doing research into multiple things. AI is one of those tabs of research. When I'm doing research on this laptop, I have ten tabs open at the same time, and I'm clicking through all of them. With a mobile phone, it's harder to switch between tabs. One of those tabs could be as simple as Wikipedia or a research paper. One tab could be Claude helping me with something. Another tab could be Gemini, and I'm comparing the two, checking whether one AI is hallucinating; I'm always doing that. And I think the mobile phone is made as a distraction engine. You want to get stuff done fast with the AI tool.
Finish your homework, finish the work, whatever, so you can get back to WhatsApp or get back to scrolling reels. And that's why I really believe your learning environment matters. You know, Intel Core Ultra PCs are something that I use every day because they solve two problems at once. Firstly, we are definitely in an attention deficit, so it helps me regain my lost attention span by pulling me out of consumer mode and back into producer mode. And I really think there are two modes, right? Sometimes, when I'm sitting and scrolling reels, I'm in consumer mode. I'm not using my mind. And you know this, you know what I'm talking about. When you're using a laptop, you know that you are in active work mode. When you're on a phone, you want it to come to you. TV is the same: you want the information to come to you. When you use a PC, your brain knows you're not there to scroll endlessly or continuously react to notifications. You're there to build, write, research, and think. And I think that shift changes how your brain behaves. And by the way, it's no excuse to use a laptop, have ChatGPT output something, not even read what's there, and copy-paste it into your assignment. That's not what I'm talking about. I want you to think about how you can have longer stretches of focus, even if you're using an AI, as long as you're using it as an assistant, with fewer interruptions, where you actually care about what you're doing. And you know, you could use open-source models on these laptops, or you could use closed models in the cloud, but you need to put in effort. I really want you to put in effort whenever you're doing something. AI needs to be an extension of what you're doing. It cannot be a replacement for what you're doing. Because at some point, your job or your school or whatever, why are you even doing it, right? If AI is doing the entire thing for you, your job will eventually figure that out and just replace you with AI.
But the minute you're putting in that work, the minute you're learning, the minute you're trying to even understand what it's saying, that changes everything. That's why I really like Study Mode in ChatGPT. Study Mode prompts you, gives you questions, and keeps probing your awareness of something. And I think how you use AI will matter more than whether you use AI at all. So, back to the
Closing thought
question in the video: is ChatGPT making you dumber? No. But people are using ChatGPT in ways that make themselves dumber. There's a massive difference between those two statements. It comes down to understanding everything from your work environment, whether it be a laptop or a phone; understanding how much effort you're putting in; understanding whether you're truly understanding (and you would know this); and understanding whether you're the one who fed it context, or the context is being fed to you. Anyway, this is the simple point that I want to put across: I feel like a lot of people are using ChatGPT the same way they're doomscrolling. You're not really paying attention; you're just going through the motions. You can choose either to do that, or to think along with it, and I feel that requires energy and focus. So please, if you're letting AI do everything for you, then yes, ChatGPT is making you dumber. Otherwise, you have an unfair advantage, and you plus the machine will always beat another person who's just letting the machine do everything for them. Anyway, that's it for me, but I want to leave you with a statement. If your prompt is asked by a million people, your answer will make you indistinguishable from them. You are not the answers you give; that's an Indian viva mentality. You are the questions you ask. And those questions come from the world around you and from other people, not necessarily from AI, but they help shape you, and then you use that to shape the AI. That's it for me. Bye.