# How to Learn FASTER using AI (without damaging your brain)

## Metadata

- **Channel:** Justin Sung
- **YouTube:** https://www.youtube.com/watch?v=4gQIAXjraLo
- **Date:** 18.01.2026
- **Duration:** 40:47
- **Views:** 66,929

## Description

Join my Learning Drops newsletter (free): https://go.icanstudy.com/newsletter-gptlearnfasterwithoutdamage

In this video, I will share my findings from thousands of tests and student conversations on how to use AI for learning in 2026 while avoiding key cognitive risks.

Take my Learning Diagnostic Quiz (free): https://go.icanstudy.com/diagnosticgptlearnfasterwithoutdamage

The AI Learning Paradox: Results from a survey of 923 learners (article link): https://www.linkedin.com/pulse/ai-learning-paradox-results-from-survey-923-learners-dr-justin-sung-mb0oc/?trackingId=DqThkJVASC%2Bl3LbO3bDZ2A%3D%3D

=== Guided Training Program ===
I’ve distilled my 13 years of experience as a learning coach into a step-by-step learning skills program. 

If you want to be able to master new knowledge and skills in half the time, check out: https://go.icanstudy.com/program-gptlearnfasterwithoutdamage

=== About Dr Justin Sung ===
Dr. Justin Sung is a world-renowned expert in self-regulated learning, a certified teacher, a research author, and a former medical doctor. He has guest lectured on learning skills at Monash University for Master’s and PhD students in Education and Medicine. Over the past decade, he has empowered tens of thousands of learners worldwide to dramatically improve their academic performance, learning efficiency, and motivation.

Timestamps: 
00:00 - Introduction: AI and Learning - Benefits and Risks
1:22 - Structuring the Video: Issues, Implications, and Solutions
1:46 - Issue 1: Information Accuracy and Hallucination in LLMs
2:01 - Survey Findings on AI Use in Learning
3:00 - Understanding LLM Limitations: Probability vs. Truth
4:32 - The Illusion of Accuracy: Fluency vs. Truth
7:17 - Solution to Information Accuracy: Risk vs. Complexity in LLM Usage
10:48 - Where LLMs Are Most Useful (Low Complexity)
11:05 - The Cost of Misusing AI for Complex Learning
13:42 - Good News: Most Learning Stays in Low Complexity
14:54 - Issue 2: Over-reliance on AI
15:55 - AI Doesn't Solve Core Learning Issues
17:08 - The Deceptive Helpfulness of AI
19:34 - Professionals vs. Students in AI Use for Learning
22:20 - Non-Productive Over-reliance Explained
23:33 - The Problem with Unclear Learning Metrics
25:34 - Avoiding Non-Productive Over-reliance
26:05 - The Value of Human Brain vs. AI
27:06 - Understanding LLM Capabilities (Probability vs. Conceptual Understanding)
30:01 - Where Human Value Concentrates: Beyond Basic Application
30:56 - Human Thinking Processes: Bloom's Taxonomy
32:00 - Memorize and Understand (Low-Level Thinking)
34:00 - AI's Role in Low-Level Thinking
34:39 - Analyze (Higher-Order Thinking)
36:17 - Evaluate (Critical Thinking and Prioritization)
37:57 - Create (Synthesis and Novel Solutions)
38:29 - Why Humans Must Develop Higher-Order Thinking
40:15 - Conclusion: Strategic AI Use for Effective Learning

## Contents

### [0:00](https://www.youtube.com/watch?v=4gQIAXjraLo) Introduction: AI and Learning - Benefits and Risks

A few months ago, I made a video saying that using ChatGPT is slowly destroying your brain. I said there's a risk of it making you lazier, reducing your problem-solving ability, and damaging your memory and depth of understanding, all while making it feel like it's actually helping you. And this was, as it turns out, quite controversial at the time when I uploaded it. But over the last few months, as more and more research is starting to come out around how AI affects learning, we're starting to see that AI is not the savior to all of the learning problems that we thought it might be. Having said that, AI is revolutionizing learning. That's a fact. At this point, there is no going back. And so, for me, as a learning coach, it has been a huge focus over the past year to really understand what is the best way to use AI for learning. I've been using AI for my own learning. I've been testing different models and different versions, literally running thousands of tests on this. I've been talking with students and professionals about how they use AI for learning, what's working for them, and what isn't. And in this video, I want to share with you my findings and insights so far. This is basically my current status update on the best way I think that you can use AI for learning right now, getting all the benefits of AI while mitigating the key risks. So, I'm

### [1:22](https://www.youtube.com/watch?v=4gQIAXjraLo&t=82s) Structuring the Video: Issues, Implications, and Solutions

going to structure this video by going through the key issues, the major problems with AI that I've either identified myself or drawn from my data, my conversations, and my surveys. Then I'm going to say what the implication of each issue is and why you actually need to care about it. And then what you can do about it: either my recommendations on how to use AI to mitigate that risk, or whether you should just avoid it. So, to start off, we're going to begin with the biggest

### [1:46](https://www.youtube.com/watch?v=4gQIAXjraLo&t=106s) Issue 1: Information Accuracy and Hallucination in LLMs

issue, which is concern around information accuracy. So, that's issue number one. Now, before I jump into the actual point

### [2:01](https://www.youtube.com/watch?v=4gQIAXjraLo&t=121s) Survey Findings on AI Use in Learning

of information accuracy, for context: in order to explore this topic a little bit more, over the past four to five months, I've been having dozens of conversations with my students, talking about the way that they use AI and the problems that they're facing. I also ran a survey on my YouTube and my LinkedIn, collecting information from people both using and not using AI and getting their perspectives. I had 923 responses, which, if I had actually published this as a study, would have made it a pretty large study. But the findings were very interesting, and I'll share some of those key insights with you throughout the video. One of the key findings was that the number one biggest concern that people have around using AI for learning is information accuracy. For people using AI for learning, this was the thing that they were most worried about. For the people that are not using AI for learning, this was the biggest reason why they're not. And it's also, I think, one of the most interesting points. And that's because

### [3:00](https://www.youtube.com/watch?v=4gQIAXjraLo&t=180s) Understanding LLM Limitations: Probability vs. Truth

issues with information accuracy come down to the problem where an LLM like Claude, Gemini, ChatGPT, or DeepSeek will just tell you something as if it were true, when actually it's completely made up. This phenomenon, called hallucination, is an issue with the technology itself. LLMs, large language models, use something called the transformer architecture. And the transformer architecture, even though it is kind of like this amazing new milestone in AI development, is fundamentally a probability-based word generator. It looks at your query, it looks at the massive amount of training data that it's got access to, and then it will create a network of what it thinks that you're looking for, and it will match which words, based on the training data, would be the highest probability to come next. And so, it doesn't have any sense of truth. Not only does it have no sense of truth, it doesn't even really have a concept of full sentences. Each word is simply the highest probability word to come next in the sequence. And so, I see these posts sometimes that say, "ChatGPT is lying to you." But in a way, I feel this is a little unfair to ChatGPT, because you can't lie if you don't even have a concept of the truth. These large language models are doing exactly what they are designed to do, which is create fluent, cohesive-sounding sentences that are contextually appropriate for the interaction that you're having. Now, one

### [4:32](https://www.youtube.com/watch?v=4gQIAXjraLo&t=272s) The Illusion of Accuracy: Fluency vs. Truth

of the low-hanging fruits of trying to increase the accuracy of this information is to give it access to the internet, allow it to search, and get more information to make a more informed probability estimate. But this doesn't really change the problem. By giving it access to more information beyond its initial training data, yes, you allow it to build networks of probability based on maybe newer, more up-to-date information sources, but it's still missing a lot of pieces that need to happen to actually have a concept of truth or accuracy. For example, it needs to validate and prioritize the different sources. It needs to think about whether that information is reliable and how reliable it is. And then how does the reliability and the context in that information source compare to a potentially contrasting information source? Is the opinion of 100 people on Reddit stronger than the opinion of one expert that wrote a blog article? And how do we even know that any of that information is true to begin with? And even if that information is true, how does that information fit with the existing training data, aka the existing body of knowledge? Often, if you're an expert reviewing new information on a complex topic, the way you interpret that new information is really important. You have to be careful not to paraphrase or extrapolate things in a certain way that is going to lead someone to form a different conclusion. It's a very fine intellectual process. An LLM doesn't have a concept of that. Even if you tell it to be careful, all it does is use words that are probabilistically closer to what a person being careful is likely to say. There is no change in the way it fundamentally reasons through that information. And this is especially problematic because the way that an LLM produces its text is held to a different standard than how a human interprets that text. Here's what I mean. When an LLM is trained, its gold standard is text that is fluent and coherent.
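The "probability-based word generator" idea above can be made concrete with a toy sketch. The word tables and probabilities below are entirely invented for illustration (a real model derives them from billions of trained parameters over tokens, not words), but the mechanism is the same: the model samples a likely continuation, with no check on whether the result is true.

```python
import random

# Invented toy probability tables: for a given two-word context, the
# probability of each candidate next word. A real LLM computes these
# from its trained parameters; these numbers are made up for the example.
next_word_probs = {
    ("the", "capital"): {"of": 0.85, "city": 0.10, "letter": 0.05},
    ("capital", "of"): {"France": 0.6, "Spain": 0.3, "Mars": 0.1},
}

def sample_next(context, table, rng=random.Random(0)):
    """Draw the next word from the context's probability distribution."""
    probs = table[context]
    words, weights = zip(*probs.items())
    return rng.choices(words, weights=weights, k=1)[0]

# Note that "Mars" is a legal, low-probability continuation: the sampler
# has no notion that Mars has no capital. That is hallucination in miniature.
print(sample_next(("capital", "of"), next_word_probs))
```

The point of the sketch: nothing in the sampling step consults a source of truth. Fluent output falls out of well-calibrated probabilities, and accuracy only happens when the probable answer also happens to be the true one.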
Now, for a human, when we read something that is fluent and coherent, we are led to believe that this is true. We have confidence in what we're reading because of its fluency and its coherence. And often, what can happen is that an LLM will generate some text that sounds very convincing because it's built to be fluent, and we, as humans, are drawn to believe it. And so, fundamentally, this issue with information accuracy is a problem that is not really going to be resolved because of the fact that there is no underlying mechanism of accuracy in these models. And so, what can we do

### [7:17](https://www.youtube.com/watch?v=4gQIAXjraLo&t=437s) Solution to Information Accuracy: Risk vs. Complexity in LLM Usage

about that? The solution to this is to reframe this not as an issue with information accuracy, but as an issue of risk versus complexity. And this applies across every task that you're going to use AI like a large language model for. Anytime you work with an LLM like Claude or ChatGPT, you always want to have an understanding of the complexity of the topic. And this is mentally the relationship you should keep in mind. Here on the x-axis, we're going to have the level, and this green line is going to represent the complexity. So, as the complexity of a topic goes up; what I mean by complexity is that there are a lot of moving pieces. There's potentially lots of new information that's evolving. There may be a lot of competing opinions and lots of different schools of thought. There's a general lack of consensus. Or, the way that you're trying to apply this knowledge is in very, very specific contexts where there are lots of different factors at play. And again, there isn't a very clear, well-understood way of exactly how to use those factors and what that relationship is. So, for example, if I'm trying to update myself on the latest learning science research, I know that there is a huge range of different opinions, and there are conflicts, and even among the researchers and experts there are differing opinions. And so, when the latest research comes out, you have to interpret that with huge grains of salt and see it from different perspectives and see how that relates to your own existing knowledge. Now, if I were to try to get an LLM to tell me the most important things about all the latest research that's come out, that's not going to be a very accurate way of doing it.
It's going to look accurate because it sounds fluent and coherent and looks like it's considered all those things, but, and I've tested this multiple times, when I actually go through and read the original articles myself, the conclusions that I come to are slightly different than the ones that the LLM generated for me. And they may be 90% the same, but that 10% difference is important for me, who's trying to build that top-level expertise. And if I have a 10% error in my understanding of a topic, over time, that is actually going to compound. So, that would be one example of complexity: the topic in the field itself is really complex. The second example is applying it in different contexts. So, you may be trying to apply a very well-understood marketing principle. Someone published about this decades ago. Marketers have been using this for years and years. It's a universal truth and a law about marketing. However, you're trying to apply it in your own particular business, for this group of people, with these problems and these challenges and these preferences. And so even though the knowledge is well understood, the way you are trying to apply and connect it together is not well understood. And so that would also be an example where the complexity is high. So as that complexity goes up, the risk of inaccuracy when using an LLM also goes up. And as a result, the overall usefulness does something like this. This is the LLM usefulness. So if you're dealing with really simple topics, well-understood fields, simple applications, then the training data and the data it's able to get is probably going to be accurate. There are hundreds of thousands of people that have all come to the same conclusion saying this is the way that you need to think about it. There is no real argument about this case. So naturally, that is what the LLM is

### [10:48](https://www.youtube.com/watch?v=4gQIAXjraLo&t=648s) Where LLMs Are Most Useful (Low Complexity)

likely to generate. Or you're dealing with an issue that's so simple that it doesn't need to really have rigor in understanding the conceptual truth, as long as you get a conclusion that is roughly in the ballpark of the right answer, if you're happy enough with that, then it's going to be very useful for you. But the main implication of

### [11:05](https://www.youtube.com/watch?v=4gQIAXjraLo&t=665s) The Cost of Misusing AI for Complex Learning

understanding this relationship is to make this decision proactively and upfront. This is the part that actually saves you time. The point of using AI for learning is to save you time. You have too many things to do. You've got competing priorities. You have stuff to learn. You're potentially getting overwhelmed. AI is meant to solve those problems, not just give you more problems to worry about. If it didn't solve those problems and now all you're worrying about is information accuracy, this would be a losing game, obviously. But the problem I've observed from talking to dozens of professionals and students that are using AI for learning is that they will go through without really proactively considering the complexity of what they're trying to use the LLM for. And they'll spend 30 minutes, an hour, two days, three weeks trying to learn with the AI something that it is not well geared for. And only afterwards do they realize that it's not very effective or it's leading them astray. Or, in the worst-case scenario, they've actually wasted time building an understanding and building a body of knowledge on inaccurate information. So they've actually made it harder for themselves to learn it. And ironically, relying too much on AI to begin with has actually made them waste time. So for me, again, as of right now with the existing technology, things can evolve, blah blah, but for me right now, if I assess that the thing I'm trying to get out of the LLM is nuanced, complicated, multifaceted, where I'm going to have to put things together in a way that's not well understood, I wouldn't even bother trying to get the LLM to do that. And I have spent probably over 200 hours on just trying to create custom GPTs and RAG models and combinations of different models with different versions, all sorts of different things, to try to tackle this information accuracy problem head-on.
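The "decide proactively and upfront" advice can be sketched as a small checklist. The complexity factors come from the video's own list (evolving information, competing schools of thought, niche application context); the thresholds and output strings are my own illustrative assumptions, not anything prescribed by the speaker.

```python
# Toy heuristic for the risk-vs-complexity decision described above.
# The factor names come from the video; the thresholds are made up.

def assess_complexity(evolving_field: bool, competing_schools: bool,
                      niche_application: bool) -> int:
    """Count how many complexity factors apply to the topic."""
    return sum([evolving_field, competing_schools, niche_application])

def should_use_llm(complexity: int, stakes: str) -> str:
    """Rough call on whether an LLM is worth it, made BEFORE you start."""
    if complexity == 0:
        return "use LLM freely"  # well-established topic, low risk
    if complexity == 1 and stakes == "low":
        return "use LLM, spot-check sources"
    return "read the primary sources yourself"

# Example: the latest learning-science research is evolving, contested,
# and being applied to a specific personal context: all three factors.
print(should_use_llm(assess_complexity(True, True, True), stakes="high"))
```

The value of writing it down like this is that the check happens once, before the 30 minutes or three weeks are spent, rather than after.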
And my conclusion so far is that investment of time is just not worth it if all you need to do is learn this new knowledge. Now if you're trying to create a knowledge bank, you know, you're creating your sort of own personal answer machine because there's this body of knowledge, like a huge amount of documentation for example, and you're going to have to come back to this again and again as part of your work, you know, for the next months, then that can be worth it. But if all you need to do is just learn about this thing so you can make your decisions, solve problems, be in a good place where you can work with that information and just be good at your job or do well in an exam, I don't believe that the time investment in trying to set this up to make it perfectly accurate is really worthwhile. You can

### [13:42](https://www.youtube.com/watch?v=4gQIAXjraLo&t=822s) Good News: Most Learning Stays in Low Complexity

spend 5 minutes just loading up your resources into NotebookLM and then just studying off of that, accepting that there is going to be a level of information inaccuracy that you may come across, and that risk is going to get higher as you go into the deeper, more nuanced aspects of that topic. Now, the good news is that most people probably don't need to go that deep on most topics. 80 to 90% of people, most of the time, only need to learn the top 50% of a topic, the superficial part. They don't need to get that final level of expertise. That's where the complexity goes up. That's where the error and the risk go up. But if all you're learning is the well-established part of that topic, the stuff that's unambiguous, the simple stuff, then your risk of information inaccuracy is in reality going to be very low. And so again, coming back to this graph, if you are always staying on this side of the complexity line, then you're going to find it's generally pretty useful for you, and your risk is going to stay relatively low. Now the

### [14:54](https://www.youtube.com/watch?v=4gQIAXjraLo&t=894s) Issue 2: Over-reliance on AI

second biggest concern, based on this survey, was over-reliance on AI. You know, I posted this survey on my YouTube and my LinkedIn. The people that are likely to do the survey are probably people that have already seen some of my content, and I already talk about this kind of thing, but regardless, I'm very proud of you guys. I'm proud of you for being worried about becoming over-reliant on AI. And this is probably one of the most consistent themes that came through in the interviews and consultations that I've been doing. There is this general sentiment that problem-solving ability is going down, critical thinking is going down, people are getting lazy, and they're losing basic knowledge that they used to have. And they sort of feel like if the AI can't solve the problem and do it for them, they also can't. One of the questions I asked in the survey was, "What are the issues you still have with your learning despite using AI? Yes, it's helpful for some things, but it's not solving all the problems. What problems are still left for you?" And the overwhelmingly

### [15:55](https://www.youtube.com/watch?v=4gQIAXjraLo&t=955s) AI Doesn't Solve Core Learning Issues

common theme across several hundred responses was that using AI doesn't fix the core of the issue. Difficult stuff to understand is still difficult to understand. It's still hard to remember and retain information. And professionals trying to use that knowledge for their work, because that depth is not there, are finding it hard to apply it to their own contexts and their own domains. And these are exactly the same issues that existed before the AI hype. When I first started teaching people how to learn, 14 years ago, this was the same list of major challenges people had. Whether you're learning from AI giving you information, or a Google search, or a teacher giving you information, or whether you're reading it from a book or from a hand-drawn tapestry written by a monk 400 years ago, wherever the source of the information is, the bottleneck is still in our brain. And the major issue, and this is the biggest problem of all of this, is that we're often completely unaware

### [17:08](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1028s) The Deceptive Helpfulness of AI

of the fact that we are becoming dependent. And there are two questions that I asked in that survey that for me were not surprising at all, but very concerning. The first question was just, "How helpful do you think AI is for your learning?" It was rated on a five-point scale. The median answer across several hundred responses was four. 63% of people rated either four or five: helpful or very helpful. Compare that with only 8% voting one or two out of five, which is not at all helpful or not helpful. In the question after that, I said that sometimes something can feel helpful even if it isn't actually helping us. So when we think about using AI for learning, well, the outcomes we want with learning are better retention, being able to understand something, and being able to apply it the way that we need to apply it. These are the outcomes we are learning for. And there are a lot of pseudo-outcomes that feel good but don't actually matter. When you're studying conventionally with a textbook, the number of pages you get through, the pages of content you cover in a day, that feels very productive, that feels important. Doesn't matter. Unless that translates to the equivalent amount of information you retain, understand at the right level, and can apply the way you need to. And it's the same thing with AI. You might cover a lot of content, ask a lot of questions, understand a lot of the explanations that it gave, but then at the end of the day, do you remember that? Is your knowledge actually deep, and can you use that knowledge in the way you need to? So the second question was asking about the outcomes. How helpful is AI when you actually think about the outcomes that are meaningful? And just by asking that question, the survey results changed dramatically. So on that same five-point scale, the number of people rating it as five out of five, very helpful, halved for both students and professionals. The number of people rating it as neutral went up.
For professionals, up by 100%. So it doubled. For the people rating it one or two out of five, among the student cohort, that tripled. So it went from 39 people rating it one or two out of five up to 120. And it more than doubled for professionals, going from eight to 18.
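The raw counts above (39 to 120 for students, 8 to 18 for professionals) are from the video; the arithmetic below just checks that the "tripled" and "more than doubled" descriptions match them.

```python
# Reported counts of people rating AI 1-2 out of 5 for helpfulness,
# before vs after the question was reframed around meaningful outcomes.
student_before, student_after = 39, 120
prof_before, prof_after = 8, 18

student_factor = student_after / student_before  # roughly 3.1x: "tripled"
prof_factor = prof_after / prof_before           # 2.25x: "more than doubled"

print(f"students: x{student_factor:.2f}, professionals: x{prof_factor:.2f}")
```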

### [19:34](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1174s) Professionals vs. Students in AI Use for Learning

Now one thing I will quickly note here is that professionals generally did find AI usage to be more helpful than students for learning. Probably the reason this is the case is that professionals have a very high amount of task-reactive learning that they need to do. What this means is that someone gives them a project, a task that they need to do, and they need to learn just enough to complete the task. It doesn't really matter that they build great expertise. They just need to deliver on an outcome. And so this type of task-reactive learning is really well suited for LLMs because, again, you're not usually working with information that's at a high level of complexity. You often just need to learn enough to get the job done. You're operating at low risk, and it's saving you a huge amount of time to do that. And so task-reactive learning is something that AI is really well suited for. Students, on the other hand, don't really have a lot of task-reactive learning. A lot of their learning is about having knowledge in their heads to be able to use and apply and remember. But ultimately, when we think about how useful AI actually is for achieving the outcomes that we actually need, it is objectively only a third to half as helpful as we feel like it is. And that is the issue. That is the thing that creates over-reliance. And there are two ways I want you to think about over-reliance, because over-reliance isn't always a bad thing. There is productive over-reliance and then there is non-productive over-reliance. Productive over-reliance is when I'm relying on something that I don't technically need, but it's saving me time or giving me some other benefit. I rely on my phone to communicate with people. I could send them a letter. I could go to my computer and write them an email. I could bike or drive over to their house and then shout through their window. I rely on my calculator for doing arithmetic.
I don't hold on to random bits of facts and information that I don't need to because I can look them up. These are all examples of reliance. And you could say that it's over-reliance, because if I don't have access to my phone, my ability to communicate with other people goes down a lot. I certainly can't do as much arithmetic without my calculator. And the body of knowledge that I have access to, if I can't search anything on the internet, goes down by like 99.9999999%. But is that an over-reliance that bothers us? Probably not, because in achieving the outcomes we need to, it is actually a benefit. This contrasts with non-productive over-reliance, which is the category that AI use starts falling into.

### [22:20](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1340s) Non-Productive Over-reliance Explained

Non-productive over-reliance is when we are relying on something to give us a certain outcome, but it doesn't. In the learning space, a great example of classic non-productive over-reliance is relying on other people's notes, or writing notes in a certain way, or writing notes through a certain software. And then we feel like if we don't have our software, or this person's notes, whatever tool it is, we're robbed of our ability to learn effectively. This is an example of non-productive over-reliance because relying on those tools probably doesn't actually produce any benefit in the first place. But what it does do is provide a benefit to metrics that don't matter. So if your metric of success in learning is how neat your notes look or how many notes you can write down in an hour, then using your favorite software that allows you to type things much faster and auto-summarize things and, you know, highlight different things in different colors, that could be the solution to that. But it's non-productive because it doesn't translate to the outcomes that we actually needed, which were retention, depth, and application. And so usually

### [23:33](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1413s) The Problem with Unclear Learning Metrics

the situations where we find ourselves going into non-productive over-reliance are when the metrics of success are not clear. For something like learning, the metrics of success are hard to measure. It's hard to measure your retention. You actually have to wait and then test yourself. It's hard to measure the depth of understanding. To measure your depth of understanding, you have to try to apply the information at the level of depth and the level of interconnectivity that you need. A lot of the time, it's hard to apply your information in the first place unless you're being examined on it. If you need to learn something and just use it at work to solve problems and make decisions, it's hard to simulate those types of challenges very accurately. And because these metrics are hard to measure and not very clear, and we're also not used to thinking about this, it is more effortful to track our progress on them. So instead, we use other metrics that are easier to track and that feel better, like content covered, or how long it takes to get through a certain number of pages. And so the trick here is that we have to protect ourselves against going into non-productive over-reliance. And part of that is just recognizing the difference between a productive versus a non-productive metric. This allows us to make a more accurate judgment about how useful something actually is. And by the way, if you're interested in having a look through the results of that survey, there are some other findings in there that aren't in this video, if you just want to have a deeper look into it. I've also got a full article that I've written up going through the findings and the key insights, which you might be interested in. And I've also summarized some of those in my newsletter as well. So if you haven't joined my newsletter, it's a free weekly newsletter, and this article will be in there as well. You can sign up to the newsletter in the link below.
I'll also leave a link to the full article if you want to check out the data for yourself. So just being aware of the difference between these different types of metrics is going to help you to avoid falling into this trap. Now, there is another way for you to

### [25:34](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1534s) Avoiding Non-Productive Over-reliance

avoid becoming over-reliant on AI. And this is probably the most valuable. This is the strategy that, if you lean into it, will not only help you avoid over-reliance on AI; you will be able to use AI in a way that other people can't. It allows you to have a competitive advantage through using AI, as opposed to just keeping up with everyone else who's using AI. And to show you the strategy, I need to teach you just a little bit more about how learning works and how thinking works. But I'll keep it brief. When we think about productive

### [26:05](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1565s) The Value of Human Brain vs. AI

versus non-productive over-reliance, one of the big questions here is how do we know what is going to be useful and relevant 5 years from now, 10 years from now? When the calculator first came out, you know, people said that you shouldn't rely on it too much because it stops you from being able to do arithmetic in your brain. And that's actually true. But as it has played out, there are just really not a lot of situations where I'm far from a calculator and also need to be doing advanced arithmetic mentally. And so how do we know that the way that we're using AI is actually going to be harmful? Maybe the secret is just to be able to use AI better and faster than everyone else; then you can just bypass the whole need to remember things and understand things and apply that knowledge. Like, why do you even need to do that if the AI can do it for you and you can figure out how to make AI do it for you? That's actually a very valid line of reasoning. And to understand the answer to that, we have to understand a little

### [27:06](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1626s) Understanding LLM Capabilities (Probability vs. Conceptual Understanding)

bit about how the human brain works versus how AI works. The first thing is, as I mentioned, the way that an AI works, and specifically a large language model, because AI is actually a huge field with many different strands, but what we're talking about right now is LLMs. LLMs are the thing that has taken the world by storm; most of the hype around AI is really about large language models. As I mentioned, they mostly work off probability. What they don't have is a conceptual understanding of information, and they don't really have a sense of reasoning per se. The reasoning is very, very basic; they don't have a great deal of problem-solving and reasoning capability. This is a limitation of the current technology: the current architecture is actually far from being able to do this really well. There's some interesting work coming out on incorporating knowledge graphs and similar techniques, but it's still really, really far from matching a human. What we're talking about would be a major new breakthrough, on the same level that LLMs and ChatGPT were to begin with. Actually, maybe a couple of levels beyond that, for it to reach a conceptual reasoning ability equivalent to a human's. And, and I might be wrong on this, but from my conversations with AI experts and my understanding of the industry, this is not going to happen for years. What will happen is that LLMs will get better at what they're already good at. They're already great at working with a very high volume of input, high amounts of information. They're really good at finding trends in that information, and at recalling from that high volume of input in a way that's contextual. They'll get faster, cheaper, and hopefully more environmentally friendly at doing this over time.
And what that means is that the LLM contributes a certain value: its ability to work with high volumes of information, recall that kind of material, and do that very basic level of knowledge application. Which therefore means that value is already occupied by AI. As a human, it's not valuable for you to be able to do that. You know those videos of people who can do mental maths really, really fast, with someone next to them on a calculator, doing four-digit multiplication faster than the calculator can keep up? Extremely impressive. A superhuman ability. Not practically or commercially that useful. And it's the same thing: if you graduate from university and all you can do is recall lots of facts and do basic knowledge application, that's not going to be very useful, because the AI can do it a million times faster

### [30:01](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1801s) Where Human Value Concentrates: Beyond Basic Application

and cheaper than you. So where is the gap? The gap is the things that AI is going to struggle with, which is this here. This is where human value is going to concentrate. If you can't do this, it's going to be hard to get a job or progress through your job. And more and more, your employers are going to expect that you can do this, because why would they want you to just do the same thing the AI is doing? They don't need that from you. So then we think about: what is it that a human does that allows them to reason, to think conceptually, and to put complex things together into a big picture? What allows a human to do that better, and how can you get better at it? And this is how we avoid that over-reliance: we become aware of the processes of thinking that are actually productive, that help us to do this stuff, and we get better at them. We deliberately don't rely on AI to do this

### [30:56](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1856s) Human Thinking Processes: Bloom's Taxonomy

type of thing for us. Number one, because it sucks at it. But number two, because we should be getting better at it. A simple way to think about what those processes are, so that you can use this as your mental checklist (and this is the same mental checklist that I use), is to use the top three levels of something called Bloom's taxonomy. Really quickly, and I've got other videos going into this in more detail if you want to check them out: there are six levels of thinking that were identified by Bloom and later revised by other researchers. The bottom level is called memorize. Memorizing is just trying to read something again and again to stick it into your head. Very low level, not very effective, very passive, pretty much a waste of time. This is not a process you should really be using. The next level is called understand. Understand is literally just trying to understand what you're reading or listening to. Probably right now, most people, as you are listening to me, you are just trying to understand what I'm saying. This is also not a very effective process. And the reason is that your ability to retain information

### [32:00](https://www.youtube.com/watch?v=4gQIAXjraLo&t=1920s) Memorize and Understand (Low-Level Thinking)

and your ability to understand information are outcomes, right? If these are the two outcomes, remembering something and understanding something, these outcomes do not come about as a result of using the processes memorize and understand. That's confusing, and this is the part that really trips people up: trying to memorize something and trying to understand something are not processes that are effective at generating memory and understanding. It's probably better to use different words to completely dissociate them. So let's call this process memorizing, and instead of understanding, let's call the process comprehending, maybe. (Sorry, Bloom. Rest in peace; I'm changing your pyramid.) If we call the process comprehending, and we call the outcomes retention and understanding, then we can see that memorizing does not lead to retention, and comprehending does not lead to understanding. What does lead to understanding and retention are actually the higher levels above. In the middle, we have one called apply. Apply is when you're using your information to solve problems and execute on things. But there are actually many levels of application. You can apply things in a very simple way, a one-to-one relationship: I learned this thing, and I apply it exactly for that singular purpose. Or you can apply things in a very complex way, bringing this piece in and combining it with ten other things to solve a very intricate problem. So even though the word just says apply, this level is technically only for simple application, that one-to-one, very direct application. This also has limited usefulness; you don't really want to spend a lot of time doing this kind of thinking. These three levels are fair game for AI.

### [34:00](https://www.youtube.com/watch?v=4gQIAXjraLo&t=2040s) AI's Role in Low-Level Thinking

If all you're trying to do is wrap your head around something, paraphrase something so you can comprehend it a little more quickly, or apply one fact to something else without having to think about it too much, use an LLM for that. It's going to be good at it and faster at it. These are skills you don't need to develop. You don't have to be good at this; we're entering a world where AI is just going to do that for you, and your ability to do it is not going to be important. But the levels above this, this is the stuff that, if you try to get an LLM to do it, it's not going to give you a good answer. It's actually going to be worse than a human trying to do this. And for that reason, this is valuable for you to get good at. So, the level

### [34:39](https://www.youtube.com/watch?v=4gQIAXjraLo&t=2079s) Analyze (Higher-Order Thinking)

above this, we call analyze. Analyze, at the end of the day, is about looking for similarities and differences. It's about comparing. It's about taking two things and saying: in what ways are they similar or different? And you're finding all the different types of similarities and differences. So, if I take this mug, which is shaped like a camera lens (which I really like), and this Apple Pencil, and I ask how these two things are similar and different, there are lots of similarities and differences. Just because I found that the shape of these is different but the temperature is the same, those are not the only ones. Being good at analyzing means you're able to find many different types, many different categories, of differences and similarities. What this does mentally is allow your brain to create relationships between different items and different pieces of information. So when you're reading a sentence on a page, don't just try to comprehend it. Think about: how is that information different from or similar to this other paragraph, or to the same concept explained somewhere else? Or how is it similar or different to what I already know? Doing that takes a little longer, but the outcomes that actually matter, your retention, your depth of understanding, and your ability to apply it, will grow. That's where the learning actually happens. It doesn't happen from understanding what you read; it happens from thinking about how it relates. That is the source of the learning. The next

### [36:17](https://www.youtube.com/watch?v=4gQIAXjraLo&t=2177s) Evaluate (Critical Thinking and Prioritization)

step above this is evaluate. Evaluate is when you not only recognize that there are similarities and differences, but you prioritize them. So yes, these two things are similar and related in this way, but how important is that relationship? How important is this similarity or this difference? And that depends on context: for one application it may be really important, while for another it's totally meaningless. This process of critiquing value, prioritizing, and making judgments about how important different things are and how they fit together is where we start getting into a special level of thinking and learning. This is where you can start solving really complex problems and engaging in those deep discussions. When you're used to thinking this way and someone says something to you, you're thinking: how is that different from what I already know? You recognize the difference and say: okay, the implication of that difference means it can influence this, and this, and this. So that is a very important difference, whereas this one is not. That leads you to ask really good questions and to understand how things connect together more deeply than the people around you. This leads to better retention, problem-solving, and deeper understanding. And as a result, again, it's more time spent thinking. This is why, when you look at a top learner, they spend a lot of time just thinking, asking questions, going back and forth, exploring thoughts, and less time mindlessly consuming information.

### [37:57](https://www.youtube.com/watch?v=4gQIAXjraLo&t=2277s) Create (Synthesis and Novel Solutions)

And so, that's evaluate. The final level, level six, is create. This is where you take your knowledge and hypothesize something new. You're synthesizing something new. You're creating a new, novel, original plan, strategy, solution, or design for this particular problem or project. It's not about learning anything else anymore; it's about using what you know to bring it together and synthesize it. Maybe to do that well you need to learn something, but that's a natural part of your primary purpose, which is to bring it all together. So, when you do this,

### [38:29](https://www.youtube.com/watch?v=4gQIAXjraLo&t=2309s) Why Humans Must Develop Higher-Order Thinking

when you operate in these top three levels, these are the things that AI is really bad at. So bad that I don't even bother trying to get it to do this. Yes, it will output something, but it's just not very high quality. I have never, and granted, AI hasn't been around for that long, but over the last few years I've never seen a single example where an AI has been able to output something at this higher order at a better quality than a skilled human. You want to be that skilled human. Those processes I just explained, I didn't explain them for fun. The reason I explained them is that they become your mental checklist. Whenever you're doing any kind of learning or problem-solving and you're taking in new information, ask yourself: what part of this is hard? Is it hard because there's just a lot of information, a lot of inputs, and I want to summarize it? If the answer is yes, feel free: use AI. That doesn't require any of these top processes. But if what you're struggling with is bringing it together, comparing the differences, figuring out what is more or less important, synthesizing a map, get used to doing that yourself. Don't offload that onto AI. Save time with everything else, feel free. If it's just tedious, monotonous work that doesn't take a lot of mental effort, save that time. But when it comes time to use your brain and think about things deeply, don't shy away from it. Because every time you decide to offload that to AI, you are robbing yourself of an opportunity to get better at that skill. And that is career self-sabotage. So, that is my guide on how you can use AI

### [40:15](https://www.youtube.com/watch?v=4gQIAXjraLo&t=2415s) Conclusion: Strategic AI Use for Effective Learning

to learn effectively. If, again, you want to check out the full report with some other insights from the survey I've done, the link to the article is in the description. If you want to sign up for the newsletter, feel free to do that as well. And if you want to learn more about these higher levels of thinking and get really good at them, you may want to check out this video, where I go into the topic of higher-order learning in much more detail. Thank you so much for watching. I hope this helped, and I'll see you in the next one.

---
*Source: https://ekstraktznaniy.ru/video/15319*