Join my Learning Drops newsletter (free): https://go.icanstudy.com/newsletter-careerfutureproofai
In this video, I will teach you how to protect your career from AI displacement by strategically positioning yourself to learn the skills that AI will replace last.
Take my Learning Diagnostic Quiz (free): https://go.icanstudy.com/diagnostic-careerfutureproofai
=== Guided Training Program ===
I’ve distilled my 13 years of experience as a learning coach into a step-by-step learning skills program.
If you want to be able to master new knowledge and skills in half the time, check out: https://go.icanstudy.com/program-careerfutureproofai
=== About Dr Justin Sung ===
Dr. Justin Sung is a world-renowned expert in self-regulated learning, a certified teacher, a research author, and a former medical doctor. He has guest lectured on learning skills at Monash University for Master’s and PhD students in Education and Medicine. Over the past decade, he has empowered tens of thousands of learners worldwide to dramatically improve their academic performance, learning efficiency, and motivation.
Table of Contents (5 segments)
Segment 1 (00:00 - 05:00)
Is AI going to take your job? The short answer is probably yes. This might be a controversial take, but we live in a pretty capitalistic society, and if a business sees that AI can do your job while saving 90% of the cost, there's a pretty good chance that AI is going to take that job. And as humans who need to work and live, that kind of sucks. But the unfortunate reality is that the question of whether AI will take your job is less a matter of "will it" and more "when will it." And with how quickly AI seems to be developing, that can feel scarily soon. But that doesn't mean you have no options. There are ways you can future-proof your career against AI displacement for at least the next decade. As a learning coach who works with thousands of professionals, I get to have conversations with everyone facing this challenge. I'm talking with new graduates, with managers, with CEOs, with recruiters, with AI experts, even AI researchers. And what I've noticed is that a lot of people who are worried about AI taking over their job are not thinking about AI in the right way, and a lot of the things people are doing to try to future-proof their career might make their situation even worse. So in this video, I want to share some of the insights I've gained from those conversations and how you should think about your career and AI to actually future-proof yourself.

Now, the first critical perspective that I've noticed every AI expert and industry leader talk about is that you have to think about this in terms of trajectories. There's a saying that goes: you want to skate to where the puck is headed, not where it is right now. By the time you get to where the puck is right now, the puck will have moved on, you've missed it, and you're constantly playing catch-up. AI is the puck right now. And what I'm seeing in these conversations is that a lot of people who are really worried about their career and AI are spending a lot of effort trying to get to where the puck is right now, and not enough time stepping back to look at the trajectory. Now, the reason I bring up this idea of trajectory is that your ability to position yourself where the puck is going to be, and therefore to future-proof yourself, depends on how accurately you can foresee that trajectory. If you think the puck is heading in one direction and you position yourself accordingly, and you're wrong, then you have not future-proofed yourself. And what I see in a lot of these conversations is that people who are not deeply immersed in AI, or in the career and employability space, or in their industry, think the trajectory is heading in a certain direction based on the hype. I call this the hype trajectory. And I've noticed that a lot of people are buying into this hype trajectory, so that position is starting to get pretty crowded. Now, when I talk to AI experts and industry leaders, the people who are actually making the decisions in their companies about whose jobs will stay versus be replaced and why, and I ask them about their decision-making process, the trajectory they see is different. They're thinking about this differently. I call this the reality trajectory, because these are the people who are actually making the decisions about what's happening with your job.
And there are not a lot of people trying to position themselves where the puck is probably really headed. So what I think is really helpful, if you are worried about your own job and how to future-proof it, is this: if I can show you how these people are actually thinking about AI, and you can think like the managers and the CEOs and the industry leaders, then you can position yourself and your future more safely. There are three major differences between how people on this hype trajectory think about AI and how I see industry leaders thinking about it. Those three things are: number one, thinking about the technology; number two, thinking about the complexity; and number three, thinking about the exposure. So the first one is thinking about the technology. Thinking about technology is not the same thing as thinking about capability. A great example is generative images and generative videos. When generative images first appeared on the scene, people were saying, well, this is never going to replace my job, because look at how bad it is at creating faces or hands or other complex objects. It doesn't look realistic at all. That is not sound reasoning. And we know that now, because one or two years later the capability has increased massively. So this is the difference between thinking about technology versus capability. Technology is what's fundamental to the way the capability works. Capability is just its current level. It was very difficult for AI to be good at generating accurate
Segment 2 (05:00 - 10:00)
human faces to begin with, because human faces are very complicated and humans are very sensitive to whether a face looks right or wrong. But there was no reason, from a technology perspective, that it wouldn't be able to bridge that gap eventually. And now it's almost indistinguishable. So when experts and industry leaders look at an AI tool, they ask themselves: is this a temporary limitation that's just in the current capability, or is this a fundamental limitation inherent to the technology itself, the underlying architecture? Because if it's just a temporary limitation, then it's most likely going to be solved in the coming one, two, three years. Whereas if it's a fundamental limitation, overcoming it would require a significant advancement beyond the current technology. It's not about improving the machine; a fundamental limitation means that this machine, as designed, will never be able to achieve that. And so a litmus test is to ask yourself: do I feel that my job right now is safe only because of a temporary limitation, or is it actually a fundamental limitation? It does mean you have to spend a bit of time actually learning about the AI technology. And if you really are worried, I think that's something you should be doing, instead of just vaguely thinking, hey, one day AI will get better and it will be able to do this thing. Actually take some time to learn how the AI works and what gives it its current capability. And when you do that, one of the first things you'll probably learn about is hallucinations, and that hallucinations are a fundamental limitation. When I say hallucinate, I'm talking about when you ask a question to an LLM like ChatGPT or Claude or Gemini, and it outputs some text that is just made up. It's not based on reality. It's not factually correct. It's just synthesizing words to answer your question. That's what we call hallucination. And a lot of people are saying, hey, this is the prompt you should use to reduce hallucination; if you do it this way, hallucination will disappear; imagine what AI will be able to do when it's not hallucinating anymore. That's not really an accurate way of viewing the situation. And the reason becomes clear when you learn a little bit about the transformer architecture that sits beneath a large language model and powers it: hallucination is not a bug. Hallucination is the model doing exactly what it is meant to do. A large language model like GPT has no concept of truth or reality. What it has is training data, a lot of training data, and a model, an algorithm, that produces probabilities. So when you ask a large language model a question, it analyzes that query, breaks it down into tokens, and builds a probability distribution. It's basically mathematically interpreting your question through a series of probabilities, and it generates the words that, based on the patterns in its training data, have the highest probability of following your query. Every word is generated based on probability. That's how the technology works. It's built on probability. So you can't actually remove hallucination. Removing hallucination would require this transformer architecture to somehow have a concept of what reality or truth actually is. There is no way for the current technology to enable that.
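To make that concrete, here is a minimal, hypothetical sketch of the idea in Python. The vocabulary and probabilities are made up purely for illustration; a real LLM uses learned weights over tens of thousands of tokens, but the core loop is the same shape: score the candidate next tokens, turn the scores into probabilities, and sample. Notice that nowhere in this loop is there any check against reality, which is why hallucination is the mechanism working as designed rather than a bug.

```python
import math
import random

# Toy next-token model: maps a context to raw scores (logits) for each
# candidate next token. In a real LLM these scores come from a trained
# transformer; here they are hard-coded purely for illustration.
toy_logits = {
    "The capital of France is": {"Paris": 9.0, "Lyon": 4.0, "purple": 0.5},
    "The capital of Wakanda is": {"Birnin": 2.1, "Paris": 2.0, "unknown": 1.9},
}

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def next_token(context):
    """Sample the next token purely by probability -- no notion of truth."""
    probs = softmax(toy_logits[context])
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Well covered by training data: "Paris" dominates, so output is reliable.
print(next_token("The capital of France is"))

# Poorly covered / fictional: the probabilities are nearly flat, yet the
# model still confidently emits *something* -- that is a hallucination.
print(next_token("The capital of Wakanda is"))
```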
So that is a fundamental limitation in the current technology. Hallucination will continue to exist as long as we use this technology. And for you, as someone thinking about your own career and the role of AI versus yourself, this becomes a very useful thing to understand, because now you can ask yourself: in what situations is it more likely to hallucinate more often? It's these types of questions that I notice AI experts and industry leaders asking more often. And if you follow that line of questioning, one of the major things you get to is this idea of complexity. One of the current limitations of existing AI models, especially large language models like ChatGPT, Claude, Gemini, and NotebookLM, is, like I said before, that they only have training data and probability. Anytime you're trying to use an LLM to generate a response to something where there's unlikely to be great training data, it's more likely to hallucinate. And it actually doesn't take a lot of complexity to start reducing that reliability and accuracy by a lot. So anytime you're trying to do really high-context reasoning or decision-making (high-context meaning there are lots of factors and these factors influence each other in complex ways), or anytime you're trying to apply that knowledge in really nuanced, personalized situations, or if it's a very new, emerging field, or if there aren't many high-quality published resources about it, or if there are no existing best-practice guidelines: if what you're trying to use this large language model for is one of these situations, the reliability is not going
Segment 3 (10:00 - 15:00)
to be good. So let's say that you are a business owner trying to create a marketing strategy for a new product you've created, and you give it all the information you think is necessary: what the product is, what problem it's solving, who it's solving it for, the geography, who your competitors are, the tonality you want for your marketing campaign. This is a common situation that business owners find themselves in all the time. A large language model is actually not going to perform very well with that many types of context to consider simultaneously, especially if you yourself do not have the expertise to look at the output and evaluate whether it's good or not. If you don't have the expertise, it's going to look good, because it's still better than you if you know nothing. But give that to an expert, and the expert will find holes in it everywhere. That's also the reason why an expert is going to get better results using AI than someone without expertise: they can look at that response and modify it, adapt it, continue to refine it. Or they can say, "Hey, the output the AI is giving me is not good enough, but it's good enough to use as a springboard. I'll use this as a template, and then I'll apply my own expertise on top of it. It saves me 80% of the time I'd spend on the tedious writing, and now I can apply my expertise in the way that is most valuable." Another example: let's say you're a software developer trying to develop an application for a client with all these different product requirements. The AI is not going to do a great job at creating a really cohesive solution design and architecture. And again, even if it looks good to the untrained eye, you're going to need someone with real expertise to tell you whether it actually makes sense or not. The thing is, these types of situations that are really contextually detailed and specific, that require a lot of higher-order thinking and deep problem solving, are areas of weakness that are fundamentally limited by the way large language models are designed. And so another litmus test you can do for your own career safety is to ask yourself: how much contextually nuanced, deep problem solving am I doing? How much of my work requires me to do things where there are no widely published best practices? If the answer is a lot, then your job is relatively safer from AI displacement. And if the answer is not a lot, then you should be thinking: how do I position myself to be doing more of that kind of work? How do I upskill myself? How do I improve my own knowledge to get to a point where I'm doing that kind of work? Because that is the type of work that is safer. And by the way, if you're feeling like your current set of knowledge and skills is becoming outdated very quickly, and you're worried about how you're going to upskill yourself fast enough, I recommend checking out my free newsletter, where I teach you how to upskill and learn things much more quickly. I asked myself the question: what would someone need to know to be able to master new, complex knowledge and skills in half the time they normally take? I try to condense the most important points and principles into these free weekly newsletters. So, if you're interested in that, you can sign up.
I'll leave a link for you in the description below. So, let's say AI gets better and hallucination goes down. Now we're at a point where we're, say, 95% sure that the response it gives us is solid. At this point, surely every job is just getting replaced by AI? Well, when you talk to business owners who are dealing with millions of dollars and deciding whether to pay a human to do something or get AI to do it instead, that's not how they're thinking. The way they're thinking about this is in terms of risk exposure, which is the third point. So, let's say you have an AI generating something that you have 95% confidence is going to be correct. Based on the information it gives you, you're going to make a series of decisions, and those decisions could either make you $100 million or lose you $10 million. The decision you have to make right now is: how much am I willing to pay an expert to turn this 95% into 99.5%? For 4.5% extra certainty and expertise, how much is that worth? Now, let's say hiring the expert to look at this costs me $200,000. If paying $200,000 potentially protects me from a 5% risk of losing $10 million, am I going to pay that $200,000 or not? It's a no-brainer. And so this is an interesting trend I've been seeing over the last few years of talking with business owners and CEOs: the equation of how much value someone brings to their company is changing. It used to be about how much work they can do, what their output is, and of course the quality of their work. Now it's shifting towards how much better this person is than using AI, and how much risk the business is exposed to if AI only hits the mark 90-95% of the time. And so this becomes another litmus test for the future safety of your career: how big a risk is each percentage of certainty worth?
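To put rough numbers on that reasoning, here is a minimal expected-value sketch in Python. The figures (a $10 million downside, 5% versus 0.5% error rates, a $200,000 expert fee) are the illustrative ones from above, not data from any real business; the point is only to show how the comparison is framed.

```python
# Illustrative expected-value comparison using the hypothetical figures above.
# None of these numbers are real data; they only show the decision framing.

downside = 10_000_000   # potential loss if the AI-informed decision is wrong
expert_fee = 200_000    # cost of an expert reviewing the AI's output

p_wrong_ai = 0.05        # AI alone: ~95% confidence -> 5% chance of error
p_wrong_expert = 0.005   # with expert review: ~99.5% confidence

expected_loss_ai = p_wrong_ai * downside          # $500,000
expected_loss_expert = p_wrong_expert * downside  # $50,000

# The expert is worth hiring if the expected loss they prevent
# exceeds what they cost.
risk_reduced = expected_loss_ai - expected_loss_expert   # $450,000
print(f"Expected loss, AI alone:    ${expected_loss_ai:,.0f}")
print(f"Expected loss, with expert: ${expected_loss_expert:,.0f}")
print(f"Risk reduction:             ${risk_reduced:,.0f} vs fee of ${expert_fee:,}")
# $450,000 of risk removed for a $200,000 fee -- the "no-brainer" above.
```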
Segment 4 (15:00 - 20:00)
As a former medical doctor, I'm really interested in keeping tabs on how AI and medicine are interfacing, and one of the areas that's really promising is medical imaging. So with X-rays, CT scans, MRIs, a lot of time in hospitals goes to radiologists, who have years and years of training and thousands of cases of experience, interpreting the images to say: is there a disease present or not? Does this person have cancer, yes or no? And AI right now is getting to a point where it's really, really good at detecting that someone is normal. But when it comes to "is there a cancer, yes or no," it still gets that wrong sometimes. The interesting thing is that in a lot of businesses, AI at that level of accuracy would probably just take the job, because who cares if it gets it wrong every now and again. But because the risk exposure and what's at stake are so high in healthcare, AI is not being used to interpret images and detect cancer until it's almost at 100%. So when you put all three of these concepts together: thinking about the technology (where it is right now versus where it's going to get to, and whether a limitation is temporary or fundamental), thinking about the complexity of the work you're doing in your career, and thinking about the risk exposure and the stakes at play, then you can start to measure the future safety of your career through one concept, which I call the threshold of valuable expertise. This threshold of valuable expertise, I think, is a useful way to think about your own career and how big a risk AI displacement is for you. So let's say you have a graph: on the y-axis we have level of expertise, and on the x-axis we have time. The threshold of valuable expertise is the amount of expertise, the amount of knowledge and skills, you need to be competitive in the workplace. If you reach this threshold, you can get a job; you can have a good career. If you don't reach this threshold, you'll struggle. Now, if we go back in time, hundreds of years, let's say to the year 1200 AD, the threshold of valuable expertise is fairly low. If you were born into a rich family and had access to a personal library of books, that was incredibly valuable, because books were not things the common people owned. And what books represent is access to valuable information, and to something very few species on this earth have: generational knowledge. Humanity builds and accumulates knowledge as it gets older, and this knowledge is distilled into books, much like how my knowledge is distilled into my newsletters. Anyway, back in 1200 AD, having access to these books was very valuable. And so that may have been the entire threshold of valuable expertise: just having access to books. Now, if we fast forward a few hundred years, let's say to the 1600s: the printing press was invented around 1450, the mid-1400s, and now these books, which used to be rare commodities that were meticulously hand-transcribed, are produced in the thousands or tens of thousands. So, of course, supply and demand: as books become more readily available, the value of books goes down. They become cheaper. And not only does the value of books go down, the value of what books represent goes down. It's not enough just to have access to information. And so, as a result, the threshold of expertise goes up. It's not good enough to just have access to books anymore.
Maybe you need access to the right books, or you need to be able to navigate them and find the right information quickly enough, or you have to know enough about the topic from memory so that you don't have to spend hours reading through your books to arrive at certain facts. So what we're talking about here is the idea of having internalized knowledge, and to build this internalized knowledge, education becomes increasingly valuable. Going to school, going to university, these things represent that you have knowledge internalized, and that creates significant value in the workforce. But now let's fast forward even more. Now we're in 2010. At this point, the internet is a thing. You can Google search basically everything. Again, the value of information is going down, and in fact, the value of even having internalized knowledge is going down, because it matters less whether you remember things. It's so fast to find every piece of information you need that you don't even need to have it in your memory. At this point, people are questioning: do I even need to go to university? What is it really doing for me? Having a degree does not guarantee a job anymore; in fact, having a degree almost seems to be just a prerequisite to send in a CV. And so what this means is that the threshold of valuable expertise has gone up again, quite significantly. Not only do you now need to know things, and having access to information is just a given at this point, you also need to know how
Segment 5 (20:00 - 23:00)
to use that information. You need experience and wisdom. You need to know how to think with the information that you have: critical thinking, proactiveness, resourcefulness. And so when we look at AI, we can think of this as where the puck is moving towards. With the advent of AI, it's kind of the same thing. So here we are in 2025. Now the value of having access to information means pretty much nothing, and the value of having memorized things is, in most situations, not very high. So what is valuable? What's valuable now are the things that are not easy, the things that are rare. And going back to those core limitations, the things that are rare are when people can work with high levels of complexity: they can handle context with multiple interacting factors, and they can do higher-order thinking and problem solving. And it's not that these things are suddenly valuable; these were always the things behind the most promising, highly paying careers. The difference is that because this is now the stuff AI struggles to do, the threshold of valuable expertise has gone up again. And so to be competitive in the workforce of the future, you have to be able to do the things that AI can't do, because everything below that falls below the threshold of valuable expertise. And the stuff above the threshold of valuable expertise is exactly this stuff. This is where the puck has been moving for a long time. And so with AI coming along, yes, it's a huge change to the workforce. And again, it kind of sucks for the average person who is just trying to make a living and not lose their job. But again, capitalism is a heck of a force. So if you are trying to future-proof your career, I would say the safest way to think about it is: how do I bring myself to a position where I'm doing this kind of work? This is where the puck is moving towards 5 to 10 years from now, when AI can do all the simple things. These are the things that AI is still going to struggle with, and these are the things that, for a business, it makes financial sense to pay someone to do, to get that level of certainty. You do not need to become a machine learning engineer. You do not have to become an expert in AI. You do not need to be able to use AI better than everyone else. Jumping onto the AI bandwagon and getting AI to do more of what you're doing today is not the winning solution. If you learn to use AI so well that it can do everything you do every single day, or even better, why would someone pay you to do it? That is an example of the crowded positioning. That is riding the hype trajectory in the wrong direction. Instead, think about the parts of your job that are the most nuanced and complicated, that have the most factors at play, where you're working in situations with no clear best-practice guidelines. And if you really want to take this seriously, what I would recommend is that you try to upskill yourself to get higher and higher up that complexity ladder. Get better at working in those complex situations, and then take on more responsibility for situations with bigger stakes and bigger consequences. Because your ability to protect against a major, complicated consequence is what is valuable. I am not pro-AI or anti-AI. I'm not an AI influencer. I just don't want you to lose your job to AI. However, it does seem like it's going to be inevitable for a lot of people, and so I hope this video can help some of you with that risk.
And if you want to learn how you can upskill yourself faster, how you can gain new knowledge and new skills in less time, then check out this video here where I break down the process of learning in a little bit more detail. Thank you so much for watching. I hope this helps and I'll see you in the next one.