# how to stay economically valuable from 2026-2029

## Metadata

- **Channel:** Nick Saraev
- **YouTube:** https://www.youtube.com/watch?v=4pAt0DP-x50
- **Date:** 03.02.2026
- **Duration:** 16:11
- **Views:** 19,147

## Description

🔥 Join Maker School & get customer #1 guaranteed: https://skool.com/makerschool/about
📚 Watch my NEW 2026 Claude Code course: https://www.youtube.com/watch?v=QoQBzR1NIqI
🎙️ Listen to my silly podcast: www.youtube.com/@stackedpod

📚 Free multi-hour courses
→ Claude Code (4hr full course): https://www.youtube.com/watch?v=QoQBzR1NIqI
→ Vibe Coding w/ Antigravity (6hr full course): https://www.youtube.com/watch?v=gcuR_-rzlDw
→ Agentic Workflows (6hr full course): https://www.youtube.com/watch?v=MxyRjL7NG18
→ N8N (6hr full course, 890K+ views): https://www.youtube.com/watch?v=2GZ2SNXWK-c

Summary ⤵️
Thoughts on being economically valuable in 2026.

My software, tools, & deals (some give me kickbacks—thank you!)
🚀 Instantly: https://link.nicksaraev.com/instantly-short
📧 Anymailfinder: https://link.nicksaraev.com/amf-short
🤖 Apify: https://console.apify.com/sign-up
🧑🏽‍💻 n8n: https://n8n.partnerlinks.io/h372ujv8cw80
📈 Rize: https://link.nicksaraev.com/rize-short (25% off with promo code NICK)

Follow me on other platforms 😈
📸 Instagram: https://www.instagram.com/nick_saraev
🕊️ Twitter/X: https://twitter.com/nicksaraev
🤙 Blog: https://nicksaraev.com

Why watch?
If this is your first view—hi, I’m Nick! TLDR: I spent six years building automated businesses with Make.com (most notably 1SecondCopy, a content company that hit 7 figures). Today a lot of people talk about automation, but I’ve noticed that very few have practical, real world success making money with it. So this channel is me chiming in and showing you what *real* systems that make *real* revenue look like.

Hopefully I can help you improve your business, and in doing so, the rest of your life 🙏

Like, subscribe, and leave me a comment if you have a specific request! Thanks.

Chapters
00:00 The Future of Work
01:14 Preparing Context for AI Models
04:23 Human in the Loop
08:02 Automation of Routine Tasks
11:29 The Taste Economy
14:54 The Future of AI and Taste

## Contents

### [0:00](https://www.youtube.com/watch?v=4pAt0DP-x50) The Future of Work

So, what is the future of work going to look like? Are there even going to be people producing economically valuable things? Are we just going to be meat batteries for Matrix-style supercomputers? Or is humanity going to churn out unsecured vibe-coded slop project after vibe-coded slop project until we have no API keys left? This video is meant to be an exploration of that topic and a short look into where I think our work is going in the near future. And to cut to the core of the onion right away, because that's what we do here: I believe the vast majority of the work that you, I, and most other knowledge workers are going to be doing over the next few years is all about preparing context for AI models. First, I want to show you this post by Balaji because, given that he's a fellow handsome and intelligent young man, he agrees with me. He says, "Much of any digital job is now preparing context for AI models." And he gave some examples, like organizing files and folders, naming everything correctly, introducing things in the right order, and only then asking the AI to do something in clear written English (most of the time), although there are some things here that I would disagree with. Now, to really understand this, there are a few prerequisites that make sense, and I'm going to cover them one by one.

### [1:14](https://www.youtube.com/watch?v=4pAt0DP-x50&t=74s) Preparing Context for AI Models

So the first is this concept of few-shot versus zero-shot instructions. Now, I've been working with AI since GPT-2, so I've seen all the evolutions of different prompting styles, different approaches, and even different economically valuable sorts of models. But one thing has remained consistent throughout that time, and that's how the quality of the instructions and examples you give a model correlates with the quality of its outputs. So here I have a brief little graph. On the left-hand side is accuracy: just how close the AI is to providing the output we want it to provide. At the bottom is the number of parameters in the large language model, in billions. And what you see is that as the number of parameters goes up, there's a clear and increasingly consistent divide between three lines. So what are these three lines? Well, the bottom green one is what's called zero-shot. The blue one is one-shot. And the orange one is few-shot. What are shots, and why is all of this stuff important? Well, a zero-shot prompt is when you tell an AI model to do something by just telling it to do something. "Hey, I want you to organize my emails and filter them based on whether they're spam or not spam." "Hi, I'd like you to generate me a high-quality, award-winning article. I want it to be in this journalistic tone of voice, and I want you to do X, Y, and Z." To make a long story short, zero-shot is just when you tell the model to do something. And that contrasts heavily with one-shot and few-shot, because with one-shot and few-shot, you not only tell the model to do the thing, you also give it examples of how to do the thing. And in reality, AI models, much like people, learn to do something a lot better when they are shown and not just told.
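To make that shown-not-told distinction concrete, here is a minimal sketch of how a zero-shot and a few-shot prompt differ when you assemble them as plain strings. The helper names and the spam-filtering task are illustrative, not from the video:

```python
def zero_shot(task: str) -> str:
    # Zero-shot: just tell the model what to do.
    return task + "\n"

def few_shot(task: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: tell the model what to do AND show it worked examples.
    parts = [task, ""]
    for i, (inp, out) in enumerate(examples, start=1):
        parts += [f"Example {i}:", f"Input: {inp}", f"Output: {out}", ""]
    parts.append("Now do the same for the next input.")
    return "\n".join(parts)

prompt = few_shot(
    "Classify each email as SPAM or NOT_SPAM.",
    [
        ("You've won a free cruise! Click here.", "SPAM"),
        ("Agenda for Tuesday's planning meeting attached.", "NOT_SPAM"),
    ],
)
```

The instructions are identical in both cases; the few-shot version simply appends demonstrations, which is where the accuracy gap in the graph comes from.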
And so in a one-shot prompt, as opposed to a zero-shot prompt, we tell the model: hey, I want you to write this fantastic high-quality piece of content; here are a bunch of instructions and, more importantly, here's an example of what I'm looking for. Instead of just trying to guess at this fuzzy cloud of human language (keep in mind that human language is not very precise, and we as human beings are even less precise in our ability to express ourselves, so there's a fair amount of variance there), we give it the thing that we want it to emulate. With few-shot, instead of providing just one example, we provide many. And so in this case K, the number of examples, is 15. What you see happen is that as we go from zero-shot to one-shot, there's somewhere around a 5% improvement in accuracy. As we go from one-shot to few-shot, there's something like a 10% improvement in accuracy. So the reason we're all pushing towards providing context to models is that the more high-quality examples we can provide a model, and the better the quality of those examples, the better, on average, the output. But that's not the only reason why I think this is so important. The second reason is this concept of human in the loop. The main competitive advantage of artificial intelligence

### [4:23](https://www.youtube.com/watch?v=4pAt0DP-x50&t=263s) Human in the Loop

right now is that it can produce an extraordinarily high quantity of pretty good work in a very short period of time. Nowadays I can generate 100 designs in one-tenth of the time it used to take me to create a single one. The issue is that, as of right now, there is variance in the quality of the outputs. For instance, if I'm generating, let's say, five designs, this design here might be pretty good, this one kind of crappy, this one over here okay. The issue is that despite the fact that we can generate, let's say, four of these designs in a quarter of the time it used to take us to do even one (a 16-times improvement in throughput), only half of these outputs are actually good. And the problem is that AI currently lacks the ability to determine why this one is worse than that one. So what that means for us is that the game-optimal strategy for using AI models nowadays is basically: one, generate lots of stuff (designs, code, products, and so on), and two, have a human do what we call QA, quality assurance. We basically generate 10,000 things and the human just picks the five best ones. As an example of that, here's a thumbnail generator that a friend of mine, Johan, put together for me, where you feed in a thumbnail and then just ask it to change some aspect of that thumbnail for you. So what I've done is ask it to create eight versions of this thumbnail. Going through here, this is more output than I realistically would have been able to produce in probably an hour, and I did this in maybe 5 or 10 seconds. So all I need to do is scroll through these, pick the one that I like the most, maybe this one, download it, and boom. What I'm really doing there, if you think about it, is collapsing the total space of all possible generations down into just the one generation that is actually good.
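That generate-a-lot, pick-the-best loop can be sketched in a few lines. Everything here is illustrative: the generator is a stand-in for a real model call, and the scoring function stands in for human taste:

```python
import random

def generate_candidates(brief: str, n: int) -> list[str]:
    # Stand-in for a real image/text model call: emit n variants of a brief.
    return [f"{brief} (variant {i})" for i in range(n)]

def human_qa(candidates: list[str], taste, keep: int = 1) -> list[str]:
    # Collapse the space of generations down to the few a human likes best.
    return sorted(candidates, key=taste, reverse=True)[:keep]

random.seed(0)
variants = generate_candidates("thumbnail: surprised face, red arrow", 8)
# A random score stands in for the human scroll-and-judge step.
picked = human_qa(variants, taste=lambda c: random.random(), keep=1)
```

The economics of the pattern live in the asymmetry: generation is nearly free, so you spend your scarce human attention only on the selection step.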
And I'm exercising my human taste in order to do that. My take is that this is not likely to change over the next few years, especially as human taste itself grows more distinguished through massive and repeated exposure, and through things like social media networks, where short-term viral memes tend to dominate. All of this to say: in addition to needing to compile a large number of high-quality examples to improve the output accuracy and quality of large language models (or their successors), the optimal strategy over the next few years is likely going to involve AI generating things and then us shepherding those outputs in a direction that we like, perhaps selecting the highest-quality ones. Now, some areas where I think Balaji in particular is wrong: I don't think we're going to have to name everything correctly; I don't think that's going to be that big of a deal. I think the order of things that we feed into models is probably also not going to be that big of a deal. You can see this right now in voice-transcription design patterns, where a big thing people do while designing flows in, let's say, Claude Code or agentic workflows or maybe Gemini is they'll just turn on their little voice-transcription tool and talk a ton. They'll dump in 30 to 60 seconds of whatever's up here and then trust that the model will organize it in a way that makes sense. I also don't know if English is necessarily going to be the end-all be-all. Obviously, most of these models are trained predominantly on English text, so they're going to be smartest in English, but I also think there's a lot of promising work coming out of other areas around the world, and I wouldn't want to discount that. But anyway, enough theory. What does this sort of work actually look like? Well, I do a lot of automation on a day-to-day basis. That's

### [8:02](https://www.youtube.com/watch?v=4pAt0DP-x50&t=482s) Automation of Routine Tasks

part of what my community, Maker School, is all about. It's how I got my come-up in the agency world: I scaled an agency to $70,000 a month doing this sort of thing. And one common flow or automation that I'm asked for pretty constantly, even now, is building some form of email autoresponder. Now, with Gemini or with Claude Code, you can usually just one-shot this sort of stuff. You say, "Build me an email autoresponder that scans my mailbox," and it'll do a pretty good job. But pretty good is often not enough. The difference between, you know, 95% and 99%, though objectively it may look like only 4%, is actually a gulf in quality, as any design professional, creative, or copywriter knows. And so in a practical example of me doing my economically valuable work, me building this task, it's no longer the construction of the workflow that matters. It's me building a really good prompt. And the way I build a really good prompt is, as I mentioned, by finding really high-quality examples and formatting those examples in a way that the model understands. So what does the work actually look like? Well, it literally is scrolling through a Gmail inbox, or having your agent pull up a bunch of candidates, and then just selecting them. When you've found a good-quality email, something that you like, you just copy and paste that puppy into a prompt. And a lot of people will look at me and say, "That's stupid." But what is more human than that? What really exercises our cognitive faculties more than demonstrating taste for what other human beings want or like? I mean, we have freaking mirror neurons, right? We obviously know what the human experience is like, because we are human beings, and so we take up a fair share of that experience ourselves.
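A sketch of what that copy-and-paste work actually produces: hand-picked past replies assembled into a few-shot autoresponder prompt. The example emails and field names below are hypothetical:

```python
# Replies you actually liked, copied out of your inbox by hand.
good_replies = [
    {"incoming": "Can you send over pricing?",
     "reply": "Happy to! I'll attach our current plans. Any budget in mind?"},
    {"incoming": "Is next Tuesday free for a call?",
     "reply": "Tuesday works. How about 2pm ET?"},
]

def build_autoresponder_prompt(examples: list[dict]) -> str:
    # Format each hand-picked email as an incoming/reply pair the model
    # can imitate, then leave a slot at the end for the new email.
    lines = [
        "You draft email replies in my voice.",
        "Match the tone and length of the examples below.",
        "",
    ]
    for ex in examples:
        lines += [f"Incoming: {ex['incoming']}", f"Reply: {ex['reply']}", ""]
    lines += ["Incoming: {new_email}", "Reply:"]
    return "\n".join(lines)

prompt = build_autoresponder_prompt(good_replies)
```

The curation (choosing which replies make the list) is the taste step; the formatting is mechanical.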
If you think about it, the most economically valuable tasks in the future are going to be nothing but using that taste skill over and over again to maximize the probability that some AI- or machine-generated output aligns with what human beings want, given that the economy currently still runs on human beings. It's human beings that are buying our products, human beings that are going to be reading these emails. So why is it at all shocking that one of the highest economic goods is going to be your ability to predict whether or not other human beings are going to like something? Here's another example. As I mentioned, a big chunk of my work is the automation of routine tasks. And there are a few platforms out there that make this really easy. One in particular that I really like is called n8n, because it's simple and easy to use while also connecting and integrating pretty natively with a lot of platforms that I like. So what is my work going to look like over the next few years? Well, it's going to be looking for very high-quality examples of the sorts of flows that I like to build, that I think are maintainable, that offer me good error logging and so on; probably stuff that I have already built in the past. And then it's going to be using those in some sort of prompt. In the case of n8n workflows, a lot of the time that is simply copying the workflow JSON. The JSON ends up looking just like this: it's a collection of text, like an email. And all I'm doing is again applying my human intuition to know which flows are good and which flows probably need some work. I'm still going to be building the flows, but at this point it's more going to be trying to massage pre-existing flows into ones that I really like, ones that I think the model will be able to understand and sort of grok. Let's say you're in design, or you make creative, or, I don't know, you write copy or something like that.
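The workflow-JSON idea works the same way. The export below is a trimmed, hypothetical stand-in: real n8n exports share the same top-level shape (a nodes list plus a connections map) but carry far more detail, and the node names and wiring here are illustrative:

```python
import json

# Trimmed, hypothetical reference workflow; node names/types illustrative.
reference_flow = {
    "name": "Email autoresponder (reference)",
    "nodes": [
        {"name": "Gmail Trigger", "type": "n8n-nodes-base.gmailTrigger"},
        {"name": "Draft Reply", "type": "n8n-nodes-base.openAi"},
        {"name": "Send Reply", "type": "n8n-nodes-base.gmail"},
    ],
    "connections": {"Gmail Trigger": ["Draft Reply"], "Draft Reply": ["Send Reply"]},
}

# A flow you already like, pasted in as the example the model should emulate.
prompt = (
    "Build an n8n workflow like the reference below, "
    "but add error logging after every step.\n\n"
    "Reference workflow JSON:\n" + json.dumps(reference_flow, indent=2)
)
```

Because the export is just text, a proven flow drops into a prompt as easily as a good email does.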
Well, most of your future work is probably going to involve finding and then organizing really high-quality ads for some sort of

### [11:29](https://www.youtube.com/watch?v=4pAt0DP-x50&t=689s) The Taste Economy

ad creative or ad copy generator. And then you are going to be that last line of defense, that human bastion against the darkness, that just selects the best-quality results. You might do this by simply going on the Facebook Ad Library, scrolling through, and grabbing ads that speak to you on a visceral level. You then try to generalize that experience, and you use, again, your human taste to tell you whether or not that ad is likely to speak to other people as well. Obviously, there'll be a lot of testing involved. You're probably not just going to select one or two or three; you'll probably select 20 good candidates, run them side by side, and see which ones have the best click-through rates and CVRs and so on. And once you have that subset, feed it into your model, and I'm sure future models will help us build pipelines to do that sort of thing. But literally scrolling through Facebook's Ad Library, or scrolling through the social media feeds of the future, and trying to pick out what you consider to be the high-quality ones is about as human a job as you will have. You know, I just showed you the generation of some thumbnails. A big chunk of why that thumbnail generator my friend Johan put together for me works is that we have a very high-quality reference library of images, of both myself and some other things, that improves the probability that an output is going to be good. It basically collapses that variability into a mode, a sequence of consistent performers. And as brands grow more important, personal brands like mine, for instance, or other people that you know, trust, and like, people that act as consensus mechanisms for this big flood of content, well, obviously people are going to be looking to replace or outsource what they do to models as well.
But as we know, if you follow somebody and you're a big fan of their work, you can tell pretty damn quickly if they're using AI to do their writing or whatnot. So a lot of social media people in the future, just like with ads, are going to be looking for high-quality posts that they've written in the past that use their tone of voice. They're going to be massaging pre-existing ones in ways that really get to the core of what they want the models to reproduce. I've spent a lot of the last week on Clawdbot and Moltbot and OpenClaw and all these different attempts at the same sort of thing, and there is going to be value in having models take over more and more of our work. And a lot of them are probably going to operate in a pretty autonomous, agentic way. Well, these things are going to need to be our representatives, right? And what's one of the best ways to make sure that these things think like you do? It's going to be doing things like organizing your notes in such a way that a model can get a clear example of your own decision-making when confronted with some sort of decision or topic. It's going to be giving them access to your notes. Maybe not all of your notes, obviously; retain some goddamn sanity. You don't need to feed everything to an AI. But it's going to be giving it notes on how you've approached certain specific situations in the past and how you want the model to approach them again in the future. Again, few-shot functions significantly better than zero-shot or one-shot. So why wouldn't you, if you have a model representative joining calls on your behalf, or filtering out emails, or signing up to services, or simulating what you would do in a certain situation? Obviously, the more data you can feed something like this, the better. Now, hopefully it's clear I don't believe this is how it's always going to be.
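Returning to the notes idea for a moment: one way to structure past decisions so a model representative can imitate them is a simple situation/decision/reasoning log, rendered as few-shot context. The entries and field names here are purely illustrative:

```python
# Hypothetical decision log; each entry records one past judgment call.
decision_log = [
    {"situation": "Vendor asks to move our call to Friday",
     "decision": "Accepted, but capped it at 30 minutes",
     "reasoning": "Fridays are for deep work; short calls only"},
    {"situation": "Cold email pitching an SEO service",
     "decision": "Archived without replying",
     "reasoning": "Unsolicited pitches never get a response"},
]

def notes_as_context(log: list[dict]) -> str:
    # Render each past decision as a block the model can pattern-match
    # against when it faces a similar situation on your behalf.
    blocks = [
        f"Situation: {d['situation']}\n"
        f"Decision: {d['decision']}\n"
        f"Reasoning: {d['reasoning']}"
        for d in log
    ]
    return "How I decide things, by example:\n\n" + "\n\n".join(blocks)

agent_context = notes_as_context(decision_log)
```

The reasoning field matters most: it is what lets the model generalize past the two situations it has actually seen.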
Eventually these models are going to be better than us at determining taste as well, and they're going to know us better than we know ourselves. I'm confident in saying that because that's basically what algorithms do right now. And algorithms can be thought of as a very distributed, massive intelligence born out of statistics, even

### [14:54](https://www.youtube.com/watch?v=4pAt0DP-x50&t=894s) The Future of AI and Taste

larger than most of these LLMs. Think about your Facebook feed, for instance. Think about your Twitter or your Instagram. When you start using these things, they're not super addictive. But after a few days, months, or even years, they tend to know what you want to see better than you know yourself. But that's not for now; that's eventually. And I think once we've reached that point and we've outsourced that final aspect of human economic value, there are going to need to be some alternative solutions in place economically. But for now, for the next few years, until we reach that point, that's the basket I'd store all of my proverbial eggs in. That's where I would spend all of my time and energy: on preparing the context for the future generation of models, which will ultimately be able to use it much more effectively than I will. And I would not shirk this idea of applying my taste to things, because I think taste really is the highest human economic good. If previously it was the knowledge economy, now it's the taste economy. Anywho, just wanted to give you some thoughts on that. Hopefully you appreciated it. I love making videos like this every day. If you want me to do something specific, if there's a big trend happening right now that you want me to comment on, or if there's something you're looking for from me that I'm not currently providing, just give me a shout. Thanks so much for your time. I'll catch all y'all in the next video.

---
*Source: https://ekstraktznaniy.ru/video/11681*