# Legendary lead magnets with a simple n8n workflow [#04 Mark Kashef]

## Metadata

- **Channel:** n8n
- **YouTube:** https://www.youtube.com/watch?v=ROlKT2P6Xec
- **Date:** 24.10.2025
- **Duration:** 50:38
- **Views:** 3,621

## Description


In this episode of the n8n Podcast, Dylan interviews Mark Kashef, known as the 'mad scientist' of n8n workflows. Mark shares his journey from using Make to mastering n8n, discussing the challenges and opportunities in automation. He emphasizes the importance of ideation, the role of language models like Claude, and the need for a solid understanding of coding principles. Mark also highlights the potential for small to medium businesses to leverage automation and the significance of community in tech. The conversation delves into the nuances of vibe coding versus context engineering, common mistakes beginners make, and the future of AI and automation.

00:00 - Introduction & The Power of Vertical Automation
01:41 - Mark’s Journey: From Make.com to n8n
03:22 - Building Workflows with AI & Mermaid Diagrams
06:21 - Leveraging Claude, ChatGPT, and Other LLMs
09:31 - Using MCP Servers & Live Documentation
12:38 - Pinning Data and Debugging Workflows
15:00 - When to Use Custom Code Nodes
17:58 - Favorite n8n Workflows & Use Cases
21:16 - Evaluating LLMs: New vs. Old Models
24:02 - Production-Ready Automation & Scaling
27:36 - Opportunities for Small & Medium Businesses
31:28 - The Legendary Lead Magnet: AI Audit Workflows
35:14 - Vibe Coding vs. Context Engineering
39:06 - Leveling Up: Skills for the AI Automation Era
43:39 - Overcoming Imposter Syndrome & Lifelong Learning
49:36 - How to Connect with Mark & Final Thoughts

#n8n #n8nMasterclassPodcast #podcast #masterclass 
The n8n Masterclass Podcast

## Contents

### [0:00](https://www.youtube.com/watch?v=ROlKT2P6Xec) Introduction & The Power of Vertical Automation

There's a lot more opportunity than people expect, and it's about finding a series of businesses, or a business, that you can just go deep in. If you move vertically, or build workflows or automations vertically in a business, there are so many problems that are worth solving. My typical workflow is I'll go from a business problem or operations, and I'll dictate what the process looks like. So I'll take that dictation and I'll make what's called a mermaid diagram. It's a mental model of what that workflow might look like theoretically from a business standpoint: what are all the different steps? Once I know the steps, what are the different services and tools that we could plug in at each one of these steps? And by the time I'm done, I have a pretty clear vision of where I want to go. In the next 12 months, I would say coding will go from 80% to maybe 85 to 90% of the way there. I still don't think it hits 100% if we're just using language models. You don't have to learn how to code anymore, but it's very helpful to be familiar with what code means, along with understanding basic design principles as well as very basic software engineering principles: okay, if I have a SaaS app and we collect payments, we should probably have a table for everyone that's a user, a table for everyone who has paid, and a way that we link both. A little bit of all of these will level you up, especially as you go through the build process, and you'll be less likely to give up if you have some form of thread that you can pull on when things don't go right. n8n might look daunting to start off, but just like anything, over time you'll get better. And everything is a series of steps. You would be surprised what you can accomplish in a pretty short amount of time if you give some focus and love to building.
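The dictation-to-diagram step Mark describes can be sketched in a few lines. This is an illustrative Python snippet (not Mark's actual tooling), with invented step names, that turns a dictated list of process steps into a Mermaid flowchart definition you could paste into a renderer or hand to an LLM as planning context:

```python
# Illustrative sketch: turn a dictated list of business-process steps into a
# Mermaid flowchart definition. Step names here are invented examples.
def steps_to_mermaid(steps):
    lines = ["flowchart TD"]
    for i, step in enumerate(steps):
        lines.append(f'    S{i}["{step}"]')          # one node per step
    for i in range(len(steps) - 1):
        lines.append(f"    S{i} --> S{i + 1}")       # linear flow between steps
    return "\n".join(lines)

diagram = steps_to_mermaid([
    "Receive lead form submission",   # trigger
    "Enrich company data",            # research/API step
    "Draft outreach email with LLM",  # AI step
    "Send for human approval",        # deterministic gate
])
print(diagram)
```

From here, the diagram text itself becomes the input to the next stage of the process: deciding which service or tool plugs in at each node.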

### [1:41](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=101s) Mark’s Journey: From Make.com to n8n

All right, welcome to the next episode of the n8n podcast. I have Mark Kashef here. He is what's known as the mad scientist, the guy who originally created this prompt-to-n8n workflow that you might have seen all over YouTube, taking prompts and flows from, say, Claude and turning those into workflows. And so he's been able to do something that a lot of other content creators haven't been able to do: make these content-to-workflow executions. So I wanted to have him on the show to talk about him, his process, and what he's seen evolving in this n8n space. So Mark, it's awesome to have you on the show, brother. — It's an honor and a privilege, Dylan. Thank you so much for having me. — Yeah, bro. Now I'd love to kick this off. Can you talk to me just a little bit about your journey with n8n and what led up to this whole prompt-to-workflow execution that you've built? — Yeah, absolutely. So I was a Make guy for a while, a long time. And around the boom of n8n last year, with the big agent node that kind of changed everything, I started taking more of a look at it. And as I saw some things that were a lot easier, let's say custom code, in n8n versus Make.com, and very basic little quality-of-life things that looked a lot better on the n8n side versus the world I was used to, I started dipping my toes in. And as I was dipping my toes, obviously: new world, new environment, new structure of nodes. So I was just looking at it, and I have a background in development, and I saw that everything was open source and that n8n was on GitHub. And I'm like, okay, it's open source, it's on GitHub. Theoretically this is all JSON at the end of the day. What if I could try to have a language model make the JSON and I could find a way to just, like, paste

### [3:22](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=202s) Building Workflows with AI & Mermaid Diagrams

it in. Is that even possible? Or import it. And then after like 12, 15, 30 hours with ChatGPT, at the time before I even started with Claude, I made some progress. I would have some semi-broken workflows, but I'm like, there's something here. So I went from ChatGPT plus deep research, to eventually Claude 3.7, to eventually now Claude 4 with MCP servers. And over that whole journey, it was just a cheat code for going from a thought to a draft. That draft might not be the thing I end up with, but it also educated me really quickly, it cheated time, to be like, okay, cool: in Make.com this is how you do it, but in n8n this is the equivalent. And then I started building workflows that would take a Make.com workflow and convert it into JSON for n8n, so I could transition my world fully over to n8n. So I became a bit of a master of the n8n JSON, which allowed me to move a lot faster. And now that I've put together this trick and this workflow, I'm able to crank out tons of ideation, or even go back and forth with clients or community members and just think of what a workflow might look like a lot quicker. So that was really the emergence of that idea. — Love it. And you know, with so much of this automation, there's a little bit of a barrier sometimes getting up and rolling with n8n, right? There are some technical hurdles. You have to have a little bit more coding knowledge than with some of these automation platforms. And so when you're doing this, part of it is overcoming that barrier, and then part of this thing is ideation. Are you just coming up with general ideas, or how do you source the ideation of what workflows to build for this text-to-workflow action? — My typical workflow is I'll go from a business problem or operations, and I'll dictate what the process looks like. So I'll take that dictation and I'll make what's called a mermaid diagram.
So it's like a mental model of what that workflow might look like theoretically from a business standpoint. What are all the different steps? Once I know the steps, I can now start to plug in: okay, what are the different services and tools that we could plug in at each one of these steps? Where do we need custom code, and where can we offload to a language model? Where might it make sense, if it makes sense, to use an AI agent? Because I like to be more deterministic, rather than have the chance of hallucination all over the place. So I go through that exercise, and by the time I'm done, I have a pretty clear vision of where I want to go. And now that I know where I want to go, I can take that mermaid diagram, feed it as an input to Claude, and basically say: take this, here are the tools we want to use, give me a draft of what this might look like in n8n. And that really allows me to do the ideation and brainstorming, and visualize it, before I actually go into n8n to edit. So even when I'm doing the plumbing and the hard work after you've generated the workflow, I have a goal in mind, a north star, as well. — And you talked about using Claude, did

### [6:21](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=381s) Leveraging Claude, ChatGPT, and Other LLMs

you experiment with other LLMs, or is it the best one? Like, why did you end up landing on Claude? — I tried everything. I tried Gemini. I tried ChatGPT multiple times. I would say Claude has been the most consistent and also the one that needed the least input to get going. If you give Claude just a few little inputs, or in this case now we have MCP servers to help us out, it is more than enough to get you something that's pretty coherent, with a very low hallucination rate in terms of broken nodes. With ChatGPT, it's almost consistently outputting broken nodes, even if you feed it certain workflows as an example. Now it's a lot better with some of these more advanced models like o4, but in general Claude, even with the lower-level models, has done the best and been the most consistent. — And so it sounds like your process is: okay, what's the business use case? What are all the tools that are needed for the business use case? What are the nodes that I actually have access to that we can use, versus custom code nodes inside of n8n that we need to outsource? And then you take that overall concept, diagram it in mermaid, say "this is the format," and then bring this across. Did I understand those steps correctly? — Yeah, up until now those have been the steps. The one nuance lately, like a video I came out with a few weeks ago, is using an MCP server that has live access to n8n documentation. Because typically the hardest part is, let's say you guys drop a Perplexity node, a native one, right, and the model remembers that you only used HTTP before. The tricky part was educating it that this thing exists. So I would have to go pull up the Perplexity node, download the JSON, and feed that as an example to say, "Hey, this is what the Perplexity thing looks like. Use that." But now with MCP servers and all these little open-source tools that are coming out, it's easier to have that live connection to your documentation to make that connection happen.
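For context, the JSON the model is being asked to emit looks roughly like n8n's exported-workflow format: a `nodes` array plus a `connections` map keyed by node name. The sketch below builds a minimal example in Python; the node types follow n8n's naming convention, but the specific parameters are illustrative rather than a complete spec:

```python
import json

# Minimal sketch of the kind of workflow JSON a model is asked to produce.
# Field names follow n8n's exported-workflow format; the node parameters
# here are illustrative, not a complete specification.
workflow = {
    "name": "Prompt-to-workflow demo",
    "nodes": [
        {
            "name": "Webhook",
            "type": "n8n-nodes-base.webhook",
            "typeVersion": 1,
            "position": [0, 0],
            "parameters": {"path": "demo", "httpMethod": "POST"},
        },
        {
            "name": "Set Fields",
            "type": "n8n-nodes-base.set",
            "typeVersion": 1,
            "position": [220, 0],
            "parameters": {},
        },
    ],
    # connections map a source node name to the nodes it feeds
    "connections": {
        "Webhook": {"main": [[{"node": "Set Fields", "type": "main", "index": 0}]]}
    },
}

exported = json.dumps(workflow, indent=2)
print(exported)
```

A file in this shape is what gets pasted or imported into the n8n editor, which is why feeding the model one example of a new node's JSON is usually enough to teach it.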
— That was going to be my next question, because I know there's so much documentation, and AI scrapes the internet and just doesn't know that this is n8n from, you know, 2020, right? It doesn't. So I've played around with it a little and I was curious about that. So you plug it into an MCP server that has access to the live documentation. Where is that? Is that available on GitHub, or where's that? — Yeah, it's on GitHub. It's called n8n-MCP. It's by this individual named Romuald (the last name will be rough for me to pronounce), but it has like 3.7K stars. And basically what he's allowed you to do is search all the nodes, go through the node properties and the operations, and go through a series of sample workflows. So everything that I did separately, in isolation, he's put together into a server where you can do it all in one place. So that makes your life a lot easier. And one of the cool things about it being open source is the fact that everyone figures out these painful little problems once, and some person's like, you know what, this is the hill I'm going to tackle, right? This is what this is going to look like: I'm going to figure out the MCP, and you're doing prompt-to-workflow. And because it's all open source, or fair-code I should say, people can then get in and solve each one of those problems. You stack them together, and now you have these amazing use cases. So, just to compare Make and n8n for a second: in

### [9:31](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=571s) Using MCP Servers & Live Documentation

Make, if I wanted to have a webhook, I would physically have to have some form of service pinging that data structure so it knows what it looks like. But in n8n, I can also ask Claude: can you make me a dummy mockup data set that I can use for the webhook? And I can just paste it, pin it, and start retrofitting and practicing what the workflow might look like right away. So it's about removing as much friction as possible and offloading that to the AI, so I can just focus on the workflow building itself. — Yeah, that pinning feature is pretty rad. Being able to pin the data and then not use all the operation calls and everything else and burn through credits; you just pin the data. That's one of the things I like to do as I'm working through a new workflow, whether I'm building it out or using a template online: you get that original data source, pin, next step, pin, next step. You run that all the way through, and once you get through, you're like, "Okay, I know that everything's now connected and hooked up," and you hit go and hit run. Is that similar to your process, or do you have something different? — No, 100%. I pin, and I don't repin until I've figured out the workflow, all the nuts and bolts of the problem nodes or the code nodes, because once that's figured out, life is good. You might get different data formats over time; you can adjust on the fly. But that's the hardest part: just going from zero to one error-free, with no bugs. — Love it. When you talk about these code nodes, when you figure out, okay, this is something that n8n can't do, so we need a code node, what's your process for making that code node? Do you copy into a certain LLM? Do you take the whole JSON and dump it in? What does that look like? — Yeah, it really depends on the use case. On the code node, I'll reach for one if I feel like I'm about to have seven to ten nodes that are superfluous compared to writing code, because I write Python by background, and JavaScript.
So when I look at how many nodes it's going to take me to do the thing that I know in my head is 15 to 20 lines of code, that's where I know, okay, we're going to a custom code node. And obviously there are some limitations. I don't think you can make an HTTP request from a custom code node yet in n8n. So there are some limitations there. So depending on exactly what I'm trying to do, if I'm building the workflow draft in Claude, it knows the context of what I'm trying to accomplish. So I'll have it draft out the code. I'll run it. I'll debug as needed and go through that feedback loop until I'm where I want to be. And in some cases, and this is more advanced, so I'm going to give a disclaimer there, I will use a Lambda from AWS, because there is an invoke-a-function Lambda node in n8n that is a game changer: you can summarize 20 nodes into one big mega-script that you invoke. But that's, again, very advanced, and there are very specific use cases where I use that. — Yeah. It's a code macro. — Yes. Exactly. — Love it. Okay. Fantastic. Now, let's talk a little bit about these use cases, right? A lot of us see these things happen and think, "Okay, great. I know I can do prompt-to-workflow, but what do I build with it?" So maybe give me a top three favorite workflows that you've been able to build using this kind of prompt-to-workflow action?
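The node-consolidation judgment call Mark describes, replacing seven to ten simple nodes with 15-20 lines of code, looks something like this. The example below is plain, runnable Python rather than n8n's actual Code-node input API, and the field names and rules are invented for illustration:

```python
# Illustrative plain-Python version of the kind of 15-20 line transform that
# can replace a chain of Set/IF nodes. Inside a real n8n Code node the items
# arrive via the node's own input API; here they are plain dicts so the
# logic runs anywhere. Field names and thresholds are invented.
def consolidate(items):
    out = []
    for item in items:
        email = (item.get("email") or "").strip().lower()
        if not email:                               # would otherwise be an IF node
            continue
        out.append({
            "email": email,                         # normalization: a Set node
            "domain": email.split("@")[-1],         # derived field: another Set node
            "is_lead": item.get("score", 0) >= 50,  # classification: another IF node
        })
    return out

rows = consolidate([
    {"email": "Ada@Example.com ", "score": 80},
    {"email": "", "score": 99},  # dropped: no email
])
print(rows)
```

The trade-off is exactly the one described: one readable script in place of several nodes, at the cost of losing the per-node execution view in the editor.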

### [12:38](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=758s) Pinning Data and Debugging Workflows

— Yeah. One of my favorite ones is this workflow that tracks all the new models coming out on OpenRouter, which is basically a model aggregator for open-source and closed-source models, and it has an API that lets you list all the recent models. So I have an automation that will go through, look for the latest models, filter out anything that's not on my existing Google Sheet, and then run all the prompts I care about through these brand-new models. And then it has a separate tab that acts as a judge to tell me: should I care about this new model that dropped, or does it not change anything for me? That's my all-time favorite workflow for personal use. — That's a great use case, because the thing is, there are so many AI tools and LLMs and things like this. I kind of feel like it's a Niagara Falls of LLMs and AI, and you just stick your head into the water and you're like, "Oh my god, there's so much here." And you could spend all your time researching to figure out what's the best thing. I think it's one of the benefits of content creators and people that do this deep research: you go and figure out what's hype and what's helpful, and try to parse those two things out. So I love the fact that you made a workflow that allows you to save time without having to work out, okay, is this useful, is this not. Are there any fairly recent LLMs where, using this methodology, you're like, oh, there's a delta here, and it's actually standing out as an outlier? — I would say it's the inverse. After running some tests, some OG models, some older models, seem to do the thing better than some of the newer models. So I noticed that for one specific test, Gemini, sorry, not Gemini, OpenAI 4o mini, which is a pretty old model now, did the best on one particular task, where I would have never imagined it. I would have indexed on 4.1 or 4o or anything that's on the newer side. So this kind of workflow humbles me, because we always assume that newer is better, and I've personally consulted for and worked with CTOs that are still using 3.5 Turbo for some of their production-grade systems. So newer is not better. Newer comes with new tests, new hallucinations, new edge cases. So that's the biggest education point I've gotten from that workflow. — Yeah. And that's the thing is, because a

### [15:00](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=900s) When to Use Custom Code Nodes

lot of stuff is new, and we want the latest, greatest, most experimental things, and at the same point we want it to be reliable, production-ready, able to go out to the masses. And so it's really hard to find that nexus point between, you know, experimental, exciting, and new versus reliable, stable, and predictable. And I don't know, is this your automation? Is that the thing that serves as the function for you to determine what works there? — It's a great question. I have a whole mental model around using models, and which providers I'll use for which use cases. But if we were in a production environment, let's say, what we consult our clients on is: unless the cost for running your operation or this workflow goes down precipitously while maintaining the same level of accuracy and quality, is it worth us even having a conversation? If it's just really good at reasoning, that sounds like it's really expensive to run, because the most cutting-edge models will always be the most expensive and the most unpredictable. So, when the latest one, I think o3 Pro, came out on the Pro-tier account, I tested it out on some very normal prompts I run everywhere, and all of a sudden I got policy errors saying I was going against their policies for running a very simple, innocent prompt. Regardless of that, when you look at the actual workflows, you'll notice that Claude does well on copy, but using Claude at scale can sometimes be expensive. OpenAI is a good soldier. It does what it's told. It doesn't do it with a great level of enthusiasm; it just does it. And then Gemini is very dry. So I'll use Gemini as my workhorse, because it has a million-token context window as of this recording. So it's cheaper, it has a large context window, and it's good at going through a bunch of data and picking out the needles in the haystack.
So it's really about the right soldier for the right job. — Yeah. And if you know what they are, you can almost label them and know: this is my workhorse, right? This one is for this versus that. Okay. Wonderful. And in terms of not only building out these workflows and processes, what are your thoughts on making them production-ready, actually something beyond just innovative and cool? Are there any that you've seen go beyond just exciting and experimental, that you've been able to roll out into some sort of production-grade environment? — Yeah, of course. At my agency we have built production-grade n8n workflows in practice, and obviously the biggest thing you want to ask is: what does production-grade mean? What does volume look like? What do concurrent requests look like? So a lot of the engineering comes in understanding how to queue up certain requests, especially at scale, if you have hundreds if not thousands of different requests

### [17:58](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=1078s) Favorite n8n Workflows & Use Cases

happening at once. And it just takes a lot of testing. Before the evaluation nodes and framework from n8n, which are now native in the platform, we had to build a way to evaluate the workflow externally and make sure we had redundancies. So before, we had our own process, and now we're starting to dip our toes into leveraging what you've released, which is that framework for evaluation: all those different nodes, looking for metrics, looking for consistency, and you basically have an LLM as a judge as part of those nodes as well. So we're looking to migrate to those, because the more we have n8n-native, the better. — Yeah, I've used the AI evaluation. It's a really cool new feature. And it's funny, because there was an AI engineering conference in San Francisco, and I went there, and there are all these companies with their own AI evaluation software platforms. And I was like, oh, n8n kind of has it already natively built in. And it's funny, because these companies have to figure out what product they're going to build, and then as they build out these products, if a company like OpenAI or n8n or somebody else all of a sudden goes, "Oh, that's a problem we'll solve," then, you know, there's a thousand sad sounds all across the industry as a thousand AI companies go down overnight. What do you see as the opportunities in this space, if somebody was looking at problems that are painful enough to solve, but not something that these big players will take on? — The thing is, small to medium business problems are typically not addressed in full by any of the big competitors, because they're never going to optimize for how this individual marketing agency or this individual law firm runs its operations, or which systems it uses, or a custom-coded CRM for certain companies.
So I would just say there's a lot of opportunity, especially for small to medium businesses, because they're so special; in a way, they are special snowflakes. Each one has its own operation, its own suite of tools, its own budgets or budgetary constraints. So I would just say, across the entire industry... I just had a call today with someone in Europe who has a marketing agency, who was super worried because they felt like they were super behind in this race, and they're trying to compete and make sure they survive as a marketing agency. And the thing is, there are so many pockets where people have not even started to implement this stuff. We might live in this bubble, where I typically tend to assume that everyone is on the same page as me in terms of where I see things going, how things look, what I use day-to-day. The average person is literally still learning ChatGPT, or the basics of ChatGPT. So I would just say there's a lot more opportunity than people expect. And it's about finding a series of businesses, or a business, that you can just go deep in. You'll realize that if you move vertically, or build workflows or automations vertically in a business, there are so many problems that are worth solving. And it's a very deep problem. There's a data problem. There's a workflow problem. And you'll notice it's much meatier than just a workflow. It's a whole mindset. It's a process. It's upskilling employees. So there's just so much opportunity in moving to the world where everyone wants to be AI-native and AI-first. — Yeah. It's not just, like, build me a

### [21:16](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=1276s) Evaluating LLMs: New vs. Old Models

workflow. It's more of a systems, backend, operational overhaul of your technology stack. So it's moving from fax machines and WordPress, right, into this future focus; maybe it's Webflow, and Airtable instead of Google Sheets, or whatnot. It's: okay, let's do an overhaul of your technology stack so that everything runs smoother and better. It's not just one workflow that solves a problem. Maybe the workflow gets you in the door as an agency, and then you're figuring out what lane you want to be in. I want to serve marketing agencies, or designers, or, you know, aerospace engineers, whatever it might be. Just figure out what you have some sort of unique domain knowledge in that can make you stand out, so that you understand their painful problems, and you combine that with n8n to actually have a competitive advantage in this world of aggressively upgrading your tech stack. — Yeah. One of the most popular n8n workflows in my community right now is this AI audit workflow, where pretty much someone fills out a form that you can vibe-code on something like a Lovable or a Bolt or a Replit, and then that whole payload of information from the wizard gets sent to n8n via webhook. It goes through a series of steps to fully audit your business, find the pockets where you could fit n8n, other platforms, other technologies, and produce a full report as a lead magnet, but like a legendary lead magnet. — I love that: legendary lead magnet. And I was actually thinking the same thing, because again, how do you provide immense value to a company that then allows you to say, "Great, I've just revealed all these problems. Would you like me to help you with that?" And that's a cool, we'll call it legendary, lead magnet to be able to provide value for their systems.
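The audit flow described here is, structurally, a form payload POSTed to a webhook plus a scoring pass over its answers. The sketch below is a hypothetical version of that scoring pass; the field names and the scoring rule are invented for illustration, not taken from the actual workflow:

```python
# Hypothetical sketch of the "legendary lead magnet" flow: the payload an
# intake form might POST to an n8n webhook, and a scoring pass over it.
# Field names and scoring rules are invented for illustration.
def audit(payload):
    findings = []
    for area, answers in payload["areas"].items():
        manual = sum(1 for a in answers if a == "manual")
        score = round(100 * manual / len(answers))
        findings.append({
            "area": area,
            "automation_opportunity": score,   # % of steps still manual
            "priority": "high" if score >= 50 else "low",
        })
    # Highest-opportunity areas first, ready for the report template.
    return sorted(findings, key=lambda f: -f["automation_opportunity"])

report = audit({
    "company": "example.com",
    "areas": {
        "lead_intake": ["manual", "manual", "automated"],
        "invoicing": ["automated", "automated"],
    },
})
print(report)
```

In the real workflow, this kind of scored summary is what gets enriched with deep research on the company and rendered into the HTML/PDF report.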
And so, it sounds like one of the things it's doing is auditing the business based on the problems they're coming across. Are you looking to do a Lovable or some sort of vibe-coded version of that, to then go from that into some sort of n8n workflow too? — That's exactly what's happening right now. So the app lives on, let's say, Lovable, and then it sends that webhook payload directly to an n8n workflow that's listening for it, and they just work in tandem, in symbiosis. By the end of it, the user gets an email with a beautiful, you know, HTML report with a PDF that goes through their business: every area where it's been assessed, where AI might make the most sense, where they are, how long it would take to implement. And obviously this takes some more tacit knowledge to set up, in terms of what they should do. But when you marry this with something like deep research from Perplexity, you can have a really potent combo, because you're looking up the company from the company name and domain, doing research ahead of time, and weaving that in. So the report is actually a lot richer than some cookie-cutter template. — Yeah. It's this

### [24:02](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=1442s) Production-Ready Automation & Scaling

evolution from, you know, quizzes for your business. It's actually using AI to evaluate and do deep research. So rather than going down one of three buckets for a business use case, it's generating actual problem-solving answers. I want to dial back to something you were talking about earlier, because it's top of my mind right now. We talked about LLMs. One of the new ones just recently came out, and maybe by the time this podcast comes out there are, you know, a thousand other ones. Grok 4, I've heard good things about it. I don't know if it really is good. Have you run it through that system? Is Grok good for it, or is it not great? — I'm biased. So for me, we've never had a production-scale app for a client using Grok. And the reason why is I find the personality of Grok more on the rebellious side, or contrarian side. I like predictability. I like boring. I like to know what I'm going to get. So Grok I use all the time when I vibe-code, or, I don't know what the word is today, context-engineer. I'll use Grok to brainstorm, because I can harness its rebellious, contrarian, smart nature to come up with a beautiful product plan or a build or architecture plan, but then I'll execute with Claude or I'll execute with OpenAI. So although it got some views and it's a really good model, we're going to forget about it in a month when GPT-5 or Claude 5 or whatever comes out anyway; it's kind of the flavor of the month. But I would say Grok in production environments, I haven't seen. — I do want to touch on one thing; there are nuances to it, and we probably both know what it is, but I think it's really good to explain it, right? The difference between vibe coding and context engineering.
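The model-selection habits described above (Grok to brainstorm, Claude for copy, OpenAI as the predictable executor, Gemini as the bulk workhorse) amount to a small routing table. A minimal sketch; the mapping mirrors the preferences in the conversation, and the model labels are placeholders, not recommendations:

```python
# Illustrative "right soldier for the right job" routing table. The mapping
# reflects the preferences described in the conversation; labels are
# placeholders, not model recommendations.
ROUTES = {
    "brainstorm": "grok",      # contrarian ideation
    "copywriting": "claude",   # strong on copy, pricier at scale
    "execution": "openai",     # predictable "good soldier"
    "bulk_analysis": "gemini", # large context window, cheap workhorse
}

def pick_model(task_type, default="openai"):
    # Fall back to the predictable default for unknown task types.
    return ROUTES.get(task_type, default)

print(pick_model("brainstorm"))
print(pick_model("bulk_analysis"))
```

The point of making the routing explicit is the same as Mark's: the cutting-edge model is rarely the right default for a production path.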
I'm going to say it, and I want you to add to it: vibe coding is really a lot around just, you know, one prompt, "yes, yes, accept all, please make me something magic," versus context engineering, which from my understanding is about being able to feed in all of the requirements around something, giving it the context needed in order to engineer something. Could you add to that? What are you seeing as the difference between the two, and where might this be evolving? — Yeah, to be honest, I'm a bit contrarian on this whole context engineering thing, because I think that vibe coding is a spectrum. On the left hand of the spectrum is exactly what you just mentioned: build me Shopify with one prompt, and then, why is this not working? Okay, vibe coding sucks. And then you have the middle, where you're kind of planning out in your prompt what you want to build. You're trying to be thoughtful about the stack and how it's designed, but you're still offloading a lot to the AI. And then context engineering, to me, is the complete other end of the spectrum of vibe coding, where you're still going to rely on AI to be your workhorse and build, but you're taking on the onus of being a pseudo product manager, or a technical product manager, to come up with the build specs, what infrastructure it should use, the language models, the rules that you should use in something like Cursor or Windsurf. So for me, it's the end of a spectrum of vibe coding rather than its own new world. — And there's something to that as well. I feel like when generative AI came out, the original ChatGPT or any of that stuff, it was basically a 10x multiplier, you know? So if you're a 1x coder, it could make you a 10x coder, supposedly, right? But if you're a 1x marketer, it can make you a 10x marketer. So if you had no skills and you said, "Write me a book," you're going to do okay.
But if you're, I don't know, some high-level author, you can get so much more out of it, because you have the depth of knowledge to understand how to properly educate. Kind of like someone that knows how to cook in their own kitchen and can

### [27:36](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=1656s) Opportunities for Small & Medium Businesses

tell you how to make good food, versus you never having owned a restaurant and just saying, you know, make me a gourmet dish — you don't really understand the difference. So my question with that is: while AI is awesome, while n8n is awesome, and while these things do so much, what are some of the things that people should be focusing on leveling up in terms of skill sets? Like, what do they need to focus on to truly get more out of this than the average bear? — No, that's an amazing question, and it's one that I receive on an almost daily basis. So obviously it depends what your goal is, what you're trying to build — whether you're trying to build or whether you're just trying to consult on what someone else should build — but I'm going to assume you want to build. If you want to build, you don't have to learn how to code anymore. But it's very helpful to be familiar with what code means. So doing a course to be aware of what JavaScript is, what an HTTP request is, what JSON is, what Python — at least at a very high level — is doing, or what these functions mean, will just help you understand when things inevitably go wrong. And I don't see a future for a while where coding is perfect. The reason why coding is 80% of the way there with GenAI is that GenAI relies on the fact that it has training data on old things it's seen in the past that kind of mesh together. It's hard to do something that's cutting edge or brand new, because it's literally never seen it before. So it will hallucinate its way along, and then you're going to have to take over to go from 80% to 100. Now, will this get better? Yes. But being aware of code, being aware of JavaScript and everything I mentioned, along with understanding basic design principles as well as very basic software engineering principles — like, okay, if I have an app that's a SaaS app and we collect payments, we should probably have a table for everyone that's a user, and a table for everyone that's paid.
We should probably have a way that we link both. So it's kind of like very barebones understandings of infrastructure. And a little bit of all of these will level you up, especially as you go through the build process. You will learn against your will what an HTTP request is or why JSON's failing, because you'll get errors and you'll spin and spin. And the deeper you go down the rabbit hole, the more you'll learn, but it will be less daunting and you'll be less likely to give up if you have some form of thread that you can pull on when things don't go right. — Yeah. So there's a fundamental knowledge of: do I understand the fundamentals of code, right? Do I understand what an expression is? Do I understand what JSON is? Do I understand what a variable is, or tokens? And then, okay, cool — that's the bare bones. So if something goes wrong, you can dive into it. Like, I remember one time I was being lazy and trying to solve this thing for a previous client, and I was like, "Oh my god" — I was just trying to get the AI to solve this problem for me, trying to get this widget in this box for a chat module or whatever, and it couldn't get the answer. I must have spent two hours of just going back and forth — it would not produce it. I tried to vibe code it and it wouldn't give me the answers, and I was like, you know what, I'm just going to look at the documentation. And I went and read the documentation and was like, oh — click, boom, just solved it. And so sometimes it takes so much more effort to be inherently lazy than to say, you know what, I'm going to figure this thing out, right? I'm going to map out exactly what you need. And that's why I think it's really cool — this legendary lead magnet you can have to get all of those details in place and then map the terrain. So as you build through the process, it makes the last stretch of the journey easier.
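The barebones SaaS schema described above — a table of users, a table of payments, and a way to link both — could be sketched like this. This is a minimal illustration with invented table and column names, not anything shown in the episode:

```python
import sqlite3

# Barebones SaaS schema: users, payments, and a foreign key linking them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id    INTEGER PRIMARY KEY,
    email TEXT NOT NULL UNIQUE
);
CREATE TABLE payments (
    id           INTEGER PRIMARY KEY,
    user_id      INTEGER NOT NULL REFERENCES users(id),
    amount_cents INTEGER NOT NULL
);
""")

conn.execute("INSERT INTO users (email) VALUES (?)", ("ada@example.com",))
conn.execute("INSERT INTO payments (user_id, amount_cents) VALUES (1, 4900)")

# The "way that we link both": join payments back to users.
row = conn.execute(
    "SELECT u.email, p.amount_cents "
    "FROM payments p JOIN users u ON u.id = p.user_id"
).fetchone()
print(row)  # ('ada@example.com', 4900)
```

That's the whole "very barebones understanding of infrastructure" point: knowing that paying customers live in one table, identities in another, and a key ties them together.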
It's easy to go from 0 to 80, but it makes that last 20% so much smoother, and you don't need to go and try to rebuild the system, or get stuck in some sort of maze of vibe coding where you get locked away. — Yeah. There's this meme that says

### [31:28](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=1888s) The Legendary Lead Magnet: AI Audit Workflows

"change this color" — vibe coding: 100,000 lines of code changed; human: one line changed — and the real impact of that. That's where you get super spaghetti code, where you rely so much on the AI to write the code that you have no way to see what's happening. It's kind of like being a manager of an employee who can literally tell you whatever you want in terms of what they're doing, and you just take their word for it, because you have no way to audit their progress. So you're like, "Okay, you said you're done. I guess it's done." So that's one part. And another thing is, you'll notice as you build that there are some things we take for granted now that the average person doesn't. So if you say to any vibe coding app — choose the one you want — go implement the latest model from OpenAI, it's going to use GPT-4 from two and a half years ago, because that's the last time it got trained. Unless you're intentional about telling it, "I want this model, and here's documentation for this model, or here's how to refer to it," it won't know. And these are trials and tribulations that you can only learn from after you're tested by fire. So it's really about building, immersing yourself, using your research tools, not being afraid to get your hands dirty, and learning what you need to debug everything. — Yeah. And there are these, I'd say, common mistakes — and I'd love to get your insights on this. There are common patterns of mistakes that some people make when they get started: they see these super complex workflows in n8n — and it can get incredibly complex — and they're like, oh, this looks amazing. They just download the workflow, get started, and then they get stuck, and they're like, uh, what do I do? And they can't make progress.
So you get this bump, you know, like a sugar rush of "look at all this stuff" and it feels good, but then you get stuck and you get disheartened and you don't know what to do next. And you have a community, right? And you have people going through your community. What are some of the fundamental things you've seen — patterns of behavior when people are just getting started with n8n — that help them be successful in the long run? — There's the self-serve option and there's the help option. Obviously, I offer the help option in the community: we have coaches, we have me. On the self-serve side, though, step one is, if it's pretty straightforward, screenshot the error and throw it into another language model like Claude or ChatGPT, enable it to go read the latest documentation from n8n, and have it be your co-pilot to debug what's happening. That's like tier one. Tier two — this is like my secret sauce — is I'll actually tell them: record a Loom of you walking through your entire workflow and double-click on the area that's not working. Stop the Loom, download it, upload it to Google AI Studio, because it accepts video input, and have it watch the video and help you troubleshoot what's happening and how you can go about it, and maybe do deep research on how you can fix it. Doing that usually fixes 70% of their problems. The other 30% might need some more hands-on love from myself and my team. — That's great. That is actually so good, because you're giving more context to it. And for me, like, one of my love languages is Loom. The way I like to communicate with people is, hey, let me show you everything that's going on in my world — especially if you're trying to communicate something complex. So you're just giving more context, and then, when you can, you're giving more parameters. So I love that. So that AI studio — Google AI Studio, right? — Yeah.
Record a Loom, download the video, upload the video, and say, "Hey, please help." — SOS. — Yeah, "please, somebody." That's great. And then, in terms of people going into the space and leveling up: where do you see the opportunities when people are coming into this and starting to get real traction — whether they're trying to grow an AI automation agency, or trying to get a job as a developer, or they're a business trying to leverage this stuff? What do you see as the real

### [35:14](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=2114s) Vibe Coding vs. Context Engineering

opportunities in the space right now for these types of people? — I feel like there are so many words and terms and buzzwords. I would just say that being a well-educated implementation partner is a huge valley that's yet to be fully tapped. There are just so many businesses — enterprise, small, medium, startup, whatever — that want someone to lead them through this darkness. Because if you're looking from the outside and you just hear these words like n8n and Make, it just sounds like weird Pokémon characters. You have no idea what any of this even means. They don't know where things are heading. They don't even have a baseline — at least I have a baseline for what 3.5 Turbo was three years ago to appreciate what o3 is now. But when you say "o" versus "4," there are all these different nuances that are so obvious to us, because we're addicted to staying up to date, while people are just two, three years behind. They have no idea. They don't know that you could technically use something like a ChatGPT, a Claude, an n8n to do the work of four or five people in one shot. So being an implementation partner, being an educator, being the one that leads people — the one that can see, leading the blind, basically — that's one of the best positions you can put yourself in right now. — Yeah. It reminds me almost of the United States and trailblazing, you know? Some people landed at Plymouth Rock and were like, "We're cool. This is where we're going to go." And other people made it to Texas. Great. And other people trailblazed all the way to California. So you're just farther along the path, and you're like, "Okay, I've seen all those spots, but you don't know how sunny California is, man. It's good over here. I've got some things — you know, maybe avoid Death Valley, go to some other places.
" And I like that as almost being the guide for people and saying, "This is this is the latest and greatest. Leave this alone. " And also, when you're building things out, too, maybe not build this technology stack. Maybe we wait a little bit for what's on the horizon. And what I'm curious about on your side is because you're so deep in the space and you're also working with this even at the cutting edge of what we have with technology stacks. What what's on the horizon? What's next do you see coming up in the next, you know, 6 months, three years out? What's on the horizon? — That's a it's a tough question to do three years out. So for me like my rule of thumb is I only look three months ahead now because it it's hard to anticipate step changes that are coming but okay but in the next 12 months I would say coding will go from 80% to maybe 85 to 90% of the way there. I still don't think it hits that 100% if we're just using language models because of what I mentioned before. So coding will be not solved but like very close to being fully solved. We're going to have a little bit of a AI slop pandemic across all multimedia. We're going to have generated Veo4 perfect quality. We're going to have AI YouTubers that are eating my uh my business. We're going to have all kinds of things that are going to make us question our eyes a lot more. That's more of a macro idea that I had on the workflow side. I would imagine I would only imagine that any and the other competitors you're going to be able to go into there and do the entire experience by just interfacing with a co-pilot, build the workflow, test the workflow, evaluate the workflow, push it to production with the help of a co-pilot assistant, and you'll be able to get that help in the platforms themselves. So the ability to self-s serve is going to increase a lot. In terms of just like general trends, I would say like we're going to move from thousands of AI startups and we're going

### [39:06](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=2346s) Leveling Up: Skills for the AI Automation Era

to contract into a series of super apps — super apps that are finally focusing on profitability, not just training the best AI model to get to quote-unquote AGI. The reason why, for example, Gemini runs so many queries for free, or close to free, that are very expensive on their end is that everyone's willing to bleed cash right now to be in a position to be one of the few competitors in this AI monopoly. Once that happens, inference costs will change quite a bit. They'll eventually probably be very expensive for certain types of inference for certain use cases. At the same time, we're probably going to have open-source models that are also very potent. We'll finally get maybe an o1- or o3-level open-source model in the not-so-distant future, which is awesome for folks that might not have the disposable income, or don't have access to the same type of cash flow, to run an AI business on a lean budget — especially on the security side. So I just see a lot of that happening. And one major thing I'm worried about is the cybersecurity aspect: there are going to be cybersecurity issues across the board for every use case, for every domain. So that's one thing — especially as voice agents become indistinguishable, as things like HeyGen become 100% accurate, it's going to be a tricky environment. But a good thing is you'll have many parties of one being able to build a company of their own, or build seven to ten n8n workflows within a week, because they can move faster and upskill themselves really quickly as well. — Yeah. And there's this combination whenever a new opportunity like this comes — and I've seen this because, you know, decades ago I used to run a food truck business, right? It was a time of my life. But the point being, some master chef came out, made an awesome food truck, and then other people said, "Ooh, I can open a food truck.
" And the thing is, you can't tell the difference between a Michelin star chef and like someone who just, you know, just randomly open a food business with no experience whatsoever. And what happens is you can't look at the difference. They just all look the same, right? The AI agencies or companies, they all just look the same. And so you get this AI slop pandemic rolling out there and then people get disenfranchised and disillusioned, but then the good stuff sticks, right? The good companies stick. If you've got you have good integrity, you've got good ethics. You you're hardworking. You're following up. You're staying at the cutting edge. You're being a consultative. You're able to deliver value consistently, right? you actually care about your customer and you're consulting them and you be and you become an ally on that front versus you know shiny clickbaity type of stuff you're able to is that the good companies will stick through this so I think you're right and then this evolution of you know uh easy marketing versus the best marketing right or you have this uh you know make a workflow but do you have a cyber secured 100% reliable workflow that is impenetrable right and so the question is how do you find that competitive advantage and how do you take what is just known in business of relationship building like good relationships being a good person and then combine that to offering massive value with AI and I I really agree that there is going to be a drop off of people trying this new fad ways and then they're going to go into crypto 2. 0 and NFT 2. 0 and just all the other things but the companies that just really latch on to this thing I think are going to stick. 
I agree. And I have no qualms shouting you guys out, because one of the reasons you're having the growth you're experiencing is that you're community-obsessed, and I don't think enough companies and startups are community-obsessed. It shows up every day — like today, I was on a community call and we noticed that n8n dropped the ability to have a fallback model. So if OpenAI goes out, I can now default to some open-source model. Very small quality-of-life things, where I'm like, man, it will take [insert name of competitor] two years to care about this micro problem that's probably affecting thousands of users. I think the future is companies that are obsessed with their communities and literally just listening and taking a pulse for what they want. — The question is, how do you — because one of the big questions people have, right, in this world of generative AI technology, is: how do I stay authentic? How do I use automation and AI to actually build a bridge of connection and trust and authenticity and care? And, you know, one of the models for it is: really deeply care about your community, offer massive value. The reason why we do

### [43:39](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=2619s) Overcoming Imposter Syndrome & Lifelong Learning

have a product that's available and downloadable is that we're hyper-focused on: how do we build a product that people love, right? And then automate the things that people hate. Like, "Okay, what's the thing you don't like in your business the most? Okay, taxes. Can we find a way that someone can do taxes for us? Can we have n8n do that?" Right? What do you want to do? I mean, deep connections and conversations. So that's a great question: how do you solve that? How do you be customer-obsessed and show that you deeply care by automating away the processes that waste time and kill bandwidth? — It's also about giving you what you need, not what you want. I'm sure there are a lot of features that people want in n8n. But taking that one small fallback example: you need that feature, because if you go into a production environment and OpenAI fails, your client's not going to be mad at OpenAI — they're going to be mad at you. So that is something you need. And I think it's really important to be focused on the need versus the want, because people will want the world. — And we'll keep wanting till the cows — well, I don't know what the term is — I think it's "till the cows come home." — Exactly, I think so. Yeah. But being obsessed about the things that matter first is going to be really important. — I think there are levels to it. With any business, or any product or any company, there are tiers of this. When you first start out, there's kind of this "tell me what you want and I'll build it," right? That's level one — yeah, what do you want? I'll make anything that you want. Then level two is like, well, here's my Chipotle menu of things that you can have: I've got these workflows and I can bolt widgets onto them. So which one of these menu items are you going to have? And then there's that Gordon Ramsay level, where you're not going to tell him what you want.
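The fallback feature discussed above — if the primary provider fails, default to an open-source model — boils down to a simple pattern. A minimal sketch, with made-up stand-in functions rather than real API calls:

```python
# Fallback-model pattern: try providers in order, return the first success.
# call_openai and call_open_source are hypothetical stand-ins for real clients.

def call_openai(prompt: str) -> str:
    # Placeholder for the primary provider; raise to simulate an outage.
    raise ConnectionError("primary provider unavailable")

def call_open_source(prompt: str) -> str:
    # Placeholder for a locally hosted open-source model.
    return f"[open-source model] {prompt}"

def complete(prompt: str) -> str:
    """Try each provider in order; fall back on connection errors."""
    for provider in (call_openai, call_open_source):
        try:
            return provider(prompt)
        except ConnectionError:
            continue  # this provider is down; try the next one
    raise RuntimeError("all providers failed")

print(complete("Summarize this lead-magnet audit."))
# → [open-source model] Summarize this lead-magnet audit.
```

This is why it's a "need, not a want": the fallback path is what keeps a client-facing workflow answering when the primary provider has an outage.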
I trust that you know. I'm not going to say, "Gordon, here's what I want." I'm like, "Gordon, chef's recommendation. Tell me what you've got for me on my table." And I think when you demonstrate that much mastery in a space, people will outsource that trust to you — especially with integrity, with care, with proven mastery. And I think that's what you're talking about: how do you get so customer-obsessed that you know their problems and their needs better than they do? It's like with kids — I just recently had a son, and it's like, he may want Snickers bars and YouTube for 47 hours a day. Yeah, I know that's what you want, but what I'm going to give you is this other stuff that's probably going to serve you a little bit better in the long run. So, how can you be customer-obsessed with AI technology? I love that, man. It's been super awesome having you on the podcast. My question for you is this: is there anything else you'd like to let people know about before you tell them how to get a hold of you? — I would say, if you haven't started, or you always have some self-talk like, "I'm not technical. I don't know what any of this stuff means. Whoa, that's way too above my head."
I would say this is a really good point in time to start deconstructing those self-beliefs, because I literally witness it. I have almost 600 people in my community, many of whom are entrepreneurs. They're starting at zero sometimes, and we go from zero to doing a little hackathon, and all of a sudden, in week seven, we're talking about Puppeteer and words they could not imagine were possible. So I would just say there are so many free resources online. There's an infinite number of YouTube videos you can watch to upskill yourself. And n8n might look daunting to start off — I'm a developer, and even for me it took a little bit of time to get used to things — but just like anything, over time you'll get better, and you can start building. I'm not going to say "change your life" and make some form of hyperbole, but you can at least change your circumstances. You can change your skill set. And the next 10 to 15 years is this mindset shift that you're never going to be done learning. You're going to be a lifelong learner whether you like it or not. We're going to have to keep up with a certain level of what's happening. So take this moment, or at least maybe this call to action, as destroying those limiting beliefs that you're not technical. Everything is a series of steps, and you would be surprised what you can accomplish in a pretty short amount of time if you give some focus and love to building. — Before you tell people how to get a hold of you, I want to double-click on that. I know, just from my own being in the space, I have a tension jumping into industries that I have no understanding about while just being excited about them. And there's this feeling I've always had when I step into a new industry: I'm a fraud. I don't know what I'm doing. How do I even do this?
Like, what is this new thing — whether it's going into a new industry, a new space, a new technology — and you feel this. And what I've noticed is, if you just keep doing those activities, even though you feel like, oh my god, I'm super uncomfortable with this thing, and you're going through a tunnel and it's all darkness and you're like, I don't know if I'm going to make it through — then all of a sudden, one day you just kind of wake up and you're like, oh yeah, I've got this. It's this slow trickle effect where you don't realize you're adjusting to the uncomfortable waters of this new technology space. And then all of a sudden people start asking you questions and for recommendations — whether on a macro scale, like on YouTube, or on a micro scale in your own community or at your own company — and you start to become the resource, the go-to expert in the space. So it's just: stay on the path long enough until those fraud, imposter-syndrome beliefs slowly silence themselves and have nothing left to stand on. — I agree. And it's a beautiful moment when you realize that you can remove those words — "not technical" — from your vocabulary. And the even better moment is when you can start teaching others, having been someone that's not technical as well. So I'd just say it's a feedback loop. It's an each-one-teach-one mentality. — So if people want to find you, how do they find you? — Right on. So I'll give you the list. On YouTube you'll find me, Mark Kashef. As Dylan said, I'm a mad

### [49:36](https://www.youtube.com/watch?v=ROlKT2P6Xec&t=2976s) How to Connect with Mark & Final Thoughts

scientist. So I'm going to keep pushing the boundaries of what's possible as much as possible, and educating as much for free as I possibly can. If you're interested in 10x-ing your learning and you want a one-stop shop for not just automation, but voice agents, prompt engineering, vibe coding — we run a community called Early AI Adopters, which is almost at 600 folks. Happy to see you if you want to jump in there. And other than that, we have our agency, Prompt Advisors, where we do more enterprise builds, but if you're looking for strategy consulting, that's another way you can engage with us as well. — Awesome. Mark, it has been an honor and a pleasure to have you on the show, brother. And I will see you on the other side, my friend. — Likewise. Thank you, Dylan. — Take care now. Later.

---
*Source: https://ekstraktznaniy.ru/video/15220*