“I want to give ChatGPT 10x more docs” - RAG Explained


The AI Advantage · 07.08.2024 · 22,962 views · 853 likes · updated 18.02.2026
Video description
In today's video, I'm going to explain a technique used in AI called RAG (Retrieval-Augmented Generation) and show you how you can apply it to build advanced workflows and automations. VectorShift is a fantastic platform if you're looking to integrate AI into your workflows, build custom AI bots and much more. Check them out to start applying powerful techniques like RAG today! 👉 https://vectorshift.ai/

Links: https://vectorshift.ai/ https://medium.com/@bijit211987/rag-vs-vectordb-2c8cb3e0ee52

Chapters: 0:00 Advanced Concepts of LLMs 1:59 Definitions of Advanced Concepts 8:38 Let’s Build

#ai #automation This video is sponsored by VectorShift. Free AI Resources: 🔑 Get My Free ChatGPT Templates: https://myaiadvantage.com/newsletter 🌟 Receive Tailored AI Prompts + Workflows: https://v82nacfupwr.typeform.com/to/cINgYlm0 👑 Explore Curated AI Tool Rankings: https://community.myaiadvantage.com/c/ai-app-ranking/ 🐦 Twitter: https://twitter.com/TheAIAdvantage 📸 Instagram: https://www.instagram.com/ai.advantage/ Premium Options: 🎓 Join the AI Advantage Courses + Community: https://myaiadvantage.com/community 🛒 Discover Work Focused Presets in the Shop: https://shop.myaiadvantage.com/

Table of contents (3 segments)

  1. 0:00 Advanced Concepts of LLMs (495 words)
  2. 1:59 Definitions of Advanced Concepts (1,583 words)
  3. 8:38 Let’s Build (4,813 words)
0:00

Advanced Concepts of LLMs

All right, so in today's video we'll be exploring, and then also applying, some more advanced concepts for using LLMs, because there are a lot of buzzwords that get thrown around carelessly: agents, chatbots, automations, RAG. All of these have a place, but the definitions of some of them have become so blurry that many of them, especially "agents," will differ depending on who you ask. All of them are undeniably important, though, especially for the future of the tools and use cases we explore on this channel. I'll also say this: some of these topics might be a little outside your comfort zone, depending on how much time you've spent engaging with them. And although we're about to review some theory just to turn around and apply it in practice today, I'd encourage you to stick with the more complicated concepts, because all the advantages you might be looking for are hiding outside your comfort zone. I'll do my best to explain all of this, and at the end of the video you'll walk away with practical applications you could be using today. Some of this stuff gets a little more advanced, though, so you'll need a little patience, and it might take some brain power on your part to understand it.

With that said, let's dissect some of these buzzwords. I want to start with some definitions so we're both on the same page when we talk about these concepts. Namely, I want to define three words: first automations, second agents, and third RAG. Once we define these terms, we'll be ready to zoom in on RAG and how it empowers automations and agents, and at the end I'll give you multiple examples of how to actually work with RAG and empower an LLM with your very own documents and super personal context, the dream of most businesses. The point of this video is to realistically show you where we are today.

At this point I'd also love to point out that this video is sponsored by VectorShift. They offer one of the easiest ways to actually use RAG in your everyday workflow, and when they reached out I was super excited, because at the top of my list it actually said "RAG/agents explanation and tutorial video." So we get to do that, and I'll go into a little more depth than I usually do on this channel. The first half of the video, by the way, is going to be a bit more teacher mode rather than the excited "here's what's new" vibe that's typical for this channel. Okay, here's what you need to know.
1:59

Definitions of Advanced Concepts

So there are three big terms we've got to define here: RAG, automations, and agents. We need to do this to be on a level playing field and to actually communicate about these more advanced concepts; the definition of "agents" in particular is very fuzzy these days, and I want to get that straight. We'll also talk about agents in practice briefly.

Starting with RAG: it stands for retrieval-augmented generation, and what it means in practice is that you can massively expand the amount of knowledge you can provide to an LLM. In all my teaching around this I always return to one nice little graph; it's just so beautiful and, in my opinion, relatively simple to understand. It's from a Medium article by Bijit, and I love this explanation. Look, if you talk to something like ChatGPT, a.k.a. an LLM, what basically happens is you go from a query, which is your prompt, the question you ask or the thing you type into that chat box, straight to the LLM, and then the LLM gives you a response. That's the basic flow of things. Now, with RAG this changes a little bit. From the basic prompt you sent, say "tell me about penguins," you do not move directly to the LLM. First you move to something called an embeddings model, and this is just technical terminology for a transformation of the prompt you sent. You don't need to know the technical details; all you need to know is that your prompt gets turned into a so-called embedding. The reason it's turned into an embedding is that embeddings can be stored in something referred to as a vector database. Again, no need to understand the technicalities here; just know that with vector database technology you can store a lot of embeddings. Embeddings are very tiny, so many of them fit inside a vector database. Then comes step number three: you search the vector database with your embedded prompt, transform the results back from embeddings into text, and feed that to the LLM along with your prompt, and from the LLM you get the response as usual.

Now you might be asking yourself: Igor, why are we taking this round trip? This makes no sense; why can't we just send our prompt to the LLM and get a response? Well, the secret sauce is that you can turn a lot of documents you might have, a lot of data, any type of context, into embeddings that you store inside your vector database. So when this final step happens and we retrieve from the database, you can take a whole lot more than just your prompt: you can search over the whole vector database, over all the embeddings stored in there, and bring all of the necessary info plus your prompt to the LLM, and then you get a response. The massive advantage is that you can use all of the data inside the vector database with the LLM while only bringing what's necessary into the context window, rather than working with something like ChatGPT, uploading 20 or 30 documents, and hoping it retrieves everything. No, you store that in the vector database, retrieve it whenever it's appropriate, and if it's not needed, it's just left in there. That's the whole point of RAG: you get to access external data, stored on the side, when necessary. Now look, this does require some extra technology. This is not exactly how attaching files or the knowledge base inside ChatGPT functions; OpenAI has never exactly communicated what technology they're using in the background, but people generally agree it's most likely not RAG, because with RAG you can attach a whole lot more. It's also slower; that's one of the downsides, and we'll talk about that later. But this is essentially what RAG is, and it's just one of the building blocks for more advanced use cases.

Now that we've defined RAG, let's talk about automations, and then agents. "Automations" gets thrown around a whole lot, and a lot of people seem to think automations are some revolutionary thing that emerged hand in hand with AI, but the truth of the matter is that automations are as old as the very first computer. With a broad definition, every conditional statement ever written could be considered an automation: if Igor raises his hand, then play a sound effect. If this happens automatically, with code in the background, it's an automation. The big change that happened recently is that you can plug LLMs like GPT-4o into your automation workflow, so you can bring some intelligence to automations. All of a sudden it's not just "hey, if Igor raises his hand, then play a sound effect"; you can do things like: if Igor raises his hand, use the ChatGPT vision API, look at the frame, check if he has all five fingers up, and if he only has four fingers up, play a different sound effect. This is just a very simple example, and actually integrating the vision API into automations is not that simple; it's way simpler with text, because a lot of the no-code tools that make automations happen usually only work with text anyway. The point being: automations are just this causal relationship of "if this happens, then make this happen," and the reason people are getting really excited about some of them is that all of a sudden you have this intelligence in the middle that you can access from OpenAI or others.

Now we get to the buzzword of the year, I guess: agents. Well, what are they? Their name is Bond; they're essentially a combination of everything we talked about: RAG, automations, and prompts. But there's this whole long-term reasoning idea that goes along with agents, which, if I were to summarize it, is: you just state your goal, the agent figures out what steps and actions need to be taken to get you to your goal, and then the agent also takes all of those steps. A prompt, by contrast, is like "hey assistant, I want you to do this task in this way; here's all the other info you need; now do what you can," and then the assistant provides you with the best information possible, but it doesn't really take responsibility for the task being done. With an agent, it's all about "okay, let me figure out how to get there, let me also click the buttons and do the things necessary to get you to that goal." That's the idea of an agent. Now, is this something you should be using today? Honestly, no, not at all; we're just too early. This is not to say there are no agentic workflows that work for people, or that certain organizations don't already use them today. It's more to say that for most everyday people and most everyday use cases, there's just no agentic workflow worth the hassle. A lot of the workflows are hard to set up, and they're unreliable, as the long-term planning and reasoning capabilities of the LLMs are just not there yet for these more action-oriented workflows. You face hallucinations, and even things like RAG, which are absolutely necessary for some of these more complex tasks, don't supplement the workflow well. So my opinion is that today this agentic future is more of a dream than a reality. It's a well-founded dream, this utopia of technology actually doing the work for us (or dystopia, I suppose; it all depends on your viewpoint). But my point is this: most everyday consumers should not be worrying about agentic workflows yet. What you should be worrying about right now are the things I cover on the channel a lot: how do you communicate your personal context to the LLM, what kind of data and other context can you provide it so it does a better job, what kind of use cases even exist out there, and how can you combine all of that into a little chatbot you can interact with today? Because if you build a little chatbot and enhance it with RAG, then moving forward, when we get better models that are even better at reasoning and some of these agentic workflows become simpler and more reasonable, you'll be able to take almost everything you built in the beginning and move it to that new workflow. Communicating your personal context, formatting your data, and figuring out what kind of goals AI could aid you with are all things you can already do today.
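To make the round trip concrete, here is a minimal sketch of the flow described above: embed the query, search the vector database, retrieve the matching chunks, and assemble the final prompt for the LLM. This is illustrative only: the "embedding" is a toy normalized bag-of-words vector, not a real embeddings model, and `VectorStore` is an in-memory stand-in for an actual vector database. The final step in a real pipeline would be an API call to the LLM.

```python
import math
from collections import Counter

def embed(text: str) -> dict[str, float]:
    """Toy embedding: a normalized bag-of-words vector (step 1).
    A real pipeline would call an embeddings model here."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {word: v / norm for word, v in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(av * b.get(word, 0.0) for word, av in a.items())

class VectorStore:
    """Minimal in-memory stand-in for a vector database (step 2)."""
    def __init__(self):
        self.items = []  # (embedding, original chunk text) pairs

    def add(self, chunk: str):
        self.items.append((embed(chunk), chunk))

    def search(self, query: str, k: int = 2) -> list[str]:
        """Step 3: rank stored chunks by similarity to the query."""
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[0]),
                        reverse=True)
        return [chunk for _, chunk in ranked[:k]]

def build_rag_prompt(store: VectorStore, question: str) -> str:
    """Assemble the final prompt: retrieved context plus the user's question.
    Step 4 would send this string to the LLM and return its response."""
    context = "\n".join(store.search(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

store = VectorStore()
store.add("Penguins are flightless birds found mostly in the southern hemisphere.")
store.add("Chainsaws look great in movies but are poor self-defense weapons.")
store.add("A vector database stores one embedding per chunk of text.")
print(build_rag_prompt(store, "tell me about penguins"))
```

Only the chunk that actually matches the query ends up in the LLM's context window; everything else stays in the store until it's needed, which is exactly the advantage described above.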
8:38

Let’s Build

All right, the time has come: we're diving into the practical part, where I'll be building two AI-powered pipelines. One of them is going to be a classic chatbot, and the other a stripped-down pipeline that includes RAG, which you can then plug into whatever you want. So without further ado, let's switch to my screen and look at the interface of VectorShift. Yet again, they're the sponsor of this video, but they've built an amazing platform that gives you a lot of flexibility, and this is actually a video I've been looking to make for a while. When you log in it looks quite simple, but it runs very deep. The good thing is they have a marketplace with various presets, so that's exactly where we're going to begin. We'll be looking at two pipelines here. First things first, we start with a standard chatbot: all we need to do is hit the import button, and all of a sudden we have the entire chatbot, with a bunch of notes that explain things. I won't go into the notes, you can read those yourself; what I'll be doing is showing you how to work this and get results as soon as possible.

Quick overview of the interface: it's actually very simple. On top you have various functionalities nested inside different categories. Since we're starting with one of these presets (which I recommend, by the way; there are many great ones in the marketplace), we don't have to use those a whole lot. I'd recommend first getting working results and then looking into what you can integrate this with or where you could pull additional data from. As you can see, this is the simplest implementation of a chatbot. On the left you have an input field, which is where the user types something into the box; this gives them the opportunity to input something. Then we just follow the lines: one line goes into the knowledge base reader, and the second line goes into the OpenAI LLM. So whatever the user types gets passed to two points: the knowledge base and the OpenAI LLM. At this point I'd like to remind you of the theory we just looked at, because I told you that for vector databases to work, you need to pass the prompt to the vector database, turning it into an embedding on the way, and that's exactly what's happening here. The knowledge base reader takes in the prompt and searches over the knowledge base that's attached; we'll look at how to customize this in just a second, but for now just notice that the prompt gets fed to the knowledge base and also goes straight to the LLM, which is basically an API call to whatever OpenAI model you want. I'll switch this to GPT-4o mini, as that's the sweet spot when it comes to performance versus pricing.

As you can see, this OpenAI LLM building block has a system prompt: "You are a helpful assistant that answers user questions based on context and conversational history." At the bottom it says that if you're unable to answer the question, or if the user requests it, direct them to these support resources, namely the VectorShift documentation, booking a call, or the Discord. Obviously you could customize this and put in your own support links, so the chatbot has a fallback whenever it doesn't know a response. More importantly, at the bottom is the prompt that gets fed to the LLM every single time along with the system prompt, and this prompt consists of three parts. First you have the conversational history, marked as "history." Where does that come from? This might be a little confusing: you can see this little "history" word on the left, and if you follow the line you get to the chat memory block. What it does is beautifully explained in the note if you're new to this, or you can just listen to me: it takes the full conversation that has happened so far and feeds it back to the LLM, so the model is aware of the context of the previous conversation. This is basically how you build chatbots: you save every single message to a list, and every time the user prompts, you feed the list back along with the newest prompt. That's exactly what's happening here. The history gets fed, the user question gets fed (you can see "user question," and its line goes back to the input, so the original prompt comes along with the history), and last but definitely not least we have the context, which, as you can see, links to the knowledge base. That's what we'll talk about next, but the completionist in me wants to finish this up first, so let's briefly cover the output. It's very simple: the LLM takes the system prompt plus these three variables, generates an output, and that's what the chatbot responds to you with. By the way, it's really easy to deploy as a standalone website or on your own website; we'll look at that right after the knowledge base.

Because now it's time to show you how to practically use RAG with a chatbot inside a pipeline. You have the theory and the base layout of the practice; now, how do I customize this, upload my own files, and integrate my files into other pipelines? Right here there's a simple dropdown with two options: the VectorShift documentation and "AIA KB1." That's the one thing in this account I did create already, the AI Advantage knowledge base one. If I want to switch to it, I simply select it and confirm that I'm sure. But wait a minute, where do I manage the knowledge base, where do I create it? Well, here we're inside the chatbot editor, so we have to go back out to the original page, which you can always do in the sidebar. If you just look at the sidebar items you might already guess where the knowledge base is hidden: exactly right, under the Knowledge tab. No worries, everything auto-saves here, so our chatbot is still there. In the Knowledge tab you see AIA KB1; this is literally the only thing I did in preparation for this video. You just hit "create new knowledge base" with this plus, and in there you can add a document, or you could even do something like scrape a URL: you give it a website, it looks through the entire site, imports the data, and then you can use the various pages you scraped as your knowledge base. Or you connect one of these applications as an integration and get the knowledge base from there: you could use a Typeform, you could pull from your Google Sheets, whatever. The opportunities here are vast; that's the whole point of using a more advanced system like this, where you get customizability, versus just building a GPT, for example.

But back to this knowledge base. Once I created it, I simply hit "choose/upload files," and then I can drag and drop files in, or pick one of the files I already uploaded under Files. What I did here is upload this wonderful PDF called the Zombie Plan. I kid you not: this is literally a post-apocalyptic plan for what to do when a zombie apocalypse happens, issued by a county in, where else could it be, Florida. This is the official Okeechobee County zombie apocalypse annex, and it includes various gems, like a level system for the different stages of a zombie apocalypse: you have the prep period, the emergency period, the assistance period, and the recovery period. If you scroll through, it's full of very specific knowledge that you will absolutely not find inside GPT-4, and that's why I included it as a fun, light-hearted example. If you dig into it, it includes gems like the assessment that there is currently no assessment of any pro-zombie or radical zombie-rights groups, but history indicates that some group will take this odd stance in the face of the apocalypse. I kid you not, this is a real document, and we'll be uploading it here as a file.

Inside my knowledge base, it allowed me to add this document from my files, and voilà: after about three to four minutes it turned this zombie-plan.pdf, a glorious 75 pages long, into various embeddings. Again, this is where you need to understand the theory we discussed in the beginning, because otherwise all of this stops making sense. As you can see, the 75-page PDF has been turned into a total of 98 different vectors, meaning 98 embeddings were created and stored inside 98 vectors. Now why 98, for a 75-page PDF? It's because of a process, and this is a new term I'm introducing here, called chunking. What chunking does is easier shown than explained: if I open up the preview, you can see that chunk number one includes everything in the PDF up until the words "redevelopment activities." If I take the PDF and search for "redevelopment activities," I find them right here, so the first chunk is basically the entire first page. The next chunk should begin with "levels of the zombie apocalypse," and there you go; it also includes the header. This goes on nicely: page two is one chunk, page three is another, but at a certain point the pages start getting more and more dense, and all of a sudden chunk 21 is page 15, as it splits things up differently. Now, why is this important? It's important because if you're going to access this much data, it can get messy. Already at this point we have a total of 98 chunks, and this is just one PDF; imagine showing up with 20 PDFs. That's a whole lot of chunks the LLM will need to look through, and that's why we need to understand at least the basics of how the technology works, with chunking, embeddings, storing them in a vector database, and retrieving them, because you get to customize some of this stuff. It's quite simple to customize, but you do need this level of understanding.

So let me go back to my pipeline. Just to recap: I uploaded the PDF, created this knowledge base, and added the PDF to the knowledge base, which after a few minutes was turned into all these vectors, and now I can access it inside my chatbot. I go back to Pipelines, hit Edit, and here in my knowledge base node I have the AIA KB1 that I just showed you how to create. Now I can pull all the data from the knowledge base into the results of this chatbot, because I'm infusing the context that comes from the knowledge base into GPT-4o mini, and then of course the output that gets produced is passed to the output node and displayed to the user as the chatbot's reply.

Now let's talk a little more about this knowledge base before we move on to actually completing this bot and showing you a second use case. If I click this cog wheel, you'll see there's a variety of settings. The very first one says "max chunks per query," and this basically asks: how many chunks should I pull in as context every time a user queries something? Remember, the input is linked into the knowledge base because we need to know what the user is looking for. For example, if I search the text for a word like "shotgun," I find there's only one occurrence of it, on page eight, so it doesn't need to pull in the entire PDF; it only needs the chunk that includes this word, and as it only appears once, it only needs one chunk. But depending on your use case and the data you feed it, you might need multiple chunks, and then you'd want to bump this number up from two to something higher. In my case two works super well; a lot of things inside this wonderful document are tackled exactly once. Let's work with the example of a chainsaw, which is also mentioned here: "Chainsaws look great in the movies but will not be reasonable self-defense weapons during the gray plague." All right, so let's test based on this keyword. Just think about this for a second: if I go into my chatbot and ask "hey, what about chainsaws, should I be using those in the apocalypse?", it should reply with exactly this, that they look great in movies but are not a reasonable self-defense weapon during the gray plague, and it will get that from the chunk created for this particular page, or this particular section, of the PDF.

So let's give this a shot. Let's go up here and press this button, "deploy changes," which is basically going to save and deploy all of this. It's more than saving: everything is live and we can start using it. How do we use it? You could hit play here and say "run pipeline," but actually my favorite way is hitting "export pipeline" and going to "chatbot." If I do this, I can give it a name; I'm going to call it "AIA chatbot one." The input is the user question (in the beginning we only have one input) and the output is output one (we only have one output in the end). You can check auto-deploy here, and this will create the chatbot for you. Then I'll hit Export in the top right, and you can see this thing is already live because we already deployed it. So I'm going to do the simplest thing possible here and just open the chatbot, and there you go: this is your live hosted version that you could share with a coworker,
colleague, friend, or family member. I'm going to ask it: "Is using a chainsaw a good idea in a zombie apocalypse?" Now the assistant is processing it, and let's see what it says: "may not be the best idea; while they look impressive in movies…" Yeah, there you go, this is RAG at work: "they're impractical for self-defense situations; chainsaws are loud and could attract more zombies to your location, and they require fuel and maintenance, which may be hard to manage in an apocalyptic scenario; instead, it is recommended to use quieter and more manageable weapons such as bladed weapons," et cetera. It even pulled additional context from the document for me. I bet that if I searched for this keyword I'd find it somewhere in here, and there you go, it was actually in the same chunk, on the same page, so it was really easy for it to pull in. Now mind you, this is why it's important to understand chunking: if this information was hidden not on page 8 but maybe on page 12, the bot would never have looked for that information, and it would not be present in the chatbot's answer. It pulled this additional info because it was inside the same chunk it was already looking at. This is important to understand because it's going to change some of the decisions you make while building this chatbot.

Okay, going back one step: as you can see, you can customize a lot of things about this chatbot. I'm not going to go into every single option here, but you could do something like limit the messages per conversation, say to only 10 messages, so people don't abuse the API key that might be linked in the background. You can obviously also customize the look and feel of the chatbot; I'm going to call this one the "AIA Apocalypse Advisor," and you can customize some of the branding like so. I think you get the point. Don't forget to deploy your changes, because that makes them live. Then again, if you go to Export, you can either look at the chatbot, just like we did, on a website hosted under this URL, and you can even protect it with authentication, i.e. a password. I'll just go ahead and set it to "asdf," and now if I open the chatbot you'll see it's protected: I need to type the password "asdf" to access it. That's great to have. Additionally, you can connect it to Slack and activate it inside a Slack channel, which is one of the simplest and most common use cases, but you can also hook it up to Twilio for it to send WhatsApp messages or SMS messages for you. These bots you build are universal. And if you want to go even beyond that, VectorShift allows you to host your own API, like so, and that could literally link to anything.

All right, so that's the basic bot that we built there, and I believe I showed you how to use RAG in practice, but I want to go a step further and show you some more options, because there really is a lot you can do here. We're going to go back to the marketplace, and I'll take something very simple: this simple "chat with my files" pipeline. I'm going to import it, like so, just by clicking it, and here we are. This is a very simple pipeline. One thing that's different here from the last one is this chat file reader. What the chat file reader allows the user to do is upload their very own file, so you can build a chatbot where the user can upload files, just like you can upload files to ChatGPT. And this is just one of many options you have in here; if you start exploring and looking around, you'll find you can integrate a ton of various things. For example, I could pull in a YouTube video, it could read a specific transcript, and then I could hook that up so the transcript becomes part of the prompt. Or I could get even fancier and use one of these logic nodes to start splitting the text of the YouTube video transcription, so maybe just the hook of the YouTube video goes into here, and
now how do you do this I believe by me showing you this one more time and how to integrate it into something else you will be able to figure out how to do many of these steps by yourself so let's see how do integrate a knowledge base here because none is present here as you can see there's input as before there's an output as before there's a llm in the middle as before and we also have the memory as before right it's this history variable as you can see otherwise the bot would have permanent Amnesia what we now also have is this chat file reader that's great but that's not the knowledge base so how do I add it I go to knowledge base and I drag and drop a knowledge base in here we saw this before so you might already know what to do we need to select the knowledge base that we're going to be using the AIA knowledge base one with the one PDF in it and now what do I do how do I make this word how do I connect this because right now if I go ahead and try it out if I deploy it and run it and ask what about chainsaw it's not going to give me anything look at that I get a error message here and I can see that these various nodes here ran right this one has a green check mark this one doesn't because I didn't upload any file that's not even possible right here I would need to open up the separate website but you can see that the open AI llm actually ran and the output ran and here's the output it seems like you're asking about chainsaw but you haven't provided any specific context so just because this thing is in here it does nothing I need to connect it how do I connect it let me show you so you need to start by connecting the input into the knowledge base because as we talked about before the knowledge base needs the input prompt to search over all the embeddings that are stored inside of the vector database otherwise it doesn't know what to look for and it doesn't know which chunk it should retrieve and hand over to the llm over here so we definitely need this one 
We establish it like so, by simply dragging and dropping, and now we can do the second connection, which is the output of the knowledge base once it has found the correct chunks. We can just plug that into here... but look at that, it doesn't even work; there's no point to connect it to. I want to connect it into here; doesn't work. Well, what I'll do is go to a new line in the prompt and type "context". Inside the prompt you can press this little button, which makes it really simple to add a little variable, and then you can just change its name, so I'll name this one "context". As you can see, by adding a little variable like so, it actually opens up a new field here on the side. And now the LLM is not just pulling in the question (which is what the user inputs), not just pulling in the file (which is what the user uploads over here), and not just adding the history to that; it's also adding the context. Where does the context come from? Well, nowhere right now; we don't have a connection. So let me take the results right here, and, as you can see, aha, we have a connection point. There you go, we hooked it up, and now we've added a knowledge base to this pipeline.

Now if I deploy these changes and ask the same question, "what about chainsaws", it should give me a response. It didn't reference the zombie apocalypse directly, but as you can see here: "while they might appear effective and dramatic in movies, [they] are not recommended as practical self-defense weapons during a zombie apocalypse". Perfect, that's exactly the answer from the knowledge base. So this pipeline is working and we get a successful output. Again, I can deploy the changes and share the chatbot, like so. I'll call him "bot two"... ah, that name already exists, so let's call him "bot three". And here we are again, with the one change that now we can upload files. As you can see, if I look at the final website as it's hosted right here, this little file upload button allows me to upload something, and it will also consider
that as context. And there you go, that's how you build chatbots, or pipelines with RAG integrated into them. One very last thing I want to show you is that you can take these pipelines (they're not just chatbots) and use them in various ways. If I go ahead and export this, you can also export it as an automation, where you're not waiting for the user to do something with it. That means the input field will be linked to something else, like a Slack channel, for example: if you connect your Slack account here, choose "new message", and link that to a specific channel, then every time a new message is sent in that channel, it will run through this pipeline. And the output, again, you can link to something else: a Google Sheet, or your Gmail to automatically send an email, whatever it might be. You can hook these things up; that's why they're called pipelines. They're the plumbing between something coming in and something going out, and you decide where you plug in the thing that comes in and where you send the output that comes out, and then it just operates autonomously. That's how these technologies work, and that's how RAG can support some of these workflows.

But as you can see, as soon as you venture into automations and chatbots, all of these techniques we're learning, like prompting and RAG (and now we've learned about vector databases and chunking), start coming together to work autonomously. Today I showed you what's possible quite easily. But the real idea is that if you take it one step further, we're in this agentic future where these things don't just happen automatically; the LLM also makes the decision on what should happen and where the routing should go. You won't be picking the integration; the agent might be saying, "hey, this belongs in this Slack channel, send an email to this particular person, and this we should post on social media", and so on. That's really the idea of an agent.
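The plumbing metaphor just described, something coming in, flowing through the steps, and going out to a destination, can be sketched as plain functions. The Slack trigger and Google Sheet sink are simulated with a string and a list; none of this is VectorShift's API, just the shape of the idea:

```python
from typing import Callable

def make_pipeline(steps: list[Callable[[str], str]],
                  sink: Callable[[str], None]) -> Callable[[str], None]:
    """Wire the plumbing: every incoming message flows through each step
    in order, and the final result is handed to the output destination."""
    def run(message: str) -> None:
        for step in steps:
            message = step(message)
        sink(message)
    return run

rows: list[str] = []  # stand-in for a Google Sheet row (or an outgoing email)

pipeline = make_pipeline(
    steps=[str.strip, lambda m: f"Summary: {m}"],  # the lambda stands in for the LLM node
    sink=rows.append,
)
pipeline("  new message in the Slack channel  ")  # stand-in for the Slack trigger
```

Swapping the trigger or the sink changes where the pipeline plugs in without touching the steps in the middle, which is what makes the same pipeline work as a chatbot, a Slack automation, or an API.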
It's taking what we looked at today and giving it autonomous decision-making and long-term planning along with it. You can think of it as a separate AI governing this whole interface and building various things in here; that's what an agent is. But the wonderful thing is that, with tools like this available today, you can become that agent and set these things up in a way that is useful to you.

All right, and that is how to use RAG today. Thanks to VectorShift for sponsoring this video; you can try everything I showed you today on the free plan, link in the description below. Obviously, if you want to use this seriously, there are subscriptions available, link in the description. And let me know in the comments what you might want to build with this, or whether you'd enjoy more educational tutorials like this one; I'm thinking about making more of them. All right, that's all I got for today. See you soon.
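To make the agent idea the video closes on concrete: the autonomous routing, where the model rather than the builder picks the integration, can be sketched as a routing function. In practice that decision would be an LLM call; keyword rules fake it here, and the destination names are made up for illustration:

```python
def route(message: str) -> str:
    """Pick a destination for a message, the way an agent would.
    A real agent would ask the LLM to choose among the available
    integrations; keyword rules stand in for that call here."""
    text = message.lower()
    if "urgent" in text:
        return "email"
    if "announce" in text:
        return "social_media"
    return "slack"

destinations: dict[str, list[str]] = {"email": [], "social_media": [], "slack": []}

for msg in ["URGENT: server down", "announce the new release", "daily standup notes"]:
    destinations[route(msg)].append(msg)
```

The difference from the pipelines built earlier is exactly this one function: instead of the builder hard-wiring one output, the routing step decides per message where the result should go.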
