How to Connect an LLM to Python Functions (Tool Calling Tutorial)


Dataquest · 06.02.2026 · 974 views · 26 likes


Video description
In this video, Dataquest’s Director of Curriculum, Anna Strahl, gives a practical walkthrough of LLM tool calling (function calling) in Python using the OpenAI API. You’ll learn how to build a real AI product assistant that can reliably: - Look up real product details - Return accurate pricing information - Calculate bulk order discounts Instead of letting the model guess, you’ll connect your LLM to external Python functions using the tool calling / function calling workflow supported by modern AI APIs. This is a key skill for anyone building AI agents, chatbots with tools, or production-style assistants that need to interact with real systems like databases, pricing logic, or APIs. By the end of this walkthrough, you’ll understand how function calling works conceptually and how to implement it step-by-step in code. What You’ll Learn: - LLM Function Calling in Python: Register tools and let the model decide when to call them - Tool Schema Design: Write clear JSON schemas so the model uses functions correctly - Preventing Hallucinations: Force accurate responses through external data sources - Bulk Pricing Calculations: Use Python functions for reliable math and discounts Recommended Prerequisites: - Prompting Large Language Models in Python → https://www.dataquest.io/course/prompting-large-language-models-in-python/ Access the lesson: https://app.dataquest.io/m/3000058 Video chapters: 00:00:00 - Intro 00:03:12 - Function calling workflow overview 00:05:39 - Why LLMs hallucinate without tools 00:09:21 - Writing the get_product_info function 00:12:53 - Defining tools with JSON schema 00:17:51 - Connecting the model to tools 00:22:26 - Executing tool calls and returning results 00:35:55 - Adding bulk pricing with calculate_bulk_price 00:43:45 - Making tool execution dynamic (tool_functions mapping) 00:46:03 - Next Steps 00:48:10 - Q&A #LLMFunctionCalling #PythonAI #AIAgents #OpenAIAPI #GenerativeAI

Table of contents (11 segments)

Intro

Welcome to the Dataquest project lab. Today we are talking about LLM function calling, a very exciting topic for working with AI. Now, I do want to make sure that you are in the right place to get the most out of this webinar. To get the most out of today's topics, you should feel pretty comfortable with Python foundations, and by that I mean loops, methods, dictionaries, and functions. I'm not going to spend a lot of time on the syntax for these things, so if they are newer to you, just know that some of this may move more quickly. Of course, you're welcome to stay, and like I said, it will be recorded, so you can always go back and rewatch sections you need a little clarity on. I also think that having at least a little familiarity with APIs, and the OpenAI chat completions API in particular, will make what we're doing today a little more digestible. I'm not going to spend a lot of time on how we connect to the OpenAI chat completions model. If you joined me several webinars ago, we actually built a chatbot using this model, so if you joined me for that one, you're in the right place. If you didn't, don't worry; everything we're doing today builds on that in a way where it's going to just sit on the back end. So, now that that's out of the way, let's talk about today's agenda. We're going to introduce the scenario we're working with, then introduce the function calling flow, because I think that if you understand function calling conceptually, it won't seem as intimidating once we get into our coding environment. I know that when I first saw function calling, or even heard the phrase, I thought: oh my gosh, this is new, this is intimidating. But we're going to break it down, and then we're going to build two functions together.
We actually are not going to have time to create our agentic loop today, but we will answer some Q&A towards the end, so if you do have questions for me, use that Q&A box. All right, our project brief. We are going to be acting as AI engineering developers building a product assistant that can look up real product information and calculate bulk pricing for a fictional company. The fictional company we're using is Global Java Roasters, a coffee bean roasting company that sells different kinds of coffee to customers. A product assistant may seem similar to a chatbot, and it is; it's a chatbot, but one with very specific capabilities that a regular old chatbot doesn't necessarily have. So

Function calling workflow overview

as we keep that in mind, let's talk about our function calling workflow. I have this color-coded for a very specific reason. When we are function calling, let's imagine that a customer or a user asks a question: "What is the price of Ethiopian Yirgacheffe?" That's a kind of coffee bean blend that our company offers for sale. So the customer types that in. Then our LLM sees that question and immediately says, "Hey, you know what? I don't know this on my own. I'm going to call a tool. This tool is outside of my capability, but we're going to use it." Then we move on to the code, and notice the code blocks here are green. This means that when the function is actually executing, it is in its own Python script. The LLM itself does not run this code; it's kind of on pause, waiting for the function to finish running. Once the function finishes running, it returns the information. In this case, it says "Ethiopian Yirgacheffe" and gives the price, and finally the model sees only the information that was returned and responds to the user. Now, in the chat, someone is asking who writes the get product info function. That is our job, and that's actually what we're going to start with today. Everything in green here is something that we, the developers, have to create behind the scenes. We also need to create a way for our model to connect to that function. Those are the main things we're going to do today: write the functions, connect the model to the functions. And when we put it that way, it doesn't sound too complex. All right, so now let's dive into a coding environment. I'm working in VS Code; you can use whatever IDE you would like. And the first thing I want to do is start with a non-example

Why LLMs hallucinate without tools

of what happens if we do not use function calling. At the beginning, when I said it was important, or valuable, to have worked with an AI API before: this line of code is importing an existing AI API connection. To follow along best today, you will need your own API key, and once we get into writing our main code, I'll go over that a little bit more. But for this particular demonstration, I just want to show why function calling is valuable. Notice our prompt. It says, "You are a helpful product assistant for Global Java Roasters." The customer question we are showing in this non-example (this is not function calling; this is what we do not want to happen in a real customer experience) says, "What is the current price for our Ethiopian Yirgacheffe, and how much does it cost for a 50-bag order? Respond with the product information." And then we ask the model to actually run that query. But notice that nowhere in this code do we give any actual information about our product. So what is the LLM going to do? It doesn't know anything about our company's actual products. So let's go ahead and run this now. Okay, it's running its course, and it says, "Our Ethiopian Yirgacheffe is a fan favorite and I'd be happy to provide you with the latest pricing information. The current price for a single bag of our Ethiopian Yirgacheffe is $15.99." Whoa, whoa, whoa. Where did the LLM get this information? The LLM has just hallucinated everything here. Everything here is completely made up from nowhere. If this was an actual product assistant, a customer would be very unhappy, because they would go to check out and realize that this pricing is not correct. So we need to figure out a way to provide the Global Java Roasters information to the LLM. And someone asks, is there a website for Global Java Roasters where the LLM can get info? That's what we're going to work on right now.
Everything in this output, just to be crystal clear, is completely made-up nonsense from the LLM. In fact, let's run it again and see if we get anything similar. Okay, so our next response now says $12.99. It's very clear that the LLM is doing its best, but it doesn't have any foundation on which to base factual information for our specific case. So, I'm going to close out of this non-example. And here is what we are starting with. We
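For reference, the ungrounded prompt from this non-example can be sketched as follows. The wording is paraphrased from the walkthrough, and the actual API call is left as a comment so nothing here depends on a live key; `client` and `MODEL` are assumed from the setup discussed later.

```python
# The "non-example": the prompt gives the model zero real product data,
# so any price in the answer is hallucinated.
prompt_messages = [
    {"role": "system",
     "content": "You are a helpful product assistant for Global Java Roasters."},
    {"role": "user",
     "content": ("What is the current price for our Ethiopian Yirgacheffe, "
                 "and how much does it cost for a 50-bag order? "
                 "Respond with the product information.")},
]

# response = client.chat.completions.create(model=MODEL, messages=prompt_messages)
# print(response.choices[0].message.content)  # a confident but invented price
```

Nothing in `prompt_messages` mentions a real catalog, which is exactly why the model invents $15.99 one run and $12.99 the next.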

Writing the get_product_info function

are starting with a dictionary of our fictional company's products. Could we have connected to a database? Yes, but for simplicity's sake and for learning's sake, we are just using a products dictionary like this. A next step could be to actually make a database connection and build from there. So if you're following along with my code, this products dictionary is provided for us already. I'm going to go ahead and clear this, and we're going to start writing our first function, because the question we want our product assistant LLM to be able to answer, to start, is: what can you tell me about Ethiopian Yirgacheffe? And if a customer asks what the price of Ethiopian Yirgacheffe is, we want to get this specific price back. So, I am going to be referencing my products.py file, and the very first thing we're going to do is write our get product info function. Here's what it is: defining a function just like we do in Python. We are using typing. Typing may be on the newer or less familiar side to you, but let's break down what's happening. The reason for typing is that we are specifying what type the different components of a function are. Our argument, the product ID, will be a string type. Then our function will return a dictionary, and this ensures that we always have predictable input types and predictable output types. We have a quick little docstring, just for our reference, of what the function does. And it's quite a slick, clean function. It says if a product ID is not in products, we're going to have some exception handling that says: hey, the product you requested was not found. But if the product ID is in products, we simply return the value for that product ID key. On its own, this function may seem a little lackluster, but what we're really doing is saying: if we request Ethiopian Yirgacheffe, return this information. All right.
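Here is a sketch of what products.py might contain. The product fields and IDs are illustrative reconstructions from what the walkthrough reads out, not the actual file.

```python
# Stand-in for products.py: a dictionary of the fictional catalog.
# Field names and IDs are assumptions based on the walkthrough.
products: dict[str, dict] = {
    "ethiopian_yirgacheffe": {
        "name": "Ethiopian Yirgacheffe",
        "origin": "Yirgacheffe region, Ethiopia",
        "roast": "single origin",
        "flavor_profile": "bright and citrusy with a floral aroma, light body",
        "price": "18.99",
        "certifications": ["fair trade", "organic"],
    },
}


def get_product_info(product_id: str) -> dict:
    """Look up a Global Java Roasters product by ID and return its details."""
    if product_id not in products:
        # Graceful handling instead of raising: the model can relay this.
        return {"error": f"Product '{product_id}' was not found."}
    return products[product_id]
```

The type hints (`product_id: str`, `-> dict`) give us the predictable input and output types discussed above.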
Now, we're going to work in a separate file here. We're working with two different files. And in this file, one moment, let me get my solution notebook pulled up. In this file, we're going to start with

Defining tools with JSON schema

how do we actually tell the LLM about our function? Because the LLM will never actually go into this products.py file. The LLM will never see this function, but we need to tell the LLM about it. The way we're going to do this is with a tools JSON schema. I have some boilerplate that I'm going to talk through for a moment; let me uncomment it. I think this boilerplate is valuable to know, because it highlights that no matter what function we add, no matter how many arguments the function takes, we are always going to need this same kind of structure to talk to the LLM. So what's happening here? We are telling the LLM the type of the tool that we're going to be using, and in this case, the type is going to be a function. Actually, I'll put products.py side by side so we can see it as we talk. So, here's our function, and we're going to translate this Python function into JSON schema text that the LLM can access. We know it's a function. Now, what's the name of the function? The name is get product info. The next thing we need to tell the LLM about our get product info function is a description, and this is the make-or-break of function calling, in my opinion. You don't want to just say something like "looks up product info." This is not right. I like to think of this as: how would I explain my Python function to my grandmother, who has never coded a day in her life? You want to explain at that level of detail to the LLM, because remember, the LLM never sees the actual function. It only sees this. So with that in mind, here is the description I came up with: "Get detailed information about a Global Java Roasters product, including name, origin, flavor profile, and current price. Use this when a customer asks about a specific product."
So, we're saying what the function gets and when to use it. The next thing in our boilerplate is strict is true, and strict is true pairs with additional properties is false. What these do is make sure that all of the information stays in the same rigid structure for the LLM. If you've worked with LLMs before, you know that they really like being eager and creative. To harness that and make the output as predictable as possible, strict is true says: we are sticking directly to the parameters I'm about to tell you here. And what parameters does our function need? The parameter in our function was product ID, so that's what we're going to put here. What do we need to know about product ID? What type is it? String type, and we know that because we have made very, very sure that the product ID will be a string because of the typing in our function definition. Then our description of product ID is the product identifier, for example "ethiopian house blend" or "geisha reserve." And then the required parameters: do we have any required parameters in our function? Yes, we do; product ID is required. And now we've filled in all the boilerplate. This is what the LLM will access in order to know it needs to use this function.
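Putting all of that boilerplate together, the tools schema might look like this. The descriptions are paraphrased from the walkthrough, and the layout follows the chat-completions tools format.

```python
# The tools schema the model sees instead of the Python function itself.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_product_info",
            # The "explain it to your grandmother" description:
            "description": (
                "Get detailed information about a Global Java Roasters "
                "product, including name, origin, flavor profile, and "
                "current price. Use this when a customer asks about a "
                "specific product."
            ),
            "strict": True,  # stick rigidly to the declared parameters
            "parameters": {
                "type": "object",
                "properties": {
                    "product_id": {
                        "type": "string",
                        "description": ("The product identifier, for example "
                                        "'ethiopian_yirgacheffe' or "
                                        "'geisha_reserve'."),
                    },
                },
                "required": ["product_id"],
                "additionalProperties": False,  # pairs with strict: True
            },
        },
    },
]
```

Adding another function later just means appending a second dictionary with the same shape to this list.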

Connecting the model to tools

function. So now that we have our tool defined, we are going to actually connect to the API above our tool JSON schema. Here I'm going to import our needed libraries. We're going to use the OpenAI library. We're going to work with JSON; JSON is just very nice formatting when we need predictable but flexible information for input and output. OS is how we're going to work with our API key. And products is the products.py file we have on the right-hand side here. From products.py, we are going to import our get product info function. For the moment, we're not going to work with another function just yet, so I'm going to delete that. Now let's connect to OpenAI. Just under our imports, we're going to create a client object, and in this client object we are going to give our API key. Our API key is going to be stored in environment variables. This is the most secure way to work with an API key. However, if you are working 100% locally and your code will never, ever move off of your local machine, and you don't want to be blocked by this for the time being, you could technically copy and paste your API key here as a string. But just know that's not secure, because anyone who has your API key, it's just like they had the key to your house. Your API key will get charged real money, and you don't want a stranger to have access to that. So, this is our way to talk to OpenAI. And then for models today, I have a little bit of boilerplate with some models that may be valuable. In fact, I will share this link in chat with you: a list of all of the Together AI API models available. The model I'm going to be using today is Llama 70B Instruct Turbo. In all of my testing for this webinar, I believe I was charged two pennies, two American cents, because it's 88 cents per 1 million tokens. But there is a free model available, and that model is ServiceNow AI's Apriel 15B Thinker.
I did a little bit of experimenting with this yesterday, and it seemed fine. So as long as you have a Together AI API key, and if you want to be 100% free, check that one out. Okay, now that we have our models, we are going to create our message. Underneath our tool definition here, let's create our messages list. If you've worked with an AI API before, messages is basically your conversation history. We are including a system message of how we want the AI to respond in general: "You are a product assistant for Global Java Roasters. Always use the available tools to look up current product information and pricing. Do not rely on general knowledge about coffee." So we are making extra sure that we are reducing the likelihood the model will hallucinate made-up nonsense like we saw before in our output. Now we are asking a user question, and this is for development purposes; you could create some kind of interactive chat environment, but just for development we're going to hardcode the user question here. What would 100 bags of Yirgacheffe cost? Oh, that's our next question. Let's say: can you tell me about Yirgacheffe coffee? And let's see if the model is able to interpret a slightly different name, because remember, in our products dictionary it's Ethiopian Yirgacheffe, not just Yirgacheffe. So we're kind of testing the LLM with this message.

Executing tool calls and returning results

Now that we have our messages list initialized, let's create our response object. We are now officially connecting to our OpenAI client object. We're saying we're chatting, and we want to complete that chat, and we're specifying a couple of things within this function. The first one is the model; like I said, I'm using 70B Instruct Turbo. Messages is going to be the messages list we just created. But here are the new parameters that are specific to function calling, aka tool calling; these words are somewhat interchangeable nowadays in the world of AI APIs. For tools, we are giving it the tools JSON schema that we defined up here. So the AI is going to be given this information about the function, and it's told that this information is a tool. The next one is tool choice auto. For our purposes, auto is fine; basically, we're giving the LLM permission to decide whether a tool is needed or not, and to decide which tool to use. You can change tool choice to required, or say "use this specific tool," if you want to have a little more control over what the LLM will do. But that's kind of hardening-type work we don't need to do at this stage of development for our specific use case. I also gave a pretty low temperature, because when I was experimenting, I was finding that the LLM still tried to get creative with its output if the temperature was a little higher. Setting a low temperature made the output just a little more aligned with what I wanted a product assistant to actually give. Okay. So now we are going to take this response, and actually, you know what, let's go ahead and run this right now, and I'm going to print response just so that we can see what this may look like. Okay, I need to update my VS Code to not have everything show in one line; that's okay, let's go ahead and do it this way. All right, I like that a little better.
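The request described above can be sketched as follows. `client`, `MODEL`, and the `tools` schema are assumed from earlier steps, and the live call is commented out so the sketch stays self-contained; the exact temperature value is an assumption ("pretty low").

```python
# Conversation history: a grounding system message plus the hardcoded
# development question.
messages = [
    {"role": "system",
     "content": ("You are a product assistant for Global Java Roasters. "
                 "Always use the available tools to look up current product "
                 "information and pricing. Do not rely on general knowledge "
                 "about coffee.")},
    {"role": "user", "content": "Can you tell me about Yirgacheffe coffee?"},
]


def build_request_kwargs(model: str, messages: list, tools: list) -> dict:
    """Collect the parameters for a tool-enabled chat completion."""
    return {
        "model": model,
        "messages": messages,
        "tools": tools,          # the JSON schema the model is shown
        "tool_choice": "auto",   # let the model decide when a tool is needed
        "temperature": 0.1,      # low temperature reins in creative pricing
    }


# response = client.chat.completions.create(
#     **build_request_kwargs(MODEL, messages, tools))
# assistant_message = response.choices[0].message
```

Swapping `"auto"` for `"required"` (or naming one specific tool) is the hardening option mentioned above.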
So, you can see that the output of response is this kind of garbled-looking thing, and we are going to pull out specific pieces from it. In particular, we're going to pull out the assistant message. Somewhere in there the assistant has responded, but instead of having to read through all of it, I am simply going to say the assistant message is this part. Let's see what happens when we print assistant message. Oh, I would like to continue running it here. There we go. All right, it's thinking; it's probably connecting to the API. There we go, and here is our output. It says content is none. This is interesting, and we'll talk about why the content is none right now. If you've worked with chat completions before, usually content is where the assistant says, "Sure, I would be glad to help you answer that question," and that's where its answer is. But because we are using tools, the LLM saw that our question was about Yirgacheffe coffee, and it saw that it was not supposed to rely on general knowledge. So what did it do? It looked and saw that it should call this function. What we're going to do now is write some conditional logic that says: if it has seen that it needs to use a function, let's go call that function. So here is the beginning of our logic. And one second, I need to turn off Copilot from trying to reply with things. So, if the assistant message has said that tool calls are necessary, the first thing we're going to do is add that message to our messages list, because remember, this is our conversation history; this is what the LLM actually sees when we make new calls to chat completions. Then I'm going to copy a couple of lines in a row here. We're going to pull out a tool call.
We can see from our output that tool calls is part of the chat completions response, and it looks like it has pulled out the necessary product ID, as well as get product info as the function name. So we are going to pull this out. One thing to note: in tool calls, it is possible to have multiple functions requested by the LLM. Maybe the LLM sees there's get product info; maybe it also sees something like "create an image about the product." For our development purposes in this demo, we are just going to look at one function call at a time, and that's why zero is here. But if you create an assistant that can get multiple tool calls at once, you can loop through them at this index. Once we get the tool call pulled out, we're going to get the function name, which we can see in our output. Then we're using JSON, because like I mentioned earlier, JSON is just a very nice, organized way to work with information, and we're going to print a little bit of information: model requested this function name with these arguments. Then let's actually get our result. For the moment, we're going to hardcode our function, get product info, and within get product info we are going to use double-star arguments. This unpacks a dictionary, because as you'll see, arguments looks like a dictionary, but we know that get product info takes a string product ID. So double-star arguments will pull out just the necessary values for us in a very slick line of syntax. Now ask yourself this: we know the model bases all of its knowledge on the messages list. Once we have called get product info, is this information in our messages list yet? Not yet. So what we need to do is append this information to messages so the LLM can actually see it. We're appending a new kind of role.
If you've worked with AI APIs before, we've seen system, we've seen assistant, we've seen user, but now we have a new role called tool. We're using our tool call ID, and the content of this new message for our messages list is going to be a json.dumps version of our get product info result. In my opinion, if you can understand conceptually the logic of what's happening here, function calling is unlocked for you. If this is confusing, I don't blame you; it is a new way to think about working with AI APIs. But at the end of the day, let me just go over this one more time. The first thing we've done is ask the AI a question. The AI realizes it doesn't have the answer to that question, so it is going to call a tool. If a tool is called, we run that tool and give the answer from that tool back to the AI. Then we actually need to call the AI again. So we are again using chat completions to create a new message using our 70B Instruct Turbo model. We are feeding messages back to it, but now it has that it needed a tool, and it has the result from that tool. If any more tools are needed, yes, that's available. And we've set a low temperature just to rein in creativity. Then we can print the final response. Actually, just for best practice, I'm going to give my else block: if for some reason content did populate here and there was not a tool call, we would want whatever the assistant said. So let's run this and see what happens. We can see it's going to run down here. All right, it says "model requested," and it did in fact request our get product info function with the product ID of Ethiopian Yirgacheffe. That's awesome: we asked it just about Yirgacheffe, kind of vaguely, and it filled in the context correctly. And let's see the response. It says, "Our Ethiopian Yirgacheffe single origin coffee is a bright and citrusy coffee with a floral aroma and a light body."
It is sourced from the Yirgacheffe region in Ethiopia, and it is certified as both fair trade and organic. The current price for this coffee is $18.99. That's a lot of information; let's check if it's correct. Referencing our products dictionary, which I just moved over to the left-hand side, we can see it is single origin, it comes from the Yirgacheffe region: bright citrus, floral aroma, light body, $18.99, fair trade and organic. Bang, bang, bang. All of the points from our products dictionary have been addressed, but in a human-readable way. So, imagine that this is a product assistant chatbot and a customer did ask about this coffee. Instead of dumping a pretty dry set of facts, we've now translated that into something a chatbot can say and a customer can easily digest. That's pretty cool. And the price is correct, which is also very important for a product assistant.

Adding bulk pricing with calculate_bulk_price

assistant. So we have added one function to our function calling here, but I think it would be very cool to add a function that is a little more indicative of the power of function calling, because as we were saying, we could have simply connected to a database to regurgitate this information; a function isn't strictly necessary there. But something that LLMs are not inherently, reliably good at is math. So we're going to write a function called calculate bulk price. Let's say that if a customer wants to order 100 bags of Ethiopian Yirgacheffe, they're going to get a discount because they're buying in bulk, and that's what this function is going to do. I'm going to copy and paste a lot of code here; don't panic, it is being shared. We're going to add a new Python function called calculate bulk price. Once again, we're using typing, where the product ID will be a string. But notice we have a second required argument, and the quantity will be an integer. Once again, we want the return type to be a dictionary. You'll notice that there are a lot of squiggles; I need to import a library at the top. Let's go ahead and do that: from the decimal module, we import Decimal, and note the module name is lowercase. This is a somewhat new library to me; my colleague told me about it. What Decimal does is make money numbers more predictable, because if we are working with a price, there are going to be two decimal places, no more and really no less. If you use float here, float is not great at keeping money numbers as money numbers, because of the way binary works. Sometimes rounding gets funky; sometimes working with money in floats just gives us answers that don't look like money anymore. Decimal solves that problem.
I'm not going to go into the syntax of Decimal a ton, looking at our time, but I highly encourage you, if you're interested in Decimal and quantize after this webinar, to do a little research and figure out what is happening here, because the majority of the syntax in this function is familiar. Right? We're using an if block where if the quantity is larger than this, the percentage of discount is going to be that. I do want to point out what we are returning from this function: we are returning the price for bulk coffee, but we're returning those values as strings. The reason we're returning things as strings, even though they are discounts and prices, is that JSON serializing works most nicely with strings, especially because Decimal is not natively familiar to JSON. So converting everything into strings works well with the toolset we are using. Okay. So this is our calculate bulk price function. Now, let's go back over into our product assistant file and ask ourselves: we made our Python function, but how do we connect that for the LLM to actually know what the function's about? We need to add another tool to our tools schema, and remember, we have some boilerplate code we're going to fill in. For time, I'm going to copy and paste that here. So what are we doing? We have to give the name of the function, and a description that our non-technical grandmothers could understand: "Calculate total price for a bulk order with volume discounts," and it gives the percentages for those discounts. "Use this when a customer inquires about pricing for large orders." I do want to point out also that for parameters, we have two parameters listed: we have the product ID, but we also have the quantity. This is why JSON is really nice; it follows the same structure, but in a flexible way where we can add more or fewer properties as needed. And for required, both of these parameters are required. Okay.
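A sketch of calculate_bulk_price along the lines described, using Decimal and returning strings for JSON serializing. The discount tiers below are illustrative guesses, except that a 15% discount at 100 bags reproduces the $1,614.15 total seen later in the walkthrough; the trimmed-down products dict stands in for products.py.

```python
from decimal import Decimal

# Minimal stand-in catalog; prices stored as strings so Decimal(...) is exact.
products = {"ethiopian_yirgacheffe": {"name": "Ethiopian Yirgacheffe",
                                      "price": "18.99"}}


def calculate_bulk_price(product_id: str, quantity: int) -> dict:
    """Calculate a bulk-order total with a volume discount.

    All money values are returned as strings because JSON has no native
    Decimal type.
    """
    if product_id not in products:
        return {"error": f"Product '{product_id}' was not found."}
    unit_price = Decimal(products[product_id]["price"])
    # Illustrative tiers; only the 15%-at-100 tier is implied by the demo.
    if quantity >= 100:
        discount = Decimal("0.15")
    elif quantity >= 50:
        discount = Decimal("0.10")
    elif quantity >= 20:
        discount = Decimal("0.05")
    else:
        discount = Decimal("0")
    # quantize pins the result to exactly two decimal places.
    total = (unit_price * quantity * (1 - discount)).quantize(Decimal("0.01"))
    return {
        "unit_price": str(unit_price),
        "quantity": quantity,
        "discount": str(discount),
        "total": str(total),
    }
```

With floats, `18.99 * 100 * 0.85` can drift in the last bits; with Decimal and `quantize`, the total stays an exact two-decimal money value.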
So what else do we need to do for the LLM to successfully calculate bulk price? Well, let's change our question here. Instead of "what can you tell me about Yirgacheffe coffee," let's ask: what would 100 bags of Yirgacheffe cost? Our response object here is fine, our assistant message is fine, all of this is fine. The model is going to infer the correct function, because when we pass tools, it sees all of this, meaning it sees the information for get product info and for calculate bulk price, and it makes the determination of which one it needs; or if it needs both, it'll say that too. So this knows what it needs. Honestly, the only thing we need to change is the part that we hardcoded for development, which is the function. So, because we're still developing, let's go ahead and hardcode our new function. What was the name of it? Calculate bulk price. Oh, and you know why there's a squiggle: we need to import that function here. Calculate bulk price. There we go. And now let's put the right name of the function here. There we go. With just a few lines of code, we're going to run this again. I keep using the wrong terminal. There we go. All right, we can see the model requested the correct function, it pulled out the correct product and quantity, and it did the math correctly. 100 bags of Yirgacheffe would cost $1,614.15.

Making tool execution dynamic (tool_functions mapping)

Fantastic. So we're actually almost done, but there is one thing we need to address: the fact that we've hardcoded our function name, which is not great practice, because if the model needs the get_product_info function, it's not going to be able to use it right now. So, at the very top of our file, we're going to add a global variable called tool_functions. This is going to be a dictionary, with the keys being the names of the functions and the values being the functions themselves. Now that we have this defined, we can scroll back down to where we had the hardcoded call and simply say tool_functions[function_name] with the same starred arguments. If we do one more test run, we can see that even though we've made the result variable more flexible, we get the same information back. So in today's walkthrough we've done something pretty dang cool: we've gone from a hallucinating LLM making up its own prices and product information to telling it what products we actually have, telling it how to get that product information, and reliably getting that information back from the AI. Is this ready for production? Could you take this project and say, "Okay, Global Java Roasters, here's your product assistant, it's ready"? No. There are some things we need to account for, and those things are our next steps.
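A sketch of that dispatch pattern, with stub tool implementations standing in for the real ones (the stubs and their return values are assumptions):

```python
import json

# Stubs standing in for the webinar's real tool implementations.
def get_product_info(product_id: str) -> str:
    return json.dumps({"product_id": product_id, "found": True})

def calculate_bulk_price(product_id: str, quantity: int) -> str:
    return json.dumps({"product_id": product_id, "quantity": quantity})

# Keys are the names the tools schema advertises to the model; values
# are the Python functions themselves, so nothing is hardcoded.
tool_functions = {
    "get_product_info": get_product_info,
    "calculate_bulk_price": calculate_bulk_price,
}

def execute_tool(function_name: str, arguments_json: str) -> str:
    """Look up whichever function the model requested and call it."""
    args = json.loads(arguments_json)
    return tool_functions[function_name](**args)
```

Because functions are first-class objects in Python, the dictionary lookup replaces the hardcoded call with no other changes to the flow.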

Next Steps

You can add an agentic loop. What if a customer asks a more complex question? For example: "I like coffee with citrus notes and I want to buy 100 bags of it. What do you recommend, and what would the price be?" Why is that question more complex? Well, we don't say the name of the coffee we want, so the model has to figure that out for us and then calculate bulk pricing. That's where an agentic loop comes into play: we tell the model, until all parts of the user's question are answered, keep doing this cycle. Another thing to account for, which I actually haven't put into my visual of next steps, is security. It's important to have exception handling: if the tool call the LLM requested doesn't exist, how do you handle that? Because LLMs, at the end of the day, are still eager. Maybe it gets the name of the function wrong; maybe instead of get_product_info it tries calling get_product_information, and that slight difference can really mess things up. So you need to build in some of that flexibility, some of that security. Another thing, and I think I saw a question about this in the chat: you can refactor your code to use the Model Context Protocol. This means you can take the same building blocks and apply them to lots and lots of different applications. If you wanted a product assistant to live in Slack, on your website, and in a phone application, the Model Context Protocol means you don't need to duplicate that code everywhere. You can also explore libraries and tools that make tool calling a little more integrated. So, at this point in time, now that we've
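One way the agentic loop and the unknown-tool guard could be combined, sketched with a `call_model` callable standing in for the real API request (the message shape here is a simplified assumption, modeled loosely on the OpenAI chat format):

```python
import json

def agentic_loop(call_model, tool_functions, messages, max_turns=5):
    """Keep calling the model until it answers in plain text.
    `call_model(messages)` stands in for an API call and returns a
    message dict with an optional "tool_calls" list (assumed shape)."""
    for _ in range(max_turns):
        message = call_model(messages)
        messages.append(message)
        tool_calls = message.get("tool_calls")
        if not tool_calls:
            return message["content"]  # no more tools needed: final answer
        for call in tool_calls:
            name = call["function"]["name"]
            fn = tool_functions.get(name)
            if fn is None:
                # Guard against hallucinated tool names instead of crashing.
                result = json.dumps({"error": f"unknown tool: {name}"})
            else:
                result = fn(**json.loads(call["function"]["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": result})
    raise RuntimeError("gave up after max_turns without a final answer")
```

The `max_turns` cap matters: without it, an eager model that keeps requesting tools would loop forever.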

Q&A

talked about next steps, we have time for questions. I'll try to stay about ten minutes for questions. I do know our webinar is slated to end in about four minutes, so if you do need to hop off, I want to thank you for your time. But if you can stick around the extra few minutes, I'll be here. If you have questions for me, use the Q&A box and I'll try to get to everything I can. Okay. Nu asks, "Why use temperature?" Good question. Temperature is a way to specify to the LLM how much we want its answer to vary, because at baseline LLMs predict the next word based on statistics, and temperature helps us change that predictability. Do we always want the next word to be "the cat", or if we open up that flexibility, might it be "the dog" or "the house" or "the person"? We're keeping our temperature relatively low here, because the higher the temperature, the less predictable the output is, and since we want the model to be very predictable with its output type, I found that a low temperature worked better. But actually, what happens? We can live-experiment here. So: "What can you tell me about Yirgacheffe?", which was our get_product_info test, and let's crank the temperature up to, like, 0.8. I don't know if this is going to be wildly different or not, but we're going to see. I'm going to clear my output here. There we go. Okay, I am pleasantly surprised that even though we cranked the temperature up, the model did exactly what it was supposed to do. It pulled information only from our products dictionary about the Ethiopian Yirgacheffe, didn't make anything up, and everything is accurate. So maybe temperature is less important than I initially thought. But what happens if we run it again? This is just developer brain exploring in real time. Okay, it seems like it's doing fine. This could be an interesting next step for you: can you make the model do a bad job with this if you crank the temperature up too much?
I don't know. All right. A user asks, "How does the LLM call the function? It is a little confusing." Yes, this is the heart of function calling, and it is the hardest thing to grasp conceptually. So I want to point out that the actual function call happens in this line of code, and this line of code is not connected to the AI. The AI connections happen above it and below it. Our separate products.py file, where the function's implementation actually lives, the LLM never sees. So this is a good separation of LLM versus Python code, and the only way the LLM sees the function's result is once we add it to this messages list. The magic really happens when the LLM makes the determination that a tool is needed, and this is OpenAI doing lots of fancy engineering to make their models capable of that. But at the end of the day it is based on text: the model sees there is text about functions, determines which function from that text is needed, and returns that with its special JSON formatting, and then, within the same conditional block, we pull that out. So it's still the same flow: the model does its work and takes a break, the function does its work and gives its answer, and then the model sees the answer. There's kind of a wall in between them. Hopefully that answers your question; if not, let me know. And Neu, I think this question was probably answered, but it says, "Why don't you use function_name instead of hardcoding get_product_info?" That was right here, and once we finished our development process, we did go ahead and un-hardcode the function name.
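That "wall between them" can be seen in the shape of the messages list across one round trip. Field names here follow the OpenAI chat format; the content values are made up for illustration:

```python
messages = [
    {"role": "user",
     "content": "What would the price of 100 bags of Yirgacheffe cost?"},
    # 1) The model replies with a tool *request* instead of text.
    {"role": "assistant", "content": None,
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "calculate_bulk_price",
                                  "arguments":
                                  '{"product_id": "coffee-001", "quantity": 100}'}}]},
    # 2) Our Python code runs the function and reports the result back,
    #    linked by tool_call_id; the model never executes the function.
    {"role": "tool", "tool_call_id": "call_1",
     "content": '{"total_price": "1614.15"}'},
    # 3) On the next API call the model reads the tool output and writes
    #    the final natural-language answer.
]
```

The model only ever sees and produces these text messages; the Python code on our side is what turns message 1 into message 2.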
I wanted to leave that in because I thought it was easier for demonstration purposes, and also to show that when you're actively developing a program, sometimes best practices for production-ready code aren't what you need in order to see the code working in real time for yourself. So: hardcode while you're working, un-hardcode when it's ready for the end. Aon asks whether the LLM sees the JSON tools block. Yes. This chunk of text is stored in a variable we've named tools, and this variable is passed into the chat completions creation right here: tools=tools. That is where it gets all that information from. Okay, reading the Q&A here. Priya Darini asks whether the product assistant with tools .py file is invoked by the LLM. We have not defined a tools.py file; we had tools defined within our same file here. You could separate it out, but for our level we didn't need to do that. All right. And how does the API work here? If you're curious about the OpenAI API, there is a webinar I did a couple of months ago to build a chatbot using this same starting code, and the part we added for tool calling in particular was the tool-calling parameters. If you're curious to know more, Dataquest also has lessons on working with the OpenAI API; if you search the catalog for generative AI, you'll see that information pop up. The main thing to know is that you need an API key; sign up for it online. You can use an OpenAI API key or a Together AI API key. Together AI is a little cheaper overall than OpenAI in my experience, which is why I recommend it, especially for development, but either works. You could also consider Gemini or Anthropic or a different provider like that, but the syntax may vary slightly. And so the LLM connects with Python.
Yeah, this Python line of code. I do want to make clear that our code connects to the AI API; the LLM does not connect to our functions. A slight nuance there, but hopefully that makes sense. All right. Fon asks, "I'm interested in machine learning. Is it necessary to learn data science for working with CSV data in machine learning?" Awesome that you're interested in that. Machine learning is very integrated with the concept of data science, and CSVs in particular are the backbone of a lot of tabular data. So understanding how CSVs work, and how to open, read from, and manipulate them with a programming tool of your choice, matters. Most data scientists work with Python; some work with R; but especially if you're interested in machine learning, for the majority of fields Python is your best bet. So learn some Python, learn some basic statistics in order to understand the different algorithms machine learning is based on, and then hit the ground running. Hopefully that answers your question. All right. If I did not answer your question, please tag me in the Dataquest Community and we can keep the conversation going. At this point we're going to wrap things up. I want to thank you all for coming. I had a blast teaching this, and hopefully you had a blast learning it. Please do fill out the feedback survey, because it will let me know what I did well and what I could improve next time, and there's also a field to request future webinar topics. This webinar topic in particular was inspired by the fact that many of you wanted to see more AI-related content, so let me know what you want to see in the future and I'll try to incorporate it. And yeah, that's it for me. Thank you all so much. I hope you have a good rest of your day or evening. Bye, everybody.
