In the last lesson, you used an LLM to invoke a single function. In this lesson, we're going to cover all of the variations of this. All right, let's go.

There are a lot of different permutations and combinations of function calls that an LLM is capable of issuing. These include single calls, parallel calls, no calls, multiple functions, and nested functions. Let's start with parallel calls.

Before we get started, let's do some housekeeping. In the last lesson, you created a function and then created a prompt with the description of the function embedded within it. You can actually be a bit more efficient about this by using the function definition itself to create the prompt automatically. To show this, you define a generic function called "afunction". You can use the __name__ and __doc__ attributes like this to pull out the relevant information, and you can use the Python inspect module to get the signature, or the arguments, of the function, like this. You can put all of that together in a utility that creates prompts for a given list of functions, such as this utility called "build_raven_prompt". Here, you accept a function list and a user query. For every function in your function list, you use the attributes we just discussed to pull out the signature (the arguments) of the function, as well as its docstring, and you use those to create the function annotation for each function in the list, adding the user query at the end. Let's try it on our example. Great, that prompt looks fine.

Now, in this example, you will use the function from the first lesson. We've defined it in the utils.py file to keep our notebooks small. If you wish to view this file, click on View, then on File Browser to open the list of available files, and click on utils.py to open the utilities file that contains all the functions we'll be describing. This function now has more parameters to make it more complex. Previously we had the face color, eye color, and nose color. We've now extended it to include more parameterizations of the clown's face, such as the eye size, mouth size, mouth color, eye offset, and mouth theta. These control various attributes, such as the width and height of the clown's mouth and the starting and ending angles of the mouth arc, as well as the offsets and size of the eyes.

Now, back to parallel calls. Parallel calls are when the LLM has to issue, in the same turn, multiple function call strings, either to the same function or to a set of functions. Let's take a look at one example. You will use a user query that, while similar, is more complex than the user query we used in the first lesson. In this user query, you're asking for two clowns: one with a red face and one with a blue face, one with a blue nose and the other with a green nose, the first with a sad smile and the second with a happy smile. You will soon see why we specify the numerical representations instead of just saying happy or sad, but for now let's keep it as it is. You saw that the new clown face function is far more complex than before. You can now build the Raven prompt by passing in the clown face function and your user query. This looks as we would expect: here is the function delineator, here is the function itself, here is the description of the arguments of the function, and finally a description of what the function does.
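Before we call Raven, here is a minimal sketch, for reference, of what a utility like build_raven_prompt might look like, built only from the description above. The exact delineators, indentation, and end-of-prompt marker in the course notebook may differ, so treat the formatting here as an assumption rather than the notebook's exact code.

```python
import inspect

def build_raven_prompt(function_list, user_query):
    """Stitch together a function-calling prompt from a list of Python
    functions and a user query (a minimal sketch, not the notebook's
    exact code)."""
    raven_prompt = ""
    for function in function_list:
        signature = inspect.signature(function)  # e.g. "(face_color='yellow', ...)"
        docstring = function.__doc__             # the function's docstring
        raven_prompt += f'''
Function:
def {function.__name__}{signature}
    """
    {docstring}
    """

'''
    # "<human_end>" is a common end-of-prompt marker for Raven-style prompts;
    # adjust this to whatever your model expects.
    raven_prompt += f"User Query: {user_query}<human_end>"
    return raven_prompt
```

You would then call something like build_raven_prompt([draw_clown_face], user_query), where draw_clown_face is the clown-drawing function from utils.py.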
Let's call Raven. Great. You can see that there are two function calls here: here's one, and here's the other. The first calls for a clown with a red face, the second for a clown with a blue face; the first for a blue nose, the second for a green nose; the first for a sad smile, and the second for a happy smile. And we see that the clowns match the descriptions we asked for in the user query. Try it yourself: try requesting three or more faces with different features.

That was parallel function calls. Now let's talk about multiple functions. You can provide the LLM with multiple functions, such as f1, f2, f3, f4, all the way to fN, and the LLM should be able to pick the right function or functions out of the list. Indeed, you can also provide a "no relevant query" function, which the LLM can use if it decides that none of the functions you have provided is relevant to the user query. Let's take a look. In this example, there is a new function, the draw tie function, which draws a tie. This is also in the utils.py file to keep your notebook from getting too cluttered; here's what it looks like. You will specify a user query that asks Raven to just draw a tie. However, you will provide both functions to Raven: the draw clown face function and the draw tie function. Here's the prompt. You can see that both functions are present, draw clown face and draw tie, and we have the user query. Let's send it to the LLM. Great. What do you notice? Raven has only used the draw tie function and has ignored the draw clown face function, as you would expect. Let's now execute the call. Great, a tie! We're ready for a formal dinner.

It's also important to note that it's possible to combine multiple functions and parallel function calling at the same time. In this example, you will ask Raven to draw a clown and a tie. As before, you provide Raven with the draw tie and draw clown face functions. Let's take a look at Raven's call. Since Raven was not given any concrete requirements for the clown face, it has made best-guess assumptions about which arguments would work best, based on the default values provided in the functions' docstrings, such as for the face color and the eye color. Let's call it. Great, a clown with a tie!

It is at this point that it's worth talking about the significance of the docstrings of the functions you have provided. As the complexity of the functions grows and you provide multiple tools to Raven, the docstrings become more and more important. Let's show a failure case. Let's ask Raven to draw a sad clown with a green head. When you take a look at the call, it doesn't look quite right: it gets the face color right, but the clown is happy. It's possible that the reason for this is that the docstrings for the functions are not sufficiently detailed, so we need to iterate a bit to make it clear what the arguments of the function do. You can take the prompt from earlier and replace the description for the argument that has the most impact on the perceived emotion of the clown. You previously had it state that this argument controls the starting and ending angles of the mouth arc. Because this is quite vague and doesn't clearly link back to the essence of the user query, Raven was unable to understand what the argument actually means. What you will do is simply replace this with a clearer description of what this argument does, such as by adding this line.
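For instance, the improved description might read something like the sketch below. This is a hypothetical wording with a simplified signature, not the exact text from utils.py (the real function takes more parameters, and the angle values are only illustrative); the point is that the docstring now spells out which angle ranges correspond to which emotion.

```python
def draw_clown_face(face_color="yellow", eye_color="black",
                    nose_color="red", mouth_theta=(200, 340)):
    """
    Draws a clown face.  (Simplified sketch; the real function in
    utils.py has additional parameters.)

    Parameters:
    face_color (str): color of the clown's face.
    eye_color (str): color of the clown's eyes.
    nose_color (str): color of the clown's nose.
    mouth_theta (tuple): starting and ending angles of the mouth arc,
        in degrees. Angles like (200, 340) trace the lower arc of a
        circle, which reads as a happy smile; angles like (20, 160)
        trace the upper arc, which reads as a sad frown.
        (Illustrative values; check utils.py for the real convention.)
    """
    ...  # drawing code lives in utils.py
```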
You will now query Raven with this new prompt. What you now notice is that the new argument is present. This looks great: now we have a sad clown, as requested. In this example, we observed the impact of good docstrings, which can help Raven understand how best to address the user query. When you notice that prompts are failing, you can sometimes address this by editing the docstrings to add pointers that make these values more understandable to the model. Alternatively, you can also add few-shot examples to convey this.

We have now touched on multiple functions, so let's talk about nested functions. An outcome of multiple functions is that you can define independent functions where the input to one function, say f2, depends on the output of another function, such as f1. The LLM can utilize function f1 first, call it with the arguments it extracts from the user prompt, keep the output, and feed that back into f2. This capability can sometimes avoid multiple calls to an LLM: some agentic solutions create intermediate results with one call and then final results with a second call, but with nested calls this can be done in one step.

Let's make this more concrete. In this example, the clown function has been split into many, many parts. You now have multiple functions that draw the various parts of the clown, such as the head, eyes, nose, and mouth, and you have a function to combine all of them together. You will find these functions in the utils.py file. You have simply asked Raven to generate a clown with a red face, blue eyes, a green nose, and a black open mouth. This is a prompt that was addressable with the old function, but let's take a look at how Raven handles the multiple functions. We will simply provide the draw head, draw eyes, draw nose, draw mouth, and combination functions to Raven, along with the user query. You'll notice that the response first calls the clown face part functions, such as draw head, draw eyes, draw nose, draw mouth, and so on, with the necessary arguments. It then combines the outputs of all of these calls by passing them into the combination function that we've defined. Let's run it. Great, there's our clown. Try some variations on the prompt yourself; maybe try parallel nested clowns.

This concludes our lesson on function call variations. In the next lesson, you will use external functions, such as functions that rely on OpenAPI specifications and API endpoints, which will enable you to add web services to your function-calling LLM.
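(For reference, here is a minimal, self-contained sketch of the nested-call pattern from this lesson. The function names, arguments, and string outputs below are hypothetical stand-ins, not the actual drawing functions from utils.py; the stubs only return text descriptions so that the nesting is visible.)

```python
# Hypothetical part-drawing functions: each returns a description of
# what it would draw, so we can see the outputs flow into the combiner.
def draw_head(color):
    return f"head({color})"

def draw_eyes(color):
    return f"eyes({color})"

def draw_nose(color):
    return f"nose({color})"

def draw_mouth(color, is_open):
    return f"mouth({color}, open={is_open})"

def draw_clown(head, eyes, nose, mouth):
    # The "combination" function: its inputs are the outputs of the
    # part-drawing functions above.
    return f"clown[{head}, {eyes}, {nose}, {mouth}]"

# A single nested call of the kind an LLM can emit in one turn:
# the inner calls run first and their outputs feed the outer call.
result = draw_clown(
    head=draw_head(color="red"),
    eyes=draw_eyes(color="blue"),
    nose=draw_nose(color="green"),
    mouth=draw_mouth(color="black", is_open=True),
)
print(result)  # clown[head(red), eyes(blue), nose(green), mouth(black, open=True)]
```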