One of the great use cases where o1 models shine is creating a plan to solve a task, given a set of tools to carry out the plan and constraints to set bounds around the task. This kind of use case would be very slow if we used o1 for every step, so what we'll do is generate a plan with o1-mini and then execute each step with GPT-4o-mini. This trade-off of intelligence against latency and cost is a practical one that we've seen developers use to great effect so far. All right, let's go. Let me tell you about the overall architecture of the task, so that we don't get lost in the details as we go through the code. You'll begin with a scenario. This comes from a customer who is making an ask that requires multi-step logic to answer. That scenario will be given to o1-mini, which will have at its disposal some instructions on how to build a plan, and a number of tools which it can use to carry out the plan. This is a great use case for o1, where we use its multi-step reasoning to build a durable plan before we then engage GPT-4o-mini as a worker to carry out each step of the plan. Once the plan is completed, we'll have an answer which we provide back to our customer. Now let's dive into the code and see how this works in practice. You'll start by importing your OpenAI key and initializing the OpenAI client and the o1 and GPT-4o-mini models. You'll first define the constraints that this scenario will be bound by. Your first variable is a message list: because our 4o-mini worker is going to loop through multiple steps, we need a list of messages to accumulate the history so that the worker can tell which steps of the plan it has already performed. Then you've got a bunch of context. I'm not going to go through every variable here, but essentially we have an inventory, an order that a customer wants to place, and then we have suppliers, which can supply us with additional material
if the customer asks for something that we don't currently have. We then have a few other variables, like the shipping options and the metadata of the customer who's going to be giving us our scenario. The last element is the state. Because we're going to go through multiple turns of conversation, we will maintain conversation state, and when you take this and apply your own scenario, you can consider whether you want to wrap this in an application where you clear down the state at certain points to allow the customer to submit multiple scenarios. Next, you need to define the prompt which dictates how o1 is going to go about building a plan to solve the customer scenario. This prompt consists of a few main sections. The first is where we define who it is and what its task is: we're telling it that it's a supply chain assistant, and its task is to review the challenge it's been set (the scenario) and create a detailed plan to answer it. We tell it that it will have access to an agent, and then we give it a list of functions. Now, it's important to note here that the o1 model isn't going to be provided with the OpenAI function definitions. Those will be given to the worker, the GPT-4o-mini model, which will be doing the work for it. But o1 needs to know what those functions are and roughly what they do, so that it can build up that multi-step plan for the 4o-mini worker to carry out. So, for example, it needs to know that there's a fetch_new_orders function which will allow the 4o-mini model to check the status of new orders that have come in and decide whether to fulfill them or not. Underneath this, we've given o1 a set of instructions which tell it how to lay out this plan so that we can then parse it and provide it to 4o-mini to carry out. Next, we'll define the functions themselves. o1 can now do function calling.
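To make the setup above concrete, here's a minimal sketch of those context variables. Every name and value below is an illustrative assumption, not the notebook's real data.

```python
# Sketch of the lesson's context setup (all names/values are assumptions).
message_list = []  # accumulates the worker's step-by-step history

context = {
    "inventory": {"X200": 50},  # units currently on hand (hypothetical SKU)
    "orders": [{"id": "ORD-1", "product": "X200", "quantity": 200}],
    "suppliers": [{"name": "Supplier A", "available_quantity": 500}],
    "shipping_options": ["standard", "express"],
    "customer": {"name": "Example Corp", "priority": "high"},
}

# Conversation state persists across turns; in your own application you
# might clear it down between scenarios so a customer can submit several.
state = {"conversation_history": []}
```

In your own version, the shape of `context` will follow whatever domain your scenario covers; the important pattern is that the worker sees both the accumulated `message_list` and this shared context on every step.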
But we're not going to use it for this scenario, because we want it to come up with that multi-step plan to give to 4o-mini to execute. If we wanted o1 to carry out each step, bit by bit, we could supply it with the OpenAI function definitions; but in this case, it's going to create the plan for 4o-mini to carry out. Now we'll define the GPT-4o-mini system prompt, the prompt that dictates how that 4o-mini worker is going to do its job. Again, we've given it a personality ("You're a helpful assistant"), telling it that it's responsible for executing the policy it will be given, and that its task is to follow the policy exactly as written. We also ask it to explain its decision-making process across the various steps, and this will come in handy later: as it goes through the plan, it will decide whether to update the customer on progress and whether it's hit any blockers. We then give it step-by-step instructions. Referring back to the principles for prompting o1: we don't give o1 explicit chain-of-thought prompting, but given that we have a 4o-mini worker here, it is useful to give it some chain of thought to help it make the right decisions. Lastly, we'll instantiate the policy that was generated by o1, which is that plan, and give it to 4o-mini to then execute each step. To allow 4o-mini to take action on each step of the plan, it needs to have OpenAI function definitions, and we've defined those here. You'll see a long list of them. I'm not going to go through each one in detail, but just picking out one: this function allows us to check the production capacity. It will look at the components that we have and figure out how much capacity we have to produce more units. Again, we've got our standard OpenAI function-calling parameters here, in this case a time frame, which is an enum that it can pick from. Each one of these functions will tie to a step of the plan which o1 has produced, and will allow 4o-mini to take action.
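As a sketch, the production-capacity tool described above might be defined in the shape below. The exact name, description, and enum values are assumptions for illustration, not the notebook's actual definition.

```python
# A hedged sketch of one tool definition in the OpenAI function-calling
# format; the name and enum values are illustrative assumptions.
check_production_capacity_tool = {
    "type": "function",
    "function": {
        "name": "check_production_capacity",
        "description": (
            "Check how many additional units can be produced from the "
            "components currently on hand."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "time_frame": {
                    "type": "string",
                    "enum": ["immediate", "next_week"],
                    "description": "Horizon over which to check capacity.",
                }
            },
            "required": ["time_frame"],
        },
    },
}
```

The enum is what lets the worker pick only from valid time frames rather than inventing its own, which keeps each plan step deterministic to execute.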
Those OpenAI function definitions map to Python functions, which actually execute the results of each function call. Again, looking at the same one we looked at a second ago: here's the Python function implemented behind that function definition. We pass in that time frame, and we get back a JSON, which the 4o-mini model will assess and use to decide whether to move on to the next step of the plan. Now that we've defined the input variables, the 4o-mini and o1 system prompts, and the function definitions with their Python implementations, we can move on to creating a function which will knit all of this together to orchestrate the process. The first step is to take in a scenario provided by the customer and call o1, which will produce a plan. We then take that plan and initiate a 4o-mini worker. That 4o-mini worker will loop until it's carried out the whole plan, and then return the messages, and that is what we will provide back to the user to tell them how we performed against the scenario they gave. If you'd like to look into the underlying details behind that orchestration wrapper: the first function is the append-message function, which takes in a message and decides which variables to include with it, to give 4o-mini context to continue carrying out the plan. The next function sends the scenario provided by your customer to o1 and generates the response; this plan is what we're going to provide to 4o-mini to then carry out. The last orchestrating function calls GPT-4o-mini with the plan generated by o1 and uses that to carry out the plan. You'll see here that we initiate a while loop, which keeps producing tool calls with GPT-4o-mini and decides what action to take based on the content of those tool calls; it will keep looping until it receives an instructions_complete function call, at which point it will break.
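The control flow of that orchestration wrapper can be sketched in a few lines. Here a scripted stub stands in for the GPT-4o-mini worker, so the while loop and the tool-call-to-Python dispatch are visible without any API call; every function name here is an illustrative assumption, not the notebook's actual identifier.

```python
import json

def check_production_capacity(time_frame: str) -> dict:
    # Stub Python implementation backing the tool definition.
    return {"time_frame": time_frame, "capacity": 100}

# Map each tool name to the Python function that executes it.
PYTHON_FUNCTIONS = {"check_production_capacity": check_production_capacity}

def fake_worker(messages: list) -> dict:
    """Stand-in for GPT-4o-mini: emits one tool call, then signals completion."""
    if len(messages) < 2:
        return {"name": "check_production_capacity",
                "arguments": {"time_frame": "immediate"}}
    return {"name": "instructions_complete", "arguments": {}}

def execute_plan(plan: str) -> list:
    messages = [{"role": "user", "content": plan}]
    while True:  # keep looping until the worker says it's done
        tool_call = fake_worker(messages)
        if tool_call["name"] == "instructions_complete":
            break  # the plan has been fully carried out
        result = PYTHON_FUNCTIONS[tool_call["name"]](**tool_call["arguments"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return messages  # returned to the user as the record of what happened

history = execute_plan("1. Check immediate production capacity.")
```

In the real notebook the worker's tool calls come from a chat-completions request with the tool definitions attached; the loop structure, the `instructions_complete` sentinel that breaks it, and the dispatch into plain Python functions are the pattern being described.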
Now you've defined all the variables you need, so the only thing remaining is to come up with a scenario that the customer wants us to carry out. In this case, we've said that we've received a major shipment of new orders. We're asking o1 to generate a plan that gets the list of awaiting orders and determines the best policy to fulfill them. We've given it some idea of what the plan should include, and we've also given it an overriding directive that it should notify the customer before completing. Lastly, we've given it a criterion, which is to prioritize getting out any possible orders that it can, while placing orders for any backlogged items. With that, we're ready to start generating the plan. You can see the plan that o1 has generated for you, laid out nicely here in Markdown. The first step is to retrieve new orders, and you can see where o1 has specified the exact function call that 4o-mini should use to carry out the step. Next, it's asked it to process each order individually, and within that it's given extra detail: for each order in the list, extract these variables, check availability, and handle any backlogged quantity. If there is backlogged quantity, we've given it if/then/else logic for how to handle it. Once it breaks out of that, it will arrange shipping for the available inventory and notify the customer. We give it a little bit of logic here to tell it what to do if the order is fully fulfilled, if it's partially fulfilled, or if there's a backlog that it needs to let the customer know about. Lastly, it will finalize the instructions and call instructions_complete, which will break out of the while loop and provide the answer to the customer. Next, we'll go through the actual steps which 4o-mini carried out, to check that it followed this plan as specified.
The last thing to note is that o1 has adhered to the prompt engineering principles we mentioned earlier by producing a structured plan, so that 4o-mini will find it easier to follow. As we go through the 4o-mini actions that took place, it's key to refer back to this and understand whether or not it actually followed these steps as specified. To review the actions that 4o-mini took in more detail, I'm going to print out the messages so that they're a little bit easier to read. I'm going to skip past the o1 policy, because we've already gone through that, and start with the different actions that 4o-mini took. First, we can see that it called the fetch_new_orders function, and that got us one new order, for 200 units, which we can see here. We then see that it called the get_inventory_status function to check whether we actually have the inventory to serve what the customer asked for. We can see that we do have that particular product, but we only have 50 units, so it's going to need to order a few more if it's going to be able to fulfill the customer's order. The next thing it did was call allocate_stock to allocate the 50 units we do have. To fulfill the 150 units that it doesn't have, the first step is to get the product details and figure out what components are needed; we can see that this comes back with the X200 component, which is what you need to build one of these units. It then checks the available suppliers, finds that we have two, and queries the first one to figure out what the available quantity is. It then places a purchase order for the 150 units it needs, which is within the supplier's available quantity, so that's okay. The next step is to check whether we have the production capacity to actually produce those units once we have the raw materials. You can see that it checks the immediate production capacity, and we only have capacity for 100, so it schedules a production run for 100 immediately.
It then goes back and checks the production capacity for next week, and is able to schedule the remaining 50 units to be produced next week. The last step is to calculate the shipping options for the units which we have right now, before sending an order update to the customer telling them that their order has been partially shipped and that the remaining items are being processed and will be shipped shortly, and then successfully closing off the process. So, pretty impressive: we've been able to implement an agentic flow whereby o1 acts as the orchestrator, creating a plan and then handing it to 4o-mini, which then executes the multi-step plan in sequence. Now is a great opportunity for you to take a step back and think about where you might have opportunities to use this kind of planning paradigm, using the intelligence of o1 and combining it with the low latency and cost of 4o-mini to carry out these complex, multi-step agentic processes. Next, we'll change our focus from planning over to a very common use case for the o1 models, which is coding.