In the Letta framework, agents are designed to be deployed as services. In this lesson, we'll go over how to implement multi-agent collaboration using tools for cross-agent communication, as well as shared memory blocks. Let's have some fun! Because Letta agents run as a service, a real application can communicate with them via REST APIs. So if you have something like a mobile app that's using a chatbot, that mobile app will send REST API requests to some kind of Letta server that's backed by a database, and then that server will return a response that the mobile app can use. So how can we have multi-agent collaboration when we have agents running as separate services, and we want them to coordinate and communicate? One solution is to have the agents themselves send each other messages: for example, an agent on one server could send a POST request to an agent on another. Another option is to have shared memory blocks, so you can have blocks that live in a shared persistent data store, and have agents stay synced to the same memory blocks across different services. In this lab, we're going to implement a multi-agent recruiting workflow. We're going to do this with three different agents: a recruiter agent, an evaluator agent, and an outreach agent. Each of these agents is going to coordinate both by sending the others messages and by sharing a memory block about the organization that they're recruiting for. To get set up, we're of course going to import the notebook printing helper, create the Letta client, and then update the LLM to be used to gpt-4o-mini.
So in this recruiting workflow, we want to have multiple agents that have both their own memory and shared memory, and the shared memory is going to contain information about the organization that the agents are all part of. Because the memory is shared, we want it to be the case that if one agent updates a shared memory block, those changes are propagated to the memory of all the other agents. To do this, we're going to create a shared block called the company block. We'll have some initialization string, the organization description, prefilled with something like: the company is called AgentOS, building AI tools to make it easier to create and deploy LLM agents. With this we can create a block. To do so, we have to specify a block name, which is the tag that gets used when the block is compiled into the context, and also provide the value, which is the organization description we just created. For convenience, we're also going to create a custom memory object. We're going to extend basic block memory, which is a pre-implemented memory class that just takes in a list of blocks and creates the core memory from them. We're going to create an org memory class that takes in a persona string, which will be different per agent, and also an org block, which is shared across agents, and then initializes the parent class with the list of blocks containing both the persona block and the org block. Now that we've created our shared organizational memory, we're going to create the three agents: the eval agent, the outreach agent, and the recruiter agent. The eval agent is responsible for evaluating candidates based on their resumes, the outreach agent is responsible for writing emails to strong candidates, and the recruiter agent is responsible for generating leads from a database and passing those leads to the other agents.
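To make the sharing mechanism concrete, here is a plain-Python sketch of the idea (not the actual Letta API): each agent's memory holds a reference to the same company block, so a change made through any one agent is visible in every agent's compiled core memory. The class and attribute names here are illustrative stand-ins for Letta's `Block` and `BasicBlockMemory`.

```python
class Block:
    """A named section of core memory, e.g. 'persona' or 'company'."""
    def __init__(self, name, value):
        self.name = name
        self.value = value

class OrgMemory:
    """Per-agent memory: a private persona block plus a shared org block."""
    def __init__(self, persona, org_block):
        self.blocks = [Block("persona", persona), org_block]

    def compile(self):
        # Render the blocks into the text that would enter the context window;
        # the block name becomes the tag wrapping the section.
        return "\n".join(f"<{b.name}>\n{b.value}\n</{b.name}>" for b in self.blocks)

# One shared block, referenced by two different agents' memories.
company = Block("company", "The company is called AgentOS.")
eval_memory = OrgMemory("I evaluate candidates for the org.", company)
outreach_memory = OrgMemory("I email strong candidates.", company)

# An update made "by" one agent...
company.value = "The company is called Foundation AI."

# ...is reflected in the other agent's compiled memory too.
print(outreach_memory.compile())
```

The key design point is that the block is shared by reference: in Letta the same effect comes from persisting the block in a shared data store rather than from Python object identity.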
Much like humans, these agents will communicate by sending each other messages. We do this by giving agents access to tools that allow them to send messages to other agents. The evaluator agent will have access to two tools. One of them is a resume-reading tool, which looks up the file path of a resume and then reads that resume; the resumes are provided in the data folder along with your notebook. The evaluator agent also needs to be able to submit its evaluations, so we're going to create a submit evaluation tool that takes a candidate name, an evaluation (whether or not to reach out to the candidate), the resume data, and the justification for the decision. This submit evaluation tool is actually going to use the Letta client to message the outreach agent. So we import the Letta client, and then we compose a message to send to the other agent, telling it to reach out to this candidate and providing the name, the resume data, and the justification. If we do want to reach out, then we send this message to the outreach agent; otherwise, we just print the candidate name along with the justification for why we decided not to reach out. Now that we've defined these two functions, we'll next create the tools with the client. We create the agent by first defining the persona: we specify a list of skills that we're looking for, and tell it to submit each candidate evaluation using the submit evaluation tool. So this eval agent will have the organizational memory, which references the shared organizational block, and also its own custom persona, which is its own private memory. It will also have access to the tools we just created for reading resumes and submitting evaluations. We'll next create an outreach agent; this agent is responsible for emailing candidates with customized emails.
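The routing logic described above can be sketched in plain Python. This is a hedged stand-in, not the real notebook code: `send_message` here is a stub that records deliveries, where the actual tool would create a Letta client and call its message-sending API against the outreach agent.

```python
sent_messages = []  # stub for the Letta server's message delivery

def send_message(agent_name, message):
    """Stand-in for the Letta client call that messages another agent."""
    sent_messages.append((agent_name, message))

def submit_evaluation(candidate_name, reach_out, resume, justification):
    """Tool the eval agent calls to hand its decision to the outreach agent.

    reach_out: whether the candidate is worth contacting.
    """
    if reach_out:
        # Compose a message telling the outreach agent who to contact and why.
        message = (
            f"Please reach out to candidate {candidate_name}.\n"
            f"Resume: {resume}\n"
            f"Justification: {justification}"
        )
        send_message("outreach_agent", message)
    else:
        # No outreach: just surface the decision and its justification.
        print(f"Not reaching out to {candidate_name}: {justification}")

submit_evaluation(
    "Tony Stark", True,
    "AI engineer, built autonomous suit systems.",
    "Strong engineering background.",
)
```

Note that the decision of *whether* to reach out is made by the eval agent (the LLM); the tool only carries that decision across the agent boundary.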
Since we don't want to actually set up an email client, we're just going to pretend to email people by printing out an email. Now we can define the outreach agent by first creating a persona. We basically tell it in the persona that it's responsible for sending outbound emails, and we also provide a template that it should follow. Then we create the outreach agent using the organizational memory, with the outreach persona and the shared org block, as well as the email candidate tool. So we now have two agents that are connected to each other, both by send message tools and by a shared organizational memory. We kick off the eval agent by telling it to consider the candidate Tony Stark. We as a user are directly talking to the eval agent, but the eval agent actually has this amazing ability to trigger other agents, because it can send messages to other agents itself. The outreach agent sends out a fake email to Tony Stark that references information about the company and is also personalized to Tony Stark himself. This tool, of course, wasn't called by the eval agent, because the eval agent doesn't actually have an email tool. The tool was called because the eval agent was able to tell the outreach agent who to reach out to and what information it has about that person. So we can see that the eval agent first reads the resume, gets back a resume which is quite long, decides that Tony Stark's resume is a good fit, and then calls the submit evaluation tool, providing the candidate name as well as the resume data. After it's done all this, it messages back to the user to say that it has submitted Tony Stark for outreach. It never actually called the email tool; that was done by a separate agent. We can also provide feedback to these agents to get them to update their memories.
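The pretend-email tool can be sketched like this. The function name and template fields are illustrative assumptions, not the exact course prompt; the point is simply that "sending" an email is just formatting and printing it.

```python
def email_candidate(name, email_body):
    """Tool the outreach agent calls; 'sends' the email by printing it."""
    email = (
        f"To: {name}\n"
        f"Subject: Opportunity at our company\n\n"
        f"{email_body}"
    )
    print(email)       # pretend delivery: just show the email
    return email       # returned so the agent (and tests) can inspect it

sent = email_candidate(
    "Tony Stark",
    "We were impressed by your resume and would love to chat.",
)
```

Swapping in a real mail client later would only change the body of `email_candidate`; the agent-facing tool signature could stay the same.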
So, for example, we can inform the eval agent that our company actually pivoted to foundation model training, and we can even tell it that the company was renamed to Foundation AI. Given this new information, the eval agent should know to update the company section of its shared organizational memory block. Just like we've seen in previous examples of core memory replace, the agent updates the content to reflect the new name, Foundation AI. Even though it was the eval agent that updated this company section of memory, the same memory section should also be updated for the outreach agent, so in future emails the outreach agent should refer to Foundation AI, not AgentOS. We can test this out by telling the eval agent to now evaluate SpongeBob SquarePants. We should expect the outreach agent, even though we didn't tell it directly to update its memory, to have an updated company section in its memory. And we can see here in the email that it refers to Foundation AI, and also mentions the pivot to foundation model training, which so far we have only told the evaluator agent about. So even though this email is coming from the outreach agent, it reflects the updated information about the company, and even refers to the pivot towards foundation models and the new name Foundation AI, even though we actually only provided this information to the eval agent. We can also observe this directly by looking at the core memory of the outreach agent. It didn't quite replace all of the old information, since again we're using a weaker model, GPT-4o-mini, for cost reasons; but because the new text comes after the old, the agent was still able to reflect it in the outbound emails. We're now going to add our third agent, which is the recruiter agent. So far we've been triggering the eval agent manually, each time passing in an explicit name to evaluate.
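A simplified sketch of what a core-memory-replace edit does to the shared company section (the real Letta tool operates on the agent's compiled core memory; this dict-based version is just an assumption-laden illustration of the string replacement plus propagation):

```python
# Shared core memory, keyed by section label; every agent reads from this.
shared_core_memory = {
    "company": (
        "The company is called AgentOS. Building AI tools to make it "
        "easier to create and deploy LLM agents."
    ),
}

def core_memory_replace(memory, label, old_content, new_content):
    """Replace `old_content` with `new_content` inside one memory section."""
    memory[label] = memory[label].replace(old_content, new_content)

# The eval agent applies the user's feedback to the shared block...
core_memory_replace(shared_core_memory, "company", "AgentOS", "Foundation AI")

# ...and because the block is shared, every agent now sees the new name.
print(shared_core_memory["company"])
```

This also illustrates why a weaker model can leave stale text behind: a replace edit only rewrites the span it targets, so any old wording the model doesn't explicitly replace survives in the block.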
But we actually want to have a recruiter agent that generates the names automatically and then passes them to the other agents. To reset things, we're basically just going to delete and recreate the outreach and eval agents, and then connect the recruiter agent to the same org block. We can also directly look at the same company block by querying the block ID, and again you can see that this is the same memory as what was in the core memory of the outreach and eval agents. The recruiter agent is going to have two tools. The first searches a candidates database containing Tony Stark, SpongeBob SquarePants, and Gautam Fang, all just made up. The second is a consider candidate tool, which submits a candidate for consideration: it does this by creating a Letta client and passing a message to the eval agent telling it to consider this candidate, automating the process of handing a name to the eval agent that we were previously doing manually. We can now create each of these tools. We also provide a custom persona for this recruiter agent, telling it to continue pulling candidates from the candidates database until there are no more candidates left, and, for each candidate, to consider the candidate by calling the consider candidate tool. We also emphasize that it should keep pulling candidates until there are none left. This process is going to take quite a long time, because we're not doing anything asynchronously in this example: at each step the recruiter agent considers a candidate and passes it to the eval agent, and the eval agent, if it thinks the candidate is good, passes it to the outreach agent. So there are actually a lot of agents working together. When we print out the stream, we can see that the recruiter agent decided to consider the candidate Tony Stark.
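The recruiter's two tools, and the paging loop its persona asks it to perform, might look like the following sketch. The page size (one candidate per page) and the `None` end-of-results signal are assumptions for illustration; `consider_candidate` is a stub where the real tool would message the eval agent through the Letta client.

```python
# Made-up candidates database, as in the lab.
CANDIDATES = ["Tony Stark", "SpongeBob SquarePants", "Gautam Fang"]

def search_candidates_db(page):
    """Return the candidate on `page` (one per page), or None when exhausted."""
    if page < len(CANDIDATES):
        return CANDIDATES[page]
    return None  # tells the recruiter agent it can stop paging

considered = []

def consider_candidate(name):
    """Stub for messaging the eval agent to consider this candidate."""
    considered.append(name)

# What the recruiter agent's behavior amounts to: page until the db runs dry,
# handing each candidate off for evaluation along the way.
page = 0
while (candidate := search_candidates_db(page)) is not None:
    consider_candidate(candidate)
    page += 1
```

In the actual lab this loop isn't written in Python at all: the recruiter agent decides, step by step, to keep calling the search tool until it sees an empty result, which is exactly the multi-step reasoning the lesson highlights.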
And then this resulted in a successful email outreach from the outreach agent with a customized email. The outreach agent also ended up emailing SpongeBob, but for the final candidate, Gautam Fang, we can see that the eval agent rejected him, and the justification for the rejection is actually printed: it says that Gautam's resume is too focused on gardening, so he's not really a great fit for what Foundation AI is looking for. If you're interested, you can also check out the message history of both the eval and outreach agents to see what steps they went through. But because we were interacting with the recruiter agent, we only get back the recruiter agent's messages in the response. We can see that the recruiter agent pages through the candidates: it first requests candidates page zero and gets Tony Stark, then requests page one and gets SpongeBob, and then gets the candidate Gautam Fang. Finally, when it gets to page three, it gets no results, so it realizes that it's finished evaluating all the candidates, and it decides to message back the user to say all candidates have been evaluated. This is a really impressive example of multi-step reasoning. We have an agent that has successfully paged through multiple rows in a database until it's out of results, so it's basically realizing when it needs to keep running and when it needs to stop; and when it stops, it actually communicates that to the user. In addition, this agent is also successfully orchestrating two other agents, both the eval agent and the outreach agent. And because the eval agent and outreach agent specialize in their own tasks, they can run concurrently and have their own personas that narrow the scope of their tasks and make them more reliable. These agents also all have a shared company memory block.
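The full hand-off chain the transcript describes, recruiter to eval to outreach, can be condensed into a toy end-to-end sketch. The resumes and the keyword-based fit check are fabricated stand-ins for the real resume files and the eval agent's LLM judgment; in the lab, each function below is a separate agent reached by message-sending tools, not a direct call.

```python
RESUMES = {
    "Tony Stark": "AI engineer, built autonomous suit systems.",
    "SpongeBob SquarePants": "Enthusiastic ML hobbyist and fry cook.",
    "Gautam Fang": "Expert gardener with a passion for topiary.",
}

emails = []
rejections = []

def outreach_agent(name):
    """Final stage: 'send' a customized email to a strong candidate."""
    emails.append(f"To: {name}, we'd love to talk!")

def eval_agent(name):
    """Middle stage: read the resume and decide whether to reach out."""
    resume = RESUMES[name]
    # Toy fit check standing in for the eval agent's real judgment call.
    if "gardener" in resume:
        rejections.append((name, "resume is too focused on gardening"))
    else:
        outreach_agent(name)

def recruiter_agent():
    """First stage: pull every candidate and pass each one downstream."""
    for name in RESUMES:
        eval_agent(name)

recruiter_agent()
```

Because each stage only knows how to message the next one, each agent's persona can stay narrow, which is what makes the specialized agents more reliable than one agent doing everything.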
So if any of the agents decides to update the company section of its core memory, those updates are also reflected in the context windows of all the other agents. You could also imagine doing something similar with the current status of a task or job shared across multiple agents, with multiple agents able to update the job status. The example we just went through is quite complex, so it's possible that you ran into some issues here and there, especially since we're using a relatively weak model, GPT-4o-mini, just because it's really fast and cost-efficient. If you have your own OpenAI credits, I would recommend trying out this example with GPT-4, though I will warn that it will be very expensive just due to the number of LLM calls. But what I think is really exciting is that as LLMs become more intelligent and cheaper, these kinds of multi-agent workflows will become more practical. Congratulations on getting to the end of this fairly long lab. You've learned a new way to coordinate multiple agents, both by sharing memory blocks and by sending messages between agents with tools. This is also the last lab of the course, so congratulations on making it all the way through.