In this lesson, we'll show you how to create and interact with MemGPT agents using the Letta framework. We'll also go over how to understand the agent state, such as the system prompt, tools, and the memory of the agent, and we'll learn how to view and edit the agent's archival memory. All right, let's go.

You can use Letta to create MemGPT agents. MemGPT agents are stateful and have explicitly managed sections of their context window. In this lesson, we'll go over the different parts of the agent state, as well as archival memory and recall memory. We'll also go over how you can design agents. Agents can be designed by controlling the following knobs: the prompt, which includes things like the system prompt and the persona that defines the agent's behavior; the agent's tools; the way the agent manages and organizes its memory; and the content of the agent's memory, both core and archival. These knobs define what's placed into the LLM context at each step, thereby defining the agent's behavior.

We're first going to import a helper function that's basically just going to make the printing of the MemGPT agent's responses more legible. Then we're going to create a Letta client. This client can also connect to a Letta server, but in this case we're just going to use a local Letta client that runs the agent reasoning locally. We're also going to set the default config for this client to GPT-4o-mini for this lab.

To start off, we're going to create a basic MemGPT agent using Letta. We'll name this agent "simple agent", but you can change its name to be whatever you like. We first call the client function to create an agent. This takes in an agent name, and we also pass in an instance of the ChatMemory class. This ChatMemory class represents what we learned about previously as core memory.
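The knobs above, including the ChatMemory-style human and persona blocks, can be pictured with a toy sketch. This is purely illustrative, not Letta's actual implementation, and every name in it is made up for the example: the agent state bundles the system prompt, tools, and memory, and at each step the LLM context is assembled from them.

```python
# Illustrative sketch only -- not Letta's real internals; all names are made up.
# The agent's "knobs": system prompt, tools, and editable core memory blocks.
agent_state = {
    "system": "You are a MemGPT agent with self-managed memory.",
    "tools": ["send_message", "core_memory_append", "core_memory_replace",
              "archival_memory_insert", "archival_memory_search"],
    "core_memory": {
        "human": "My name is Sarah.",
        "persona": "You are a helpful assistant that loves emojis.",
    },
}

def compile_context(state, conversation):
    """Assemble the LLM context for one step: system prompt, tool list,
    current core memory contents, then the recent conversation."""
    memory = "\n".join(f"<{label}>\n{text}"
                       for label, text in state["core_memory"].items())
    tools = "Tools: " + ", ".join(state["tools"])
    return "\n\n".join([state["system"], tools, memory] + conversation)

context = compile_context(agent_state, ["user: hello"])
```

Because the memory blocks are part of the context on every step, editing them (the knob the next sections focus on) immediately changes what the LLM sees.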
And you can see here that we're passing in the human string to represent the starter core memory for the human section, as well as a persona string to represent the starter core memory for the persona section. For the human section, I just put in "My name is Sarah", but you can change this to your own name or include additional information about yourself. For the persona, we're telling the agent that it's a helpful assistant that loves emojis. This defines the personality of the agent, but it's still editable, because it's within the memory section.

Now that we've created an agent, we can message it. We'll just send it something very simple, like "hello". The response contains two things: the usage statistics as well as the actual messages that the agent returned. The usage statistics object shows the number of completion tokens and prompt tokens that were used to generate the response messages. We can use the print function that we imported to print out the response messages.

Note here that the agent generates an internal monologue that explains its actions. You can use this monologue to understand why agents are behaving as they are, and it also gives the agent more time to think, which helps it generate better responses. We can also see that the MemGPT agent is actually using a tool to communicate: messages are sent back to the user with the send_message tool. This allows the agent to communicate over different mediums (for example, if you wanted the agent to send a text message instead), and it also lets the agent distinguish between what information is sent to the end user (the content inside this message) and what content it keeps to itself, like the internal monologue.

We're now going to go over understanding the different parts of the agent state.
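The split between private monologue and user-facing output can be sketched as follows. The field names here are illustrative, not Letta's exact response schema: only text passed through the send_message tool reaches the user.

```python
# Illustrative sketch -- field names are made up, not Letta's exact schema.
response_messages = [
    {"type": "internal_monologue",
     "text": "First contact with the user. I should greet them warmly."},
    {"type": "function_call",
     "name": "send_message",
     "arguments": {"message": "Hello Sarah! How are you today?"}},
]

def user_visible_text(messages):
    """Only text sent through the send_message tool reaches the user;
    the internal monologue stays private to the agent."""
    return [m["arguments"]["message"]
            for m in messages
            if m.get("type") == "function_call" and m.get("name") == "send_message"]

visible = user_visible_text(response_messages)
```

This is why routing output through a tool matters: the same mechanism could deliver the message over a different medium, while the reasoning trace stays internal.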
The agent state is what was returned when we created this agent. There's a lot going on here, but we'll break it down step by step.

The first part is the agent's system prompt. This also defines the agent's behavior, similar to the persona, but it is not editable by the agent. This is actually a really long system prompt that we designed to get the best performance out of MemGPT. It's very long, but you can see things like how it overrides who the creator of the agent is. This is because a lot of LLMs have their own system prompts that say things like "I was created by OpenAI" or Anthropic. We also have other information, like details about the control flow and the different user events. We have basic functions like inner thoughts and send_message, and we describe memory editing: how it should happen and the instructions for it. Then we describe recall memory (the conversational history), core memory and its different blocks, as well as archival memory. This is a really long system prompt, and I know I just skimmed over it, but you can read it in your own time if you're interested. Sometimes it can be important to edit the system prompt if you want to really refine the behavior of the agent.

Another part of the agent state is the list of tools the agent has access to. You can see the default tools include things like sending a message, pausing heartbeats to stop the agent loop, searching the conversational history, inserting into archival memory, searching archival memory, and appending to and replacing core memory. Using these tools together, MemGPT agents are able to control their own memory.

We can also look at what's inside the core memory of the agent by accessing the memory field inside the agent state. We can see that this returns a memory object, which contains multiple blocks.
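The memory-editing tools in that list can be sketched as plain functions over labeled blocks. This is a deliberately simplified stand-in for what Letta's built-in tools do, with made-up names matching the spirit of core_memory_append and core_memory_replace:

```python
# Simplified sketch of MemGPT-style core memory editing tools.
core_memory = {
    "human": "My name is Sarah.",
    "persona": "You are a helpful assistant that loves emojis.",
}

def core_memory_append(label, content):
    """Append new information to a core memory block."""
    core_memory[label] = core_memory[label] + "\n" + content

def core_memory_replace(label, old_content, new_content):
    """Replace a substring of a block -- how the agent corrects facts."""
    core_memory[label] = core_memory[label].replace(old_content, new_content)

# The agent calls these itself when it learns something new or wrong:
core_memory_replace("human", "My name is Sarah", "My name is Bob")
core_memory_append("human", "Bob prefers concise answers.")
```

Because these edits target in-context memory, every subsequent LLM request already contains the updated facts, with no retrieval step needed.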
We'll go over in more detail exactly what these blocks are and what they mean in a later lesson.

Beyond the agent state, we can also get a summary of what's inside the agent's archival memory by using the agent's ID. Right now we haven't placed anything into the archival memory, nor has the agent, so it's empty. Similarly, we can get a summary of the recall memory. Here, because we've already exchanged a couple of messages with the agent, there are some messages inside recall memory. We can also get the raw message history of the agent, and you can look at individual messages inside it to understand exactly what the agent's trace was.

Core memory is the in-context memory of the agent, and what's unique to MemGPT is that the agent can actually edit its own core memory with tools. Here's an example. We can send a message to the agent using its ID and tell it "My name is actually Bob", even though I previously told it that my name is Sarah. You could alternatively have the agent add additional information about you that you didn't include in the original human string. We can use our print function to print out the response messages; the agent should update its memory to reflect the correction. You can then see that it calls the core_memory_replace function: it knows that it needs to update the human section of memory and replace the content "My name is Sarah" with "My name is Bob". After it completes the function call, it realizes that it successfully updated the memory and then knows to message the user, so it sends: "Got it, Bob. What should we chat about today?" And of course, it includes an emoji.

The persona section of memory and the system prompt are very similar in the sense that they both define the behavior we want from the agent. What this means is that we can actually get the agent to edit its memory about what it should be doing.
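Recall memory, the searchable conversation history mentioned above, can be sketched the same way. This is an illustrative toy, not Letta's storage layer: a message log, a summary that stays small enough for the context window, and a search function like the conversation-search tool the agent holds.

```python
# Illustrative sketch of recall memory: the full message log, searchable.
recall_memory = [
    {"role": "user", "text": "hello"},
    {"role": "assistant", "text": "Hello Sarah! How are you today?"},
    {"role": "user", "text": "My name is actually Bob."},
]

def recall_memory_summary(log):
    """A summary keeps the context small: counts, not full transcripts."""
    return {"num_messages": len(log)}

def conversation_search(log, query):
    """The agent can page through past messages matching a query string."""
    return [m for m in log if query.lower() in m["text"].lower()]

summary = recall_memory_summary(recall_memory)
matches = conversation_search(recall_memory, "bob")
```

The point of the summary/search split is that the full history never has to sit in context; the agent pulls back only the messages it needs.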
So far, we've had a pretty friendly agent that's been using a lot of emojis in its messages. But after we give it this feedback, we expect it to never use emojis again in any messages we might send in the future. In the response messages, we can see that the agent realizes that the user prefers no emojis and that it should reflect this in future interactions. Similar to the last example, it calls core_memory_replace and replaces "loves emojis" with "does not use emojis based on Bob's preferences". Finally, it sends the message: "Understood. Bob will not use emojis anymore."

What's really cool about this is that we can actually adapt the agent's behavior over time: as the human user gives feedback to the agent, the agent can improve using that feedback. The feedback will also be included in all future messages to the LLM, because it's inside the memory, which is sent with every single request. We can also retrieve the current memory of the agent using the agent ID, and grab the persona block to see exactly what string is inside the agent's memory: "You're a helpful assistant that does not use emojis." This reflects the change we saw happen previously.

Next, we're going to go a bit deeper into archival memory. MemGPT agents have a long-term memory, called archival memory, which persists data in an external database. What this means is that agents have additional space to write information to outside of their context window, which is limited in size. We can see what's inside the agent's archival memory by calling get archival memory; right now it's empty. The agent will add archival memories as it sees fit over time, but we can also explicitly suggest to the agent that it should add something to archival memory. Here we're telling the agent to save the information "Bob loves cats" to archival. You can also change the string
"Bob loves cats" to something specific to yourself. Just like in the previous examples, the agent has an internal monologue where it realizes that it needs to call the archival memory insert tool, does so, and inserts the data "Bob loves cats." Once it does this, it acknowledges the addition to the user and follows up with some relevant conversation: "What kind of cats do you like?"

Now if we get the archival memory, we should actually see something inside it. We can also just get the text: the first row in the archival memory has the text "Bob loves cats." So that data has been successfully added.

We just saw an example of how the agent can insert into its archival memory, but as a user, we can also manually insert memories into an agent's archival memory. Using the agent ID, we can insert "Bob loves Boston Terriers" into its archival memory, and this returns the passage that was inserted: the text, information about the embedding used, the date added, and some other things.

Now the agent should have two memories inside its archival memory, and we can try asking "What kind of animals do I like?" The archival memory search returns both "Bob loves cats" and "Bob loves Boston Terriers." Given this result, the agent has some internal monologue about continuing the conversation and sends back a message: "You love cats and Boston Terriers." It then tries to keep the conversation engaging with "Do you have a favorite between them?" This is an example of how the agent can use what's in its archival memory to generate more informative responses back to the user.

So we've now built an agent that has archival memories and core memories, and edited both. Congratulations! You are now able to create a MemGPT agent with self-managed memory.
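The archival insert-and-search cycle from this section can be sketched as an external store outside the context window. This is illustrative only: real Letta stores embeddings in a database and does semantic search, while plain keyword matching stands in for it here, and both function names are just modeled on the tools named above.

```python
# Illustrative sketch: archival memory as an external store outside the
# context window. Letta uses vector embeddings; keyword match stands in here.
archival_memory = []

def archival_memory_insert(text):
    """Used both by the agent (as a tool) and manually by the user."""
    archival_memory.append({"text": text})

def archival_memory_search(query):
    """Retrieve stored passages relevant to a query."""
    return [p["text"] for p in archival_memory
            if query.lower() in p["text"].lower()]

archival_memory_insert("Bob loves cats.")             # inserted by the agent
archival_memory_insert("Bob loves Boston Terriers.")  # inserted manually by the user

results = archival_memory_search("loves")
```

The retrieved passages are then placed back into the context for the next LLM call, which is how the agent can answer "What kind of animals do I like?" from facts it no longer keeps in core memory.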
In future lessons, we'll also go over how to implement more advanced capabilities for core memory, and how to extend the RAG capabilities of MemGPT agents.