In this lesson, you will learn about the different levels of knowledge customization available with Azure OpenAI. You will mostly focus on grounding techniques and RAG to start building your first AI agent. By the end of the lesson, you will deploy your Azure OpenAI service instance and test the API. You will also deploy an orchestration engine, LangChain, to enable these scenarios. All right, let's dive in.

In this first lesson, we will be building your first AI agent. So imagine you're starting with these technologies you want to apply: Azure OpenAI, LangChain, and so on. You just need to know how to deploy them and how to prepare them in order to build your first AI agent.

Before we dive in, let's quickly cover some background to set the context for this course. This will be brief. Don't worry, we will get to coding soon. Generative AI, particularly in the 2020s, refers to the ability of artificial intelligence to create new text, images, videos, and other types of content. This is achieved through interactions with a GenAI system using natural language prompts. This capability is impressive. While it's not entirely new, there are many emerging technologies in this field that you should be aware of. You will be working with cutting-edge technologies, including new models in preview that are still in development. Therefore, we are relying on the official documentation from the different providers. Additionally, remember that generative AI builds on the foundations of machine learning and deep learning. These are powerful technologies, but our main focus here will be generative AI.

In my opinion, the key differentiators of GenAI are remarkable. If I had to explain what GenAI is, I would highlight the concept of foundation models. Why? Because a foundation model can perform multiple tasks with a single setup. This means that when you are building your AI agent, you can connect it to a database, process documentation, or create various applications, all with the same model.
This simplicity is a significant advantage for companies, as deploying one model to handle different tasks is incredibly efficient. Additionally, foundation models can handle various data formats, including text, images, videos, and audio. This versatility is ideal for multimodal applications. Another crucial aspect is the ability to use natural language. As developers, we are used to writing code, but for most people interacting with the system, using natural language is much more intuitive, whether in English, Spanish, or any other language. People can now communicate with these systems naturally. By the end of this course, you will be building a database agent. This is exciting because it means the user can interact with the application in their own language instead of using SQL queries. This opens up these solutions to a broader audience, including both technical and business users. These are essential features to keep in mind for this lesson and for any other generative AI applications you will create.

Now, let's discuss how to customize LLMs. By customization, I mean integrating your additional knowledge into an LLM. Imagine your company has databases, PDFs, data lakes, or other private information. How can you combine this data to enhance the knowledge of a model like GPT-4, for example? Well, we have two options.

The first one is retrieval-augmented generation (RAG). This method leverages an orchestration tool to connect the model with your data sources, such as databases, without the need for retraining. For instance, Azure OpenAI GPT-4 can be used to query your database directly without changing the model itself. This approach is more efficient and practical, as it avoids the overhead associated with model training.

The second one is fine-tuning the model. This process involves retraining the model on specific datasets, followed by redeploying the updated model.
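The RAG idea can be sketched without any model at all: retrieve the documents most relevant to the question, then prepend them to the prompt, so an unmodified model answers from your data. Here is a toy illustration; the keyword-overlap scoring is a stand-in for a real retriever, and the documents are invented for the example:

```python
# Toy RAG sketch (illustration only): the model is never retrained; we only
# change what goes into the prompt. A real setup would use an orchestration
# tool like LangChain and a proper retriever instead of this keyword overlap.
def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Prepend the retrieved context so the LLM answers from your data."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

company_docs = [
    "Invoices are stored in the billing database, table invoices_2024.",
    "The cafeteria opens at 8 am on weekdays.",
]
prompt = build_grounded_prompt("Where are invoices stored?", company_docs)
print(prompt)
```

Notice that swapping GPT-4 for a newer model would not change any of this code: only the prompt construction matters, which is exactly the flexibility argument made for RAG.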
While this allows for customization, it is resource-intensive and less commonly adopted due to the complexity and operational cost. The primary advantage of RAG is its flexibility. Future model updates or replacements can be seamlessly integrated into your existing setup, using the same connection mechanisms to access your data. This is the approach we will implement in this course. Consider how it streamlines the integration of new models and optimizes the use of your existing data infrastructure.

We'll begin by setting up the environment with packages such as pandas. If you are running this locally, you need to install the required packages by using the requirements.txt file, which contains all the dependencies for this course. You can find this file in the current directory of this lesson, and detailed instructions are provided in the lesson's notebook.

Next, you will need to set up the actual resources, like Azure OpenAI, starting by providing the information needed to configure your Azure OpenAI environment. If you have worked with OpenAI APIs before, this will be familiar. There are some differences, but you'll grasp it quickly. Here are the variables you will need: the API endpoint and the API key. The endpoint is a pre-created cloud resource that will give you access to the Azure OpenAI GPT model. The API key and endpoint provided here are for teaching purposes only. Your notebook environment on the DeepLearning.AI platform is set up with the needed keys.

Now let's think about the next step. Let me just add the information here and we will run it. Notice that we are running a very small piece of code. In this case, we're importing from langchain.schema something called HumanMessage, and then from LangChain we are importing AzureChatOpenAI. This is interesting, because we will be using the HumanMessage role, which represents the kind of prompt you will be sending to the system.
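As a sketch of what that configuration can look like in code, you might read the endpoint and key from environment variables so they never appear in the notebook itself. The variable names and the API version below are assumptions following common convention, not necessarily the ones your notebook uses:

```python
# Sketch: read the Azure OpenAI connection settings from environment
# variables. The variable names and the API version shown are assumptions --
# check your own Azure resource and the Azure OpenAI API reference.
import os

def load_azure_config() -> dict:
    return {
        "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
        "api_key": os.environ.get("AZURE_OPENAI_API_KEY", ""),
        # API versions are dated strings and change over time.
        "api_version": "2024-02-01",
    }

cfg = load_azure_config()
# Never print the key itself; just confirm it is present.
print("Endpoint set:", bool(cfg["azure_endpoint"]), "| Key set:", bool(cfg["api_key"]))
```

Keeping secrets in environment variables (or a .env file that stays out of version control) also makes it easy to swap in your own endpoint when you run this outside the course platform.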
So we need the LangChain system to recognize that you are sending a message. And then we need to establish the connection from LangChain to Azure OpenAI, which is slightly different from plain OpenAI. So here, what we are saying is: we have the information about Azure OpenAI. We have the API version, which I believe at this point is the latest preview version. This will change, obviously, and you can try different versions; just change the date after checking the reference of the different API versions. Then you have the deployment and the endpoint. Again, this is because my configuration includes an endpoint called Adrian Suecia and a deployment I called test adri for this single purpose, but usually you will use a deployment name that is more specific to what you are creating.

So up to now, we have the GenAI environment, we have added the endpoint, and we are preparing the connection with LangChain. Three steps. Very simple. Straight to the point. Now, what's next? Remember I told you about the HumanMessage part. Here you can see it; you might miss it. It is there for the system to recognize that you are sending a message as a human. So let's try something like this. We create a message, and inside this message we have the content. The content is: "Translate this sentence from English to French and Spanish: I like red cars and blue houses, but my dog is yellow." I'm just testing this because it's a nonsense sentence, but it's also interesting to see how the system may answer. Remember, we have just created an instance; we haven't sent the message yet, but you have it there. Sending it is very simple: we call model.invoke. The invoke function is the one doing the work here. Remember, model refers to the model we have created, which is an AzureChatOpenAI object. Let's see what happens. Okay, so we are starting to see things. Check this.
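Before looking at the output, here is a sketch of the whole create-message-then-invoke sequence. The import paths are assumptions (they vary between LangChain versions; this one assumes langchain plus the langchain-openai package), and the actual API call only runs when credentials are present, so the snippet degrades gracefully:

```python
# Sketch of the create-message-then-invoke flow. Import paths differ across
# LangChain versions (assumption: langchain plus langchain-openai here);
# the fallback lets the sketch run even without LangChain installed.
import os

try:
    from langchain.schema import HumanMessage
    from langchain_openai import AzureChatOpenAI
    HAVE_LANGCHAIN = True
except ImportError:
    HAVE_LANGCHAIN = False

text = ("Translate this sentence from English to French and Spanish: "
        "I like red cars and blue houses, but my dog is yellow.")

# Wrap the prompt in the 'human' role so the system knows who is speaking.
if HAVE_LANGCHAIN:
    message = [HumanMessage(content=text)]
else:
    message = [{"role": "human", "content": text}]  # plain stand-in

creds = all(os.environ.get(k) for k in
            ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT",
             "AZURE_OPENAI_DEPLOYMENT"))
if HAVE_LANGCHAIN and creds:
    model = AzureChatOpenAI(
        openai_api_version="2024-02-01",  # dated string, changes over time
        azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    )
    response = model.invoke(message)  # returns an AIMessage
    print(response.content)
```

Note the two distinct steps: creating the message does nothing on its own; only model.invoke(message) actually sends it to the service.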
We have talked about the HumanMessage; here we are getting the AIMessage. When you think about the interaction with these models, there are different kinds of roles: the human, the agent, the administrator. In this case, we are focusing on the human-AI interaction. We sent a HumanMessage and we get back an AIMessage, and you see the content: it answered in both French and Spanish. In French: "J'aime les voitures rouges et les maisons bleues." I can tell you that I speak French, so I can tell you that this works. And even better with the Spanish, because I speak Spanish: "Me gustan los coches rojos, las casas azules y mi perro es amarillo." Perfect. So we have the full content here. This is working. We told the system to translate this sentence to French and Spanish, and it gives us both languages.

Take some minutes: review the code, play with it, change the prompt, experiment a little bit. So congratulations, you have finished lesson one.