The Easiest AutoGen Conversation Patterns Tutorial: Mastering Multi-Agent Workflows
In today’s fast-evolving digital landscape, effective communication between automated agents within systems is crucial for delivering efficient and responsive services. Managing these interactions can be complex, particularly when multiple agents are involved. However, with AutoGen workflows, this complexity can be managed with ease. In this comprehensive guide, we will delve into the power of AutoGen to streamline agent conversations in various formats, ensuring clear, coherent, and logical dialogues that enhance user experience. Whether you’re a developer, a project manager, or simply a tech enthusiast, understanding these patterns will revolutionize how you perceive and implement automated dialogues.
Introduction to AutoGen Workflows
AutoGen is a robust framework designed to facilitate seamless interactions between different automated agents in a service. It helps structure conversations so that they mimic logical human interactions, even when multiple bots are involved. Before diving into specific conversation patterns, it’s vital to understand the overall benefit of using AutoGen: it simplifies the creation of multi-agent systems, making it easier to maintain and scale customer service operations or any system requiring complex dialogues.
Two-Agent Chat: The Foundation
The simplest and most foundational workflow in AutoGen is the Two-Agent Chat. This pattern involves direct communication between two agents – usually representing a user and a specialist. The process starts when a problem is reported by one agent (the user proxy) and received by another (the specialist agent). They engage in a back-and-forth dialogue until the issue is resolved. This straightforward pattern is essential for anyone new to AutoGen as it lays the groundwork for more complex interactions.
Understanding Two-Agent Chat through Code
To implement a Two-Agent Chat, the basic setup involves importing the library and configuring two agents with specific roles. Here’s a quick snapshot of how the code structure looks:
import autogen

user_proxy = autogen.UserProxyAgent(name="User", human_input_mode="ALWAYS", code_execution_config=False)
specialist = autogen.AssistantAgent(name="Specialist", llm_config=llm_config)
user_proxy.initiate_chat(specialist, message="Hello, I need help")
This code initiates a conversation where the user proxy seeks assistance, simulating a real-world inquiry.
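The snippets in this article reference an `llm_config` object that must be defined before any agent is created. A minimal sketch of what that configuration might look like (the model name and environment variable here are illustrative; check your AutoGen version’s documentation for the exact fields it expects):

```python
import os

# Illustrative LLM configuration: a list of model endpoints plus
# generation settings. The model name and env var are placeholders.
llm_config = {
    "config_list": [
        {
            "model": "gpt-3.5-turbo",
            "api_key": os.environ.get("OPENAI_API_KEY", "sk-placeholder"),
        }
    ],
    "temperature": 0,  # deterministic replies for predictable demos
}
```

Passing this dict as `llm_config=llm_config` is what gives an `AssistantAgent` access to the language model.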
Sequential Chat: Enhancing Interaction Complexity
Building on the Two-Agent Chat, the Sequential Chat involves multiple agents where a conversation’s outcome with one agent determines the starting context of the next. It’s particularly useful for processes requiring stages of approval or multiple levels of inquiries.
Code Example for Sequential Chat
In sequential chat patterns, after an initial interaction, the summary or outcome of the first chat becomes the input for the next, ensuring continuity and context relevance:
user_proxy = autogen.UserProxyAgent(name="User", human_input_mode="ALWAYS", code_execution_config=False)
specialist = autogen.AssistantAgent(name="Specialist", llm_config=llm_config)
survey_agent = autogen.AssistantAgent(name="Surveyor", llm_config=llm_config)
user_proxy.initiate_chats([
    {"recipient": specialist, "message": "Initial inquiry", "summary_method": "reflection_with_llm"},
    {"recipient": survey_agent, "message": "Follow-up based on initial response", "carryover": "This is a new customer"},
])
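Conceptually, the second chat’s opening prompt is the new message plus a context section built from any carryover and the first chat’s summary. A rough pure-Python illustration of that composition (this mimics the idea, not AutoGen’s actual internals):

```python
def compose_opening(message, summary, carryover):
    """Roughly how a sequential chat seeds the next conversation:
    the new message, followed by a context section built from the
    carryover string and the previous chat's summary."""
    context = [part for part in (carryover, summary) if part]
    if not context:
        return message
    return message + "\nContext:\n" + "\n".join(context)

opening = compose_opening(
    "Follow-up based on initial response",
    "Customer needed help accessing their account; issue resolved.",
    "This is a new customer",
)
```

Because the surveyor receives this composed context, it can ask a relevant survey question without having participated in the first conversation.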
Group Chat: Managing Multiple Participants
The Group Chat pattern is suitable for scenarios where multiple agents need to interact simultaneously. A Group Chat Manager orchestrates the flow, deciding which agent speaks next and ensuring that all agents are updated with the latest conversation context.
Implementing Group Chat in AutoGen
Group chats are more complex, involving a manager to handle the communication flow:
group_chat = autogen.GroupChat(agents=[user_proxy, specialist, survey_agent], messages=[])
group_manager = autogen.GroupChatManager(groupchat=group_chat, llm_config=llm_config)
user_proxy.initiate_chat(group_manager, message="Hello, I need help")
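The manager picks the next speaker by matching the conversation against each agent’s `description`, a short introduction you can set when constructing the agent. As a toy stand-in for that LLM-driven choice (the keyword routing below is purely illustrative; a real `GroupChatManager` asks the model to decide):

```python
# Hypothetical agent descriptions, as you might pass via the
# `description` parameter when constructing each agent.
DESCRIPTIONS = {
    "Specialist": "Answers product and account questions.",
    "Surveyor": "Collects a satisfaction rating once the issue is resolved.",
}

def pick_speaker(message: str) -> str:
    # A real manager matches the message against the descriptions
    # using the LLM; this keyword check just mimics the idea.
    if "thank" in message.lower() or "resolved" in message.lower():
        return "Surveyor"
    return "Specialist"
```

Writing clear, distinct descriptions is the main lever you have for steering speaker selection in a group chat.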
Nested Chat: Handling Sub-Conversations
Finally, the Nested Chat is for intricate scenarios where an agent within a conversation needs to initiate a sub-conversation before continuing the main dialogue. This is common in support scenarios where a specialist might need to consult another expert.
Nested Chat Code Insights
A nested chat lets an agent pause the main conversation, consult another agent, and then resume the original thread; the nested chat’s summary becomes the agent’s reply in the main conversation:
specialist.register_nested_chats(
    [{"recipient": expert_agent, "message": "Need expert review", "summary_method": "last_msg", "max_turns": 1}],
    trigger=user_proxy,
)
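The `trigger` parameter decides when the nested chat fires: it can be an agent (fire on every message from that agent) or a callable. A sketch of a keyword-based escalation check like the one shown in the video (the keyword list and function name are illustrative):

```python
ESCALATION_KEYWORDS = ("urgent", "important", "escalate")

def should_escalate(message) -> bool:
    """True if the message contains an escalation keyword.
    Accepts either a plain string or an AutoGen-style message dict
    with a "content" field."""
    content = message if isinstance(message, str) else message.get("content", "")
    return any(word in content.lower() for word in ESCALATION_KEYWORDS)
```

In practice you would combine this with a sender check inside the trigger callable, so the side conversation only fires when the user proxy sends a message that reads as urgent.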
Conclusion: Streamline Your Agent Interactions with AutoGen
Understanding and implementing these AutoGen conversation patterns can significantly streamline interactions within multi-agent systems, making them more efficient and effective. By mastering these patterns, developers can ensure that automated agents not only communicate seamlessly among themselves but also provide coherent and context-aware responses that enhance the overall user experience.
Whether you are building a complex customer support system, an interactive bot network, or any multi-agent platform, AutoGen’s versatile conversation workflows are essential tools in your development arsenal. So, dive into AutoGen, experiment with these patterns, and watch your automated systems interact as smoothly and logically as the best human teams.
[h3]Watch this video for the full details:[/h3]
This video is all you need to watch to completely understand AutoGen workflows (aka Conversation Patterns). After watching the tutorial, you’ll gain a deep understanding of the inner workings of AutoGen and the different options that make communication between agents efficient.
We’ll look at how agents talk to each other and go over the four types of conversation patterns within AutoGen:
– Two-agent Chat
– Sequential Chat
– Group Chat
– Nested Chat
We’re going to examine how each workflow works, then we’ll see what implementation looks like in a sample Python app, and finally, we’ll run a demo and watch how the agents communicate.
Ready? Jump right in!
—
Unlock the source code for free at: https://hey.gettingstarted.ai
—
Connect with me:
Subscribe: https://www.youtube.com/@gswithai?sub_confirmation=1
Follow on X: https://www.twitter.com/gswithai
👉 Get help with your AI integration: https://forms.gle/XXE8tA2jBWFWYHaf8
—
Other resources:
Read on the blog: https://www.gettingstarted.ai/autogen-conversation-patterns-workflows/
Agents Overview: https://youtu.be/a5iVvrkIk3s
AutoGen Workflows: https://microsoft.github.io/autogen/docs/tutorial/conversation-patterns
—
Timeline:
00:00 Introduction
01:18 Conversation Patterns Overview
01:30 Two-agent Chat Pattern Overview
01:53 Two-agent Chat Code
03:50 Two-agent Chat Demo
04:20 Sequential Chat Overview
05:11 Sequential Chat Code
07:38 Sequential Chat Demo
08:27 Group Chat Overview
09:04 Group Chat Code
11:56 Group Chat Demo
13:10 Nested Chat Overview
13:52 Nested Chat Code
16:40 Nested Chat Demo
[h3]Transcript[/h3]
hey, how’s it going? Imagine that you have a problem with a service or product, and when you call the customer service number, someone asks you to rate the conversation and whether your problem was solved, even before talking with anybody. Now that doesn’t make any sense, does it? “Hello, how can I assist you today?” “Hi, I need help accessing my account please.” “Alright, on a scale of 1 to 10, how would you rate this conversation, sir?” “Oh, well, I haven’t even said anything about my issue.” “Well, thank you for calling, and have a great day.”

Okay, just a few things before we get started. First, make sure to subscribe to the channel, because you don’t want to miss any future videos, plus your support is what keeps me going and creating content for you. Second, I’m on X, or Twitter, whatever you want to call it, and I’ve put a link to my profile in the description, so make sure to check it out and hit the follow button; I’d love to connect with you. Finally, check out the blog, gettingstarted.ai. You can create a free membership and access the code that you see on this channel.

Today I’m going to show you how you can set up multi-agent workflows in AutoGen. Workflows, or conversation patterns, define how your agents talk to each other so that the conversations actually make sense. We’re going to explore four main workflows in AutoGen: the first one is a two-agent chat, second, sequential chat, third, group chat, and finally, nested chat.

Let’s start with the first one, and the most basic: two-agent chat. Now, I covered this one in another video, and I’m going to leave a link in the description so that you can check it out. So what’s a two-agent chat? Well, obviously, it’s a chat between two agents. Let’s take a look at this diagram. This is the most basic and simplest pattern of the four that we’re going to see today, and it starts with a problem. First the problem comes in, then we have the first agent, Agent A, and the second agent, Agent B, and they’re going to have a conversation about the problem. So there’s going to be some back and forth between the agents, and once they’re done, the summary of their conversation is going to be the solution. You’re going to get a better idea once we see the code.

Okay, we have a new file called two_agent_chat.py. Now, if you don’t know how to set up your OpenAI key and how to install the AutoGen package, make sure to check out the video in the description, because I go through all of the setup there. Alright, let’s write some code. First I’m going to do import os, and then I’m going to import the AutoGen UserProxyAgent. If you’re not familiar with the UserProxyAgent, the AssistantAgent, or the ConversableAgent, make sure to check that video, since it goes through the basics of how agents work and what they are. Next we’re going to set up our large language model configuration, so we’re going to do llm_config. I’m going to be using an OpenAI GPT model; you can use whichever model you like, and you can try using GPT-4, just keep in mind that it’s more expensive. Next we’re going to create the user proxy, so we’re going to do user_proxy, give it a name, and set human_input_mode. This means that whenever the user proxy receives a message from other agents, it’s going to ask for our feedback, always; every time it gets a message, I want to add in my feedback. And we’re not going to execute any code for this example, so we’re going to set code_execution_config to False. Okay, cool. Now we’re going to create the specialist agent, an AssistantAgent, and this one is going to make use of the large language model, so we pass in llm_config, the configuration from above. Alright, the last step is to initiate everything. The sender in this case is the user proxy, so we’re going to do user_proxy.initiate_chat, and the receiver is the specialist. The message is going to be “Hello, I need help”. What this means is that the sender, the user proxy, is going to start a conversation with the specialist, and the first message is going to be “Hello, I need help”. After this message is received by the specialist, there’s going to be some back and forth between the agents until the conversation is done and we terminate.

Okay, now we’re going to run this and see what it looks like. So we’re going to do python two_agent_chat.py, and as you can see, the user proxy says to the specialist “Hello, I need help”, and then the specialist says to the user proxy “Hi, I’m here to help. What do you need assistance with today?”, and that’s good; that’s everything that we need. Now, if we send another query, let’s say “How can I access my account?” for example, then this query is sent to the specialist, and the specialist is going to come up with an answer to it, and that’s the kind of back and forth that happens in a two-agent chat. We’re just going to type exit to stop this conversation.

Alright, now we’re going to see the sequential chat pattern, and from the name you can guess that there’s a sequence of chats happening one after the other. In this diagram we have this chat here between Agent A and Agent B; let’s call it chat one. And we have another chat here between Agent A and Agent C; we’re going to call it chat two. As you can see, we have Agent A in both chats; that’s just how sequential chat patterns work within AutoGen. Basically, the sender is Agent A, and it’s going to start a conversation with Agent B, and once that conversation is done, the summary is carried over to the next conversation, which also involves Agent A, this time talking to Agent C to solve the problem. Once it’s done, we’re going to have a solution. It’s going to make more sense when we see the Python implementation, so let’s do that right now and jump to VS Code.

Alright, cool. Now I’m going to take the code from the two_agent_chat.py file and paste it here for the sequential chat, so we can modify it so that the chat becomes sequential. That’s the code. First we’re going to modify the specialist: we’re going to add a custom system message, “You are a professional customer service representative”, and finally we’re going to tell it to reply TERMINATE when it’s done. Okay, cool. We’re going to need to add another agent, and this guy is going to check with the customer, the user proxy in this case, whether they had a good experience with the chat. This comes after the conversation between the user proxy and the specialist, and we’re going to call this guy the surveyor, an AssistantAgent named Surveyor, with llm_config, and we’re going to overwrite the system message: “You are a surveyor”. Now we’re going to initiate the chat, and we’re not going to do it like we did with the two-agent chat, because this should run in a sequence. First we’re going to change this to user_proxy.initiate_chats, because we have two chats here, so I’m going to remove this and add an array. The first recipient is going to be the specialist, just like before, and we’re going to keep the message the same, “I need help”, and add a parameter called summary_method. We have two options for the summary method: the first one is last message, and what we’re going to use is reflection with LLM. This means that when this conversation is done, the next conversation will get access to its context: this conversation gets summarized and passed over to the next one. And our next conversation is going to be between the user proxy and, you guessed it, the surveyor. So we’re going to say surveyor, and the message is going to start with “based”. Now we can add an extra parameter called carryover, and this is additional context when we move from the first conversation to this one. In this case you can do whatever you want, based on your application needs, but I’m just going to add something simple to show you that it actually gets carried over when the new conversation begins, so I’m going to say “This is a new customer”. Alright, perfect. The user proxy is going to initiate with the specialist first, and once that conversation is done, the user proxy is going to initiate again with the surveyor. We’re going to get the summary of the user proxy–specialist conversation when we start the new conversation, user proxy–surveyor.

Alright, we’re going to run this code and see what happens, so we’re going to do python sequential_chat.py. As usual, the user proxy says to the specialist “Hello, I need help”, and the specialist says “Hello, how can I help you today?”. Now, if we exit, so let’s do exit, what happens is that another conversation starts. Let me scroll up: after we entered exit, a new chat started, because the first one terminated, and this one is between the user proxy and the surveyor. It says “based on the provided information”, that’s the message we set, and then we have the context here: this is the carryover message, and this is the summary of the previous conversation. The surveyor uses this context to come up with an answer, which is a survey question: it says “please rate your support experience”, just like we asked it to. This shows how the chat pattern moves from one chat, once it’s done, to the next. Perfect. We’re going to do exit and continue.

Alright, the group chat pattern, and this one is interesting because the agents are not talking to each other directly; as you can see, there are no arrows between them. Instead, the group chat manager is orchestrating everything, so everything goes through the group chat manager. For instance, the group chat manager selects the speaker first; let’s say it selects the user proxy. Then the user proxy sends its message to the group chat manager, the group chat manager broadcasts that message back to all the agents, and then it selects the next speaker, and so on and so forth. This cycle keeps going until there’s a common solution, which then terminates the flow. Again, this is going to be more obvious once we jump into the code, so let’s do that now.

Alright, back to VS Code. I’m going to go into the sequential chat file, copy everything, create a new file called group_chat.py, paste everything, and make some changes. First, let me hide this panel so we can start. Okay, we’re going to leave the specialist kind of the same, so I don’t think we need any changes there; we’re just going to add another agent, and this guy is going to be responsible for answering the user’s queries whenever the user mentions that a specific question is urgent: the KB manager will step in and answer instead of the specialist. To do that, we’re going to create the KB manager as a ConversableAgent. Well, this could be an AssistantAgent as well, but I want to show you that you can also use the ConversableAgent type here, which is the parent class of both the AssistantAgent and the UserProxyAgent. So we’re going to call it KB manager, give it the llm_config, set human_input_mode to NEVER, and add a system message: you answer, in a polite way, that the issue will be looked into. Now we’re going to add an extra parameter for each agent, and what this does is allow the group chat manager, which we’re going to create in a bit, to know the specialty of each agent. It’s like an introduction for each agent. For example, we can set a description for this agent. Now, when the group chat manager needs to select a speaker, it looks at the descriptions of the agents, and that helps it decide which one should answer a specific query. Does that make sense? Okay, next we’re going to add the surveyor’s description, and then the KB manager’s. I’m just going to remove this part; we’re not going to use it. Now I’m going to create the group chat. The group chat object defines which agents participate in a specific chat, so we’re going to say group_chat = GroupChat, and our agents are the user proxy, specialist, surveyor, and KB manager. Cool. Then the group chat manager, which is the agent that orchestrates the conversation flow: we pass it the group chat, and we give it access to the large language model. Okay, that’s it. Finally, we’re going to run this workflow, and this is simpler than the sequential workflow, because we just have one agent handling everything. What we do is: the user proxy, our sender as usual, calls initiate_chat, and this time it chats with the group chat manager. The group chat manager decides the speaker, and whenever the speaker sends a message, like we saw in the diagram, it broadcasts that message to all of the other agents and selects another speaker, and so on and so forth. It’s going to be the same message, “Hello, I need help”. Obviously you can have a more specific message; I’m just doing this for demo purposes. For summary_method, we’re also going to use reflection with LLM, just like we did with the sequential chat. Okay, super. Now I’m going to run this, so we do python group_chat.py and hit return.

Alright, so we have a couple of issues with the code. I’m going to go to VS Code, and first, make sure to add a messages parameter here; it’s required, and it’s just going to be an empty list in this case. And here, for the KB manager, I forgot to assign a value to the description property, so make sure to do that. Hit save, and we’ll try to run this again. Okay, cool. As you can see, in the first exchange the sender is the user proxy as usual, but instead of sending the message to the specialist, the user proxy is sending it to the chat manager, saying “Hello, I need help”. Then the specialist responded to the chat manager with “Hello, how can I help you today?”. As you can see, the conversation is completely managed by the chat manager. If we say something like “I need assistance with my account”, we’re also sending that to the chat manager, and the specialist picks it up and says “I’m here to help you with your account”. Then, if we say “I’m good, thank you for the help”, instead of choosing the specialist, the chat manager decides that, since the conversation is done and the user is satisfied, it should go with the surveyor, and that’s what happened. As you can see, we get “On a scale from 1 to 10, how would you rate the support you received today?”, and then the conversation is terminated.

Cool, let’s take a look at the nested chat pattern, and this one is interesting. We have a problem that comes in, and then Agent A talks to Agent B, like usual, with some back and forth between the agents, but at some point Agent B might say “hey, I need help with this particular issue”, so it asks Agent E, in this case, for the answer. Agent E responds back to Agent B, and then that answer is sent from Agent B to Agent A. Now, Agent A isn’t aware of this conversation that took place between Agent B and Agent E. Once that’s done, the conversation moves forward, and we get a solution. You’ll get a better idea once we jump to the Python code, so let’s do that.

Alright, back to VS Code. I’m going to go to the group chat file, copy everything, and paste it here. Now we’re going to have a sequential conversation, but this time we’re going to use a trigger to detect if the user wants to escalate and treat the request as urgent. What this means is that the specialist is going to get the opinion of the KB manager without the user knowing that this side conversation took place, and then, when the KB manager answers, the specialist takes that response and sends it back to the user proxy. From the user proxy’s perspective, it’s just talking to the specialist; it has no idea that the specialist actually got its answer from the KB manager. I’m mostly leaving everything as is, but I’m going to change this part here: I’m going to remove the descriptions, since we won’t need those, and we’re not going to use a group chat manager. Instead, we’re going to register this side conversation between the specialist and the KB manager, and to do that it’s pretty straightforward: we just call specialist.register_nested_chats. We can have multiple chats here, but we’re just going to do one. This is an array, and we set the recipient, the KB manager, and summary_method to last message, since we just need the last message from the conversation. Max turns is one, so there’s going to be just one interaction: the specialist sends one message to the KB manager, the KB manager replies, and that’s it; the side conversation is terminated. Okay, now we do the initialization. It’s going to be a sequential chat pattern, like I said before, so we do user_proxy.initiate_chats; the first chat is with the specialist, and the second chat is with the surveyor. The user proxy initiates the chat with the specialist first, but we want to detect when the user proxy wants to escalate the conversation. To do this, we provide a trigger parameter. You can pass in the user proxy itself, which means that whenever the user proxy sends a message to the specialist, the nested chat with the KB manager is triggered. In our case, we’re going to do a check: if the user proxy is the sender, we look at whether the message it sends to the specialist contains a word like “important”. Now let’s create this function, and I’ll show you how to add this trigger. We say trigger, then sender is user proxy, that’s condition number one, and should_escalate(sender). Now we create this should_escalate function, which checks the message content; Copilot is recommending a version, so I’m going to accept it. Cool. Okay, just one more thing before we run this code: I’m going to add the message that goes from the specialist to the KB manager when an escalation is required. Okay, cool.

Alright, I know this is a long video and you’re probably bored, but this is the last thing I’m going to show you today, so let’s just run the nested chat and see what it looks like. We do python nested_chat.py. So, the usual: user proxy to specialist, specialist to user proxy, nothing new. Now, if you remember, the trigger was a set of words. Let’s say we want to send an urgent request, so I type “This is urgent”, and that triggers the side conversation, the nested chat, between the specialist and the KB manager. I send this, and as you can see, as soon as we did, a new chat started here between the specialist and the KB manager. This is the message we set in the code; the KB manager sent its answer to the specialist, and the specialist sent that same message on to the user proxy. Keep in mind, from the perspective of the user proxy, like we said, it’s only having a conversation with the specialist.

Now, I just want to say thank you for staying here and thank you for watching. If you want to support my channel, please subscribe right now and hit the like button, because your support is greatly appreciated and it keeps me going. So thanks again, and I’ll see you soon.