This comprehensive guide is crafted to empower you with the knowledge and skills to harness the capabilities of AutoGen, an innovative tool that leverages the power of artificial intelligence to generate dynamic AI personas. These personas are not just virtual entities; they are catalysts for creativity. In this tutorial, they will help generate compelling YouTube scripts and brainstorm novel SaaS (Software as a Service) application ideas.
The integration of OpenAI's LLMs into AutoGen is a game-changer. It enriches the AI personas with a depth of knowledge and a nuanced understanding of language, making them incredibly versatile and powerful tools in your creative arsenal.
By integrating AutoGen with Chainlit, you'll gain access to a user-friendly chat-like interface. This synergy allows for an intuitive and interactive way to communicate with AI personas, making the process of content creation and idea generation more fluid and natural. Whether you're a content creator, a software developer, or an enthusiast of AI-driven creativity, this guide will be your gateway to exploring new dimensions of digital innovation.
Through this guide, you'll learn step-by-step how to set up and utilize AutoGen within Chainlit. You'll discover how to create and interact with AI personas that are tailored to your specific needs, be it scriptwriting for YouTube content or ideating SaaS products. With practical examples and easy-to-follow instructions, you'll be adept at using these tools to enhance your creative workflow, making the process not only more efficient but also more enjoyable.
Check out the GitHub repository for this application. You can deploy and preview the application by clicking the Deploy to Koyeb button below:
Note: Remember to replace the value of the OPENAI_API_KEY environment variable with your own information.
Requirements
To follow this guide, you'll need to have the following:
Steps
To complete this guide and deploy the app for generating dynamic AI personas, you'll need to follow these steps:
- Install and Configure Chainlit
- Use AutoGen to Generate AI Personas
- Build the AI personas for the YouTube Scriptwriting Tab
- Build the AI personas for the SaaS Idea Generation Tab
- Combine the AI Personas and Chainlit
- Deploy to Koyeb
Install and Configure Chainlit
First, let's start by creating a new project. You should use venv to keep your Python dependencies organized in a virtual environment.
Create a new folder locally on your computer for this project, move into it, and activate the virtual environment with:
# Create and move to the new folder
mkdir CreativeProcessAutogen
cd CreativeProcessAutogen
# Create a virtual environment
python -m venv venv
# Activate the virtual environment (Windows)
.\venv\Scripts\activate.bat
# Activate the virtual environment (Linux)
source ./venv/bin/activate
Next, install Chainlit:
pip install chainlit
The second step is to start designing the UI (User Interface) to interact with the AI personas.
Chainlit provides the chat-style interface out-of-the-box, so that is not a concern. What you must create now are the two different "tabs" so the user can access the distinct groups of AI personas.
You will use Chainlit's profile functionality to achieve this, starting by creating a file called main.py:
import chainlit as cl


@cl.set_chat_profiles
async def chat_profile():
    return [
        cl.ChatProfile(
            name="YouTube Scriptwriting",
            markdown_description="Your next YouTube video script is just a few messages away!",
        ),
        cl.ChatProfile(
            name="SaaS Product Ideation",
            markdown_description="Get your next SaaS product idea in a few messages!",
        ),
    ]


@cl.on_chat_start
async def on_chat_start():
    chat_profile = cl.user_session.get("chat_profile")
    await cl.Message(
        content=f"Welcome to {chat_profile} chat. Please type your first message to get started."
    ).send()
Here's a breakdown of the code:

- The script begins by importing the Chainlit library, referenced as cl.
- It defines chat profiles with the @cl.set_chat_profiles decorator. Two chat profiles are defined: "YouTube Scriptwriting" and "SaaS Product Ideation", each with a brief markdown description.
- The @cl.on_chat_start decorator marks the on_chat_start function to be executed when a chat session begins. It retrieves the current chat profile selected by the user from cl.user_session.
- It then sends a welcome message using cl.Message, customized with the name of the selected chat profile.
You can test the application by running:
chainlit run main.py
Use AutoGen to Generate AI Personas
AutoGen, developed by Microsoft, is a cutting-edge framework for developing and deploying large language model (LLM) applications that use multiple agents.
The agent-centric design of AutoGen offers numerous benefits. It naturally handles ambiguity, feedback, progress, and collaboration, making it ideal for tasks that require back-and-forth troubleshooting and cooperation among multiple specialists. This design also allows users to seamlessly opt in or out via an agent in the chat, providing flexibility and user control over the interaction.
By offering customizable and conversable agents, AutoGen harnesses the strengths of advanced LLMs, such as GPT-4, while also addressing their limitations through human and tool integration.
This leads to a seamless conversation flow between multiple agents, optimizing the overall performance and robustness of the application.
Streaming AutoGen chats in Chainlit presents a unique challenge, since AutoGen doesn't provide streaming functionality out of the box.
This section delves into how to integrate AutoGen's chat functionality into the Chainlit framework by using monkey patching.
Monkey patching is a powerful technique in Python that allows you to modify or extend the behavior of AutoGen's existing methods or functions at runtime. It is essential here for customizing the interaction between AutoGen's AI personas and the Chainlit interface.
This approach enables seamless streaming of AutoGen chats into the Chainlit UI.
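Monkey patching itself is plain Python: you replace an attribute on a class at runtime, and every existing and future instance picks up the new behavior. A minimal, library-free sketch of the idea (the Greeter class and its methods are illustrative, not part of AutoGen):

```python
class Greeter:
    def greet(self, name):
        return f"Hello, {name}!"

# Original behavior
print(Greeter().greet("Ada"))  # Hello, Ada!

# Monkey patch: swap in a new implementation at runtime.
def excited_greet(self, name):
    return f"Hello, {name}!!!"

Greeter.greet = excited_greet

# All instances now use the patched method.
print(Greeter().greet("Ada"))  # Hello, Ada!!!
```

This is exactly the mechanism used below to redirect AutoGen's message printing into Chainlit messages.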
To get started, install the AutoGen library:
pip install pyautogen
pip install python-decouple # For loading environment variables
And prepare a .env file to hold the OpenAI API key:
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
As mentioned earlier, we are integrating OpenAI's LLMs into AutoGen to enhance the AI personas with a more nuanced understanding of language and depth of knowledge.
To obtain your OpenAI API key, you can sign up for OpenAI and then create an API key.
Do not write to a file yet, but the following code demonstrates how to monkey patch with AutoGen and Chainlit:
import chainlit as cl
from autogen import ConversableAgent


def chat_new_message(self, message, sender):
    # Send an empty message first so Chainlit displays a loading indicator
    cl.run_sync(
        cl.Message(
            content="",
            author=sender.name,
        ).send()
    )
    # Then send the actual message content, attributed to the sending agent
    content = message.get("content")
    cl.run_sync(
        cl.Message(
            content=content,
            author=sender.name,
        ).send()
    )


ConversableAgent._print_received_message = chat_new_message
This code snippet demonstrates the use of monkey patching to modify the behavior of ConversableAgent, the base class for the AssistantAgent and UserProxyAgent classes from the AutoGen library (which you will use later on), specifically for handling new messages in a chat.

- chat_new_message is defined to handle new messages. It extracts the content of the message and uses Chainlit's synchronous method cl.run_sync to send a new message using cl.Message. The initial empty message is used to show a loader in the chat on Chainlit.
- The method _print_received_message of ConversableAgent is overridden (monkey patched) with the chat_new_message function.

This alteration means that whenever any agent receives a new message, chat_new_message is executed instead of the original _print_received_message method.
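One caveat of monkey patching: once you overwrite _print_received_message, the original implementation is gone unless you saved a reference to it first. If you ever need to restore the default behavior (for example, when running agents outside Chainlit), keep a handle on the original before patching. A hedged sketch using a stand-in class rather than the real ConversableAgent:

```python
class Agent:  # stand-in for AutoGen's ConversableAgent
    def _print_received_message(self, message, sender):
        return f"default: {message}"

# Keep a reference to the original method before patching
_original_print = Agent._print_received_message

def chat_new_message(self, message, sender):
    return f"patched: {message}"

# Apply the patch...
Agent._print_received_message = chat_new_message

# ...and later undo it by restoring the saved reference
Agent._print_received_message = _original_print
```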
Build the AI personas for the YouTube Scriptwriting Tab
Next, let's create the necessary logic to build the AI personas for your YouTube Scriptwriting group chat.
AutoGen allows the creation of AI personas by providing specific system prompts that guide the LLM in the expected behavior.
Then each of these AI personas will be placed in a group chat where they can interact with each other and provide the answer to the user input.
The AI personas for this profile are:
- User
- Content Creator
- Script Writer
- Researcher
- Reviewer
Each AI persona will be configured with a name to identify it with on the Chainlit chat interface.
You will also define a set of configurations for interfacing with OpenAI. For this, you will need an API key, which you set earlier in the .env file.
To keep your code organized and prevent the existing main.py file from becoming too confusing, let's separate the logic of the different profiles into separate files.
For the YouTube profile, start by creating a file called script.py:
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, ConversableAgent
from decouple import config
import chainlit as cl
def chat_new_message(self, message, sender):
    cl.run_sync(
        cl.Message(
            content="",
            author=sender.name,
        ).send()
    )
    content = message.get("content")
    cl.run_sync(
        cl.Message(
            content=content,
            author=sender.name,
        ).send()
    )
You first place the necessary logic for monkey patching, as seen before.
Next, create a function called config_personas, which will be responsible for configuring the AI personas:
def config_personas():
    config_list = [{
        "model": "gpt-3.5-turbo-1106",  # model name
        "api_key": config("OPENAI_API_KEY")  # api key
    }]
    llm_config = {
        "seed": 14,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0.7,  # temperature for sampling
    }
The breakdown of the code for the configuration list (config_list):

- This list includes a dictionary with configuration settings for the LLM (Large Language Model).
- The settings specify the model ("gpt-3.5-turbo-1106") and the OpenAI API key.

And for the LLM configuration (llm_config):

- This dictionary defines additional parameters for the LLM.
- seed is set for caching and reproducibility purposes.
- config_list references the previously defined configuration list.
- temperature is set for the sampling process, influencing the creativity and randomness of the model's responses.
With the main configuration complete, begin defining each AI persona, starting with the User itself (still inside the previous function):
    user_proxy = UserProxyAgent(
        name="User_Proxy",
        system_message="A human admin.",
        max_consecutive_auto_reply=10,
        llm_config=llm_config,
        human_input_mode="NEVER"
    )
This code creates an instance of UserProxyAgent, a component of the AutoGen framework:

- name: The agent is named "User_Proxy".
- system_message: A descriptive message about the agent, here labeled as "A human admin.".
- max_consecutive_auto_reply: Sets the maximum number of consecutive automatic replies to 10. It limits how many times the agent will respond automatically without human intervention.
- llm_config: Set to the llm_config dictionary previously defined. It contains the configuration for the large language model (LLM) interaction, including model specifications and API settings.
- human_input_mode: Set to "NEVER", indicating that this agent will not expect or wait for human input in its operation.
Next, you can create the Content Creator persona:
    content_creator = AssistantAgent(
        name="Content_Creator",
        system_message="I am a content creator that talks about exciting technologies about AI. "
                       "I want to create exciting content for my audience that is about the latest AI technology. "
                       "I want to provide in-depth details of the latest AI white papers.",
        llm_config=llm_config,
    )
This code creates an instance of AssistantAgent, another component of the AutoGen framework:

- name: The agent is named "Content_Creator".
- system_message: Describes the agent. In this case, it's described as a content creator focused on discussing exciting AI technologies, aiming to create engaging content about the latest AI advancements and providing detailed insights into AI white papers.
- llm_config: The same as before.
Now you can move on to create the Script Writer persona:
    script_writer = AssistantAgent(
        name="Script_Writer",
        system_message="I am a script writer for the Content Creator. "
                       "This should be an eloquently written script so the Content Creator can "
                       "talk to the audience about AI.",
        llm_config=llm_config
    )
As you may have noticed, the AI personas are constructed similarly, with the major difference in their system message that defines their behavior.
Next, you will create the Researcher persona:
    researcher = AssistantAgent(
        name="Researcher",
        system_message="I am the researcher for the Content Creator and look up the latest white papers in AI."
                       " Make sure to include the white paper Title and Year it was introduced to the Script_Writer.",
        llm_config=llm_config
    )
Finally, you finish the AI personas with the Reviewer:
    reviewer = AssistantAgent(
        name="Reviewer",
        system_message="I am the reviewer for the Content Creator, Script Writer, and Researcher once they are done "
                       "and have come up with a script. I will double check the script and provide feedback.",
        llm_config=llm_config
    )
With the AI personas created, they need to communicate with each other, so now you will create the group chat:
    group_chat = GroupChat(
        agents=[user_proxy, content_creator, script_writer, researcher, reviewer], messages=[]
    )
    manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

    return user_proxy, manager
This code snippet sets up a GroupChat within the AutoGen framework. The GroupChat constructor is called with two arguments:

- agents: A list of agent instances, including user_proxy, content_creator, script_writer, researcher, and reviewer.
- messages: An empty list, indicating that the chat starts without any pre-existing messages.

It also sets up the GroupChatManager, instantiated with two arguments:

- groupchat: The GroupChat instance created earlier, which includes all the agents involved in this group chat.
- llm_config: The configuration for the large language model, as defined previously.
Finally, you will write the code to start the chat in a new function called start_chat_script:
def start_chat_script(message, is_test=False):
    if not is_test:
        ConversableAgent._print_received_message = chat_new_message
    user_proxy, manager = config_personas()
    user_proxy.initiate_chat(manager, message=message)
This code initiates a chat session in the AutoGen framework:

- First, it checks whether this is a test call to determine if it is running inside Chainlit. If it isn't a test, it activates the monkey patching.
- Then it initializes the personas by calling the function defined previously.
- user_proxy: The instance of UserProxyAgent created and configured earlier.
- initiate_chat: This method is called on the user_proxy object to start a chat session.
- manager: Passed as an argument, this is the GroupChatManager instance created earlier, which manages the group chat involving the various agents.
- message=message: This argument is the message that starts the chat.
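To build intuition for what initiate_chat kicks off, here is a toy, library-free sketch of the orchestration a group chat manager performs. This is a deliberate simplification: it uses a fixed round-robin speaker order, whereas AutoGen's real speaker selection is LLM-driven, and the ToyAgent class is purely illustrative.

```python
class ToyAgent:
    """A stand-in for an AutoGen agent: a name plus a reply function."""
    def __init__(self, name, reply_fn):
        self.name = name
        self.reply_fn = reply_fn

    def reply(self, history):
        return self.reply_fn(history)

def run_group_chat(agents, first_message, rounds=1):
    """Round-robin each agent over a shared message history."""
    history = [("User", first_message)]
    for _ in range(rounds):
        for agent in agents:
            # Each agent sees the full history and appends its reply
            history.append((agent.name, agent.reply(history)))
    return history

agents = [
    ToyAgent("Researcher", lambda h: f"research on: {h[-1][1]}"),
    ToyAgent("Writer", lambda h: f"draft based on: {h[-1][1]}"),
]
chat = run_group_chat(agents, "GPT-4 script")
for name, msg in chat:
    print(f"{name}: {msg}")
```

Each agent builds on the accumulated history, which is the essence of how the personas cooperate on a script.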
The last code needed for this file is to help test the AI personas without the need to run Chainlit:
if __name__ == "__main__":
    test_message = ("I need to create a YouTube Script that talks about the latest paper about gpt-4 on arxiv and its "
                    "potential applications in software.")
    start_chat_script(test_message, is_test=True)
You can test this group chat by running:
python script.py
And that is all that is necessary for the YouTube Scriptwriting chat profile. Later on, you will integrate it with Chainlit for the chat-style interface.
Build the AI personas for the SaaS Idea Generation Tab
The code for the SaaS profile is very similar to the previous code for the YouTube profile. The AI persona definitions are the primary difference.
To keep the code organized, create a separate file for this profile called saas.py:
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, ConversableAgent
from decouple import config
import chainlit as cl


def chat_new_message(self, message, sender):
    cl.run_sync(
        cl.Message(
            content="",
            author=sender.name,
        ).send()
    )
    content = message.get("content")
    cl.run_sync(
        cl.Message(
            content=content,
            author=sender.name,
        ).send()
    )


def config_personas():
    config_list = [{
        "model": "gpt-3.5-turbo-1106",  # model name
        "api_key": config("OPENAI_API_KEY")  # api key
    }]
    llm_config = {
        "seed": 14,  # seed for caching and reproducibility
        "config_list": config_list,  # a list of OpenAI API configurations
        "temperature": 0.7,  # temperature for sampling
    }
The initial definitions and settings are identical to the previous YouTube profile code.
Now you can configure the different AI personas:
    user_proxy = UserProxyAgent(
        name="Admin",
        system_message="A human admin. Interact with the planner to discuss the plan. "
                       "Plan execution needs to be approved by this admin.",
        code_execution_config=False,
        max_consecutive_auto_reply=10,
        llm_config=llm_config,
        human_input_mode="NEVER"
    )
    engineer = AssistantAgent(
        name="Engineer",
        llm_config=llm_config,
        system_message='''Engineer. You follow an approved plan. You write python/shell code to solve tasks. Wrap the
        code in a code block that specifies the script type. The user can't modify your code. So do not suggest
        incomplete code which requires others to modify. Don't use a code block if it's not intended to be executed by
        the executor. Don't include multiple code blocks in one response. Do not ask others to copy and paste the result.
        Check the execution result returned by the executor. If the result indicates there is an error, fix the error and
        output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed
        or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your
        assumption, collect additional info you need, and think of a different approach to try.''',
    )
    planner = AssistantAgent(
        name="Planner",
        system_message='''Planner. Suggest a plan. Revise the plan based on feedback from admin and critic, until admin
        approval. The plan may involve an engineer who can write code and an executor and critic who doesn't write code.
        Explain the plan first. Be clear which step is performed by an engineer, executor, and critic.''',
        llm_config=llm_config,
    )
    executor = AssistantAgent(
        name="Executor",
        system_message="Executor. Execute the code written by the engineer and report the result.",
        code_execution_config={"last_n_messages": 3, "work_dir": "feedback"},
    )
    critic = AssistantAgent(
        name="Critic",
        system_message="Critic. Double check plan, claims, code from other agents and provide feedback.",
        llm_config=llm_config,
    )
    group_chat = GroupChat(agents=[user_proxy, engineer, planner, executor, critic], messages=[], max_round=50)
    manager = GroupChatManager(groupchat=group_chat, llm_config=llm_config)

    return user_proxy, manager
The major differences here are in user_proxy and executor.

For user_proxy:

- code_execution_config=False means that this AI persona will not execute any code.
- max_consecutive_auto_reply=10 sets the maximum number of consecutive auto-replies, preventing the chat from getting stuck in a feedback loop.

For executor:

- code_execution_config={"last_n_messages": 3, "work_dir": "feedback"} means the executor will only pay attention to the last 3 messages, and any files needed to execute code are placed in a directory called feedback.
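The effect of last_n_messages is a sliding window over the chat history. Conceptually (a simplification of what happens internally, with illustrative function and variable names):

```python
def context_window(history, last_n_messages=3):
    """Return only the most recent messages the executor will consider."""
    return history[-last_n_messages:]

# A toy chat history: only the last three entries are kept as context.
history = ["plan", "code v1", "error log", "code v2", "ran fine"]
print(context_window(history))  # ['error log', 'code v2', 'ran fine']
```

Keeping the window small limits token usage and keeps the executor focused on the latest code and its results.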
Finally, you can place the code for starting the chat:
def start_chat_saas(message, is_test=False):
    if not is_test:
        ConversableAgent._print_received_message = chat_new_message
    user_proxy, manager = config_personas()
    user_proxy.initiate_chat(manager, message=message)


if __name__ == "__main__":
    test_message = (
        "I would like to build a simple website that collects feedback from "
        "consumers via forms. We can just use a flask application that creates an "
        "html website with forms and has a single question if they liked their "
        "customer experience and then keeps that answer. I need a thank you html "
        "page once they completed the survey. I then need a html page called "
        "admin that gives a nice table layout of all of the records from the "
        "database. Just use sqlite3 as the database, keep it simple. Also use "
        "Bootstrap for the CSS Styling.")
    start_chat_saas(test_message, is_test=True)
As before, you can test and run this group chat with:
python saas.py
And that is all the code related to the AI personas. Next, you will learn how to invoke these different group chats from Chainlit itself.
Combine the AI Personas and Chainlit
In the previous sections, you defined all of the code required to invoke the different group chats with the different AI personas.
Now let's see how to integrate and start those chats from the Chainlit interface.
You will continue the previously provided code in the file main.py:
from saas import start_chat_saas
from script import start_chat_script

# Previous code


# New function for on_message
@cl.on_message
async def on_message(message):
    chat_profile = cl.user_session.get("chat_profile")
    message_content = message.content
    if chat_profile == "YouTube Scriptwriting":
        start_chat_script(message_content)
    elif chat_profile == "SaaS Product Ideation":
        start_chat_saas(message_content)
The asynchronous on_message function, activated by the @cl.on_message decorator, is central to handling incoming chat messages.

- It first retrieves the user's current chat profile (like "YouTube Scriptwriting" or "SaaS Product Ideation") and the message content.
- Based on the chat profile, it then directs the conversation flow: for "YouTube Scriptwriting," it triggers a scriptwriting-specific process using start_chat_script, and for "SaaS Product Ideation," it initiates a SaaS brainstorming workflow with start_chat_saas.
This setup ensures that the chat interface dynamically responds to the user's context and needs, providing a tailored interactive experience based on the selected chat domain.
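One thing to be aware of: start_chat_script and start_chat_saas are blocking calls inside an async handler, so the event loop is frozen while the agents run. Chainlit provides helpers for wrapping blocking calls (such as cl.make_async); with the standard library alone, the same idea looks like this (slow_task is an illustrative stand-in for the blocking chat functions):

```python
import asyncio
import time

def slow_task(message):
    # Stands in for a blocking call such as start_chat_script(message)
    time.sleep(0.1)
    return f"processed: {message}"

async def handle_message(message):
    # Run the blocking function in a worker thread so the
    # event loop stays responsive to other chat sessions.
    return await asyncio.to_thread(slow_task, message)

result = asyncio.run(handle_message("hello"))
print(result)  # processed: hello
```

Offloading the blocking work this way keeps the interface responsive when multiple users are chatting at once.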
You can now run the full application with:
chainlit run main.py
Practical Examples
Let's test your application with an example for the YouTube profile and another for the SaaS profile.
You can start the Chainlit application with:
chainlit run main.py
And then write the following in the YouTube Scriptwriting profile chat:
I need to create a YouTube Script that talks about the latest paper about GPT-4 on Arxiv and its potential applications in software.
You should see a similar result to this, as the AI personas communicate and organize a script:
<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/ObjVjH86hHU?si=wRKFZ3d6ULDFDLxY"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  allowfullscreen
></iframe>
Now you can test the SaaS Product Ideation profile with the following text in the chat (make sure to select this profile first):
I would like to build a simple website that collects feedback from consumers via forms. We can just use a Flask application that creates an HTML website with forms and has a single question if they liked their customer experience and then keeps that answer. I need a thank you HTML page once they complete the survey. I then need an HTML page called admin that gives a nice table layout of all of the records from the database. Just use SQLite3 as the database, keep it simple. Also, use Bootstrap for the CSS Styling.
You should see a similar result to this, as the AI personas communicate and organize a SaaS product:
<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/XvzcRv8_fe0?si=rd5ydAlz6PPk99Dt"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  allowfullscreen
></iframe>
As you might have noted, the AI personas can sometimes get stuck in a polite back-and-forth. Also, OpenAI enforces rate limits on consecutive requests. If you hit this limit, just wait a minute or two before trying again.
Deploy to Koyeb
Now that you have the application running locally, you can also deploy it on Koyeb and make it available on the Internet.
Create a repository on your GitHub account called CreativeAutoGen.
Then create a .gitignore file in your local directory to exclude some folders and files from being pushed to the repository:
# PyCharm files
.idea
# Python virtual environment
venv
# Environment variables
.env
# Autogen cache
.cache
# Chainlit autogenerated files
.chainlit
Run the following commands in your terminal to commit and push your code to the repository:
echo "# CreativeAutoGen" >> README.md
git init
git add .
git commit -m "first commit"
git branch -M main
git remote add origin [Your GitHub repository URL]
git push -u origin main
You should now have all your local code in your remote repository. Now it is time to deploy the application.
Within the Koyeb control panel, while on the Overview tab, initiate the app creation and deployment process by clicking Create App.
On the App deployment page:

- Select GitHub as your deployment method.
- Choose the repository where your code resides, for example, CreativeAutogen.
- Select Buildpack as your builder option.
- Click Build and deployment settings to configure your Run command: click the Override toggle and add the same command you used to run the application locally: chainlit run main.py.
- In the Instance selection, click "Free".
- Click Advanced to view additional settings.
- Click the Add Variable button to add your OpenAI API key, named OPENAI_API_KEY.
- Set the App name to your choice. Keep in mind it will be used to create the URL for your application.
- Finally, click the Deploy button.
Your application will start to deploy. After the deployment process is complete, you can access your app by clicking the application URL.
As we move forward from the foundational setup and usage of AutoGen with Chainlit, the journey into AI-enhanced creativity is far from over. The next phase involves exploring and implementing potential updates and enhancements to further refine and expand the capabilities of this integration.
Potential Updates: Integrating more advanced natural language processing capabilities to improve the contextual understanding and response accuracy of AI personas. Also, upgrading the system to support the latest versions of AI models as they evolve.
New AI Personas: Developing a diverse range of AI personas, such as a 'Marketing Strategist' for crafting marketing campaigns, a 'Technical Advisor' for in-depth tech guidance, or a 'Creative Writer' specializing in different literary genres. These personas would cater to a wider array of creative needs and industries.
Conclusion
In this tutorial, you learned how to integrate AutoGen with Chainlit to create dynamic AI personas and interact with them in a chat-style interface.
The potential of AutoGen opens a universe of possibilities in diverse creative fields.
We encourage you to apply AutoGen in your domain of expertise, discover unique solutions, and contribute to the ever-expanding landscape of AI-driven creativity.
If you have any questions or feedback, please reach out to us on the Koyeb Community or X/Twitter.