Giorgio Boa
Reason, Act, Remember: Advanced AI with Microsoft Agent Framework

Artificial Intelligence has moved beyond simple chatbots and into the realm of intelligent agents. These agents are not just passive responders; they are capable of reasoning, acting, and remembering. The Microsoft Agent Framework provides a comprehensive set of tools and libraries that empower developers to build these sophisticated AI systems.

The first step in this journey is installing the dependency.

pip install agent-framework --pre

Now that we have the right package, let's start by understanding what an agent actually is. In the simplest terms, an agent is a piece of software that uses a Large Language Model (LLM) to understand and generate human language.

The framework makes it incredibly easy to bring such an entity to life. With just a few lines of code, a developer can connect to Azure's powerful AI infrastructure. You can give your agent a name and a set of instructions. For instance, you might tell it to be a friendly assistant or a technical expert. The framework handles all the complex communication with the AI model, allowing you to focus on what you want the agent to do.

You can interact with this agent in two ways:

  • ask a question and wait for the complete answer, which works well for short queries
  • use streaming for longer responses: the agent "types" the answer out in real time, making the experience feel more interactive and responsive, like a human typing a message

import os
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential

# Create an agent
credential = AzureCliCredential()
client = AzureOpenAIResponsesClient(
    project_endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    deployment_name=os.environ["AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME"],
    credential=credential,
)

agent = client.as_agent(
    name="HelloAgent",
    instructions="You are a friendly assistant. Keep your answers brief.",
)

# Run the agent (non-streaming). Note: `await` must be used inside an
# async function (for example, one driven by asyncio.run).
result = await agent.run("What is the capital of France?")
print(f"Agent: {result}")

# Run agent (streaming)
print("Agent (streaming): ", end="", flush=True)
async for chunk in agent.run("Tell me a one-sentence fun fact.", stream=True):
    if chunk.text:
        print(chunk.text, end="", flush=True)
print()
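To make the streaming model concrete, here is a framework-free sketch (the `fake_stream` generator is hypothetical, standing in for the agent's response stream): an async generator yields text chunks as they arrive, and the caller consumes and prints them incrementally.

```python
import asyncio

# Hypothetical stand-in for an agent's streaming response: an async
# generator that yields text chunks one at a time.
async def fake_stream(text: str):
    for word in text.split():
        await asyncio.sleep(0)  # simulate network latency between chunks
        yield word + " "

async def main() -> str:
    chunks = []
    # Consume the stream chunk by chunk, just like `async for` in the agent code.
    async for chunk in fake_stream("Paris is the capital of France."):
        chunks.append(chunk)
    return "".join(chunks).strip()

result = asyncio.run(main())
print(result)
```

The same consumption pattern applies to the agent's streaming loop above: each chunk can be rendered as soon as it arrives instead of waiting for the full answer.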

Expanding Capabilities with Tools

However, an AI that can only talk is limited. The real magic happens when you give your agent the ability to interact with the world. This is where the concept of "Tools" comes into play.

Imagine you want your agent to know the current weather. A standard language model only knows what it was trained on and cannot know whether it is raining outside right now. You can define standard Python functions and attach them to your agent.

You could write a simple function that checks a weather API. When you ask the agent about the weather, it analyses your request and intelligently decides to call the function you provided. It then takes the result from that function and uses it to formulate a natural language response.

from typing import Annotated
from agent_framework import tool
from pydantic import Field
from random import randint

@tool(approval_mode="never_require")
def get_weather(
    location: Annotated[str, Field(description="The location to get the weather for.")],
) -> str:
    """Get the weather for a given location."""
    conditions = ["sunny", "cloudy", "rainy", "stormy"]
    return f"The weather in {location} is {conditions[randint(0, 3)]} with a high of {randint(10, 30)}°C."

# Create agent with tools
agent = client.as_agent(
    name="WeatherAgent",
    instructions="You are a helpful weather agent. Use the get_weather tool to answer questions.",
    tools=get_weather,  # register the tool here
)

result = await agent.run("What's the weather like in Seattle?")
print(f"Agent: {result}")
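It can help to see what the framework is doing for you behind the scenes. Below is a framework-free sketch of the tool-dispatch step (the `handle_model_output` helper and the mock output dictionary are hypothetical, for illustration only): the model emits a structured tool call, the runtime looks up the matching Python function, executes it, and hands the result back for the final natural-language answer.

```python
import random

def get_weather(location: str) -> str:
    """A plain Python function acting as a tool."""
    conditions = ["sunny", "cloudy", "rainy", "stormy"]
    return f"The weather in {location} is {random.choice(conditions)}."

# Registry mapping tool names to callables, as a framework would maintain.
TOOLS = {"get_weather": get_weather}

def handle_model_output(output: dict) -> str:
    """Dispatch a (mock) model tool call to the matching Python function."""
    if output.get("type") == "tool_call":
        fn = TOOLS[output["name"]]
        return fn(**output["arguments"])
    return output["text"]

# Pretend the model decided to call the tool with these arguments.
mock_output = {
    "type": "tool_call",
    "name": "get_weather",
    "arguments": {"location": "Seattle"},
}
result = handle_model_output(mock_output)
print(result)
```

In the real framework this loop is handled for you: the function's signature and docstring are sent to the model, and the model's structured tool call is dispatched automatically.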

Multi-Turn Interactions with Sessions

If you tell an agent your name at the start of a chat, you expect it to remember that name five minutes later. The framework solves this through "Sessions".

A session acts as a container for the conversation history. When you talk to the agent within a session, the framework automatically keeps track of what has been said.

This allows for multi-turn conversations where the agent maintains context. This creates an experience that feels personal and coherent rather than robotic.

# Create a session to maintain conversation history
session = agent.create_session()

# First turn
result = await agent.run("My name is Alice and I love hiking.", session=session)
print(f"Agent: {result}\n")

# Second turn — the agent should remember the user's name and hobby
result = await agent.run("What do you remember about me?", session=session)
print(f"Agent: {result}")
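To demystify what a session actually stores, here is a framework-free sketch (the `run_turn` helper and message format are hypothetical): a session is essentially the running message history, and the full history is resent to the model on every turn so it "remembers" earlier messages.

```python
# The session's core: a running list of messages in the conversation.
session_history = []

def run_turn(user_message: str) -> list:
    """Append the user message, produce a (mock) reply, return the history."""
    session_history.append({"role": "user", "content": user_message})
    # A real implementation would call the model here, passing the
    # entire history as context for the next reply.
    reply = {"role": "assistant", "content": f"(reply to: {user_message})"}
    session_history.append(reply)
    return session_history

run_turn("My name is Alice and I love hiking.")
history = run_turn("What do you remember about me?")
print(len(history))  # 4 messages: both turns are visible to the model
```

Because the first turn stays in the history, the model can answer the second question using facts from the first.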

The Microsoft Agent Framework provides a comprehensive toolkit for building intelligent AI agents that reason, act, and remember. Key features include core agent interaction via LLMs, dynamic tool integration for real-world tasks, and session-based conversation persistence. With these building blocks, the journey of building an AI agent is both accessible and powerful.


You can follow me on GitHub, where I'm creating cool projects.

I hope you enjoyed this article, until next time 👋
