AI agents are becoming an essential part of modern applications — whether it’s customer support, workflow automation, or intelligent assistants that reason and act.
In this post, we’ll build a simple AI agent API using FastAPI and OpenAI, showing how to structure it for real-world use.
## 🧠 What We’re Building
We’ll create a FastAPI app that:
- Accepts a user query (e.g., “Summarize this article” or “Get today’s top tech news”).
- Sends it to the OpenAI API.
- Returns a structured AI-generated response.
You can later expand this into a multi-agent system or connect it with n8n, Zapier, or LangChain.
## ⚙️ Setup

Requirements:

```shell
pip install fastapi uvicorn openai python-dotenv
```
Create a `.env` file with your OpenAI API key:

```
OPENAI_API_KEY=your_openai_api_key_here
```
## 🧩 The Code
`main.py`:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from openai import OpenAI
import os
from dotenv import load_dotenv

load_dotenv()

app = FastAPI()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


class Query(BaseModel):
    message: str


@app.post("/agent")
def run_agent(query: Query):
    try:
        completion = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You are a helpful AI agent."},
                {"role": "user", "content": query.message},
            ],
        )
        return {"response": completion.choices[0].message.content}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```
Run it with:

```shell
uvicorn main:app --reload
```
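One thing you get for free here: because the request body is declared as a Pydantic model, FastAPI validates input before your handler runs, and malformed payloads come back as a 422 without ever touching the OpenAI client. A quick standalone sketch of that validation behavior, reusing the same `Query` model:

```python
# Sketch: how FastAPI's request validation behaves, shown with the
# same Pydantic model the endpoint uses.
from pydantic import BaseModel, ValidationError

class Query(BaseModel):
    message: str

# A well-formed payload parses cleanly.
q = Query(message="Summarize this article")
assert q.message == "Summarize this article"

# A payload missing "message" raises ValidationError --
# FastAPI converts this into a 422 response for the client.
try:
    Query()
except ValidationError as e:
    print("rejected, missing field at:", e.errors()[0]["loc"])
```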
Now send a POST request:

```shell
curl -X POST http://127.0.0.1:8000/agent \
  -H "Content-Type: application/json" \
  -d '{"message": "Write a haiku about FastAPI"}'
```
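If you'd rather call the endpoint from Python than curl, a minimal client needs only the standard library. This is a sketch, not part of the original post; it assumes the server above is running locally on port 8000:

```python
# Minimal Python client for the /agent endpoint (stdlib only).
import json
import urllib.request

def ask_agent(message: str, url: str = "http://127.0.0.1:8000/agent") -> str:
    payload = json.dumps({"message": message}).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (with the server running):
# print(ask_agent("Write a haiku about FastAPI"))
```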
## ⚡ Example Response

```json
{
  "response": "FastAPI flies,\nCode flows with lightning speed bright,\nPython dreams take flight."
}
```
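A common next step before reaching for LangChain is multi-turn memory: let the client send prior turns with each request, and splice them into the `messages` list. A hedged sketch of how the request model might grow (the `Turn` model and `history` field are my own naming, not from the post):

```python
# Sketch: extending the request schema for multi-turn conversations.
# "Turn" and "history" are illustrative names, not part of the original API.
from pydantic import BaseModel, Field

class Turn(BaseModel):
    role: str      # "user" or "assistant"
    content: str

class Query(BaseModel):
    message: str
    history: list[Turn] = Field(default_factory=list)  # prior turns, oldest first

# The handler would splice the history between the system prompt
# and the newest user message:
q = Query(
    message="And in German?",
    history=[
        Turn(role="user", content="Say hello"),
        Turn(role="assistant", content="Hello!"),
    ],
)
messages = (
    [{"role": "system", "content": "You are a helpful AI agent."}]
    + [{"role": t.role, "content": t.content} for t in q.history]
    + [{"role": "user", "content": q.message}]
)
assert len(messages) == 4
```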
## 💡 Takeaway
With FastAPI + OpenAI, you can spin up a working AI agent endpoint in minutes, then grow it into something much smarter.
If you’re building something similar (like custom AI workflows or real-time assistants), drop a comment or connect — I’d love to hear your approach! 👇