Supercharge your AI app deployment with Docker, FastAPI, and LangChain in one seamless containerized pipeline.
🧠 Overview
As AI-powered apps become more complex, managing dependencies, serving endpoints, and ensuring smooth deployment are top priorities. In this post, you'll learn how to dockerize a LangChain agent wrapped with FastAPI, giving you a ready-to-deploy, production-friendly container for your intelligent applications.
By the end, you'll:
- Create a LangChain agent
- Wrap it with FastAPI for a clean REST interface
- Dockerize the entire setup
- Run it anywhere with just one command
📦 Prerequisites
Before you begin, make sure you have:
- Docker installed
- Python 3.10+ (for local testing)
- An OpenAI API Key or any LLM key supported by LangChain
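Both local tools are easy to verify from a terminal:

```bash
docker --version     # confirm Docker is installed
python3 --version    # confirm Python 3.10+ is available for local testing
```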
📁 Project Structure
```
langchain-agent-api/
├── agent_app/
│   ├── main.py
│   └── agent.py
├── requirements.txt
├── Dockerfile
└── .env
```
✨ Step 1: Create the LangChain Agent
agent_app/agent.py
```python
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
import os

def create_agent():
    # Deterministic completions; the key is read from the environment
    llm = OpenAI(temperature=0, openai_api_key=os.getenv("OPENAI_API_KEY"))
    # SerpAPI-backed search tool (requires SERPAPI_API_KEY in the environment)
    search = SerpAPIWrapper()
    tools = [Tool(name="Search", func=search.run, description="Useful for answering general questions.")]
    # Zero-shot ReAct agent that decides when to call the Search tool
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
    return agent
```
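Before containerizing anything, it's worth a quick local smoke test. A minimal sketch (the file name quick_test.py and the sample question are just illustrations), run from the project root with your API keys exported:

```python
# quick_test.py — hypothetical local smoke test, run from the project root
from agent_app.agent import create_agent

agent = create_agent()
print(agent.run("What year was the Eiffel Tower completed?"))
```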
🚀 Step 2: Wrap with FastAPI
agent_app/main.py
```python
from fastapi import FastAPI
from pydantic import BaseModel

# Absolute import so "uvicorn agent_app.main:app" can resolve the module
from agent_app.agent import create_agent

app = FastAPI()
agent = create_agent()  # build the agent once at startup

class Query(BaseModel):
    question: str

@app.post("/ask")
async def ask_question(query: Query):
    # agent.run is synchronous, so this call blocks until the agent finishes
    response = agent.run(query.question)
    return {"response": response}
```
📝 Step 3: Define Requirements
requirements.txt
```
fastapi
uvicorn
langchain
openai
python-dotenv
```
💡 Add google-search-results (the package behind LangChain's SerpAPIWrapper) or other tool packages as needed.
🛠️ Step 4: Dockerfile
Dockerfile
```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY agent_app ./agent_app

# Secrets are injected at runtime via --env-file, not baked into the image
CMD ["uvicorn", "agent_app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```
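A .dockerignore alongside the Dockerfile keeps secrets and local clutter out of the build context entirely; a minimal sketch:

```
.env
__pycache__/
*.pyc
.git/
```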
🔐 Step 5: Add Environment Variables
.env
```
OPENAI_API_KEY=your_openai_key_here
SERPAPI_API_KEY=your_serpapi_key_here
```
⚠️ Never commit .env to public repos. Use Docker secrets or CI/CD environment variables in production.
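Inside the container, docker run --env-file (used below) injects these variables directly. For local runs outside Docker, python-dotenv from requirements.txt can load the same file; a small sketch:

```python
# local development only: read .env into the process environment
import os
from dotenv import load_dotenv

load_dotenv()  # looks for a .env file in the current working directory
print(os.getenv("OPENAI_API_KEY") is not None)  # True if the key was loaded
```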
🧪 Step 6: Build and Run
🧱 Build the Docker image
```bash
docker build -t langchain-agent-api .
```
🏃 Run the container
```bash
docker run --env-file .env -p 8000:8000 langchain-agent-api
```
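To keep your terminal free, the same container can also run detached, with logs tailed separately (the container name langchain-agent is just an example):

```bash
docker run -d --name langchain-agent --env-file .env -p 8000:8000 langchain-agent-api
docker logs -f langchain-agent
```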
💬 Try It Out
Once running, test your agent with:
```bash
curl -X POST http://localhost:8000/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "Who is the CEO of OpenAI?"}'
```
Your containerized LangChain agent should reply in seconds! 🤖
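The same request from Python, sketched with the requests library (installed separately; it isn't in requirements.txt):

```python
import requests

resp = requests.post(
    "http://localhost:8000/ask",
    json={"question": "Who is the CEO of OpenAI?"},
)
print(resp.json()["response"])
```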
📦 Bonus: Add Docker Compose (Optional)
docker-compose.yml
```yaml
version: "3.8"
services:
  langchain:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
```
Then run:
```bash
docker-compose up --build
```
🎉 Final Thoughts
You now have a production-ready, containerized LangChain agent served via FastAPI. Whether you're building internal AI tools or deploying to the cloud, this setup gives you repeatability, portability, and power.
Top comments (2)
Great article! I have just one note: the run method in agent.run was deprecated in LangChain version 0.1.0 and replaced with invoke. Also, both run and invoke are blocking methods, so since you're using the async interface, you'll need to call the agent with ainvoke.
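Roughly, the non-blocking version of the endpoint would look like this (a sketch, assuming a LangChain version where ainvoke is available):

```python
@app.post("/ask")
async def ask_question(query: Query):
    # ainvoke keeps the event loop free while the agent works
    result = await agent.ainvoke({"input": query.question})
    return {"response": result["output"]}
```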
Hey, thanks for the appreciation and input. Indeed, you are correct. Will surely make future articles using asynchronous invoke.