Chandrani Mukherjee

From Prompt to Production: Dockerizing a LangChain Agent with FastAPI

Supercharge your AI app deployment with Docker, FastAPI, and LangChain in one seamless containerized pipeline.

🧠 Overview

As AI-powered apps become more complex, managing dependencies, serving endpoints, and ensuring smooth deployment are top priorities. In this post, you'll learn how to dockerize a LangChain agent wrapped with FastAPI, giving you a ready-to-deploy, production-friendly container for your intelligent applications.

By the end, you'll:

  • Create a LangChain agent
  • Wrap it with FastAPI for a clean REST interface
  • Dockerize the entire setup
  • Run it anywhere with just one command

📦 Prerequisites

Before you begin, make sure you have:

  • Docker installed
  • Python 3.10+ (for local testing)
  • An OpenAI API Key or any LLM key supported by LangChain

📁 Project Structure

```text
langchain-agent-api/
├── agent_app/
│   ├── main.py
│   └── agent.py
├── requirements.txt
├── Dockerfile
└── .env
```

✨ Step 1: Create the LangChain Agent

agent_app/agent.py

```python
import os

from dotenv import load_dotenv
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper

load_dotenv()  # loads OPENAI_API_KEY / SERPAPI_API_KEY from .env during local runs

def create_agent():
    # temperature=0 keeps the LLM's answers deterministic
    llm = OpenAI(temperature=0, openai_api_key=os.getenv("OPENAI_API_KEY"))
    search = SerpAPIWrapper()  # reads SERPAPI_API_KEY from the environment
    tools = [Tool(name="Search", func=search.run, description="Useful for answering general questions.")]
    # Zero-shot ReAct agent: picks a tool based solely on its description
    return initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
```
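
Before wiring the agent into an API, you can smoke-test it from the project root (a quick local check; quick_test.py is a hypothetical scratch file, not part of the project layout above):

```python
# quick_test.py -- hypothetical smoke test, assumes your keys are in .env
from agent_app.agent import create_agent

agent = create_agent()
print(agent.run("What year was Docker first released?"))
```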

🚀 Step 2: Wrap with FastAPI

agent_app/main.py

```python
from fastapi import FastAPI
from pydantic import BaseModel

from agent_app.agent import create_agent  # absolute import so `uvicorn agent_app.main:app` can resolve it

app = FastAPI()
agent = create_agent()  # build the agent once at startup and reuse it across requests

class Query(BaseModel):
    question: str

@app.post("/ask")
async def ask_question(query: Query):
    response = agent.run(query.question)
    return {"response": response}
```
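
One caveat: agent.run is a blocking call, so a slow agent step will stall the async event loop. A minimal workaround sketch using Starlette's run_in_threadpool (an alternative endpoint for illustration, not part of the original app):

```python
from starlette.concurrency import run_in_threadpool

@app.post("/ask-threaded")
async def ask_question_threaded(query: Query):
    # Run the blocking agent call in a worker thread so the event loop stays responsive
    response = await run_in_threadpool(agent.run, query.question)
    return {"response": response}
```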

📄 Step 3: Define Requirements

requirements.txt

```text
fastapi
uvicorn
langchain<0.1   # the initialize_agent / agent.run API used here predates the 0.1 deprecations
openai<1        # SDK generation that matches this LangChain range
python-dotenv
google-search-results   # required by SerpAPIWrapper
```

💡 SerpAPIWrapper depends on the google-search-results package (included above); add other tool dependencies as needed.
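
To try the API locally before containerizing it (a sketch, assuming a fresh virtual environment in the project root):

```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
uvicorn agent_app.main:app --reload
```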

🛠️ Step 4: Dockerfile

Dockerfile

```dockerfile
FROM python:3.10-slim

WORKDIR /app

# Copy and install dependencies first so Docker caches this layer between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY agent_app ./agent_app

# .env is deliberately not baked into the image; secrets are injected at
# runtime with `docker run --env-file .env` (see Step 6)
CMD ["uvicorn", "agent_app.main:app", "--host", "0.0.0.0", "--port", "8000"]
```
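
It is also worth adding a .dockerignore so the build context stays small and .env can never leak into an image layer (a minimal sketch):

```text
.env
.git/
__pycache__/
*.pyc
```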

🔑 Step 5: Add Environment Variables

.env

```text
OPENAI_API_KEY=your_openai_key_here
SERPAPI_API_KEY=your_serpapi_key_here
```

⚠️ Never commit .env to public repos. Use Docker secrets or CI/CD environment variables in production.

🧪 Step 6: Build and Run

🧱 Build the Docker image

```bash
docker build -t langchain-agent-api .
```

🚀 Run the container

```bash
docker run --env-file .env -p 8000:8000 langchain-agent-api
```

📬 Try It Out

Once running, test your agent with:

```bash
curl -X POST http://localhost:8000/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "Who is the CEO of OpenAI?"}'
```

Your containerized LangChain agent should reply in seconds! 🤖
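
If you'd rather call the endpoint from Python, say in a test script, here is a minimal sketch using the requests library (not in requirements.txt; install it separately):

```python
import requests

resp = requests.post(
    "http://localhost:8000/ask",
    json={"question": "Who is the CEO of OpenAI?"},
    timeout=60,  # agent runs can take a while
)
resp.raise_for_status()
print(resp.json()["response"])
```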

📦 Bonus: Add Docker Compose (Optional)

docker-compose.yml

```yaml
version: "3.8"
services:
  langchain:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
```

Then run:

```bash
docker-compose up --build
```

🏁 Final Thoughts

You now have a production-ready, containerized LangChain agent served via FastAPI. Whether you're building internal AI tools or deploying to the cloud, this setup gives you repeatability, portability, and power.

Top comments (2)

di (@dmitriiweb):

Great article! I have just one note: the run method in agent.run was deprecated in LangChain version 0.1.0 and replaced with invoke. Also, both run and invoke are blocking methods, so since you're using the async interface, you'll need to call the agent with ainvoke.

Chandrani Mukherjee (@moni121189):

Hey, thanks for the appreciation and input. Indeed, you are correct. I will surely write future articles using asynchronous invoke.
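
For readers on LangChain 0.1 or later, the commenter's suggestion would look roughly like this (a sketch, assuming initialize_agent returns the newer Runnable-style AgentExecutor):

```python
@app.post("/ask")
async def ask_question(query: Query):
    # LangChain >= 0.1: ainvoke is the async replacement for the deprecated run
    result = await agent.ainvoke({"input": query.question})
    return {"response": result["output"]}
```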