How to use LangGraph within a FastAPI Backend 🚀

In this tutorial, I’ll walk you through how to build a backend to generate AI-crafted emails using LangGraph for structured workflows and FastAPI for API endpoints.


📌 Prerequisites

1️⃣ Install Dependencies

Create a requirements.txt file and add these dependencies:

fastapi
uvicorn
pydantic
python-dotenv
google-generativeai
langgraph
langchain

Now, install them by running the following command in a terminal inside your project folder:

pip install -r requirements.txt
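
If you prefer to keep things isolated, the same install also works inside a virtual environment (standard Python tooling, nothing specific to this project):

python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -r requirements.txt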

📌 Setting Up the Backend

2️⃣ Create backend.py and Import Required Modules

from fastapi import FastAPI
from pydantic import BaseModel
import os
import google.generativeai as genai
import langgraph
from langgraph.graph import StateGraph
from dotenv import load_dotenv

What Do These Modules Do?

  • FastAPI → API framework
  • Pydantic → Request validation
  • os → Read environment variables (e.g., the API key)
  • google-generativeai → Gemini AI for email generation
  • langgraph → Manages AI workflows
  • dotenv → Loads API keys from a .env file

3️⃣ Load Environment Variables & Initialize FastAPI

load_dotenv()
app = FastAPI()

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
genai.configure(api_key=GEMINI_API_KEY)

✅ What do these do?

  • .env helps keep API keys secure instead of hardcoding them (a sample file is shown below).
  • FastAPI() initializes the backend application.
  • genai.configure(...) registers the Gemini API key with the client library.
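
For reference, a minimal .env file in the project root could look like this (the value is a placeholder for your own key, and the variable name must match what os.getenv reads):

GEMINI_API_KEY=your_api_key_here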

📌 Defining the API

4️⃣ Request Model (Pydantic Validation)

class EmailRequest(BaseModel):
    tone: str      # e.g., "formal" or "friendly"
    ai_model: str  # which AI backend to use, e.g., "gemini"
    language: str  # e.g., "English", "French"
    context: str   # what the email should be about

This ensures the API gets structured input (an example request body is shown after the list):

  • tone → Email tone (e.g., formal, friendly).
  • ai_model → Which AI to use (gemini or mistral).
  • language → Language of the email (e.g., English, French).
  • context → The purpose/content of the email.
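
As an illustration, a request body that this model accepts could look like this (all values are just examples):

{
  "tone": "formal",
  "ai_model": "gemini",
  "language": "English",
  "context": "Request a meeting with the design team next Tuesday."
}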

5️⃣ AI Email Generation Function (Gemini)

def generate_email_gemini(language: str, tone: str, context: str):
    model = genai.GenerativeModel("gemini-2.0-flash")

    # Ask Gemini for a subject and body in a fixed, easy-to-parse format.
    prompt = f"""
    Generate an email in {language} with a {tone} tone. Context: {context}
    Return the response in this format:
    Subject: //subject
    Body: //body
    """

    response = model.generate_content(prompt)
    response_text = response.text.strip()

    # Split the reply into subject and body; fall back gracefully if the
    # model did not follow the requested format.
    subject, body = response_text.split("Body:", 1) if "Body:" in response_text else ("No Subject", response_text)
    subject = subject.replace("Subject:", "").strip()
    body = body.strip()

    return {"subject": subject, "body": body}

✅ How It Works

  1. Calls Gemini AI using "gemini-2.0-flash".
  2. Constructs a prompt for email generation.
  3. Parses the AI response and extracts the subject & body (a short example of this step follows below).
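
To make the parsing step concrete, here is what it does to a hypothetical model reply that follows the requested format (the text is made up; real output depends on Gemini):

sample = "Subject: Meeting request\nBody: Hi team, could we meet next Tuesday?"

subject, body = sample.split("Body:", 1)
subject = subject.replace("Subject:", "").strip()  # "Meeting request"
body = body.strip()                                # "Hi team, could we meet next Tuesday?"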

6️⃣ Structuring the Workflow with LangGraph

def generate_email_graph(ai_model: str, tone: str, language: str, context: str):
    def email_generation_fn(state):
        # Single workflow node: pick the requested model and generate the email.
        if ai_model == "gemini":
            email = generate_email_gemini(language, tone, context)
        else:
            email = "Invalid AI model selected!"
        return {"email": email}

    # Build a one-node graph over a plain dict state.
    graph = StateGraph(dict)
    graph.add_node("generate_email", email_generation_fn)
    graph.set_entry_point("generate_email")
    graph.set_finish_point("generate_email")  # the workflow ends after this node

    return graph.compile()

✅ Why Use LangGraph?

  • Creates structured AI workflows.
  • Makes it easy to expand functionalities (e.g., adding a post-processing step, as sketched below).
  • Can be extended with multiple AI models (Mistral, GPT, etc.).
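
As a rough sketch of that extensibility (not part of the backend above, and keeping the same plain-dict state used in this post), a post-processing node could be chained after the generation step. The append_signature_fn name and its behavior are purely illustrative:

from langgraph.graph import StateGraph, END

def email_generation_fn(state):
    # Stub standing in for the real Gemini call shown earlier.
    return {"email": {"subject": "Hello", "body": "Draft body"}}

def append_signature_fn(state):
    # Hypothetical post-processing node: append a sign-off to the generated body.
    email = state["email"]
    email["body"] = email["body"] + "\n\nBest regards,\nAutoCompose"
    return {"email": email}

graph = StateGraph(dict)
graph.add_node("generate_email", email_generation_fn)
graph.add_node("append_signature", append_signature_fn)
graph.set_entry_point("generate_email")
graph.add_edge("generate_email", "append_signature")
graph.add_edge("append_signature", END)

print(graph.compile().invoke({}))  # final state, e.g. {"email": {"subject": ..., "body": ...}}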

📌 FastAPI Routes

7️⃣ Root Endpoint

@app.get("/")
def read_root():
    return {"message": "Hello, AutoComposeBackend is live!"}

This simply confirms the server is running.
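
Once the server is running (the next steps show how), a quick sanity check from the terminal looks like this:

curl http://localhost:8080/
# {"message": "Hello, AutoComposeBackend is live!"}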


8️⃣ AI Email Generation API Endpoint

@app.post("/generate_email")
async def generate_email(request: EmailRequest):
    """Generate an AI-crafted email using Gemini."""
    # Build the LangGraph workflow for this request and run it with an empty initial state.
    graph = generate_email_graph(request.ai_model, request.tone, request.language, request.context)
    response = graph.invoke({})
    return response

✅ How It Works

  1. Receives input via POST request (EmailRequest).
  2. Calls generate_email_graph to create a workflow.
  3. Executes the AI model and returns the email response.

📌 Running the Server

Save backend.py and run it from the terminal:

uvicorn backend:app --host 0.0.0.0 --port 8080                   

Now, visit:

➑ http://localhost:8080/docs to test the API using FastAPI’s built-in Swagger UI, or use Postman.
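
Alternatively, the generation endpoint can be exercised from the command line; the payload values are only examples:

curl -X POST http://localhost:8080/generate_email \
  -H "Content-Type: application/json" \
  -d '{"tone": "formal", "ai_model": "gemini", "language": "English", "context": "Request a meeting with the design team next Tuesday."}'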


🚀 Deploying FastAPI on Railway.app

Now that our FastAPI backend is ready, let's deploy it on Railway.app, a cloud platform for hosting backend applications effortlessly.


1️⃣ Create a Railway Account & New Project

  1. Go to Railway.app and sign up.
  2. Click on "New Project" β†’ "Deploy from GitHub".
  3. Connect your GitHub repository containing the FastAPI backend.

2️⃣ Add a Procfile for Deployment

Railway uses a Procfile to define how your app runs. Create a Procfile in your project root:

web: uvicorn backend:app --host 0.0.0.0 --port $PORT

✅ Why?

  • uvicorn backend:app → Starts the FastAPI server.
  • --host 0.0.0.0 → Makes the server listen on all interfaces so Railway can route public traffic to it.
  • --port $PORT → Uses the Railway-assigned port dynamically.

3️⃣ Add Environment Variables in Railway

Since we use API keys, they must be stored securely:

  1. In your Railway Project Dashboard, go to Settings → Variables.
  2. Add environment variables:
    • GEMINI_API_KEY = your_api_key_here

4️⃣ Deploy the App on Railway

  1. Click on Deploy.
  2. Wait for the deployment to complete. Once done, you'll get a public URL (e.g., https://your-app.up.railway.app).

Note: If a domain is not generated automatically, create a public domain manually from the project's Networking section.


5️⃣ Test Your Deployed API

Open:

➑ https://your-app.up.railway.app/docs

Here, you can test the API endpoints using FastAPI's built-in Swagger UI or with Postman.


🎯 Done! Your FastAPI Backend is Live! 🚀

You now have a FastAPI backend running on Railway.app with LangGraph-powered AI email generation. 💡


In the next tutorial, I’ll integrate this FastAPI + LangGraph backend with a Jetpack Compose frontend! 🚀
