In this tutorial, I'll walk you through building a backend that generates AI-crafted emails, using LangGraph for structured workflows and FastAPI for the API endpoints.
📌 Prerequisites
1️⃣ Install Dependencies
Create a `requirements.txt` file and add these dependencies:
fastapi
uvicorn
pydantic
python-dotenv
google-generativeai
langgraph
langchain
Now install them by running the command below in a terminal inside your project folder:
pip install -r requirements.txt
📌 Setting Up the Backend
2️⃣ Create `backend.py` and Import Required Modules
from fastapi import FastAPI
from pydantic import BaseModel
import os
import google.generativeai as genai
from langgraph.graph import StateGraph, END
from dotenv import load_dotenv
What Do These Modules Do?
- FastAPI → the API framework.
- Pydantic → request validation.
- os → loads environment variables.
- google-generativeai → Gemini AI for email generation.
- langgraph → manages AI workflows.
- dotenv → loads API keys from a `.env` file.
3️⃣ Load Environment Variables & Initialize FastAPI
load_dotenv()
app = FastAPI()
GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")
genai.configure(api_key=GEMINI_API_KEY)
✅ What do these do?
- `.env` keeps API keys secure instead of hardcoding them (a sample file is shown below).
- `FastAPI()` initializes the backend application.
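For reference, a minimal `.env` file sits in the project root next to `backend.py` and contains just the key (the value below is a placeholder, use your own Gemini API key):

GEMINI_API_KEY=your_api_key_here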
📌 Defining the API
4️⃣ Request Model (Pydantic Validation)
class EmailRequest(BaseModel):
    tone: str
    ai_model: str
    language: str
    context: str
This ensures the API gets structured input (a sample request body is shown below):
- tone → email tone (e.g., formal, friendly).
- ai_model → which AI to use (`gemini` or `mistral`).
- language → language of the email (e.g., English, French).
- context → the purpose/content of the email.
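For example, a request body matching this model could look like this (the values are purely illustrative):

{
  "tone": "formal",
  "ai_model": "gemini",
  "language": "English",
  "context": "Request a meeting with the product team next week"
}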
5️⃣ AI Email Generation Function (Gemini)
def generate_email_gemini(language: str, tone: str, context: str):
    model = genai.GenerativeModel("gemini-2.0-flash")
    prompt = f"""
    Generate an email in {language} with a {tone} tone. Context: {context}
    Return the response in this format:
    Subject: //subject
    Body: //body
    """
    response = model.generate_content(prompt)
    response_text = response.text.strip()
    # Split the response into subject and body; fall back gracefully
    # if the model didn't follow the requested format.
    subject, body = response_text.split("Body:", 1) if "Body:" in response_text else ("No Subject", response_text)
    subject = subject.replace("Subject:", "").strip()
    body = body.strip()
    return {"subject": subject, "body": body}
✅ How It Works
- Calls Gemini using the `"gemini-2.0-flash"` model.
- Constructs a prompt for email generation.
- Parses the AI response and extracts the subject & body (see the example below).
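For instance, if Gemini follows the requested format and returns text like this (illustrative output):

Subject: Meeting Request
Body: Dear team, I would like to schedule a short sync next week...

the function would return `{"subject": "Meeting Request", "body": "Dear team, ..."}`.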
6️⃣ Structuring the Workflow with LangGraph
def generate_email_graph(ai_model: str, tone: str, language: str, context: str):
    def email_generation_fn(state):
        if ai_model == "gemini":
            email = generate_email_gemini(language, tone, context)
        else:
            email = "Invalid AI model selected!"
        return {"email": email}

    graph = StateGraph(dict)
    graph.add_node("generate_email", email_generation_fn)
    graph.set_entry_point("generate_email")
    graph.add_edge("generate_email", END)  # explicitly end the workflow after this node
    return graph.compile()
✅ Why Use LangGraph?
- Creates structured AI workflows.
- Makes it easy to expand functionality (e.g., adding post-processing), as the sketch below shows.
- Can be extended with multiple AI models (Mistral, GPT, etc.).
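As an illustration of that extensibility, here is a sketch of a variant of `generate_email_graph` that chains a second node after email generation; the `add_signature_fn` step and the signature text are hypothetical, not part of this tutorial's backend:

def generate_email_graph_with_signature(ai_model: str, tone: str, language: str, context: str):
    def email_generation_fn(state):
        if ai_model == "gemini":
            email = generate_email_gemini(language, tone, context)
        else:
            email = "Invalid AI model selected!"
        return {"email": email}

    def add_signature_fn(state):
        # Hypothetical post-processing: append a signature to the generated body.
        email = state["email"]
        if isinstance(email, dict):
            email["body"] = email["body"] + "\n\nBest regards,\nAutoCompose"
        return {"email": email}

    graph = StateGraph(dict)
    graph.add_node("generate_email", email_generation_fn)
    graph.add_node("add_signature", add_signature_fn)
    graph.set_entry_point("generate_email")
    graph.add_edge("generate_email", "add_signature")  # chain the post-processing step
    graph.add_edge("add_signature", END)
    return graph.compile()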
📌 FastAPI Routes
7️⃣ Root Endpoint
@app.get("/")
def read_root():
    return {"message": "Hello, AutoComposeBackend is live!"}
This simply confirms the server is running.
8️⃣ AI Email Generation API Endpoint
@app.post("/generate_email")
async def generate_email(request: EmailRequest):
    """Generate an AI-crafted email using Gemini."""
    graph = generate_email_graph(request.ai_model, request.tone, request.language, request.context)
    response = graph.invoke({})
    return response
✅ How It Works
- Receives input via a POST request (`EmailRequest`).
- Calls `generate_email_graph` to create a workflow.
- Executes the AI model and returns the email response (a quick test follows below).
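To try the endpoint end to end once the server is running (see the next section), here is a quick sketch using the `requests` library; the payload values and the local URL are assumptions for illustration, and `requests` needs to be installed separately with pip:

import requests

payload = {
    "tone": "formal",
    "ai_model": "gemini",
    "language": "English",
    "context": "Request a meeting with the product team next week",
}
# POST to the locally running FastAPI server started with uvicorn.
resp = requests.post("http://localhost:8080/generate_email", json=payload)
print(resp.json())  # e.g., {"email": {"subject": "...", "body": "..."}}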
📌 Running the Server
Save `backend.py` and run this in the terminal:
uvicorn backend:app --host 0.0.0.0 --port 8080
Now, visit:
➡ http://0.0.0.0:8080/docs to test the API using FastAPI's built-in Swagger UI or Postman.
📌 Deploying FastAPI on Railway.app
Now that our FastAPI backend is ready, let's deploy it on Railway.app, a cloud platform for hosting backend applications effortlessly.
1️⃣ Create a Railway Account & New Project
- Go to Railway.app and sign up.
- Click on "New Project" β "Deploy from GitHub".
- Connect your GitHub repository containing the FastAPI backend.
2️⃣ Add a `Procfile` for Deployment
Railway uses a `Procfile` to define how your app runs. Create a `Procfile` in your project root:
web: uvicorn backend:app --host 0.0.0.0 --port $PORT
✅ Why?
- `uvicorn backend:app` → starts the FastAPI server.
- `--host 0.0.0.0` → allows Railway to bind it to a public address.
- `--port $PORT` → uses the Railway-assigned port dynamically.
3️⃣ Add Environment Variables in Railway
Since we use API keys, they must be stored securely:
- In your Railway Project Dashboard, go to Settings → Variables.
- Add the environment variable:
  - `GEMINI_API_KEY = your_api_key_here`
4️⃣ Deploy the App on Railway
- Click on Deploy.
- Wait for the deployment to complete. Once done, you'll get a public URL (e.g., https://your-app.up.railway.app).
Note: If you do not get a domain automatically, create a public domain manually from the project's networking section.
5️⃣ Test Your Deployed API
Open:
➡ https://your-app.up.railway.app/docs
Here, you can test the API endpoints using FastAPI's built-in Swagger UI or Postman.
🎯 Done! Your FastAPI Backend is Live! 🚀
You now have a FastAPI backend running on Railway.app with LangGraph-powered AI email generation. 💡
In the next tutorial, I'll integrate this FastAPI-LangGraph powered backend with a Jetpack Compose frontend! 🚀