Step-by-step guide to integrating frontend and backend for production ML apps
In the AI era, it’s no longer enough to train great models; you need to deploy them in apps people can actually use.
This guide walks you through building scalable, real-world AI applications using React (or Next.js) on the frontend and FastAPI on the backend, a powerful stack for shipping intelligent tools quickly and effectively.
Architecture Overview
```
[React / Next.js]  <--->  [FastAPI Backend]  <--->  [ML Model / LLM API]
  User Interface           Business Logic + Inference
```
Tech Stack
- Frontend: React.js or Next.js
- Backend: FastAPI (Python, async-ready)
- Model Serving: Custom models, Hugging Face, or OpenAI APIs
- Database (optional): Supabase, PostgreSQL, MongoDB
- Deployment: Vercel (frontend) + Render / Railway / Fly.io (backend)
Step-by-Step Guide
Set Up Your FastAPI Backend
```bash
mkdir ml-backend && cd ml-backend
python -m venv venv && source venv/bin/activate
pip install fastapi uvicorn
```
main.py:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class TextInput(BaseModel):
    text: str

@app.post("/api/analyze/")
def analyze_text(payload: TextInput):
    result = {"length": len(payload.text), "uppercase": payload.text.upper()}
    return result
```
Run it:
```bash
uvicorn main:app --reload
```
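Before wiring up the frontend, it helps to see the exact request/response contract. Here is a plain-Python mirror of the handler's logic (no server required; the input string is just an example):

```python
import json

def analyze(text: str) -> dict:
    # Same logic as the /api/analyze/ handler above
    return {"length": len(text), "uppercase": text.upper()}

# What the frontend will POST, and what the endpoint sends back:
request_body = json.dumps({"text": "hello"})
response = analyze(json.loads(request_body)["text"])
print(response)  # {'length': 5, 'uppercase': 'HELLO'}
```

The frontend only needs to agree on these two JSON shapes; everything else about the backend can change freely.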
Create the React Frontend
```bash
npx create-react-app ml-frontend
cd ml-frontend
npm install axios
```
App.js:

```jsx
import { useState } from "react";
import axios from "axios";

function App() {
  const [text, setText] = useState("");
  const [result, setResult] = useState(null);

  const handleSubmit = async () => {
    const res = await axios.post("http://localhost:8000/api/analyze/", { text });
    setResult(res.data);
  };

  return (
    <div style={{ padding: "2rem" }}>
      <h2>Text Analyzer</h2>
      <textarea value={text} onChange={(e) => setText(e.target.value)} />
      <br />
      <button onClick={handleSubmit}>Analyze</button>
      {result && (
        <div>
          <p>Uppercase: {result.uppercase}</p>
          <p>Length: {result.length}</p>
        </div>
      )}
    </div>
  );
}

export default App;
```
Swap in Your ML Model
Let’s plug in a real Hugging Face sentiment classifier:
```bash
pip install transformers
```
```python
from transformers import pipeline

# Load the model once at startup, not on every request
classifier = pipeline("sentiment-analysis")

@app.post("/api/analyze/")
def analyze_text(payload: TextInput):
    result = classifier(payload.text)
    return {"label": result[0]["label"], "score": result[0]["score"]}
```
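For reference, the sentiment pipeline returns a list with one dict per input, which the endpoint flattens into a simple JSON object for the frontend. A sketch with illustrative values (actual labels and scores depend on the model and input):

```python
# Illustrative output from classifier(payload.text) -- a list of dicts
raw = [{"label": "POSITIVE", "score": 0.9991}]

# The endpoint reduces it to the fields the frontend needs
flattened = {"label": raw[0]["label"], "score": raw[0]["score"]}
print(flattened)  # {'label': 'POSITIVE', 'score': 0.9991}
```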
Deployment Tips
- Frontend: Deploy to https://vercel.com
- Backend: Use https://render.com or https://railway.app
Enable CORS in FastAPI (CORSMiddleware ships with FastAPI itself, so no extra install is needed):

```python
from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    # Browsers reject a wildcard origin when allow_credentials=True,
    # so list your actual frontend origins instead of "*".
    allow_origins=["http://localhost:3000"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```
Using OpenAI or LangChain
Want to make it LLM-powered?
```bash
pip install openai
```

```python
from openai import OpenAI

# The v1.x openai client; reads OPENAI_API_KEY from the environment by default
client = OpenAI()

@app.post("/api/analyze/")
def analyze_text(payload: TextInput):
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": payload.text}],
    )
    return {"reply": response.choices[0].message.content}
```
Final Thoughts
This stack gives you the best of both worlds:
- Python’s flexibility for AI/ML
- React’s power for interactive UIs
Start simple, scale fast.