- A compact enterprise-grade project layout and the key source files (ready-to-run).
- A production Dockerfile (multi-stage, non-root).
- A docker-compose for local dev (Postgres).
- Step-by-step commands to build → push to ECR → deploy to EC2 (including systemd unit example).
- Notes about security & production best-practices and citations for the most important platform commands.
I kept code pragmatic and copy-paste ready. Replace placeholders (YOUR_REGION, AWS_ACCOUNT_ID, DB creds, secret keys) with your values.
1. Project layout (suggested)
fastapi-enterprise/
├─ app/
│ ├─ main.py
│ ├─ core/
│ │ ├─ config.py
│ │ ├─ logging.py
│ │ └─ security.py
│ ├─ db/
│ │ ├─ session.py
│ │ └─ base.py
│ ├─ models/
│ │ └─ user.py
│ ├─ schemas/
│ │ └─ user.py
│ ├─ crud/
│ │ └─ user.py
│ └─ api/
│ ├─ deps.py
│ └─ v1/
│ └─ routers/
│ └─ users.py
├─ alembic/ (optional migrations)
├─ requirements.txt
├─ Dockerfile
├─ docker-compose.yml
├─ .env.example
└─ gunicorn_conf.py (optional)
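If you adopt the optional alembic/ directory, the migration workflow would look roughly like the sketch below. Note that alembic is not in requirements.txt as written, so you would add it, and env.py must be pointed at Base.metadata and your DATABASE_URL (the -t async template suits the async engine used later).
# one-time setup (generates alembic/ and alembic.ini)
alembic init -t async alembic
# generate a migration from the SQLAlchemy models, then apply it
alembic revision --autogenerate -m "create users table"
alembic upgrade head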
2. Key source files
All code below fits the project layout above. Keep files under app/.
app/core/config.py
from pydantic import BaseSettings, AnyUrl
from typing import Optional
class Settings(BaseSettings):
    APP_NAME: str = "FastAPI Enterprise"
    ENV: str = "production"
    DEBUG: bool = False

    # Database (asyncpg example)
    DATABASE_URL: AnyUrl = "postgresql+asyncpg://postgres:password@postgres:5432/appdb"

    # Security
    SECRET_KEY: str
    ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24

    # Gunicorn/worker tuning
    WORKERS: int = 4

    class Config:
        env_file = ".env"
        env_file_encoding = "utf-8"


settings = Settings()
app/core/logging.py
import logging
import sys
def setup_logging():
    fmt = "%(asctime)s - %(levelname)s - %(name)s - %(message)s"
    logging.basicConfig(stream=sys.stdout, level=logging.INFO, format=fmt)
    # Optionally integrate with a JSON logger or structlog for structured logs
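As a sketch of that structured-logging option without adding a dependency, a stdlib-only JSON formatter could replace setup_logging (the field names here are my choice, not a fixed convention):
import json
import logging
import sys


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON line (easy to ship to CloudWatch)."""

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)


def setup_json_logging() -> None:
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])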
app/core/security.py
from passlib.context import CryptContext
from datetime import datetime, timedelta
import jwt
from app.core.config import settings
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")


def hash_password(password: str) -> str:
    return pwd_context.hash(password)


def verify_password(plain: str, hashed: str) -> bool:
    return pwd_context.verify(plain, hashed)


def create_access_token(subject: str, expires_minutes: int = None):
    expire = datetime.utcnow() + timedelta(minutes=(expires_minutes or settings.ACCESS_TOKEN_EXPIRE_MINUTES))
    payload = {"sub": subject, "exp": expire}
    token = jwt.encode(payload, settings.SECRET_KEY, algorithm="HS256")
    return token
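The file above only issues tokens. A matching decode helper (a sketch that would live in the same app/core/security.py, using the same PyJWT module and HS256 settings; the function name is mine):
def decode_access_token(token: str) -> str | None:
    """Return the subject ("sub") if the token is valid and unexpired, else None."""
    try:
        payload = jwt.decode(token, settings.SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        return None
    return payload.get("sub")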
app/db/session.py
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker
from app.core.config import settings
engine = create_async_engine(str(settings.DATABASE_URL), future=True, echo=False)
AsyncSessionLocal = sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)
# Dependency
async def get_db():
    async with AsyncSessionLocal() as session:
        yield session
app/db/base.py
from sqlalchemy.orm import declarative_base
Base = declarative_base()
app/models/user.py
from sqlalchemy import Column, Integer, String, Boolean
from app.db.base import Base
class User(Base):
    __tablename__ = "users"

    id = Column(Integer, primary_key=True, index=True)
    email = Column(String(255), unique=True, index=True, nullable=False)
    full_name = Column(String(255), nullable=True)
    hashed_password = Column(String(255), nullable=False)
    is_active = Column(Boolean, default=True)
app/schemas/user.py
from pydantic import BaseModel, EmailStr
from typing import Optional
class UserCreate(BaseModel):
    email: EmailStr
    full_name: Optional[str] = None
    password: str


class UserRead(BaseModel):
    id: int
    email: EmailStr
    full_name: Optional[str] = None
    is_active: bool

    class Config:
        orm_mode = True
app/crud/user.py
from sqlalchemy.future import select
from sqlalchemy.exc import IntegrityError
from app.models.user import User
from app.core.security import hash_password


async def get_user_by_email(db, email: str):
    q = await db.execute(select(User).where(User.email == email))
    return q.scalars().first()


async def create_user(db, *, user_in):
    hashed = hash_password(user_in.password)
    db_user = User(email=user_in.email, full_name=user_in.full_name, hashed_password=hashed)
    db.add(db_user)
    try:
        await db.commit()
        await db.refresh(db_user)
        return db_user
    except IntegrityError:
        await db.rollback()
        raise
app/api/deps.py
from fastapi import Depends
from app.db.session import get_db
async def get_db_dep():
    async for s in get_db():
        yield s
app/api/v1/routers/users.py
from fastapi import APIRouter, Depends, HTTPException, status
from app.schemas.user import UserCreate, UserRead
from app.api.deps import get_db_dep
from app.crud.user import get_user_by_email, create_user
router = APIRouter(prefix="/users", tags=["users"])
@router.post("/", response_model=UserRead, status_code=status.HTTP_201_CREATED)
async def register(user_in: UserCreate, db=Depends(get_db_dep)):
    existing = await get_user_by_email(db, user_in.email)
    if existing:
        raise HTTPException(status_code=400, detail="User already exists")
    u = await create_user(db, user_in=user_in)
    return u


@router.get("/{user_id}", response_model=UserRead)
async def read_user(user_id: int, db=Depends(get_db_dep)):
    from sqlalchemy.future import select
    from app.models.user import User

    q = await db.execute(select(User).where(User.id == user_id))
    user = q.scalars().first()
    if not user:
        raise HTTPException(status_code=404, detail="Not found")
    return user
(note: read_user imports inside the function only to keep the example short — in production put these imports at the top of the module.)
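create_access_token from security.py is not wired to any route above. A minimal login sketch (the /auth/login path, the LoginIn schema, the JSON body, and the new app/api/v1/routers/auth.py file are my choices; FastAPI's OAuth2PasswordRequestForm would also work but requires python-multipart):
from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel, EmailStr
from app.api.deps import get_db_dep
from app.core.security import create_access_token, verify_password
from app.crud.user import get_user_by_email

auth_router = APIRouter(prefix="/auth", tags=["auth"])


class LoginIn(BaseModel):
    email: EmailStr
    password: str


@auth_router.post("/login")
async def login(credentials: LoginIn, db=Depends(get_db_dep)):
    user = await get_user_by_email(db, credentials.email)
    if not user or not verify_password(credentials.password, user.hashed_password):
        raise HTTPException(status_code=401, detail="Invalid credentials")
    return {"access_token": create_access_token(subject=user.email), "token_type": "bearer"}
Register it next to the users router in create_app(): app.include_router(auth_router, prefix="/api/v1").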
app/main.py
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from app.core.config import settings
from app.core.logging import setup_logging
from app.api.v1.routers import users # ensure package __init__ imports router path
def create_app():
    setup_logging()
    app = FastAPI(title=settings.APP_NAME, debug=settings.DEBUG)

    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],  # restrict in prod
        allow_methods=["*"],
        allow_headers=["*"],
    )

    @app.get("/health", tags=["health"])
    async def health():
        return {"status": "ok"}

    app.include_router(users.router, prefix="/api/v1")
    return app


app = create_app()

if __name__ == "__main__":
    uvicorn.run("app.main:app", host="0.0.0.0", port=8000, reload=False)
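Nothing above creates the users table. With the optional Alembic setup you would run alembic upgrade head; for a quick dev start, one common shortcut (a sketch, not part of the files above; prefer migrations in production) is to create tables on startup:
# inside create_app(), after app.include_router(...):
@app.on_event("startup")
async def create_tables() -> None:
    # import models so they are registered on Base.metadata before create_all
    from app.db.base import Base
    from app.db.session import engine
    import app.models.user  # noqa: F401

    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)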
requirements.txt
fastapi==0.95.2
uvicorn[standard]==0.22.0
SQLAlchemy[asyncio]==1.4.52
asyncpg==0.27.0
pydantic==1.10.10
python-jose==3.3.0
passlib[bcrypt]==1.7.4
gunicorn==21.2.0
PyJWT==2.8.0
(Adjust versions as needed; pin to your policy.)
3. Dockerfile (production, multi-stage)
# Stage 1: build dependencies
FROM python:3.11-slim AS builder
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
RUN apt-get update && \
apt-get install -y --no-install-recommends build-essential libpq-dev gcc && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip wheel --no-cache-dir --wheel-dir /wheels -r requirements.txt
# Stage 2: final image
FROM python:3.11-slim
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# create non-root user
RUN useradd -m appuser
WORKDIR /app
# copy wheels
COPY --from=builder /wheels /wheels
RUN pip install --no-cache-dir /wheels/*
# copy app code
COPY . /app
# give non-root ownership
RUN chown -R appuser:appuser /app
USER appuser
EXPOSE 80
ENV PYTHONPATH=/app
# Use gunicorn + uvicorn workers
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "-w", "4", "app.main:app", "-b", "0.0.0.0:80", "--log-level", "info"]
Notes: the wheel stage avoids compiling at container runtime. Adjust worker count to available CPUs.
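The optional gunicorn_conf.py from the layout is one way to derive the worker count from the host instead of hard-coding -w 4 (a sketch; the 2 * cores + 1 heuristic follows the worker-count guidance cited below, and the env-var fallback is my choice):
# gunicorn_conf.py — load with: gunicorn -c gunicorn_conf.py app.main:app
import multiprocessing
import os

bind = "0.0.0.0:80"
worker_class = "uvicorn.workers.UvicornWorker"
# honor WORKERS from the environment, otherwise fall back to 2 * cores + 1
workers = int(os.getenv("WORKERS", multiprocessing.cpu_count() * 2 + 1))
loglevel = "info"
If you use it, change the Dockerfile CMD to ["gunicorn", "-c", "gunicorn_conf.py", "app.main:app"].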
4. docker-compose.yml (local dev - postgres + app)
version: "3.8"
services:
postgres:
image: postgres:15
restart: always
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: appdb
volumes:
- pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
web:
build: .
restart: on-failure
env_file:
- .env
ports:
- "8000:80"
depends_on:
- postgres
volumes:
pgdata:
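Note that depends_on only orders container start; it does not wait for Postgres to accept connections. If startup races bite you locally, one option is a healthcheck plus a readiness condition (a sketch; pg_isready ships in the postgres image, and recent docker compose releases accept condition: service_healthy even with version "3.8"):
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d appdb"]
      interval: 5s
      timeout: 3s
      retries: 10
  web:
    depends_on:
      postgres:
        condition: service_healthy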
5. .env.example
SECRET_KEY=supersecretreplace_me
DATABASE_URL=postgresql+asyncpg://postgres:password@postgres:5432/appdb
ENV=development
DEBUG=True
WORKERS=4
6. Build, test locally
# build
docker build -t myorg/fastapi-enterprise:latest .
# run with env
docker run --rm -it -p 8000:80 --env-file .env myorg/fastapi-enterprise:latest
# then hit http://localhost:8000/api/v1/users or http://localhost:8000/health
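A quick smoke test against the running container (assuming the example users router; the user endpoints need the database reachable, e.g. via docker-compose up):
curl http://localhost:8000/health
curl -X POST http://localhost:8000/api/v1/users/ \
  -H "Content-Type: application/json" \
  -d '{"email": "alice@example.com", "full_name": "Alice", "password": "changeme"}'
curl http://localhost:8000/api/v1/users/1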
7. Deploy to AWS EC2 — step-by-step (recommended workflow)
We'll use Amazon ECR as the image registry and a single EC2 instance that pulls and runs the container. You could push to Docker Hub instead, but ECR is preferred for private images.
Important — key commands & notes:
- Authenticate to ECR with aws ecr get-login-password | docker login --username AWS --password-stdin <account>.dkr.ecr.<region>.amazonaws.com. (AWS docs on get-login-password.) ([AWS Documentation][1])
- Install Docker on Amazon Linux 2 with sudo amazon-linux-extras install docker, then sudo service docker start. (AWS docs.) ([AWS Documentation][2])
- Install Docker on Ubuntu by following the Docker Engine install instructions in the Docker docs. ([Docker Documentation][3])
- Run Gunicorn with the Uvicorn worker in production: gunicorn -k uvicorn.workers.UvicornWorker -w 4 app.main:app. ([Uvicorn][4], [FastAPI][5])
Below are the commands.
A — Prepare ECR & push image
(Assumes AWS CLI configured with credentials OR use an EC2 role that can create repos and push)
# 1. create repo (once)
aws ecr create-repository --repository-name fastapi-enterprise --region YOUR_REGION
# 2. build and tag
docker build -t fastapi-enterprise:latest .
AWS_ACCOUNT_ID=YOUR_AWS_ACCOUNT_ID
REGION=YOUR_REGION
ECR_URI=${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/fastapi-enterprise
docker tag fastapi-enterprise:latest ${ECR_URI}:latest
# 3. login to ECR (aws cli v2)
aws ecr get-login-password --region ${REGION} \
| docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com
# (source: AWS docs: pipe get-login-password to docker login) ([AWS Documentation][1])
# 4. push
docker push ${ECR_URI}:latest
B — Provision EC2 (quick guide)
Choose an instance (t3.small or larger). For production scale use autoscaling groups + ALB; below is a single-instance example.
- From AWS Console or CLI create an EC2 instance (Amazon Linux 2 or Ubuntu). Give it an IAM instance profile with permission to pull from ECR (AmazonEC2ContainerRegistryReadOnly) — this avoids embedding keys.
Basic SSH in from your workstation:
ssh -i key.pem ec2-user@EC2_PUBLIC_IP
C — Install Docker on EC2 instance
If using Amazon Linux 2:
sudo yum update -y
sudo amazon-linux-extras install docker -y
sudo service docker start
sudo usermod -a -G docker ec2-user
# Log out and back in to apply group change or run docker with sudo
(Install instructions from AWS docs). ([AWS Documentation][2])
If using Ubuntu (example):
# follow Docker official steps - e.g., for ubuntu 22.04
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo usermod -aG docker $USER
(Use Docker docs for latest specific commands). ([Docker Documentation][3])
D — Pull & run image on EC2
If your instance has an IAM role that allows ECR read, you can pull directly. Otherwise authenticate with the AWS CLI (an instance role is the recommended approach).
# login (if needed)
aws ecr get-login-password --region YOUR_REGION \
| docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com
# pull
docker pull ${ECR_URI}:latest
# run (env file transferred to instance beforehand)
docker stop fastapi || true && docker rm fastapi || true
docker run -d \
--name fastapi \
--restart unless-stopped \
--env-file /home/ec2-user/app/.env \
-p 80:80 \
${ECR_URI}:latest
E — Run as a systemd service (so it restarts on boot)
Create /etc/systemd/system/fastapi.service (as root):
[Unit]
Description=FastAPI Container
After=docker.service
Requires=docker.service
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker stop fastapi
ExecStartPre=-/usr/bin/docker rm fastapi
# (Optional) pull latest image on start; systemd does not interpret pipes, so wrap the login in a shell
ExecStartPre=/bin/bash -c '/usr/bin/aws ecr get-login-password --region YOUR_REGION | /usr/bin/docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com'
ExecStartPre=/usr/bin/docker pull ${ECR_URI}:latest
ExecStart=/usr/bin/docker run --rm --name fastapi --env-file /home/ec2-user/app/.env -p 80:80 ${ECR_URI}:latest
ExecStop=/usr/bin/docker stop fastapi
[Install]
WantedBy=multi-user.target
Then on the EC2 instance:
sudo systemctl daemon-reload
sudo systemctl enable fastapi
sudo systemctl start fastapi
sudo journalctl -u fastapi -f
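systemd only expands ${AWS_ACCOUNT_ID}, ${REGION} and ${ECR_URI} if they are defined for the unit, so either hard-code the values in fastapi.service or add an EnvironmentFile line (a sketch; the /etc/fastapi.env path is just an example):
# add under [Service] in /etc/systemd/system/fastapi.service
EnvironmentFile=/etc/fastapi.env

# /etc/fastapi.env
AWS_ACCOUNT_ID=YOUR_AWS_ACCOUNT_ID
REGION=YOUR_REGION
ECR_URI=YOUR_AWS_ACCOUNT_ID.dkr.ecr.YOUR_REGION.amazonaws.com/fastapi-enterprise
Re-run sudo systemctl daemon-reload after editing the unit.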
8. Production best-practices & notes
- Use an IAM instance profile (role) with least-privilege for ECR access instead of embedding AWS keys on the EC2 host.
- In production use an ALB (Application Load Balancer) in front of EC2 instances, and an autoscaling group (ASG) for resilience.
- Use separate environments (dev/staging/prod) and secure secrets with AWS Secrets Manager or Parameter Store rather than a .env file checked into source.
- For the database in production prefer RDS (managed Postgres) and grant network access via security groups.
- Use health checks on the load balancer pointing to /health.
- Log to stdout and ship logs to CloudWatch (or another log aggregator).
- Use metrics & tracing (Prometheus + OpenTelemetry) for observability.
- Set worker counts with a formula like workers = 2 * CPU_CORES + 1 and tune during load tests. ([Stack Overflow][6])
- Use Gunicorn + Uvicorn workers for multi-process serving (or run uvicorn directly under a process manager if preferred). ([Uvicorn][4], [FastAPI][5])
9. Quick checklist (deploy once)
- Build & test locally with
docker-compose up
. - Create ECR repo and push image (see A).
- Provision EC2 with IAM role (ECR read).
- SSH to EC2, install Docker, configure env, pull image.
- Create systemd unit to run container and allow auto-start.
- Attach ALB & set security groups.
10. Useful references (sources)
- ECR login (use aws ecr get-login-password | docker login ...). ([AWS Documentation][1], [AWS CLI Command Reference][7])
- Install Docker on Amazon Linux 2 (amazon-linux-extras install docker). ([AWS Documentation][2])
- Install Docker Engine on Ubuntu (official Docker docs). ([Docker Documentation][3])
- Gunicorn + Uvicorn worker pattern for FastAPI deployment. ([Uvicorn][4], [FastAPI][5])
- Worker-count guidance. ([Stack Overflow][6])