
TAKUYA HIRATA


Deploy Python Apps to Production: Docker, CI/CD, and Cloud Hosting Compared

Disclosure: This post contains affiliate links. We may earn a small commission at no extra cost to you.

TL;DR: Deploying Python applications in 2026 means choosing between containers, serverless, and managed platforms. This guide compares Docker + CI/CD pipelines, serverless (AWS Lambda, Google Cloud Functions), and managed platforms (Railway, Render, DigitalOcean App Platform) with real deployment configs and cost breakdowns.

If you've ever shipped a Python project that "works on my machine" and then spent a weekend debugging production, this guide is for you. I'll walk through three deployment strategies with actual configuration files, trade-offs, and the costs nobody tells you about.

Why Does Python Deployment Still Feel Hard in 2026?

Python's packaging ecosystem has improved dramatically — uv, rye, and pdm have replaced the chaos of pip + virtualenv for most teams. But deployment is a different beast. The gap between "it runs locally" and "it serves traffic reliably" involves:

  • Dependency resolution across OS environments
  • WSGI/ASGI server configuration
  • Database connection pooling
  • Secret management
  • Health checks and graceful shutdowns
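That last bullet trips up the most people, so here's a minimal stdlib sketch of the graceful-shutdown pattern every strategy below relies on: catch SIGTERM, stop accepting new work, and finish what's in flight. The worker_loop and its job handling are hypothetical placeholders for your own processing code.

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    # Orchestrators (Docker, Kubernetes, ECS) send SIGTERM before SIGKILL;
    # flip a flag so the loop can drain in-flight work and exit cleanly.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)

def worker_loop(jobs):
    # Process jobs until asked to stop; never abandon a job mid-flight.
    completed = []
    for job in jobs:
        if shutting_down:
            break  # stop taking new work; the process can now exit
        completed.append(job.upper())  # placeholder for real work
    return completed
```

In practice, uvicorn and gunicorn already handle SIGTERM for web traffic; this pattern matters when you write custom workers or queue consumers.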

Let's solve each one across three deployment patterns.

How Do You Containerize a Python App with Docker?

Docker remains the gold standard for reproducible deployments. Here's a production-ready Dockerfile for a FastAPI application:

# Stage 1: Build dependencies
FROM python:3.12-slim AS builder
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN pip install uv && uv pip install --system -r pyproject.toml

# Stage 2: Production image
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /usr/local/lib/python3.12 /usr/local/lib/python3.12
COPY --from=builder /usr/local/bin /usr/local/bin
COPY . .

# Non-root user for security
RUN useradd -m appuser && chown -R appuser:appuser /app
USER appuser

EXPOSE 8000
# python:*-slim ships without curl, so probe the endpoint with the stdlib instead
HEALTHCHECK --interval=30s --timeout=5s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]

Key decisions:

  • Multi-stage build: Cuts image size from ~1.2GB to ~180MB by leaving build tooling behind
  • Non-root user: Limits the damage if the container is compromised
  • Health check: Orchestrators (Kubernetes, ECS) use it to restart unhealthy containers
  • 4 workers: Match the worker count to your CPU cores — 2 * cores + 1 is the classic gunicorn formula, and a sensible starting point for uvicorn workers too

CI/CD Pipeline with GitHub Actions

name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to registry
        run: echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u "${{ secrets.REGISTRY_USER }}" --password-stdin
      - name: Build and push
        run: |
          docker build -t registry.example.com/myapp:${{ github.sha }} .
          docker tag registry.example.com/myapp:${{ github.sha }} registry.example.com/myapp:latest
          docker push registry.example.com/myapp:${{ github.sha }}
          docker push registry.example.com/myapp:latest
      - name: Deploy to production
        run: |
          ssh deploy@prod "docker pull registry.example.com/myapp:latest && docker compose up -d"
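The deploy step assumes a compose file already lives on the server. A minimal hypothetical docker-compose.yml to pair with it — the service name, port, and env file are placeholders for your setup:

```yaml
# docker-compose.yml on the production host (hypothetical minimal setup)
services:
  app:
    image: registry.example.com/myapp:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
    env_file: .env.production
```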

Cost: a $12/month DigitalOcean droplet running this container typically handles 5,000-10,000 requests/minute for a simple API. That's hard to beat for side projects and early-stage products.

What About Serverless for Python?

Serverless eliminates server management entirely. AWS Lambda + API Gateway is the most common pattern:

# handler.py — AWS Lambda entry point
from mangum import Mangum

from app.main import app  # your FastAPI app

handler = Mangum(app, lifespan="off")
# serverless.yml
service: my-python-api
provider:
  name: aws
  runtime: python3.12
  memorySize: 512
  timeout: 30

functions:
  api:
    handler: handler.handler
    events:
      - httpApi:
          path: /{proxy+}
          method: ANY

Pros: Zero server management, auto-scaling, pay-per-invocation ($0.20 per 1M requests).

Cons: Cold starts (500ms-2s for Python), 15-minute execution limit, no persistent connections (WebSockets require API Gateway v2), vendor lock-in.

Best for: Sporadic traffic, webhook handlers, scheduled tasks. Not ideal for real-time APIs or long-running processes.
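To put numbers on that, here's back-of-envelope math built from Lambda's request price quoted above ($0.20 per 1M invocations). The per-GB-second compute rate and the 100ms average duration are illustrative assumptions — check current AWS pricing for your region and ignore the free tier:

```python
def lambda_monthly_cost(requests: int, avg_ms: int = 100, memory_gb: float = 0.5) -> float:
    # Request charge: $0.20 per million invocations
    request_cost = requests / 1_000_000 * 0.20
    # Compute charge: billed per GB-second of execution time
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    compute_cost = gb_seconds * 0.0000166667  # illustrative x86 rate
    return request_cost + compute_cost

# Sporadic traffic (100k req/mo) costs pennies; steady traffic
# (50M req/mo) comfortably exceeds the price of a small VPS.
print(lambda_monthly_cost(100_000), lambda_monthly_cost(50_000_000))
```

This is why the break-even argument matters: at low volume serverless is effectively free, but sustained traffic flips the economics toward a fixed-price VPS.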

How Do Managed Platforms Compare?

Managed platforms like Railway, Render, and DigitalOcean App Platform offer the middle ground — you push code, they handle infrastructure.

| Platform | Free Tier | Paid Starts At | Auto-Deploy | Custom Domains | Docker Support |
| --- | --- | --- | --- | --- | --- |
| Railway | $5 credit/mo | $5/mo + usage | Yes (GitHub) | Yes | Yes |
| Render | Free (sleeps after 15 min) | $7/mo | Yes (GitHub) | Yes | Yes |
| DigitalOcean App Platform | $0 (static) | $5/mo | Yes (GitHub) | Yes | Yes |
| Fly.io | $0 (3 shared VMs) | $1.94/mo per VM | Yes (CLI) | Yes | Yes |

For a FastAPI app with a PostgreSQL database:

  • Railway: ~$10-15/mo (app + database)
  • Render: ~$14/mo (web service + managed Postgres)
  • DigitalOcean: ~$17/mo (App Platform + managed database)

Deployment is One Command

# Railway
railway up

# Render (via render.yaml)
git push origin main  # auto-deploys

# DigitalOcean App Platform
doctl apps create --spec .do/app.yaml
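The doctl command expects an app spec file. A hypothetical minimal .do/app.yaml for the FastAPI example — the repo name and instance size are placeholders, and DigitalOcean's app spec reference has the full schema:

```yaml
# .do/app.yaml — hypothetical minimal App Platform spec
name: my-python-api
services:
  - name: api
    github:
      repo: your-org/my-python-api
      branch: main
      deploy_on_push: true
    run_command: uvicorn app.main:app --host 0.0.0.0 --port 8080
    http_port: 8080
    instance_size_slug: basic-xxs
    instance_count: 1
```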

What About Database Migrations in Production?

Regardless of deployment strategy, database migrations need special handling:

# alembic/env.py — Safe migration pattern
from alembic import context
from sqlalchemy import engine_from_config, pool

config = context.config
target_metadata = None  # or YourBase.metadata, imported from your models

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,  # don't hold pooled connections during migrations
    )
    with connectable.connect() as connection:
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()

Run migrations as a pre-deploy hook, not inside your application startup. Railway and Render both support pre-deploy commands in their config files.
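As a sketch of what that looks like on Render, here's a blueprint with a pre-deploy migration step. Field names follow Render's blueprint format as I understand it — double-check the current spec before relying on this:

```yaml
# render.yaml — run Alembic before each deploy (sketch, verify field names)
services:
  - type: web
    name: my-python-api
    runtime: python
    buildCommand: pip install -r requirements.txt
    preDeployCommand: alembic upgrade head
    startCommand: uvicorn app.main:app --host 0.0.0.0 --port 10000
```

If the migration fails, the deploy is aborted and the old version keeps serving traffic — exactly the behavior you lose when migrations run inside application startup.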

Which Strategy Should You Choose?

| Scenario | Recommended | Why |
| --- | --- | --- |
| Side project, learning | Managed platform (Railway/Render) | Fastest to ship |
| Production API, predictable traffic | Docker + VPS (DigitalOcean) | Best cost/performance ratio |
| Spiky/unpredictable traffic | Serverless (AWS Lambda) | Auto-scales to zero |
| Enterprise, multi-region | Kubernetes (EKS/GKE) | Full control, but complex |
| AI/ML inference | Docker + GPU cloud (Lambda Labs, RunPod) | GPU access required |

Key Takeaways

  1. Docker multi-stage builds cut Python image sizes by 80% and should be your default
  2. Managed platforms are the fastest path to production — start here, migrate when you outgrow them
  3. Serverless excels for event-driven workloads but Python cold starts are a real trade-off
  4. Database migrations should always be a separate step from application startup
  5. Health checks are not optional — every deployment strategy needs them for reliability
  6. Cost-wise, a $12/month VPS beats serverless until you hit ~50M requests/month

Useful Resources

  • Docker official Python guide — Best practices for Python containers
  • DigitalOcean — Excellent documentation and $200 free credit for new accounts. Their App Platform and managed databases are a solid choice for Python deployments.
  • FastAPI deployment docs — Framework-specific deployment guidance

Stay Updated

I publish deep-dive technical articles 3x/week on AI agents, Python architecture, and developer tooling. Follow me here on dev.to or subscribe to the newsletter to get them in your inbox.


This article was generated with AI assistance and reviewed for accuracy. If you found it helpful, consider supporting the author:

Buy Me A Coffee
