I wanted to deploy a modern Python + React application in the easiest possible way. My application uses:
Frontend:
"@tailwindcss/vite": "^4.1.11",
"react": "^19.1.0",
"react-dom": "^19.1.0",
"react-router": "^7.7.1",
"react-router-dom": "^7.7.1",
"tailwindcss": "^4.1.11"
"vite": "^7.0.6"
Backend:
fastapi
uvicorn[standard]
sqlalchemy
pydantic
chromadb
I had been developing in PyCharm using a venv. That is stable and predictable, but it does not allow for CD (continuous deployment), which is what I wanted.
So when it was time to deploy, I switched to:
docker
docker-compose
caddy
Here's how I did it.
My application layout:
├── backend
│ ├── auth_providers
│ ├── config
│ ├── data
│ ├── db
│ ├── db.py
│ ├── Dockerfile
│ ├── main.py
│ ├── models
│ ├── __pycache__
│ ├── requirements.txt
│ ├── routers
│ ├── server.js
│ ├── services
│ ├── tests
│ └── utils
├── Caddyfile.dev
├── Caddyfile.prod
├── docker-compose.override.yml
├── docker-compose.prod.yml
├── docker-compose.yml
├── frontend
│ ├── dist
│ ├── Dockerfile
│ ├── index.html
│ ├── node_modules
│ ├── package.json
│ ├── package-lock.json
│ ├── public
│ ├── src
│ └── vite.config.js
├── package.json
├── package-lock.json
└── README.md
I decided to use Caddy because, as long as you have a domain name mapped to your AWS instance, it provisions an SSL certificate for that domain using Let's Encrypt. It all truly "just works". For someone who spent years configuring certificates in Apache, this is amazing!
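To give a flavor of how little configuration this takes, here is a minimal production Caddyfile sketch. example.com is a placeholder for your own domain, and the /api prefix and /srv root are assumptions that happen to match the container layout shown below:
example.com {
    # Caddy obtains and renews the Let's Encrypt certificate automatically
    handle /api/* {
        reverse_proxy api:8000
    }
    handle {
        root * /srv
        try_files {path} /index.html
        file_server
    }
}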
PREPARING FOR DEPLOYMENT
Transitioning from my .venv environment to a Docker container on localhost was challenging.
In my .venv environment, I had written my app with this straightforward structure:
myapp
├── backend
├── frontend
But with Docker I changed the architecture to three different containers governed by a docker-compose.yml file:
- backend container:
myapp
├── backend
├── frontend_dist
├── requirements.txt
- frontend container (builds assets served by Caddy; not needed in prod):
myapp
├── src
├── dist
├── vite.config.js
- caddy (web server) container (serves assets in prod):
srv
├── assets
├── index.html
├── manifest.json
So a lot of work just changing paths and ports. I could have prevented this by containerizing with Docker from the get-go. Oh well, 20/20 hindsight! A docker-compose skeleton that ties the three containers together is sketched below.
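Here is that skeleton docker-compose.yml. Treat it as a sketch: the image tag and volume paths are illustrative, not my exact file:
services:
  api:
    build:
      context: .
      dockerfile: backend/Dockerfile
    environment:
      - APP_ENV=docker
    volumes:
      - ./backend:/app/backend
      - ./frontend/dist:/app/frontend_dist  # built assets for the API to serve
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile  # dev only; prod serves the built dist via Caddy
  caddy:
    image: caddy:2
    volumes:
      - ./Caddyfile.dev:/etc/caddy/Caddyfile
    ports:
      - "5000:80"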
TROUBLESHOOTING:
Here are some issues and solutions I encountered.
ISSUE: docker compose up returns this error:
api | ModuleNotFoundError: No module named 'backend'
SOLUTION: Because I was using /app as the working directory in Docker, as opposed to /myapp in my .venv environment, I had to add /app to my PYTHONPATH
in backend/Dockerfile:
WORKDIR /app
ENV PYTHONPATH=/app
COPY backend /app/backend
Then, when docker-compose.yml mounted ./backend to /app/backend in the container, the imports in main.py resolved correctly.
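For context, a minimal backend Dockerfile along these lines might look like the sketch below (the Python base image and the uvicorn entry point are assumptions, not my exact file):
FROM python:3.12-slim
WORKDIR /app
ENV PYTHONPATH=/app
COPY backend/requirements.txt ./requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY backend /app/backend
# main.py lives in backend/, so the import path is backend.main
CMD ["uvicorn", "backend.main:app", "--host", "0.0.0.0", "--port", "8000"]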
ISSUE: Anything other than an exact url match resulted in a 404 error.
'http://mysite/login' worked
'http://mysite/bla' gave 404 error
SOLUTION: I needed a redirect, so I used a React catch-all route, i.e. the path="*" route:
<Routes>
  <Route path="/login" element={<Login />} />
  <Route
    path="/"
    ...
  />
  <Route path="*" element={<Navigate to="/" replace />} />
</Routes>
Also, configure the Caddyfile with try_files before file_server, inside the handle {} block:
handle {
    try_files {path} /index.html
    file_server
}
ISSUE: npm run dev gives this on startup:
TypeError: crypto.hash is not a function
SOLUTION: Be sure to work with Node 20 or newer, which has the upgraded Crypto API that Vite needs. So instead of FROM node:18-bullseye, use FROM node:20 in your frontend Dockerfile.
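For reference, a minimal frontend Dockerfile for the dev container could look like this sketch (it assumes the standard Vite dev script and a repo-root build context):
FROM node:20
WORKDIR /app
COPY frontend/package*.json ./
RUN npm ci
COPY frontend .
EXPOSE 5173
# --host makes the Vite dev server reachable from outside the container
CMD ["npm", "run", "dev", "--", "--host"]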
ISSUE: "RuntimeError: Directory '../frontend/dist/assets' does not exist"
SOLUTION: In /app/main.py I had: app.mount("/static/assets", StaticFiles(directory="../frontend/dist/assets"), name="static")
I had to define an environment variable in docker-compose.yml and read it in main.py to determine whether I was running in my local layout or in the Docker container. I then used it to build the paths.
docker-compose.yml:
environment:
- APP_ENV=docker
main.py:
import os
from pathlib import Path

ENV = os.getenv("APP_ENV", "local")
if ENV == "docker":
    path_to_dist = Path('/app/frontend_dist')
else:
    path_to_dist = Path(__file__).resolve().parent.parent / 'frontend' / 'dist'
path_to_assets = path_to_dist / 'assets'
path_to_index = path_to_dist / 'index.html'
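Continuing the snippet, these paths then feed the static mounts. Here's a minimal sketch of how they can be wired up; the SPA fallback route is an illustration rather than a verbatim copy of my main.py:
from fastapi import FastAPI
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# Serve the hashed Vite build output resolved above
app.mount("/assets", StaticFiles(directory=path_to_assets), name="assets")

@app.get("/")
def index():
    # SPA entry point; React Router takes over on the client
    return FileResponse(path_to_index)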
HINT: Digging around in my running containers helped a lot when trying to see where things were mounted.
docker exec -it <mycontainer> sh
docker exec -it <mycontainer> bash  # bash is better; install it when you can
HINT: My .dockerignore contained /frontend_dist, so that directory was invisible to Docker. Always check your .dockerignore. Here are the entries I actually needed:
.venv/
__pycache__
*.pyc
node_modules
ISSUE: docker ran, but I got a blank screen and this error:
Loading module from “http://localhost:5000/static/assets/index-CaU4slfD.js” was blocked because of a disallowed MIME type (“”). localhost:5000
SOLUTION: In Docker, the assets lived at /app/frontend_dist/assets/, but main.py had: app.mount("/static", StaticFiles(directory=FRONTEND_DIST), name="static")
causing assets to be served from the wrong place.
So I changed main.py to: app.mount("/assets", StaticFiles(directory=path_to_assets), name="assets")
and in vite.config.js I switched to this "base" value:
export default defineConfig({
  base: "/",
  // ...rest of config unchanged
})
ISSUE: 405 error (Method Not Allowed)
SOLUTION: The problem was that my ports were not aligned between backend/Dockerfile, frontend/Dockerfile, docker-compose.yml, and the Caddyfile.
Getting them aligned can be challenging when handling the transition from .venv to Docker while maintaining different environments for dev and production.
These tips are not hardened-security configuration, but they may help you get unstuck from 405 errors.
This is what I needed:
- Caddy (dev): serving on port 80 in the container, published as host :5000 (host :5000 → Caddy :80).
- Caddy (prod): :443 with TLS.
- Vite (dev): :5173 with hot reload for testing.
- Reverse proxy (dev): Caddy proxies /api1 and /api2 → api:8000.
vite.config.js:
server: {
  proxy: {
    '/api1': 'http://localhost:5000', // adjust to your backend
    '/api2': 'http://localhost:5000',
  },
},
Caddyfile (dev environment only):
handle /api1/* {
    reverse_proxy api:8000
}
handle /api2/* {
    reverse_proxy api:8000
}
HINT: If your backend expects the prefix (such as /api1) in the path, use handle. If you want the prefix stripped before proxying, use handle_path.
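For example, a handle_path block like this strips the prefix before proxying:
handle_path /api1/* {
    # the matched /api1 prefix is stripped: /api1/users reaches the backend as /users
    reverse_proxy api:8000
}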
docker-compose (dev environment):
services:
  api:
    ports:
      - "8000:8000"
  caddy:
    ports:
      - "5000:80"
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile
    container_name: frontend
    ports:
      - "5173:5173" # default Vite dev server port for testing and hot-reloading
Here is a simple dev/prod diagram for ports:
Dev:
Browser → Vite :5173
└─ proxy /api1,/api2 → Caddy :5000 → reverse_proxy → api:8000
Prod:
Browser → Caddy :443
├─ / (static Vite build)
└─ /api → reverse_proxy → api:8000
ALSO REMEMBER: These issues are usually not caused by CORS, so look elsewhere.
ISSUE: Could not access my api2.
SOLUTION: Every API prefix needs its own proxy entry in vite.config.js, i.e.:
server: {
  proxy: {
    '/api': 'http://localhost:5000',
    '/api2': 'http://localhost:5000' // the one that I could not access
  }
}
ISSUE: Needed different ports and values for dev/prod.
SOLUTION: I wound up splitting my docker-compose files and Caddyfiles by environment. Here's an example of how I invoked them:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up --build
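Note that docker-compose.override.yml is picked up automatically by a plain docker compose up, which is why the prod file must be passed explicitly with -f. A hedged sketch of what a prod override can look like (service names match the dev file above; the Caddyfile path and the profiles trick for disabling the dev-only frontend are assumptions, not my exact file):
# docker-compose.prod.yml
services:
  caddy:
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile.prod:/etc/caddy/Caddyfile
  frontend:
    # dev-only Vite container; an unused profile keeps it from starting in prod
    profiles: ["disabled"]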
FINAL TIPS
It may be no surprise that I used AI extensively for this project. Here are some final tips from GPT-4 that I thought were particularly useful:
✅ Final Tips
| Action | Recommended? |
|---|---|
| Run FastAPI directly from .venv | ❌ Not when testing Docker |
| Keep .venv in project root | ✅ (but add it to .dockerignore) |
| Use Docker for full-stack testing | ✅ Definitely |
| Use uvicorn outside Docker | Only for quick local debugging |
DOCKER COMMANDS that I used:
docker compose build --no-cache
docker compose up
or
docker compose up --build
FURTHER READING
Here is a good high-level writeup on why you may want to use Docker instead of venv.