"It works on my machine."
Every developer knows this phrase. I had just finished building a Water Quality Prediction system using Python, Flask, and Docker. It was running perfectly on my laptop. I could send a curl request to localhost:5000 and get a prediction back instantly.
But "localhost" doesn't help the world. If I wanted this model to actually be useful—perhaps as a backend for a mobile app or an IoT river monitoring system—I needed a public URL.
In this post, I’ll walk through the final (and surprisingly easiest) step of my project: deploying a Dockerized Flask app to the cloud using Render.
The Setup: What We Are Deploying
Before we jump into the cloud, here is a quick snapshot of what I built:
- The Brain: A Random Forest Classifier trained to predict water quality classes (0, 1, 2).
- The Body: A Flask API that accepts JSON data (pH, Turbidity, etc.) and returns predictions (sketched just below this list).
- The Container: A Dockerfile that packages everything into a portable unit.
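To make "The Body" a bit more concrete, here is a minimal sketch of what an endpoint like that can look like. The model filename, the pickle-based loading, and the route name are assumptions for illustration, not the exact project code:

```python
# app.py: minimal sketch of the prediction endpoint (illustrative, not the exact project code)
import pickle

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumption: the trained Random Forest was serialized to model.pkl during training.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/predict", methods=["POST"])
def predict():
    # The request body is a JSON object whose keys match the training feature columns.
    payload = request.get_json()
    features = pd.DataFrame([payload])
    prediction = model.predict(features)[0]
    return jsonify({"water_quality_prediction": int(prediction)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```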
Because I had already done the hard work of Dockerizing the app (fixing file extension errors and network timeouts!), moving to the cloud was straightforward.
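For context, the Dockerfile behind that effort doesn't need to be long. Here is a minimal sketch; the base image, file names, and start command are assumptions rather than my exact file:

```dockerfile
# Minimal sketch: base image, file names, and start command are assumptions
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and the serialized model
COPY . .

EXPOSE 5000

CMD ["python", "app.py"]
```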
Why I Chose Render
I looked at a few options like Vercel and Heroku.
- Vercel is great for frontends, but it's "serverless," which means it spins Python apps down aggressively between requests. That leads to "cold starts," where the model can take 10+ seconds to load into memory on the first request.
- Render supports Docker natively. It basically says, "Give me your Dockerfile, and I will run it exactly like you did on your laptop." Plus, it has a generous free tier.
Step 1: Git Push
The first step was getting my code off my laptop. I created a new repository on GitHub and pushed my code.
```bash
git init
git add .
git commit -m "Ready for deployment"
git remote add origin https://github.com/my-username/water-quality-api.git
git push -u origin main
```
Crucial Tip: I made sure my .gitignore file prevented me from uploading unnecessary junk like __pycache__ or virtual environments.
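For reference, a .gitignore along these lines covers the usual suspects (these entries are typical, not a copy of my exact file):

```
# Python caches and build artifacts
__pycache__/
*.pyc

# Virtual environments
venv/
.venv/

# Notebook and editor noise
.ipynb_checkpoints/
.DS_Store
```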
Step 2: Connecting to Render
This process was refreshingly simple:
- I logged into the Render Dashboard.
- Clicked New + -> Web Service.
- Connected my GitHub account and selected the repository.
Here is where the magic happened. Render automatically detected my Dockerfile. I didn't have to configure Python versions or install system dependencies manually. I simply selected the Free plan and hit Create Web Service.
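One reason this step is low-stress: if the image builds and runs on your laptop, Render's build almost certainly will too. A quick local check looks like this (the image tag is just a placeholder for illustration):

```bash
# Build the image and run it locally, mapping the same port Flask listens on
docker build -t water-quality-api .
docker run --rm -p 5000:5000 water-quality-api

# Then, in a second terminal, point the curl request from later in this post
# at http://localhost:5000/predict instead of the Render URL
```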
Step 3: The Build (Cloud Edition)
Watching the build logs on a remote server is a satisfying experience. I saw the exact same steps scrolling by that I saw in my local terminal:
```
Installing numpy...
Installing pandas...
Exporting to image...
```
After about 5 minutes, the logs went green: Your service is live.
The Moment of Truth
Render gave me a shiny new URL: https://catfish-water-quality-prediction.onrender.com.
It was time to test it. I opened my terminal and sent a request—not to localhost, but to the real internet.
```bash
curl -X POST https://catfish-water-quality-prediction.onrender.com/predict \
  -H "Content-Type: application/json" \
  -d '{
    "Temp": 67.4,
    "Turbidity (cm)": 10.1,
    "DO(mg/L)": 0.2,
    "pH": 4.7,
    "Plankton (No. L-1)": 6069.6,
    "BOD (mg/L)": 7.4,
    "CO2": 10.1,
    "Alkalinity (mg L-1 )": 218.3,
    "Hardness (mg L-1 )": 300.1,
    "Calcium (mg L-1 )": 337.1,
    "Ammonia (mg L-1 )": 0.2,
    "Nitrite (mg L-1 )": 4.3,
    "Phosphorus (mg L-1 )": 0.005,
    "H2S (mg L-1 )": 0.06
  }'
```
The Result:
```json
{
  "water_quality_prediction": 2
}
```
Success! My model is now accessible from anywhere in the world.
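If you prefer Python to curl, the same sanity check with the requests library (a third-party package, installed with pip install requests) looks roughly like this; the payload is the same sample row used above:

```python
import requests

URL = "https://catfish-water-quality-prediction.onrender.com/predict"

# Same sample reading as the curl example above.
sample = {
    "Temp": 67.4,
    "Turbidity (cm)": 10.1,
    "DO(mg/L)": 0.2,
    "pH": 4.7,
    "Plankton (No. L-1)": 6069.6,
    "BOD (mg/L)": 7.4,
    "CO2": 10.1,
    "Alkalinity (mg L-1 )": 218.3,
    "Hardness (mg L-1 )": 300.1,
    "Calcium (mg L-1 )": 337.1,
    "Ammonia (mg L-1 )": 0.2,
    "Nitrite (mg L-1 )": 4.3,
    "Phosphorus (mg L-1 )": 0.005,
    "H2S (mg L-1 )": 0.06,
}

response = requests.post(URL, json=sample, timeout=30)
response.raise_for_status()
print(response.json())  # e.g. {"water_quality_prediction": 2}
```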
Conclusion
Taking a Machine Learning model from a Jupyter Notebook to a live API feels like a superpower.
The combination of Docker + Render is incredibly powerful for Data Scientists. Docker ensures the environment is consistent, and Render takes care of the infrastructure. If you have a model sitting on your hard drive gathering dust, I highly recommend trying this workflow to set it free.
Top comments (1)
This feels like applying the “Twelve-Factor App” philosophy in practice: you've cleanly separated code, config, and environment with Docker, then let Render handle the stateless deploy so your ML model moves from laptop artifact to real, web-ready service.