I recently launched sttrace.com, a site where people can practice real-world software engineering problems. When it was finally time to move it from localhost to the internet, I had to figure out a way to host it without burning a hole in my pocket.
It's a simple website with a React client, a Node.js backend server, and a container-manager REST API service. If I spin up one medium-sized EC2 instance, I can probably fit all the services in there, along with a Postgres server and a self-hosted image registry.
But looking at the cloud costs, I realized that this setup would be quite expensive. Then I remembered I already have 2 servers running in my homelab. It's nothing fancy, just 2 Dell OptiPlex machines, both running Linux, but perfect for hosting. I can run my main monolith service on the smaller server and put the container manager and database on the bigger one, since those will need more resources.
The challenge would be getting around CGNAT. I tried some tunneling services that can give me public URLs for the services running on my homelab, but the paid tiers were too costly for me at that time.
So I decided to set up my own SSH tunnel. First I needed something to tunnel to, so I got an Elastic IP and a t2.micro EC2 instance on AWS. I started an nginx service on it and routed all incoming traffic to local ports, which the tunnel would forward back to my homelab.
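The nginx side looks roughly like this. It's just a sketch: the domain, port, and file path are placeholders, not my exact config.

```nginx
# /etc/nginx/conf.d/sttrace.conf (illustrative; domain and ports are placeholders)
server {
    listen 80;
    server_name sttrace.com;

    location / {
        # Proxy public traffic to a local port on this EC2 instance;
        # the SSH reverse tunnel binds this port and carries the
        # traffic back to the homelab.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```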
Once nginx was running, I started an SSH tunnel from my homelab to the EC2 instance.
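A reverse tunnel like the one below is what does the heavy lifting. The ports, user, and host are example values, assuming the homelab service listens on local port 3000 and nginx proxies to 8080 on the EC2 box.

```bash
# -N: don't run a remote command, just forward ports
# -R: bind port 8080 on the EC2 instance and forward it to
#     localhost:3000 on the homelab machine
# ServerAliveInterval keeps the connection from silently dying
ssh -N \
    -R 8080:localhost:3000 \
    -o ServerAliveInterval=60 \
    ubuntu@<ec2-elastic-ip>
```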
I then shoved all my startup commands for the required services into a bash script and created a systemd service.
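A unit file along these lines is enough to keep it running; the service name and script path are placeholders for whatever the startup script is called.

```ini
# /etc/systemd/system/sttrace.service (illustrative; paths are placeholders)
[Unit]
Description=Start sttrace services and SSH tunnel
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/home/me/start-sttrace.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Then it's just `sudo systemctl enable --now sttrace.service` and the whole stack comes up on its own after a reboot.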
The website was then fully live 🥳
This setup costs peanuts, but it is quite susceptible to internet or power outages.