DEV Community

Puspender


From $200 monthly bill to $17, a simple yet effective approach

I don't know where to start, so I'll begin with what my application is and how it was hosted.
This is my side hustle: a social media platform for a particular audience which I can't disclose, for various reasons. I started it when I was a newbie to the #webdev world, so every technology excited me. I wanted to learn each and every cool tool the tech industry uses, and that's where my personal AWS journey started. I had a few reasons to host my application on AWS:

  1. I thought my idea was going to be a super duper hit and AWS was the right tool from day one.
  2. Every service is just one click away.
  3. I wanted to learn AWS, doing hands-on.
  4. AWS means speed.

My application was (and still is) a monolithic application with the components below (and their respective AWS services):

  1. Spring Boot backend (Fargate/ECS)
  2. Next.js frontend (Fargate/ECS)
  3. ElasticSearch (Opensearch now)
  4. Postgres (RDS)
  5. S3
  6. SES
  7. Git (for source code and deployment pipeline)

Other paid AWS services that were in use:

  1. NAT Gateway
  2. Elastic IP
  3. ELB
  4. ECR

AWS Bill

If you are reading this article, I assume you know what all these services do, at least the basics. So as you can see, there are a lot of services I might not have needed, like NAT, Elastic IP and ELB.
Or, I might not have needed any of these!!

Welcome to the world of Self Hosting

So instead of paying AWS for every service, why don't I self-host them? And on top of that, why don't I host them on a single machine (later I will tell you the benefits)?
Then I started looking for options. AWS EC2? Naah, as I am looking to reduce the bills, why not look for even cheaper options? That's where I heard of Hostinger. Oh man, under their KVM 8 plan I got 8 vCPUs, 32 GB RAM, a 400 GB disk and 32 TB of bandwidth, and that too at just $386 for two years, after applying all possible discounts (affiliate + two years' advance payment).

And now I have a machine where my super duper idea can run. But how? How do I run everything in the simplest way possible? I don't want to go back to the era of manual deployments. I also don't want to install Linux packages for each and every service. Do I need an orchestration platform like Kubernetes or Nomad? Sure, those help with deploying your Docker containers. But do I actually need them? Of course not. I don't have multiple machines, I have one single machine. But you know the curious kid in me wants to learn and implement everything from scratch.
So I kept my curiosity aside and finalised the simplest approach: Docker Compose. The simplest I can think of. A single YAML file, a single run command, and you have all your services up and running. That's what I wanted and needed.

New Architecture

New architecture on VPS
Short description: I am self-hosting Postgres, Meilisearch (an Elasticsearch replacement) and Nginx (reverse proxy, replacing ELB). Fargate is removed, as I can run my application containers easily (docker run). I push the Docker images to ECR, then on my VPS I pull those images and run them through Docker Compose. All of this is done using a GitHub Actions workflow (sample below):

name: Deploy production site on Hostinger

on:
  push:
    branches: [ main ]

env:
  AWS_REGION: ${{ secrets.AWS_REGION }}
  AWS_ACCOUNT_ID: ${{ secrets.AWS_ACCOUNT_ID }}
  APP_NAME: mysocial-rest-api
  ECR_REPOSITORY: mysocial-prod-restapi
  IMAGE_TAG: latest

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:

    - name: Cancel Previous Runs
      uses: styfle/cancel-workflow-action@0.4.1
      with:
        access_token: ${{ github.token }}

    - name: Checkout
      uses: actions/checkout@v4

    - name: Configure AWS credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        aws-region: ${{ secrets.AWS_REGION }}

    - name: Login to Amazon ECR
      id: login-ecr
      uses: aws-actions/amazon-ecr-login@v2

    - name: Build, tag, and push image to Amazon ECR
      id: build-image
      env:
        ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
      run: |
        # Build a docker container and push it to ECR
        docker build -t $ECR_REPOSITORY .
        docker tag $ECR_REPOSITORY $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
        echo "Pushing image to ECR..."
        docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
        echo "Image pushed to ECR"

    - name: SSH to VPS and deploy Docker image
      uses: appleboy/ssh-action@v0.1.5
      env:
          IMAGE: ${{ secrets.AWS_ACCOUNT_ID }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com/${{ env.ECR_REPOSITORY }}:${{ env.IMAGE_TAG }}
      with:
        host: ${{ secrets.HOSTINGER_VPS_HOST }}
        username: ${{ secrets.HOSTINGER_VPS_USER }}
        key: ${{ secrets.GA_SSH_PRIVATE_KEY }}
        passphrase: ${{ secrets.GA_SSH_PRIVATE_KEY_PASSPHRASE }}
        envs: IMAGE, AWS_REGION, AWS_ACCOUNT_ID
        script: |

          echo "Login to ECR"
          aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

          echo "Pulling image"
          docker pull $IMAGE
          echo "Image pulled"

          echo "Restarting the updated container" #container name (backend-api) is taken from docker-compose.yml
          docker-compose -f /opt/mysocial-apps/docker-compose.yml pull backend-api
          docker-compose -f /opt/mysocial-apps/docker-compose.yml rm -f backend-api
          docker-compose -f /opt/mysocial-apps/docker-compose.yml up -d backend-api
          echo "Service restarted"

Prerequisite

  1. Server setup
  2. All required dependencies to be installed using docker-compose
  3. Implement security measures on VPS

Let's do these step by step.

  1. Install Ubuntu (latest LTS version) on the VPS.
  2. Add a new user admin_user

    adduser admin_user
    usermod -aG sudo admin_user
    
  3. Disable root login over SSH

    sudo vim /etc/ssh/sshd_config
    PermitRootLogin no
    sudo systemctl restart ssh
    
  4. Log in using an SSH key (passwordless)

    # Create an ssh key on your local machine. Change -C according to the user. Using a passphrase is a must.
    # (-b is dropped: ed25519 keys have a fixed length, so a bit-size flag is ignored)
    ssh-keygen -t ed25519 -C "admin-user-personal"
    
    # Add that key to server (to the user which you want to use. e.g `admin_user`)
    ssh-copy-id -i ~/.ssh/id_ed25519.pub admin_user@<server-ip>
    
  5. You can now log in using the SSH key. This will not ask for a password.

    ssh admin_user@<server-ip>
    
  6. Install Fail2Ban

    sudo apt install fail2ban
    
  7. Install and Configure a Malware Scanner
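
The article doesn't name a specific scanner; rkhunter is one common choice. A minimal sketch, assuming rkhunter fits your needs (not necessarily the author's tool):

```shell
sudo apt install rkhunter
sudo rkhunter --propupd      # baseline the current file properties
sudo rkhunter --check --sk   # run a scan; --sk skips the keypress prompts
```

Run the check periodically (e.g. from cron) and review /var/log/rkhunter.log for warnings.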

  8. Change the SSH port to 222 (choose any unused port, not the default 22). A good security practice to have.

    sudo vim /etc/ssh/sshd_config
    # Change port to 222
    sudo systemctl restart ssh
    
  9. Disable ports other than 80, 443 and the SSH port (see the official AWS doc).

  10. Install the AWS CLI (using snap). Make sure you have an access key and secret to authenticate the CLI on the VPS

    • Check if snap is present: snap version
    • Install: sudo snap install aws-cli --classic
    • Check the installation: aws --version
    • Configure AWS: aws configure
  11. Keep Your System and Software Up-to-Date

    sudo apt update && sudo apt upgrade -y
    
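For step 9 (closing unused ports), ufw on Ubuntu is a simple option. A sketch, assuming SSH was moved to port 222 as in step 8; adjust the port before enabling or you will lock yourself out:

```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 222/tcp   # the custom SSH port from step 8; must be open before 'ufw enable'
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose
```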

Install docker manually

Reference article: Hostinger Documentation

sudo apt install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt update

apt-cache policy docker-ce

sudo apt install docker-ce docker-ce-cli containerd.io

sudo systemctl enable docker

docker --version

Add admin_user to the docker group so that it can run docker commands without sudo

sudo usermod -aG docker admin_user

Make sure you log out and log in again.

Docker Compose file on the server

You need to have the docker-compose.yml on the server. I usually keep all of my files in the /opt directory, so below is the directory structure.

/opt/mysocial-apps
├── docker-compose.yml
└── nginx
    └── nginx.conf
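
The article never shows the compose file itself, so here is a hedged sketch of what /opt/mysocial-apps/docker-compose.yml could look like for this stack. Image names, versions, ports and volumes are assumptions, not the author's actual file:

```yaml
services:
  backend-api:
    image: <account-id>.dkr.ecr.<region>.amazonaws.com/mysocial-prod-restapi:latest
    restart: unless-stopped
    depends_on: [postgres, meilisearch]

  web-app:
    image: <account-id>.dkr.ecr.<region>.amazonaws.com/mysocial-prod-webapp:latest
    restart: unless-stopped

  postgres:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - pg-data:/var/lib/postgresql/data

  meilisearch:
    image: getmeili/meilisearch:v1.8
    restart: unless-stopped
    volumes:
      - meili-data:/meili_data

  nginx:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - /var/www/certbot:/var/www/certbot

volumes:
  pg-data:
  meili-data:
```

Containers on the default compose network resolve each other by service name, which is why the nginx config below can proxy to web-app:3000 and backend-api:8081 without any published ports on those services.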

Let's Encrypt certificate

  1. Save env vars in /etc/environment
  2. Run the website on port 80 behind nginx

        server {
            listen 80;
            listen [::]:80;
    
            server_name mysocial.com www.mysocial.com;
    
            location ~ /.well-known/acme-challenge/ {
                root /var/www/certbot;
            }
    
            # Proxy requests to the Next.js web app
            location / {
                proxy_pass http://web-app:3000;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_cache_bypass $http_upgrade;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
            }
    
            # Proxy requests to the Spring boot rest-api
            location /rest-api {
                proxy_pass http://backend-api:8081;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_cache_bypass $http_upgrade;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
            }
        }
    
  3. Validate the certificate paths mounted into the nginx Docker container. The certificate paths are specified in docker-compose; check that they were created both on the OS and inside the Docker container

  4. Generate the cert with --staging

    sudo certbot certonly --webroot -w /var/www/certbot --email help@mysocial.com -d mysocial.com -d www.mysocial.com --force-renewal --agree-tos --no-eff-email --staging
    
  5. Check the URL: https://www.mysocial.com

  6. If the URL loads fine, generate the cert again without --staging and restart nginx

  7. Edit nginx.conf and add the 443 server block

        server {
            listen 80;
            listen [::]:80;
            server_name mysocial.com www.mysocial.com;
    
            # Certbot challenge location
            location /.well-known/acme-challenge/ {
                root /var/www/certbot;
            }
    
            # Redirect HTTP traffic to HTTPS
            return 301 https://$host$request_uri;
        }
    
        server {
            listen 443 ssl http2;
            listen [::]:443 ssl http2;
    
            server_name mysocial.com www.mysocial.com;
    
            ssl_certificate /etc/letsencrypt/live/mysocial.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/mysocial.com/privkey.pem;
            ssl_protocols TLSv1.2 TLSv1.3;
            ssl_ciphers HIGH:!aNULL:!MD5;
    
            # Certbot challenge location
            location /.well-known/acme-challenge/ {
                root /var/www/certbot;
            }
    
            # Serve robots.txt
            location /robots.txt {
                root /etc/nginx; # path inside docker container
            }
    
            # Proxy requests to the Next.js web app
            location / {
                proxy_pass http://web-app:3000;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_cache_bypass $http_upgrade;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
            }
    
            # Proxy requests to the Spring boot rest-api
            location /rest-api {
                proxy_pass http://backend-api:8081;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection 'upgrade';
                proxy_cache_bypass $http_upgrade;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
            }
    
        }
    
  8. Restart nginx: docker-compose restart nginx

  9. Check that everything is working fine; it should be.

What next?

  1. Auto backups of the DB using cron.
  2. Auto-renew certificates using cron.
  3. Keep an eye on malware and other logs. Regularly update libraries on the system.
  4. Implement rate limiting on your server.
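
Items 1 and 2 can be wired up with a small script plus crontab entries. A sketch, where the container name, DB name and paths are all assumptions (the script is only written out and syntax-checked here, not executed):

```shell
# Write a hypothetical backup script; adapt names/paths to your setup
cat > /tmp/backup-db.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Dump the Postgres database from the compose stack and gzip it with a dated name
docker exec postgres pg_dump -U postgres mysocial \
  | gzip > "/opt/backups/mysocial-$(date +%F).sql.gz"
# Keep only the 14 most recent backups
ls -1t /opt/backups/mysocial-*.sql.gz | tail -n +15 | xargs -r rm --
EOF
chmod +x /tmp/backup-db.sh
bash -n /tmp/backup-db.sh   # sanity-check the syntax without running it

# Example crontab entries (crontab -e): nightly backup, weekly cert renewal
# 0 3 * * * /opt/scripts/backup-db.sh
# 0 4 * * 0 certbot renew --webroot -w /var/www/certbot --quiet && docker-compose -f /opt/mysocial-apps/docker-compose.yml restart nginx
```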

What did we miss?

  1. The above architecture involves downtime while the new images are pulled and started. But this can easily be fixed using blue-green deployment or a similar strategy.
  2. Managed services reduce the headache of maintaining things yourself, for example DB backups, OS updates etc. Again, in the above setup the backups can (and must) be handled using cron jobs.
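
For item 1, a rough sketch of what a blue-green style cutover could look like on a single VPS. Nothing here is from the author's setup: the ports, the health endpoint and the nginx upstream file are all assumptions, and a real version would need to alternate ports between deploys. The script is only written out and syntax-checked here:

```shell
cat > /tmp/deploy-green.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
IMAGE="$1"
# Start the new ("green") container next to the running one, on another host port
docker run -d --name backend-api-green -p 8082:8081 "$IMAGE"
# Wait until it answers before moving traffic (health endpoint is an assumption)
until curl -fs http://localhost:8082/actuator/health >/dev/null; do sleep 2; done
# Point nginx at the green container via an included upstream file, then reload
echo 'server 127.0.0.1:8082;' > /opt/mysocial-apps/nginx/upstream.conf
docker exec nginx nginx -s reload
# Drop the old container once traffic has switched
docker rm -f backend-api || true
docker rename backend-api-green backend-api
EOF
bash -n /tmp/deploy-green.sh   # syntax check only
```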

What did we gain

  1. Money
  2. Speed - all your services now communicate on localhost, so there is no network latency.
  3. Simplicity

Takeaways

  1. Your idea is not a super duper hit on day one.
  2. You don't always need AWS-like cloud providers from the start.
  3. No, AWS doesn't mean scale and a faster system.
  4. You can scale a monolith much higher than you would have expected. Stack Overflow is still a monolith.
  5. You can scale a self-hosted application. I know a few startups who have raised millions from investors and still have their products running on a VPS or similar. On the VPS I purchased, I can easily handle thousands of customers daily. I will do a benchmark some day [TODO on me].

I could have eliminated the need for ECR by building the Docker image from the git repo pulled on the server, but I found the ECR approach simpler.

I am still using a few AWS services (S3, SES, ECR), which cost me ~$1 monthly.

Feel free to reach out in comments or on my social media handles :)
