Lakshit Singh

Self-Hosting from Scratch: My Journey

Choosing The Pieces: Getting The Machine

Since the dawn of my Computer Science journey, I have wanted a server: something that would separate me from all those hosting platforms and their limitations. Something I could call my own.
As I went along building projects, the need for a server only grew. I had gone from 1-2 projects that required hosting to more than 4-5, and that was when the limitations of these hosting platforms became apparent.

I didn't just want to buy a VPS; I wanted to go further and understand how it works. How the projects that I build are served to the world. What these hosting platforms do behind the scenes. And I was not disappointed.


The Resource Revelation

Initially, I was shocked by how few resources an app may actually require. For a home server, for example, a 6-core CPU with 32GB RAM and decent storage would be considered top of the line. People host sites on Raspberry Pis; a full PC is a huge jump compared to that.
Another thing with servers is that they want resources from a very specific bracket: servers are most affected by CPU and RAM, and everything else matters less.

Along with this specific spec-set, I also wanted to future-proof my server, so that 10 years down the line I won't be stuck with some dumb choices I made now. That is why I chose to go with the AM4 platform, since AMD had announced support for a few more years, and Intel has been on the decline recently.


Hunting for Parts

With this in mind, I went to online stores and to Nehru Place (the biggest tech market in Asia), surfing through tens of shops to understand my options and getting advice from the shopkeepers. Finally, my exact spec sheet came to be:

  • CPU: AMD Ryzen 5 3400G - 4 cores
  • RAM: 16GB DDR4 3200MHz
  • MOBO: Gigabyte A520MK v2
  • DISK: 300GB HDD

With the server finally built, it was time to dive into the world of Virtual Machines.


What is a VM? Understanding Virtual Machines

Before I actually explored how Virtual Machines worked, they were just a way to have another OS on top of my OS. While that is the crux of what a VM is, it is so much more than that.


Virtualization and Hypervisors

Like I said before, a VM allows an isolated OS to run on a host machine, the keyword being isolated. This means that on one machine I could have 10-20 (or more) Virtual Machines (if the hardware allows it) and they would never affect each other; what is going on inside one VM is completely isolated from the processes of another.

This meant that instead of what I thought a server would be (a computer running tens of processes), I could isolate each piece of my work from the others, exactly what Docker does, but on a larger level.

Hypervisors were something even more foreign and fascinating to me. In a nutshell, a hypervisor manages the resources given to each Virtual Machine, like how much of the host machine's resources a Virtual Machine can have.


What is a Virtual Environment? Why Proxmox?

From my understanding, a Virtual Environment is a generic term for an isolated system; it's an expansion on VMs. A Virtual Environment can either be a:

  • Virtual Machine - A full OS with its own kernel, like a real computer.
  • Container - A partial OS that shares the host's kernel.

And this is the crux of self-hosting.

Proxmox is different from a Virtual Environment: it provides an interface to interact with your host machine and to manage multiple Virtual Environments on it. One of the things I love about Proxmox is that it provides a Web App for interacting with your machine, which means I almost never have to attach a monitor to my server.

Proxmox Diagram

Setting Up the VM

After understanding how VMs and Proxmox worked, it was finally time to boot up the server and install Proxmox.


Installing Proxmox

For people who have installed Linux on their machines before, this process is like that of any other OS:

  • You obtain an ISO
  • Flash it onto a USB drive
  • Plug it into the PC
  • Get into the BIOS to boot from the USB

And Voila! You just have to follow the instructions and you are done.

Proxmox was a little more complex than an Ubuntu installation, but it was nothing an Arch Linux user like me couldn't handle. (I use Arch, btw!)


Setting up a VM

Once the Web GUI was up and running, creating a VM was pretty easy: just upload the ISO through the Web Interface, choose it when creating the VM, and that's it.

I went with Ubuntu Server. Although every fiber of my being wanted to go with Arch, or something Arch-based, I just didn't want to mess anything up, since I wouldn't have physical access to the server, only remote access through the web.


SSH + Networking

While I was setting up SSH for the VM, I was confused:

"Why is my Proxmox IP different from my VM IP?"
"And more than that, both are different from the IP that ifconfig.me shows!"

That was when I learnt the important distinction between the Public IP and the Local IP.
The Public IP identifies your router, and therefore your LAN, to the rest of the internet, whereas a Local IP is the address of each device connected to your LAN (your WiFi). It is used to locate the devices connected to a network.

To my surprise, a Proxmox VM acts as a separate machine and doesn't just use the same IP as the Proxmox host. While it makes sense that this is how it would work, I had just never realized it.

This lets me better manage all my VMs, being able to SSH into each one directly instead of having to traverse the tree to access them.

My Hosting Architecture

My goal for my applications was to have them isolated, so something like nvm won't bother me when I start adding more stuff. The best option was to use Docker containers and, of course, Nginx to actually route requests to the specific containers.


What is Nginx?

Nginx is a web server and reverse proxy. In simpler terms, it routes every incoming request to a specific app or port.
The flow is:

  • Nginx gets the request
  • It compares it to the rules I write
  • Sends the request to the correct app/port
  • Returns the response back to the user

This is what's called a reverse proxy.
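
To make that concrete, a routing rule in an Nginx config looks roughly like this (a minimal sketch, not my actual config; example.com is a placeholder domain, and app1:3000 just matches the sample compose file further down):

server {
    listen 80;
    server_name example.com;              # placeholder domain

    location / {
        proxy_pass http://app1:3000;      # forward the request to the app container/port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}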

Currently my nginx configuration only has my main domain. When I add more projects, I plan to divide them into subdomains, which will make it more complex than it is now.

Since I had my domain on Cloudflare, it already provided a proxy and an SSL cert, so luckily I didn't have to configure that.


Docker Compose

For my applications, I went with Docker containers, since they ease the process of managing different applications by giving each one an isolated environment. Also, with Docker I could create GitHub workflows (more on that in the coming sections) that would save me a lot of work on each update.

With Docker Compose, I could simply have a compose file consisting of an nginx image and all my containers and, utilizing Docker's fantastic networking system, manage all my services from just one compose.yml.

services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/conf:/etc/nginx/conf.d  # Nginx site configs
      - ./nginx/certs:/etc/ssl          # SSL certs if needed
      - ./nginx/logs:/var/log/nginx     # Optional logs
    depends_on:
      - app1
      - app2
    networks:
      - webnet

  app1:
    image: my-app1-image
    container_name: app1
    ports:
      - "3000:3000"  # optional, for dev/testing
    networks:
      - webnet

  app2:
    image: my-app2-image
    container_name: app2
    networks:
      - webnet

networks:
  webnet:
    driver: bridge

This means that adding a new service is as simple as creating a new entry in compose.yml and uploading the image to DockerHub.
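
For example, a hypothetical app3 would only need another block like this under services: (names are placeholders):

  app3:
    image: my-app3-image        # image pushed to DockerHub
    container_name: app3
    networks:
      - webnet

Plus a matching rule in the Nginx config so requests actually reach it.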


CI/CD & Github Actions

CI/CD, short for Continuous Integration & Continuous Deployment, is, as the name suggests, a process that updates my deployments whenever I push a change.

When I was researching this, I got to know about GitHub Actions and how it worked. And I was... impressed, to say the least. Once I had the workflow working properly, I didn't have to do anything whenever a change was pushed. The Action would:

  • Build the Docker image
  • Push the Docker image to DockerHub
  • SSH into my server
  • Pull the Docker image
  • Restart my Docker Compose

This eliminated the need for me to do anything at all.
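
For reference, such a workflow ends up looking roughly like this (a trimmed sketch, not my exact file; it assumes the DockerHub credentials and the server's SSH details are stored as repository secrets, and it uses docker/build-push-action plus the community appleboy/ssh-action for the SSH step):

name: deploy

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Log in to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push the image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: myuser/my-app1-image:latest   # placeholder image name

      - name: Pull and restart on the server
        uses: appleboy/ssh-action@v1          # pin a specific version in practice
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SERVER_SSH_KEY }}
          script: |
            cd ~/server                       # wherever compose.yml lives
            docker compose pull
            docker compose up -d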

The Setup Begins: Problems Galore


Router Problems

When I set all this up, I just expected it to work, but reality was far from what I imagined, and that was when I realized that I had to configure my router too.

My router is my gateway to the internet, but it also serves as protection from the internet. Any request that comes to my Public IP has to go through my router.

And that is the reason I had to configure my router and set up Port Forwarding, which tells the router to forward any request arriving on port 80 (HTTP) or 443 (HTTPS) to my server, using its local IP.

The problem? Well, my router did not allow me to forward port 80 or 443. Whenever I tried, it would say the port was already in use. The only explanation I could find was that the router probably exposes a gateway interface for remote management on those ports, with no way to disable it.


ISP Problems

Well, lucky for me, I had an old router lying around that I could use for now. And to my surprise, it allowed me to port forward!

"The problem is solved! Right?"
No

If it wasn't the router, it was the ISP. For some reason, I could not access my ports remotely, but through the local IP everything worked perfectly. After racking my brain trying to figure it out, I finally settled on the fact that my ISP was just blocking all incoming requests on those ports.

Through some research and help, I found that Cloudflare Tunnels could solve my problem. And after configuring one, I could finally access my site remotely.
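
In case it helps anyone, the tunnel boils down to running the cloudflared daemon on the server with a small config, roughly something like this (a sketch with a placeholder tunnel ID and hostname; the idea is that it hands requests to Nginx on port 80):

# ~/.cloudflared/config.yml (illustrative)
tunnel: <TUNNEL-UUID>
credentials-file: /root/.cloudflared/<TUNNEL-UUID>.json
ingress:
  - hostname: example.com
    service: http://localhost:80    # hand requests over to Nginx
  - service: http_status:404        # required catch-all rule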


The Silly Mistake That Cost Hours

For some context, the site I was trying to serve was my Portfolio website, built in Svelte. And Svelte has two ways you can build it: as static HTML pages, or as an SPA (Single Page Application).

While I was trying it out, I chose the static method, which seemed faster at the time, as I am not well versed in how building frontend applications works. But that was a bad choice.

After I tried that and then learned more, I wanted to switch to the SPA (since it is smoother to navigate).

My error? In my compose.yml

pf:
    build: ./pf
    container_name: pf
    volumes:
      - pf-static:/app/build
    expose:
      - "5173"
    networks:
      - web
    environment:
      VITE_MAILER_URI: http://mailer:8888
      PORT: 5173

Notice anything?

- pf-static:/app/build
Yeah, I kept overwriting the build directory.

This was such a pain to solve. I kept trying to fix things, since I thought the error was in the application itself not building.
I even resorted to cloning the repository and building it locally!
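
The fix itself was tiny: roughly, just dropping the volume that was shadowing the build output (a sketch of the corrected service):

pf:
    build: ./pf
    container_name: pf
    # no volume mounted over /app/build anymore, so the image's fresh build is actually served
    expose:
      - "5173"
    networks:
      - web
    environment:
      VITE_MAILER_URI: http://mailer:8888
      PORT: 5173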


Finally Online! What's Next?

The feeling when I finally got it working was so exciting! I was jumping out of my chair after hours and hours of configuring. It was one of my proudest moments. I felt accomplished.


What's Running?

Currently I have only my Portfolio and a Mailer API (one that sends mail to me). You can check the portfolio out here. It's still in development and only works on desktop for now.

The Mailer API is local: it sends emails to me with various details, but it is not accessible from the public URL.


What's Planned?

I have a lot of aspirations and expectations from the server:

  • Gitea (Personal Git Service)
  • Other Projects
  • Adblocker
  • VPN
  • Grafana & Prometheus (Monitoring)
