
Błażej Adamczyk

Self-hosting: What's the point of homelab?

My previous post on Why You Should Self-Host Everything blew up, to my surprise. I'm positively encouraged to write more. However, I realized that not everyone quite understood the purpose of self-hosting in that context, and I wrote the article under the assumption that it was obvious! Well then, let me explain.

Assumption #1: Public-facing service

A self-hosted homelab was never intended to be accessible from the web. By design, it is available to your internal network only. It's for you, for your own personal projects: for hacking, pentesting, trying new technologies, learning. You can spin up Redis, Kafka, or Postgres with ease and learn to integrate with them in your home environment. You can set up an RSS reader, a media server, a password vault, etc. in your home network, with no access from the outside world. That is the benefit of self-hosting: you can create your own tools that only you have access to.
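
For instance, here is a minimal sketch using the Docker SDK for Python (`pip install docker`); the image tags, container names, and ports are my assumptions, not anything canonical:

```python
import docker

# Connect to the local Docker daemon (assumes Docker is installed and running)
client = docker.from_env()

# A throwaway Redis instance to play with
redis = client.containers.run(
    "redis:7",                 # image tag is an assumption; pin whatever you like
    name="lab-redis",
    ports={"6379/tcp": 6379},  # container port -> host port
    detach=True,
)

# ...and a Postgres instance next to it
postgres = client.containers.run(
    "postgres:16",
    name="lab-postgres",
    environment={"POSTGRES_PASSWORD": "change-me"},
    ports={"5432/tcp": 5432},
    detach=True,
)

print(redis.name, postgres.name, "are up")
```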

Assumption #2: Why not on your main PC?

For some use cases - of course - Docker Desktop is enough, especially for a local development environment. But the downside is that everything becomes unavailable as soon as your main device is down. What if you want access to your tools while your main PC is off? What if there are background jobs you want to run, or a notification bot you keep online? The other main benefit of a micro-PC is its low power consumption, around 30 watts. For comparison, a fully fledged desktop PC draws roughly 100-150 watts under light load, and a laptop around 50 watts. That's up to a 5x difference! And that's not accounting for the PC's monitor!
So yeah - for something that can potentially run 24/7, it's definitely more cost-effective.
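
Here's the back-of-the-envelope math as a sketch; the wattages are the rough figures from above, and the price per kWh is an assumption - plug in your local rate:

```python
# Rough yearly energy cost of a machine running 24/7.
PRICE_PER_KWH = 0.30        # assumption; adjust to your local electricity rate
HOURS_PER_YEAR = 24 * 365

for name, watts in [("micro-PC", 30), ("laptop", 50), ("desktop", 125)]:
    kwh_per_year = watts * HOURS_PER_YEAR / 1000
    cost = kwh_per_year * PRICE_PER_KWH
    print(f"{name:8s} ~{kwh_per_year:5.0f} kWh/year -> ~{cost:6.2f}/year")
```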

Assumption #3: Private access

Now, let's say you are away from home: maybe your company sent you to a conference, or to work from a different hub for a while. How do you set up remote access? Didn't I mention these are not public-facing services? Well, I did. But you don't have to expose your services to the web to actually use them remotely. Please welcome the VPN - the actually intended way of using one (unlike what "private VPN" services promote).
There are two common VPN solutions, and two ways to set one up. Some routers have a VPN server built in, like Asus:

Asus Router Dashboard
This is actually a screenshot from custom-built firmware for an Asus router.

With cheaper routers, you can instead set up your VPN in a Docker container and simply port-forward from the router to your VPN instance.

LinuxServer's OpenVPN image

And you can choose between OpenVPN and WireGuard; both are solid options, but WireGuard is much newer and faster than OpenVPN.
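
As a sketch of the container approach, assuming the linuxserver/wireguard image and the Docker SDK for Python (the hostname and paths here are placeholders; verify the environment variables against the image's docs):

```python
import docker

client = docker.from_env()

# Sketch of a WireGuard server in a container, per the
# linuxserver/wireguard image documentation.
vpn = client.containers.run(
    "linuxserver/wireguard",
    name="wireguard",
    cap_add=["NET_ADMIN"],                # WireGuard manages network interfaces
    sysctls={"net.ipv4.conf.all.src_valid_mark": "1"},
    environment={
        "SERVERURL": "home.example.org",  # hypothetical hostname; use your own DDNS name
        "PEERS": "2",                     # number of client configs to generate
        "TZ": "Etc/UTC",
    },
    ports={"51820/udp": 51820},           # forward this UDP port on your router
    volumes={"/opt/wireguard": {"bind": "/config", "mode": "rw"}},
    restart_policy={"Name": "unless-stopped"},
    detach=True,
)
print(vpn.name, "started")
```

Then forward UDP 51820 from your router to the host, and import one of the generated peer configs into the WireGuard app on your phone or laptop.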

But doesn't that mean you are exposing an open port to the world? Yeah, but there are ways to mitigate that:

  • use fail2ban to ban any client after its n-th failed attempt (a toy sketch of the idea follows this list)
  • use a non-standard port and a UDP-only approach; you can avoid most script kiddies and automated scanners this way
  • use tunneling - this way you are not exposing anything yourself, but connect through a proxy to reach your network. You can achieve that by setting up a reverse-proxy VPN on a VPS yourself (your home network is then just a client with pass-through) or by using a service like Twingate
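
To make the fail2ban point concrete, here is a toy sketch of the mechanism it implements - count failed attempts per source IP and ban past a threshold. Purely illustrative; use the real fail2ban in practice:

```python
import time
from collections import defaultdict

MAX_FAILURES = 5        # ban after this many failed attempts...
WINDOW_SECONDS = 600    # ...within this sliding window
BAN_SECONDS = 3600

failures = defaultdict(list)   # ip -> timestamps of recent failures
banned_until = {}              # ip -> unix time when the ban expires

def register_failure(ip, now=None):
    """Record a failed attempt; return True if the IP just got banned."""
    now = time.time() if now is None else now
    # keep only attempts that are still inside the window
    failures[ip] = [t for t in failures[ip] if now - t < WINDOW_SECONDS]
    failures[ip].append(now)
    if len(failures[ip]) >= MAX_FAILURES:
        banned_until[ip] = now + BAN_SECONDS
        return True
    return False

def is_banned(ip, now=None):
    now = time.time() if now is None else now
    return banned_until.get(ip, 0) > now

# simulate a brute-force run from a single (documentation-range) IP
for i in range(5):
    if register_failure("203.0.113.7", now=1000.0 + i):
        print("banned 203.0.113.7")
print(is_banned("203.0.113.7", now=1005.0))  # True
```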

I might cover all of that in the future, but at least I wanted to leave you something to go on by yourself.

Assumption #4: Small number of services

Now, one of the arguments was: if you have a small number of containers, you can use a Raspberry Pi - and that's correct, but in that case wouldn't it be better to just run them on your main machine? I'm talking about running more than 10-15 containers constantly. Right now, my personal server is running 27 services and counting! I'm literally replacing a lot of subscription-based productivity services with my own - all in my network.
That's what most companies do: when they ask you to connect to the work VPN, it's because most of their internal network is self-hosted and available to employees only.
Of course, I'm using a Raspberry Pi in my own setup, but only as a DNS server and network intrusion detection system (NIDS). My NUC, however, is running Home Assistant, a wiki / notes app, an RSS reader, a media server, a meal planner, an automation tool, SearX, a password vault, and a task/project tracker, among others. And the best thing is: because it's all on Docker, I'm not using nearly as many resources as I would with a VPS or VMs. Here's proof:

NUC Dashboard

CPU utilization at ~15%
RAM utilization at ~32%
SSD utilization at ~10%

Traefik dashboard

And this is the number of apps exposed to my network; it does not count dependencies. So as you can see, you don't need a strong CPU for something like that. A simple 2-core, 4-thread i3 is enough. What you need is RAM. I have 32 GB of DDR4 and I'm using only a third of it, but as my infrastructure grows I might need an upgrade... maybe... or maybe not.
It all depends on the use case.

Assumption #5: Why not cloud? Why not a rack?

Now you might ask: why not spin up an AWS EC2 instance, if the CPU is not that demanding? Well, let's keep in mind that what containers use the most is RAM. For my current setup I would need an instance with about 12-16 GB of RAM. Currently, only an r6g.large fulfills those demands, which comes to about $80 monthly, or $960 yearly. For that amount of money you can buy both a NUC and a NAS, which will collectively exceed what AWS has to offer and stay with you for years!
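
The breakeven is easy to sanity-check; the $80/month figure is from above, while the hardware and power numbers below are my assumptions, not quotes:

```python
# When does buying your own hardware beat renting the instance?
CLOUD_MONTHLY = 80.0      # r6g.large-class instance, per month
HARDWARE_COST = 700.0     # assumed one-time price of a NUC + storage
POWER_MONTHLY = 7.0       # assumed: ~30 W around the clock at ~$0.30/kWh

breakeven_months = HARDWARE_COST / (CLOUD_MONTHLY - POWER_MONTHLY)
print(f"hardware pays for itself after ~{breakeven_months:.1f} months")  # ~9.6
```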

A rack, on the other hand, is a really fun project to have, but it's a far more expensive one: it takes up a lot more space and consumes more energy. So if you are after cost-cutting, it's not the most cost-effective option.

Practical use cases

Now that we have tackled most of the assumptions, let's discuss who actually benefits from such a setup.

Use Case #1: Startups and small businesses

Having a small on-premises PC that runs your business infrastructure is a blessing, granted you have at least one person who knows how to maintain it. It means you can set up your infrastructure according to your employees' needs. There are a ton of tools that help run a business and don't cost a dime if you host them yourself, starting with LDAP and ending with a CRM.

Use Case #2: Network Nerds

I don't even have to explain this one, right? Chances are you already have one. Not to be confused with "networking" during an IT conference.
You own a homelab to play with infrastructure. You create virtual networks and set up credentials, firewalls, and policies. You are basically refining your skills as an IT guy, because you like it.

Use Case #3: Penetration Testers

A homelab is a great environment to deploy vulnerable apps and OSes and actually try to hack into them. Yeah, sure, there are sites like Hack The Box that provide that experience, but if you want to see ransomware running wild, a VM or container is a safe environment for that.

Use Case #4: DevSecOps

Now, I don't have to explain Docker containers to you. But Kubernetes, on the other hand, is a different story, and with a homelab you can test deployment scripts for cheap. And, probably like some of you, I tested Kubernetes on a VM stack and was not impressed. Well, let me tell you that there is such a thing as Kubernetes in Docker (kind), and it runs far better on a VM-less machine than on a VM stack!

Use Case #5: Cloud Native Developers

Basically the same category as DevSecOps. You might put OpenStack or Rancher on top of your infra. Or you might add GitLab to keep your repos private, with an additional CI/CD pipeline to deploy locally. It makes total sense if you want your apps available internally. After all, it's part of the fun.

Use Case #6: Automation, Media Server, Documents and News Hosting

And the final case is almost simply "others". You can set up Home Assistant, n8n, Plex/Jellyfin, or Paperless. All of these are convenience software: they just help you organize your life better and automate a bunch of stuff. It also works great with smart homes, when you have local-network-enabled devices like lights, air conditioning, or blinds. You don't need a fancy service to set up your smart home; you can do it yourself, one bit at a time. Granted, if you have only a couple of services, the newest Raspberry Pi might be more than enough.

Upsides and downsides

Of course, running a homelab has its upsides and downsides. It's not for everyone. For me, the opportunity to learn and the sheer fun of setting it up outshine the frustration of maintaining it. But let me recap some pros and cons:

Pros:

  • Your personal network, hidden behind the curtain
  • Full control over what runs in your network
  • Free open-source stuff
  • Fun way to improve your skills and learn
  • Depending on your setup, you can cut quite a lot of monthly costs

Cons:

  • Steep initial cost
  • Takes quite some time to set up
  • Requires above average technical skills
  • Adds to your electricity bill
  • Requires semi-regular maintenance

Summary

The number of use cases for a homelab might not be that big, and for many people it might not be convenient to maintain one themselves. When using a third party, you often pay for exactly that: convenience. However, if you are into this stuff, you don't have to use your homelab for exactly one use case. You can mix and match according to your needs. And that's the beauty of a homelab - flexibility. That's why I chose to run mine. For every paid service I might need at the moment, there is an open-source version that I might be able to run myself. And for everything that is client-facing, I will use AWS.
