Mahesh Cheemalapati

What Building a Home Server Actually Taught Me About Infrastructure

“Infrastructure always felt like this invisible layer beneath software engineering — important enough that everyone depends on it, but abstract enough that most developers never really touch it.”

A few weeks ago, I wanted to deploy my portfolio and a few personal projects, so I started researching hosting options.

Like most developers, I went through the usual rabbit hole.

  • Cloud providers
  • Free tiers
  • VPS comparisons
  • Deployment tutorials
  • “Deploy in 5 minutes” videos

At first, everything looked straightforward.

Pick a provider.

Push your code.

Point a domain at it.

Done.

But the more I looked into it, the more I started wondering:

What actually happens underneath all of this?

If I deploy an application to the cloud, where does it really run?

How are servers secured?

How do applications stay online 24/7?

What exactly happens when traffic moves through a VPN?

As developers, we interact with infrastructure constantly, but most of the time we experience it through abstractions.

  • Cloud dashboards
  • Managed databases
  • Deployment platforms
  • Serverless runtimes

Convenient abstractions make building software easier.

But they also make the underlying systems feel invisible.

And honestly, at some point, the engineer in me got annoyed by how much of modern infrastructure I was using without fully understanding.

Because every large system started somewhere.

At some point:

  • Google was just servers in a room
  • Netflix was just an application someone deployed
  • Infrastructure was still infrastructure before it became “the cloud”

So I started asking myself a different question:

Could I build a small version of this myself?

Not enterprise scale.

Not production-grade infrastructure.

Not something that replaces AWS.

Just enough to actually understand the moving pieces.

That’s when I came across the Raspberry Pi 5.


Why the Raspberry Pi Interested Me

I remembered using Raspberry Pis during my Master’s program for smaller academic projects, but I had never seriously thought of one as an actual server.

The more I researched, though, the more interesting the idea became.

I wanted something small. Something constrained. Something that would force me to understand what I was doing instead of hiding everything behind a dashboard.

A mini PC would probably be more powerful. A cloud VM would probably be easier. But that wasn’t really the point.

I was not trying to build the most powerful server.

I was trying to build something that would teach me how servers actually work.

That’s what made the Raspberry Pi interesting.

It sits in this weird middle ground where it is:

  • cheap enough to experiment with
  • powerful enough to run real workloads
  • constrained enough to force you to learn

You have to think about networking.

You have to think about storage.

You have to think about power.

You have to think about reliability.

You have to think about security.

You don’t just deploy software.

You build the environment the software runs on.

And that changes the learning experience completely.

So I broke the project into phases.

The first goal was simple:

Build a secure home VPN.

Eventually, I want this setup to become a platform for:

  • hosting personal projects
  • running Docker containers
  • experimenting with AI
  • self-hosting tooling

But I wanted to start with the fundamentals first.

Because that’s where the real learning happens.


The Hardware Mistake That Immediately Humbled Me

The first lesson came before the server was even fully assembled.

I bought a Kingston NV3 1TB NVMe SSD for the Raspberry Pi M.2 HAT+ because, on paper, everything looked compatible.

It was NVMe.

The Pi supported NVMe.

Problem solved, right?

Wrong.

The SSD physically did not fit.

That’s when I learned something I somehow never paid attention to before:

Not all NVMe drives are physically the same size.

The Raspberry Pi M.2 HAT+ only supports 2230 and 2242 drives.

The Kingston NV3 was 2280.

The drive was literally too long for the board.

It was such a small mistake, but it perfectly introduced me to what infrastructure work actually feels like.

Tiny details matter.

When you work closer to hardware, assumptions become expensive very quickly.

And honestly, I’m glad I made the mistake early because it forced me to slow down and actually read specifications instead of trusting product labels.

Right now, the server is still running from a 64GB microSD card while I search for the correct 2242 SSD.

Ironically, the mistake taught me more than the successful setup probably would have.

“The fastest way to understand infrastructure is to break something you thought would just work.”


The Moment Security Stopped Feeling Theoretical

There’s something psychologically different about exposing a machine to the internet.

The second I enabled remote access, security stopped feeling like a theoretical checklist from tutorials.

This was not just me reading about firewalls anymore.

This was a real Linux machine connected to my actual home network.

That changes your mindset immediately.

Instead of thinking:

“How do I make this work?”

you start thinking:

“How do I make this safe to leave running 24/7?”

That shift ended up changing how I approached the entire project.

I started from a deny-by-default mindset.

Expose as little as possible.

Open only what is necessary.

Reduce the attack surface first.

So the Pi only exposes what it needs for SSH and Tailscale.

Everything else stays closed unless there is a real reason for it to exist publicly.
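The post doesn't name a specific firewall tool, so as an illustration, here's what a deny-by-default policy might look like with `ufw`, a common choice on Raspberry Pi OS. The port numbers are the tools' defaults, not necessarily what my setup uses:

```shell
# Deny-by-default sketch using ufw (assumed tool, not named in the post).
# Block all inbound traffic unless explicitly allowed; allow all outbound.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Allow SSH, rate-limited: ufw temporarily blocks IPs that attempt
# repeated connections in a short window.
sudo ufw limit 22/tcp

# Allow Tailscale's default WireGuard UDP port for direct connections.
sudo ufw allow 41641/udp

sudo ufw enable
```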

Then I installed Fail2ban.

Before this project, Fail2ban was something I had vaguely heard of but never understood why people used it.

Now it makes complete sense.

The idea is simple:

If repeated authentication failures happen, the offending IP gets banned automatically.
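A minimal sketch of what that looks like in practice: an SSH jail in Fail2ban's local override file. The thresholds below are illustrative defaults, not the actual values from my setup:

```shell
# Write a minimal SSH jail to Fail2ban's local config.
# maxretry/findtime/bantime values here are illustrative.
sudo tee /etc/fail2ban/jail.local > /dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5      ; failed attempts allowed within findtime
findtime = 10m    ; window in which failures are counted
bantime  = 1h     ; how long the offending IP stays banned
EOF

# Apply the new jail.
sudo systemctl restart fail2ban
```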

But what surprised me was not the tool itself.

It was realizing how quickly infrastructure work changes the way you think about systems.

Applications optimize for features.

Infrastructure optimizes for resilience.

That distinction feels obvious in hindsight, but building a server yourself makes you experience it directly.


What a Home VPN Actually Is

Before this project, my understanding of VPNs was mostly shaped by commercial VPN marketing.

  • Hide your IP
  • Watch region-locked Netflix
  • Use public WiFi safely

That was basically my mental model.

But then I started thinking about something more relatable.

Have you ever talked about a trip and suddenly started seeing travel ads everywhere?

Or searched for cookware once and then every website, app, and Amazon notification suddenly thinks you are building a professional kitchen?

It gets annoying.

And while a home VPN does not magically solve all tracking or privacy problems, building one helped me understand what parts of privacy I could actually control myself.

A home VPN is different from a commercial VPN.

Instead of routing traffic through some company’s infrastructure, you route traffic through your own home network.

That means when I am traveling or connected to public WiFi:

  • my traffic is encrypted all the way back to my house
  • my devices behave like they are home
  • I control the network path
  • I can use my own DNS filtering
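With Tailscale, the setup behind that list is short. This sketch uses Tailscale's documented exit-node flags; `pi-hostname` is a placeholder for whatever the Pi is called in your tailnet:

```shell
# On the Raspberry Pi: offer itself as an exit node, so other devices
# can route all of their traffic through the home network.
# (The exit node must also be approved in the Tailscale admin console.)
sudo tailscale up --advertise-exit-node

# On a laptop away from home: send all traffic through the Pi.
# "pi-hostname" is a placeholder for the Pi's name in the tailnet.
sudo tailscale up --exit-node=pi-hostname
```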

That realization completely changed how I think about privacy and networking.

You are not buying trust.

You are building it yourself.

And honestly, that was probably the biggest conceptual shift of this project.


The Difference Between “Working” and “Infrastructure”

At first, everything seemed perfect.

Tailscale worked.

The VPN connected.

My Mac connected.

My iPhone connected.

Then I rebooted the Raspberry Pi.

That’s when things broke.

  • Exit node functionality disappeared
  • IP forwarding stopped working
  • Routing behaved inconsistently

And that taught me one of the most important lessons of the entire project:

There is a massive difference between:

“I got this working once”

and:

“this survives reboots, reliably, every time.”

The fixes themselves were not particularly complicated.

Some sysctl configuration.

Some systemd services.

Some persistent kernel settings.
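For the IP-forwarding part specifically, the fix looks roughly like this. It follows Tailscale's documented approach of dropping the settings into `/etc/sysctl.d/` so they survive reboots (the filename is just a convention):

```shell
# Persist IP forwarding across reboots (required for the exit-node role).
# Without this, forwarding enabled at runtime is lost on the next boot.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf

# Apply the settings now instead of waiting for a reboot.
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# Make sure the Tailscale daemon starts on every boot.
sudo systemctl enable --now tailscaled
```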

But the lesson was much bigger than the commands.

Infrastructure is about persistence.

Reliable systems survive:

  • reboots
  • failures
  • unexpected states
  • power interruptions

That realization honestly felt like crossing a boundary from hobby scripting into actual operational engineering.

“Software runs. Infrastructure stays running.”


What This Project Actually Taught Me

Technically, I learned a lot.

  • Linux networking
  • Firewall management
  • VPN routing
  • SSH hardening
  • Service persistence
  • Infrastructure troubleshooting

But honestly, the deeper lesson had very little to do with individual technologies.

It was about systems thinking.

A lot of software engineering focuses on isolated components.

  • APIs
  • Frameworks
  • Databases
  • Algorithms

Infrastructure forces you to think differently because suddenly everything becomes interconnected.

Networking affects security.

Persistence affects reliability.

Hardware affects software behavior.

Routing affects user experience.

You stop thinking only in features.

You start thinking in systems.

And honestly, I think that mindset shift is the real reason home labs are so valuable for developers.


Final Thoughts

I originally thought this project would teach me how to run a VPN.

Instead, it taught me how infrastructure behaves.

It taught me why operational reliability matters.

It taught me how networking layers interact.

It taught me why security is about layers instead of tools.

It taught me what self-hosting actually means.

More importantly, it changed my relationship with technology.

Before this project, infrastructure felt abstract.

Now it feels tangible.

I can debug routing problems I barely understood a month ago.

I can trace how different layers interact.

I can see how much engineering exists underneath the simple act of “deploying an app.”

And honestly, that is probably the real value of building a home lab.

Not the server itself.

The understanding you gain from owning the entire stack.
