Odewole Abdul-Jemeel

Building My Digital Playground: How I Built a Self-Sufficient Homelab That Never Sleeps

Introduction

It started with a simple motivation — I wanted a reliable and affordable way to experiment, learn, and host personal services without running them on my daily machine. I've always believed that the best way to understand systems is to build them, break them, and rebuild them better.

My goals were straightforward: I needed fairly decent, cheap computing power on my network for learning purposes and hosting fun side projects. More importantly, I wanted to understand how real-world infrastructure works — from virtualization to networking to service orchestration. There's something deeply satisfying about knowing that the services you use daily are running on hardware you control, configured exactly the way you want.

So, I decided to create a homelab, a dedicated environment for learning, self-hosting, and tinkering. My objectives were clear:

  • Learning: Experiment with virtualization, orchestration, networking, and automation

  • Self-hosting: Run my own services—from productivity tools to media servers

  • Reliability: Maintain 24/7 uptime and gain real-world infrastructure experience

Over time, this small project evolved into a fully functional, solar-powered three-node cluster, running everything from media servers to automation pipelines. My setup consists of three Dell OptiPlex 7070 machines, all connected via a Ruijie RG-ES205GC-P 5-Port Gigabit Cloud Managed PoE+ Switch, with upstream connectivity through a FiberOne 50Mbps fiber connection. Not exactly enterprise-grade, but more than enough for what I needed to accomplish.

In fact, my personal website, jemeel.dev, a portfolio generated entirely with Claude Code, is publicly available and running on Dokploy from this very homelab. It's incredibly satisfying to tell people, "Yeah, that's running on hardware in my room."

Here's how it all came together.

Hardware & Power Infrastructure

At the heart of the lab are three Dell OptiPlex 7070 machines — compact, efficient, and affordable. I chose Dell OptiPlex machines because they're reliable, relatively power-efficient, and, most importantly, affordable on the used market.

The Nodes:

🖥️ Dell OptiPlex 7070 SFF – Core i7, 32 GB RAM, 2 TB SSD

🖥️ Dell OptiPlex 7070 SFF – Core i7, 32 GB RAM, 1 TB SSD

💻 Dell OptiPlex 7070 Micro – Core i7, 32 GB RAM, 1 TB SSD

Dell Optiplex Micro Motherboard

Each node packs a Core i7 processor and 32GB of RAM, not bleeding edge, but plenty of horsepower for running multiple virtual machines and containerized services. The total storage across all nodes is 4TB, which is more than sufficient for my current needs.

These nodes are connected via a Ruijie RG-ES205GC-P 5-Port Gigabit Cloud Managed PoE+ Switch, with upstream internet provided by a FiberOne 50 Mbps fiber connection.

Ruijie Switch

The Cost Breakdown

Let's talk numbers, because one of the biggest questions people ask is: "How much does this actually cost?"

Initial Investment:

  • 3× Dell OptiPlex 7070 units: ₦520,000 each = ₦1,560,000

  • Ruijie RG-ES205GC-P Switch: ₦45,000

  • 2TB SSD (replacement—more on this later): ₦220,000

  • Total initial cost: ₦1,825,000 (~$1,200 USD at current rates)

Monthly Operating Costs:

  • Internet (50Mbps fiber, shared with home): ₦25,000

  • Electricity (minimal due to solar): ~₦5,000

  • Total monthly cost: ₦30,000 (~$20 USD)

Now, let me put this in perspective. For comparison, running equivalent infrastructure on cloud platforms would cost significantly more:

Cloud Cost Comparison (Approximate Monthly Costs):

AWS EC2:

  • 3× t3.xlarge instances (4 vCPU, 16GB RAM each): ~$220/month

  • 4TB EBS storage: ~$400/month

  • Data transfer (moderate): ~$50/month

  • Total: ~$670/month (₦1,005,000)

DigitalOcean:

  • 3× droplets (8GB RAM, 4 vCPU): ~$240/month

  • 4TB block storage: ~$400/month

  • Bandwidth overages: ~$30/month

  • Total: ~$670/month (₦1,005,000)

Google Cloud Platform:

  • 3× n2-standard-4 instances: ~$250/month

  • 4TB persistent SSD: ~$680/month

  • Network egress: ~$40/month

  • Total: ~$970/month (₦1,455,000)

Heroku:

  • Performance-M dynos (comparable): ~$500/month

  • Heroku Postgres: ~$200/month

  • Add-ons and scaling: ~$100/month

  • Total: ~$800/month (₦1,200,000)

Vercel Pro + Infrastructure:

  • Vercel Pro: $20/month

  • Backend hosting (Railway/Render): ~$100/month

  • Database (PlanetScale/Supabase): ~$50/month

  • Total: ~$170/month (₦255,000) - Limited to web apps only

My Homelab ROI: After just 3 months, the homelab had paid for itself compared to AWS/GCP; by the 6-month mark I was well ahead, on track to save over ₦6 million annually compared to traditional cloud hosting. The only ongoing costs are internet (which I'd have anyway) and minimal electricity thanks to solar power.

Of course, cloud platforms offer advantages like global distribution, automatic scaling, and managed services. But for learning, experimentation, and self-hosting personal projects, the homelab is unbeatable in terms of cost-effectiveness.

Power & Uptime Design

Living in Nigeria means dealing with inconsistent power supply, so I couldn't just plug everything into the wall and hope for the best. All nodes are connected to a dedicated outlet powered by both the grid and a 3.5 kVA solar inverter. The setup automatically switches between power sources, keeping everything online even during extended outages—a must for 24/7 operation.

There's nothing more frustrating than having your services go down because of a power outage, especially when you're hosting productivity tools you rely on daily. This dual-power configuration provides an uninterrupted 24/7 power supply essential for services that need to stay online. The solar setup has been a game-changer, reducing my electricity costs to nearly nothing while ensuring my services remain accessible even during the longest power outages.

Network Topology

Each node communicates over Gigabit LAN, ensuring low latency and quick data replication. All three nodes are connected via the Ruijie switch, which handles local network traffic at gigabit speeds. The switch connects to my FiberOne router, which provides a stable 50Mbps upload and download connection.

I know 50Mbps isn't blazing fast by modern standards, but for a homelab serving primarily personal use and a handful of external services, it's perfectly adequate. The keyword here is "stable"; consistent connectivity matters more than raw speed for most self-hosted services. The fiber line is stable enough for remote access, backups, and moderate traffic applications.

That said, I'm currently in discussions with my ISP about upgrading to a higher dedicated line. With my website now publicly accessible and plans to self-host more projects, the additional bandwidth would provide headroom for growth and better performance for external users.

Virtualization Layer

When it came to choosing a virtualization platform, I went with Proxmox VE. Why Proxmox? It's free, stable, and incredibly flexible. It supports both KVM virtualization for full virtual machines and LXC containers for lightweight workloads. Plus, the web-based management interface makes it easy to manage everything from anywhere on the network. It allowed me to cluster all three nodes, share resources, and manage VMs and containers from a unified interface.

Cluster Setup

Each node was joined into a single Proxmox cluster, enabling live migration and centralized backups. I configured all three nodes into a Proxmox cluster, which allows them to work together as a unified system. This means I can migrate VMs between nodes, manage all three from a single interface, and set up high availability if needed. The cluster setup was straightforward - Proxmox makes it easy to add nodes to an existing cluster through the web UI.

Proxmox Node Summary page

Proxmox DataCenter Summary Page
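For anyone who prefers the CLI, the cluster setup maps to a couple of pvecm commands. Here's a minimal sketch, using placeholder node addresses and a made-up cluster name rather than my real values:

```bash
# On the first node: create the cluster (the name is arbitrary)
pvecm create intellect-lab

# On each additional node: join using the first node's IP
# (you'll be prompted for the existing node's root password)
pvecm add 192.168.1.10

# From any node: verify quorum and membership
pvecm status
```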

Storage Architecture

For storage, I maintain shared storage through NFS, hosted on one of the larger nodes for reliability. This allows all nodes to access shared storage, making it easy to migrate VMs without worrying about moving disk images around. I also configured regular backups to ensure I don't lose everything if a disk decides to give up on life.
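In Proxmox terms, that shared storage boils down to one NFS export registered cluster-wide. A rough sketch, with the server address, export path, and storage ID all placeholders:

```bash
# On the node hosting the share: export a directory over NFS
apt install -y nfs-kernel-server
echo '/srv/pve-shared 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# From any cluster node: register the export as shared storage,
# usable for disk images and backups on every node
pvesm add nfs shared-nfs \
  --server 192.168.1.10 \
  --export /srv/pve-shared \
  --content images,backup
```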

And speaking of disks giving up — let me tell you about an expensive lesson I learned the hard way.

A Costly Mistake: The 1TB SSD Incident

Early in my homelab journey, I ran into a Proxmox cache issue. While trying to resolve it, I mistakenly (yes, "mistakenly", the word exists precisely for blame-shifting moments like this) wiped critical low-level data on one of my 1TB SSDs. The drive died completely and became undiscoverable and unrecoverable: no amount of troubleshooting, recovery tools, or desperate Googling could bring it back to life.

This was a painful lesson in several ways:

First, the obvious cost of ₦220,000 for a replacement 2TB SSD (I upgraded the capacity while I was at it). Second, the time lost in rebuilding and reconfiguring services. But most importantly, it taught me the absolute importance of proper backups and understanding what you're doing before you execute commands, especially when dealing with storage and memory management.

Since then, I've implemented much more rigorous backup procedures, and I always, always double-check before running any storage-related commands. I also document every significant change I make to the infrastructure. And lastly, the incident taught me VM management and gave me a far deeper understanding of Proxmox.

Base Templates & Remote Access

One of my favorite optimizations was creating a headless Ubuntu VM template with cloud-init. To speed up provisioning, I clone this template, which lets me spin up new Ubuntu VMs in seconds with networking, SSH keys, and basic packages already configured. No more spending 20 minutes setting up a new VM every time I want to experiment with something.

Ubuntu VM Template Hardware Configurations

Ubuntu VM Template CloudInit config
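For anyone curious, the template itself was built from an Ubuntu cloud image. A hedged sketch of the steps, where the VM ID, storage name, username, and image release are illustrative rather than my exact setup:

```bash
# Grab an Ubuntu cloud image and turn it into a reusable Proxmox template
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img

qm create 9000 --name ubuntu-template --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
qm importdisk 9000 jammy-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot c --bootdisk scsi0 --serial0 socket
qm set 9000 --ciuser jemeel --sshkeys ~/.ssh/id_ed25519.pub --ipconfig0 ip=dhcp
qm template 9000

# New VMs are then clones of the template, ready in seconds
qm clone 9000 201 --name experiment-vm --full
```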

For remote access and interconnectivity, I configured Tailscale VPN, which allows me to securely connect and manage the cluster from anywhere. Tailscale creates a mesh VPN that lets me connect to my homelab from anywhere without exposing services directly to the internet. It's like having a secure tunnel straight into my network, which is especially useful when I need to access my services while away from home.
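Getting a machine onto the tailnet is a two-command job; roughly (the advertised subnet is a placeholder):

```bash
# Install Tailscale and join the tailnet (prints an auth URL on first run)
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Optional: advertise the homelab LAN so remote devices can reach
# non-Tailscale hosts on the local subnet
sudo tailscale up --advertise-routes=192.168.1.0/24
```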

Service Grouping Strategy

One of the most important decisions when building a homelab is figuring out how to organize your services. Do you run everything on one machine? Do you create a separate VM for every service? The answer, as usual, is somewhere in between.

To keep things organized, I grouped services by category and performance demand.

Cluster Role Division

🧩 Node 1 & 2: Dedicated to Dokploy (self-hosted PaaS) for deployments, configured in a Docker Swarm setup (primary and worker nodes)—combining 3TB of storage and 64GB RAM. These two nodes are connected via Docker Swarm, providing a powerful platform for deploying containerized applications without much hassle.

⚙️ Node 3: Hosts self-contained services (media, productivity, and infrastructure utilities). This node is reserved for self-hosted services and tools that I want to keep separate from the Dokploy environment.
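For reference, the Swarm bootstrap mentioned above boils down to an init on the manager and a join on the worker. A minimal sketch with placeholder addresses and token:

```bash
# On Node 1 (manager): initialize the swarm
docker swarm init --advertise-addr 192.168.1.11

# The init command prints a join token; run it on Node 2 (worker)
docker swarm join --token SWMTKN-1-<token> 192.168.1.11:2377

# On the manager: confirm both nodes are members
docker node ls
```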

Philosophy Behind Isolation

Rather than running everything in a single container stack, I prefer isolating services into dedicated VMs or grouped Docker stacks. This separation provides a layer of isolation—if something goes wrong in my deployment environment, my core services remain unaffected. This makes it easier to scale, back up, or tear down without affecting unrelated services.

I organized my services into clear categories: Infrastructure, Media, and Productivity. This makes it easier to manage resources and understand dependencies. Infrastructure services like DNS and reverse proxies get priority allocation since everything else depends on them. Media services can be more resource-hungry, so they get their own allocation. Productivity tools are somewhere in between.

Resource Allocation Philosophy

I follow a simple principle: allocate conservatively, scale when needed. It's tempting to give every VM maximum resources, but that's wasteful. Instead, I start small and monitor performance. If a service consistently maxes out its allocated resources, I increase them. This approach ensures I'm making the most of my limited hardware.

Key Self-Hosted Services

This is where things get interesting. Let me walk you through the major services running in my homelab and why I chose to self-host them.

1. Infrastructure & Networking (VM 1: 4GB RAM, 2 CPUs)

This VM handles the core network and infrastructure management stack that everything else depends on:

  • Caddy: My primary reverse proxy and web server. Caddy is lightweight and handles SSL automation beautifully. It automatically handles SSL certificates, which means I don't have to manually manage Let's Encrypt renewals. It's simple, fast, and just works.

  • Portainer: A web-based Docker management interface that provides visual Docker management. While I'm comfortable with Docker CLI, Portainer provides a nice visual overview of all my containers, makes it easy to check logs, and simplifies management when I don't want to SSH into a server.

  • AdGuard Home: Network-wide ad blocking and DNS management. Every device on my network benefits from ad blocking without needing individual browser extensions. It also gives me detailed insights into DNS queries and lets me block specific domains.

  • Uptime Kuma: A beautiful uptime monitoring and tracking tool. It pings all my services regularly and alerts me if anything goes down. The dashboard gives me a quick overview of service health.

2. Productivity & Personal Management (VM 2: 4GB RAM, 2 CPUs)

My digital workspace lives here. This VM runs services that help me stay organized and productive:

  • Traggo: Time tracking made simple. I use it to track how much time I spend on different projects and tasks. Unlike cloud-based alternatives, my time tracking data stays on my servers.

  • Paperless-NGX: Document management system that OCRs and organizes all my documents. I scan everything—bills, receipts, important documents—and Paperless makes them searchable and accessible from anywhere.

  • SureFinance: Self-hosted finance tracker for personal finance tracking. I wanted to understand where my money goes without giving a third-party service access to my financial data.

  • Vikunja: Project and task management. It's like Todoist or Trello, but self-hosted. I use it to organize both personal and work projects.

  • Mixpost: Social media scheduler and management tool. Helps me schedule and manage social media posts without relying on expensive SaaS solutions.

All of these integrate into my daily workflow seamlessly, replacing SaaS tools with privacy-respecting self-hosted alternatives. The beauty of self-hosting productivity tools is that you control your data and can customize integrations exactly how you want them.

3. Media (VM 3: 12GB RAM, 4 CPUs)

Media and storage services tend to be resource-intensive, so I gave this VM more horsepower. It runs:

  • Immich: Self-hosted, AI-powered photo and video library. Think Google Photos, but you own your data. It's got machine learning-powered photo recognition, automatic backups from my phone, and a beautiful interface.

  • Nextcloud: My personal cloud storage and collaboration platform for file sync. I use it for file syncing, calendar management, and sharing files with others. It's replaced Dropbox and Google Drive for me entirely.

These services make file management, streaming, and backup seamless within my network. The media VM gets more resources because photo recognition and video transcoding can be demanding. I allocated 12GB of RAM and 4 CPU cores to ensure smooth performance, especially when Immich is processing a batch of photos or Nextcloud is syncing large files.

Easier Deployment with Dokploy: My Self-Hosted PaaS

If there's one decision that transformed my homelab from a collection of manually-managed containers into a streamlined deployment platform, it was setting up Dokploy. This self-hosted Platform-as-a-Service has become the backbone of my entire infrastructure.

Dokploy Usage Monitoring Dashboard

Why Dokploy?

When researching self-hosted PaaS solutions, I had several options: Coolify, CapRover, Dokku, and Dokploy. I chose Dokploy for three main reasons:

  1. Modern, Intuitive UI: Clean, responsive interface that's actually enjoyable to use

  2. Simplicity Without Sacrificing Power: Easy enough for quick deployments, powerful enough for complex applications

  3. Active Development: Recent, actively maintained, with responsive developer support

I seriously considered Coolify, which is also excellent, but Dokploy's UI and overall user experience won me over.

What Makes Dokploy Special?

Dokploy is essentially a self-hosted alternative to Heroku, Vercel, and Railway. Here's what makes it powerful:

GitHub Integration & One-Click Deployments

Dokploy connects directly to my GitHub repositories. Push code, and it automatically builds, deploys, and configures everything. My website, jemeel.dev, goes from code push to production in less than 5 minutes:

  1. Push code to GitHub

  2. Dokploy receives webhook, pulls code, builds Docker image

  3. Deploys to Docker Swarm

  4. Traefik configures routing

  5. Let's Encrypt provisions SSL

  6. Site is live with HTTPS

This is Vercel-level developer experience on hardware I own.

Dokploy Project Dashboard

Docker Compose Support

Native Docker Compose support means I can paste existing compose files directly into Dokploy. For complex multi-container applications (API + frontend + database + cache), one compose file orchestrates everything.
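To make that concrete, here's the shape of compose file I'm describing: a hedged, minimal sketch with placeholder images and credentials rather than one of my real stacks:

```bash
# Hypothetical four-service stack: frontend + API + Postgres + Redis
# (image names, ports, and passwords are placeholders)
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: ghcr.io/example/frontend:latest
    ports:
      - "3000:3000"
  api:
    image: ghcr.io/example/api:latest
    environment:
      DATABASE_URL: postgres://app:changeme@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on: [db, cache]
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  db-data:
EOF
```

Paste that into Dokploy (or point it at a repo containing the file) and the whole stack comes up together.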

Database Provisioning

Dokploy spins up popular databases with a few clicks: PostgreSQL, MySQL, MongoDB, Redis, MariaDB. Need a database? Click "Add Database," choose PostgreSQL, set a password—done. No manual container management or volume configuration.

Database Provisioning in Dokploy

Traefik Integration & SSL Management

Dokploy works seamlessly with Traefik for automatic routing and Let's Encrypt for SSL certificates. Deploy a service with a domain name, and SSL just works—automatically provisioned and renewed.
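Under the hood this is standard Traefik host-based routing plus a Let's Encrypt certificate resolver. Outside of Dokploy, the equivalent wiring would look roughly like this (the service name, network, container port, and resolver name are placeholders; Dokploy normally generates all of this for you):

```bash
# Roughly what gets wired up: a Swarm service routed by hostname,
# with its certificate obtained and renewed by Traefik via Let's Encrypt
docker service create \
  --name portfolio \
  --network traefik-public \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.portfolio.rule=Host(`jemeel.dev`)' \
  --label 'traefik.http.routers.portfolio.entrypoints=websecure' \
  --label 'traefik.http.routers.portfolio.tls.certresolver=letsencrypt' \
  --label 'traefik.http.services.portfolio.loadbalancer.server.port=3000' \
  ghcr.io/example/portfolio:latest
```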

Cloudflare Tunnel Integration

For public-facing services, I use a three-layer architecture:

  • Cloudflare Tunnel: HTTPS, DDoS protection, WAF, bot filtering

  • Traefik: HTTP routing, load balancing, service discovery

  • Dokploy: Application hosting and orchestration

Cloudflare handles external threats and SSL termination, Traefik manages internal routing, and Dokploy hosts the applications.

When This Works Best:

  • Public websites and web applications

  • REST APIs with moderate traffic

  • Personal projects and portfolios

  • Small to medium business applications

  • Content management systems

When This Has Limitations:

  • Latency-sensitive applications (adds 20-50ms)

  • Very high bandwidth needs (4K streaming to multiple users)

  • Applications requiring static IPs for allowlisting

  • Strict compliance requirements needing direct IP control

For most use cases—personal projects, small business apps, moderate-traffic websites—this setup is excellent.

Dokploy vs. Popular PaaS Platforms

Let's compare costs for hosting 10 moderate applications with databases:

Heroku:

  • 10 apps + databases: $160/month (₦240,000) = ₦2,880,000/year

Vercel + Supabase:

  • Pro plan + databases + bandwidth: $85/month (₦127,500) = ₦1,530,000/year

Railway:

  • 10 apps with databases: ~$115/month (₦172,500) = ₦2,070,000/year

Dokploy on My Homelab:

  • Hardware (amortized over 3 years): ₦50,694/month

  • Internet: ₦25,000/month

  • Electricity: ₦5,000/month

  • Total: ₦80,694/month = ₦968,328/year

Once the hardware cost is fully recouped, ongoing costs drop to just ₦30,000/month.

First-year savings: Over ₦1.9 million compared to Heroku. After three years: Over ₦6 million saved. Plus, I own the hardware—it's an asset with resale value.

Feature Comparison:

Dokploy matches or exceeds commercial platforms on most features:

  • ✅ GitHub integration (same as all platforms)

  • ✅ Automatic SSL (same as all platforms)

  • ✅ Database hosting (built-in, unlike Vercel)

  • ✅ Docker Compose support (better than Heroku)

  • ✅ Unlimited deployments (Heroku charges per app)

  • ✅ No sleep mode (Heroku free tier sleeps)

  • ✅ Full backend support (better than Vercel's serverless-only)

  • ❌ Global CDN (Vercel wins here)

  • ❌ Automatic scaling (Railway/Heroku win)

For full-stack applications with traditional databases, Dokploy is far more capable and cost-effective than any commercial PaaS.

My Primary Use Case: Self-Hosting Everything

My philosophy: if it can run in a container, it's going on Dokploy. Every side project, every experiment, every tool I build gets deployed to my homelab.

This approach provides:

  • Financial Freedom: Never worry about deployment costs

  • Learning Opportunities: Every deployment teaches DevOps skills

  • Data Ownership: Complete control over my data

  • Portfolio Advantage: Demonstrates end-to-end technical capability

Backup & Recovery

My backup strategy ensures quick recovery:

  • Nightly: Application and database backups via Proxmox

  • Weekly: Docker volume snapshots

  • Monthly: Off-site sync to external drives

After the SSD incident, this strategy proved its worth—I restored a complete node in under an hour. Having the Docker Swarm across two nodes provides redundancy: if one node fails, services continue on the other.
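The nightly tier is just Proxmox's scheduled backup jobs; run by hand, the per-VM equivalent looks roughly like this (the VM ID and storage names are placeholders):

```bash
# Snapshot-mode backup of a VM to the shared NFS storage, compressed with zstd
vzdump 201 --storage shared-nfs --mode snapshot --compress zstd

# If a node or disk dies, restore the latest archive onto any other node
qmrestore /mnt/pve/shared-nfs/dump/vzdump-qemu-201-*.vma.zst 201 --storage local-lvm
```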

The Developer Experience

What I love most is the workflow simplicity:

  1. Write code

  2. Push to GitHub

  3. Watch it automatically deploy

  4. Done

No SSH-ing into servers, no manual Docker commands, no editing configs. Just pure development focus. It's Heroku-level simplicity on infrastructure I own.

Dokploy transformed my homelab from a technical experiment into a practical platform that genuinely competes with commercial PaaS offerings. It's the difference between having servers and having infrastructure.

Networking

Networking is often the most challenging part of a homelab, but it's also the most critical. You need secure, reliable access to your services both from inside and outside your network. My network stack is designed around both security and accessibility.

Cloudflare Tunnel

For external access, I route external traffic through Cloudflare Tunnel. This is a brilliant solution that eliminates the need to expose my home IP address or open ports on my router. The tunnel creates a secure connection from my homelab to Cloudflare's edge network, and Cloudflare handles all the external traffic.

The benefits are significant and include:

  • DDoS protection: Cloudflare automatically protects against attacks

  • Web Application Firewall (WAF): Filters malicious traffic before it reaches my network

  • SSL/TLS encryption: End-to-end encryption without managing certificates manually

  • Rate limiting: Prevents abuse and resource exhaustion (configurable)

  • Bot protection: Blocks automated scrapers and attack traffic

  • Zero Trust Access: Optional additional security layer for sensitive services

This setup is particularly important now that my website is publicly accessible. Without Cloudflare Tunnel, I'd be directly exposing my home network to the internet, which is a security nightmare. Instead, all traffic flows through Cloudflare's infrastructure first, providing multiple layers of protection.
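Standing the tunnel up is mostly a one-time job. A hedged sketch of the steps, with the tunnel name, hostname, and internal service address as placeholders:

```bash
# Authenticate, create a named tunnel, and point a hostname at it
cloudflared tunnel login
cloudflared tunnel create homelab
cloudflared tunnel route dns homelab jemeel.dev

# Minimal config: forward traffic for the hostname to the internal
# reverse proxy; anything unmatched gets a 404
cat > ~/.cloudflared/config.yml <<'EOF'
tunnel: homelab
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: jemeel.dev
    service: http://192.168.1.11:80
  - service: http_status:404
EOF

# Run it in the foreground, or install it as a system service
cloudflared tunnel run homelab
```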

Reverse Proxies

Internally, I run both Traefik and Caddy as reverse proxies, depending on the use case, handling SSL certificates and dynamic service discovery. Traefik excels at dynamic service discovery with Docker — it automatically detects new containers and configures routing. Caddy is simpler and handles static routing beautifully with automatic HTTPS.

This combination gives me a clean, domain-based structure like:

paperless.intellect.lab

portainer.intellect.lab

immich.intellect.lab

jemeel.dev

Between Cloudflare Tunnel, Traefik, and Caddy, I have a robust networking setup that's secure, performant, and relatively easy to manage.
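On the Caddy side, each internal hostname is only a few lines of config. A minimal sketch, assuming internal DNS (for example, AdGuard Home rewrites) points these names at the proxy host; the backend addresses and ports are placeholders:

```bash
# Two internal services behind Caddy; 'tls internal' issues certificates
# from Caddy's local CA since these names never leave the LAN
cat > /etc/caddy/Caddyfile <<'EOF'
paperless.intellect.lab {
    tls internal
    reverse_proxy 192.168.1.30:8000
}

immich.intellect.lab {
    tls internal
    reverse_proxy 192.168.1.32:2283
}
EOF

systemctl reload caddy
```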

Performance, Challenges & Lessons Learned

Building and maintaining a homelab isn't all smooth sailing. Balancing resources across multiple VMs and containers took some trial and error. Here are some of the challenges I've faced and what I learned from them.

Resource Distribution

With three nodes and limited resources, careful resource distribution is essential. 32GB per node is adequate for most workloads, but RAM can bottleneck quickly when running multiple containers. I constantly monitor CPU, RAM, and disk usage to ensure no single VM or container is starving the others. Proxmox's built-in monitoring makes this easier, but it still requires regular attention.

Network Limitations

The 50Mbps connection (~6.25 MB/s) is the biggest bottleneck in my setup and caps throughput, so I optimized caching and limited high-traffic services. While it's sufficient for most use cases, there are clear limitations:

Performance Envelope:

  • Throughput: ~6.25 MB/s maximum

  • Static sites: ~100 concurrent users, ~12 requests per second

  • API-heavy applications: ~300-500 concurrent users, ~100 requests per second

  • Video streaming: Best kept local or limited quality

Capacity Examples:

Scenario: Static website serving

  • Page size: 500 KB

  • Concurrent users: ~100

  • Requests per second: ~12

Scenario: API-heavy application

  • Response size: 50 KB average

  • Concurrent users: ~300-500

  • Requests per second: ~100

Despite limitations, performance remains smooth for my use cases. My website loads quickly for visitors, and I haven't experienced any significant slowdowns even with moderate traffic. These aren't huge numbers, but for personal use and small-scale services, they're perfectly adequate. The key is understanding the limitations and designing services accordingly.
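Those figures fall straight out of the link math. A quick back-of-envelope check, assuming the full uplink is available for serving and ignoring protocol overhead:

```bash
# 50 Mbps uplink expressed in bytes per second
LINK_BYTES_PER_SEC=$(( 50 * 1000 * 1000 / 8 ))   # 6,250,000 B/s ≈ 6.25 MB/s

# 500 KB static page  -> saturation at roughly 12 requests/second
echo "static pages/s:  $(( LINK_BYTES_PER_SEC / 500000 ))"

# 50 KB API response  -> saturation at roughly 125 requests/second
echo "API responses/s: $(( LINK_BYTES_PER_SEC / 50000 ))"
```

The ~100 requests/second I quote for APIs is simply that 125/s ceiling with some headroom left for overhead and other traffic.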

However, as I scale up and plan to self-host more projects, the upgrade I'm negotiating with my ISP will provide much more breathing room. With 10x the bandwidth, I could comfortably handle:

  • 1,000+ concurrent users on static sites

  • Significantly improved video streaming quality

  • Faster backup and replication

  • Multiple high-traffic services running simultaneously

Maintenance and Troubleshooting

Regular maintenance is crucial. I've established a routine:

  • Weekly: Check service health, review logs for errors

  • Monthly: Update all services and base images, verify backups

  • Quarterly: Full system review, optimize resource allocation

When things break (and they will), having good documentation and backups is essential. I maintain a simple wiki documenting every service, its configuration, and common troubleshooting steps. This saved my life during the SSD incident—I could rebuild everything relatively quickly because I had documented the entire setup.

Future Improvements

A homelab is never truly finished. There's always something to improve, optimize, or add. Here are my plans for the near future:

Monitoring Dashboards: I plan to set up proper monitoring with Prometheus + Grafana. While Uptime Kuma tells me if services are up or down, I want deeper insights into performance metrics, resource usage trends, and potential bottlenecks. Prometheus will collect metrics from all services, and Grafana will visualize them in beautiful dashboards.
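As a likely starting point, the standard Docker pairing would look something like this (a hedged sketch; persistent volumes, Prometheus scrape configuration, and per-service exporters are left out for brevity):

```bash
# Prometheus collects metrics; Grafana visualizes them
docker run -d --name prometheus -p 9090:9090 prom/prometheus
docker run -d --name grafana -p 3000:3000 grafana/grafana

# Per-host metrics come from node_exporter running on each node
docker run -d --name node-exporter --net host --pid host \
  -v /:/host:ro,rslave quay.io/prometheus/node-exporter:latest --path.rootfs=/host
```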

Automating Cluster Scaling: Currently, my high availability is somewhat manual. I want to implement more automated failover routines and mechanisms so that if a node goes down, services automatically migrate to healthy nodes without manual intervention. This requires more sophisticated orchestration, but it's worth the effort for critical services. I'm also exploring ways to automate scaling based on load.

🌐 Bandwidth Upgrade: I'm actively working with my ISP to secure a dedicated connection. This increase in bandwidth will enable me to:

  • Host more public-facing projects without performance concerns

  • Improve response times for external users

  • Handle significantly more concurrent traffic

  • Stream high-quality media without buffering

🖥️ Expanding the Cluster: As my workloads increase, I'm considering adding a few more Proxmox nodes to the cluster. I plan to self-host ALL my side projects moving forward — no more paying for Vercel, Heroku, or similar platforms (at least until there's a real justification) when I have perfectly capable infrastructure at home. Each new project is an opportunity to learn something new while saving money on hosting costs.

🎓 AWS Local Mimic for Certifications: One of my more ambitious plans is to create a local AWS-like environment for hands-on learning. I want to build something that mimics AWS services locally—think LocalStack on steroids. This would allow me to:

  • Practice for AWS certifications without incurring costs

  • Understand AWS architecture at a deeper level

  • Experiment with complex multi-service architectures

  • Build skills that transfer directly to professional cloud environments

The goal isn't to perfectly replicate AWS, but to create enough similarity that I can practice real-world scenarios and prepare for professional certifications. Imagine setting up S3-like object storage, EC2-like compute instances, RDS-like databases, and Lambda-like serverless functions—all running locally on my homelab. It's an ambitious project, but that's exactly why I am investing in the homelab.

Conclusion

Building this homelab has been one of the most rewarding technical projects I've ever undertaken. It started as a simple desire to have cheap, reliable computing power for side projects, but it evolved into a comprehensive learning experience covering virtualization, networking, containerization, and infrastructure management. It taught me more about systems architecture, resource planning, and automation than any course could.

What have I learned? Quite a lot, actually.

First, infrastructure work is humbling. Things will break, often at the worst possible time. Whether it's accidentally destroying an SSD or dealing with unexpected service failures, you learn to troubleshoot under pressure and develop a healthy respect for backup systems.

Second, start simple and don’t despise starting from scratch. Try new things, read about the technologies, map out a plan, and tick off the checklist. Don't let the complexity intimidate you because every expert started as a beginner who kept going.

Third, self-hosting is empowering. There's something deeply satisfying about using services you built and control. No surprise pricing changes, no arbitrary feature removals, no worrying about a service shutting down. You're in control. When someone visits jemeel.dev, they're connecting to hardware sitting in my room, configured and maintained by me. That's incredible.

The economics make sense. For roughly ₦1.8 million upfront and ₦30,000 monthly, I have infrastructure that would cost ₦1+ million per month on AWS or GCP. The homelab paid for itself in three months and will save me millions annually. Plus, I own the hardware; it's an asset that retains value.

The homelab has dramatically improved my daily productivity and experimentation workflow. I can spin up test environments in minutes, deploy new ideas without worrying about cloud costs, and learn by doing rather than just reading documentation. More importantly, this project bridged the gap between theoretical knowledge and practical implementation. Reading about the tech is one thing; actually configuring it, troubleshooting networking issues, recovering from failures, and yes, even breaking SSDs, is something else entirely.

Now, every time I deploy a new app or spin up a container, I'm reminded of the beauty of learning by doing. This homelab isn't just a cluster of PCs —  it's my digital playground, my sandbox for ideas, and a daily reminder that curiosity is the best teacher.

If you're considering building your own homelab, my advice is simple: just start. You don't need expensive hardware or a perfect plan. Start with what you have, run a few services, break things (hopefully not your SSDs), fix them, and keep learning. The journey is the reward.

And remember, whether you're using old Dell PCs like me or something fancier, the principles remain the same. Learn, experiment, iterate, and most importantly, have fun building your digital playground. The mistakes you make will teach you more than any tutorial ever could.

Keep going, keep building, and enjoy the journey.
