After months of procrastination, I finally started building my home lab. Day 1 was all about establishing the foundation: organizing the project structure, setting up Docker Compose, and deploying a complete monitoring stack with Prometheus, Node Exporter, and Grafana.
The Hardware Reality Check
I started with an HP Stream (4GB RAM), but quickly realized it wasn't going to cut it. Container operations were painfully slow: startups took two to three minutes, and the system was constantly swapping. After struggling for a bit, I switched to my main PC with 8GB RAM and a faster processor. The difference was immediate and dramatic: what had taken minutes now took seconds.
Lesson learned: Hardware constraints directly impact your development velocity. Don't underestimate the importance of adequate resources when building infrastructure.
Day 1 Project Structure
Before jumping into Docker, I organized the project directory to ensure scalability and maintainability. Here's what I created:
My organized homelab directory structure - separation of concerns from day one
The directory structure separates concerns into distinct layers:
- monitoring/ - Prometheus and Grafana configurations with provisioning
- cicd/ - Git server (Gitea) and CI/CD automation (Drone) for future deployment
- apps/ - Application deployments including the sample-app
- docs/ - Documentation and screenshots
- scripts/ - Utility scripts for setup and maintenance
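Laid out as a tree, the structure looks roughly like this (the files shown inside each directory are illustrative, not an exact listing):

```text
homelab/
├── monitoring/
│   ├── prometheus/       # prometheus.yml and rule files
│   └── grafana/          # provisioning/ for data sources and dashboards
├── cicd/                 # Gitea and Drone configuration
├── apps/
│   └── sample-app/
├── docs/                 # documentation and screenshots
└── scripts/              # setup and maintenance scripts
```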
This structure makes it easy to:
- Add new services without cluttering the root directory
- Version control configurations separately from code
- Scale the project as it grows
- Onboard new team members quickly
Understanding the Monitoring Stack
My Day 1 setup consists of three core components working together:
Prometheus is the time-series database that collects metrics. It scrapes endpoints at regular intervals (I set mine to 15 seconds) and stores the data. Think of it as the "data collector" of the stack.
Node Exporter is a lightweight agent that runs on the system and exposes hardware and OS metrics—CPU usage, memory consumption, disk I/O, network traffic, and more. It exposes these metrics on port 9100 in a format that Prometheus understands.
Grafana is the visualization layer. It connects to Prometheus as a data source and allows you to create beautiful dashboards, set up alerts, and explore metrics interactively. It's the "pretty face" of your monitoring system.
Together, they form a complete monitoring pipeline: Node Exporter collects metrics → Prometheus stores them → Grafana visualizes them.
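For context, the metrics Node Exporter serves on port 9100 are plain text in the Prometheus exposition format. A couple of representative lines (the values here are illustrative):

```text
# HELP node_memory_MemAvailable_bytes Memory information field MemAvailable_bytes.
# TYPE node_memory_MemAvailable_bytes gauge
node_memory_MemAvailable_bytes 2.348e+09
node_cpu_seconds_total{cpu="0",mode="idle"} 81234.57
```

Prometheus parses this format directly on every scrape, which is why any service that exposes a `/metrics` endpoint in this shape can be added as a target.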
Docker Compose Architecture
I used Docker Compose to orchestrate all three services. Here's what each service does:
Prometheus Service:
- Runs the official Prometheus image
- Exposes port 9090 for the web UI
- Mounts the prometheus.yml configuration file to define scrape targets
- Uses a named volume to persist time-series data across container restarts
- Joins a custom "monitoring" network for service-to-service communication
Node Exporter Service:
- Runs the official Node Exporter image
- Exposes port 9100 where metrics are available
- Mounts the host's /proc and /sys directories to access system metrics
- Also joins the monitoring network so Prometheus can scrape it
Grafana Service:
- Runs the official Grafana image
- Exposes port 3000 for the web UI
- Uses environment variables to set the admin password
- Mounts a volume for persistent dashboard and configuration data
- Depends on Prometheus being available before it starts
- Joins the monitoring network
The services communicate through a custom Docker bridge network called "monitoring." This means Prometheus can reach Node Exporter at http://node-exporter:9100 and Grafana can reach Prometheus at http://prometheus:9090—all without exposing these services to the host network unless necessary.
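A minimal docker-compose.yml matching the description above might look like this (image tags, host paths, and the admin password are placeholders, not my actual config):

```yaml
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./monitoring/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus-data:/prometheus      # persists time-series data
    networks:
      - monitoring

  node-exporter:
    image: prom/node-exporter:latest
    ports:
      - "9100:9100"
    volumes:
      - /proc:/host/proc:ro              # host metrics, read-only
      - /sys:/host/sys:ro
    command:
      - "--path.procfs=/host/proc"
      - "--path.sysfs=/host/sys"
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme
    volumes:
      - grafana-data:/var/lib/grafana    # persists dashboards and settings
    depends_on:
      - prometheus
    networks:
      - monitoring

volumes:
  prometheus-data:
  grafana-data:

networks:
  monitoring:
    driver: bridge
```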
All 7 containers running smoothly - monitoring, Git server, and CI/CD infrastructure
Prometheus Configuration
The prometheus.yml file defines what metrics to collect and from where. I configured two scrape jobs:
The first job targets Prometheus itself on port 9090, allowing it to collect its own internal metrics. This is useful for monitoring the health of Prometheus itself.
The second job targets Node Exporter on port 9100. This is where all the system metrics come from—CPU, memory, disk, network, and more. Prometheus scrapes this endpoint every 15 seconds and stores the data.
The global configuration sets the scrape interval (how often to collect metrics) and evaluation interval (how often to evaluate alert rules). I kept both at 15 seconds for a good balance between data granularity and resource usage.
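Putting that together, the prometheus.yml described above comes out to roughly this (the job names are my guesses at the conventional ones):

```yaml
global:
  scrape_interval: 15s       # how often to collect metrics
  evaluation_interval: 15s   # how often to evaluate alert rules

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]   # Prometheus scraping itself

  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-exporter:9100"]  # resolved via the Docker network
```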
Prometheus successfully scraping metrics from both itself and Node Exporter - all targets UP and healthy
Setting Up Grafana
After the containers started, I accessed Grafana on port 3000 and performed the initial setup:
First, I added Prometheus as a data source. The key here is using the service name from Docker Compose—http://prometheus:9090—rather than localhost. This works because all services are on the same Docker network.
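Since the monitoring/ directory includes Grafana provisioning, the same data source can also be declared in a file instead of clicked through the UI. A sketch (the filename and path under provisioning/ are illustrative):

```yaml
# e.g. monitoring/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090   # Docker Compose service name, not localhost
    isDefault: true
```

Grafana loads files like this at startup, so the data source survives container rebuilds without manual setup.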
Next, I created my first dashboard. I added panels to visualize:
- CPU usage over time (using the node_cpu_seconds_total metric)
- Available memory (using node_memory_MemAvailable_bytes)
- Disk I/O operations (using node_disk_io_time_seconds_total)
- Network traffic (using node_network_transmit_bytes_total)
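Most of these metrics are monotonically increasing counters, so the panels graph their rate() rather than the raw values. The panel queries look something like this (sketches, not my exact dashboard JSON; the 5m windows are a matter of taste):

```promql
# CPU usage %, averaged across all cores
100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Available memory in bytes (a gauge, usable as-is)
node_memory_MemAvailable_bytes

# Fraction of time the disk was busy with I/O
rate(node_disk_io_time_seconds_total[5m])

# Outbound network traffic in bytes per second
rate(node_network_transmit_bytes_total[5m])
```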
I set the dashboard to auto-refresh every 30 seconds so I could see real-time updates.
Real-time system monitoring with Grafana - CPU at 21%, RAM at 68.8%, and 6.9 days uptime. Beautiful visualization of all critical metrics.
Why This Stack Matters
This monitoring foundation is crucial because:
Visibility - You can't optimize what you don't measure. With Prometheus and Grafana, I have complete visibility into system performance.
Alerting - Grafana allows me to set up alerts that trigger when metrics cross thresholds. This is essential for production systems.
Scalability - This stack is designed to grow. As I add more services to my home lab, I can add new scrape targets to Prometheus without changing the core setup.
Learning - Building this on Day 1 teaches fundamental DevOps concepts: containerization, networking, time-series databases, and visualization.
Current System Status
Looking at my dashboard, I can see:
Resource Usage:
- RAM: 5.5GB / 7GB used (68.8%)
- SWAP: 2.3GB / 16GB used (14.6%)
- CPU: 4 cores averaging 21.3% load
- Root FS: 233 GiB total, 81.8% utilized
- Uptime: 6.9 days
Running Services:
- Grafana (visualization)
- Prometheus (metrics database)
- Node Exporter (system metrics)
- Gitea (self-hosted Git server)
- Gitea-DB (PostgreSQL database)
- Drone Server (CI/CD orchestrator)
- Drone Runner (build executor)
All containers healthy and running smoothly! ✅
Key Takeaways from Day 1
- Hardware matters - The jump from 4GB to 8GB made a massive difference
- Organization from the start - A clean directory structure saves headaches later
- Docker Compose is powerful - Orchestrating multiple services with a single YAML file is incredibly efficient
- Monitoring from Day 1 - Having visibility into metrics from the beginning helps identify bottlenecks early
- Docker networking - Custom networks simplify service communication and keep things clean
What's Next?
Day 1 is just the foundation. My immediate next steps include:
Day 2 - Git Server & CI/CD:
- Configure Gitea webhooks for automated builds
- Create complete CI/CD pipelines with Drone
- Deploy sample applications through automated pipelines
Week 1 Goals:
- Fine-tuning Prometheus scrape intervals and retention policies
- Creating more sophisticated Grafana dashboards with custom queries
- Setting up alerting rules and notification channels
- Adding more exporters (PostgreSQL exporter, etc.)
Future Plans:
- Implementing Loki for log aggregation
- Moving to Kubernetes on Oracle Cloud Free Tier
- Infrastructure as Code with Terraform and Ansible
- Service mesh implementation
Lessons Learned
1. Start with Adequate Hardware
Don't try to run production-grade infrastructure on inadequate hardware. The difference between 4GB and 8GB RAM wasn't just performance—it was the difference between frustration and productivity.
2. Monitoring First, Applications Second
By establishing monitoring before deploying applications, I have baseline metrics and can see exactly how each new service impacts system resources.
3. Docker Compose for Local Development
Docker Compose made it trivial to spin up complex multi-container applications. What would have taken hours to configure manually took minutes with docker-compose.yml.
4. Document Everything
I documented every step in my README and took screenshots throughout. Future me will thank present me when I need to recreate this or explain it in interviews.
For Others Starting Out
If you want to build a similar homelab:
You Don't Need Expensive Hardware
- 8GB RAM is sufficient for a robust local setup
- You can start with just monitoring and add services incrementally
- Cloud free tiers (Oracle, AWS, GCP) are there when you outgrow local resources
Start with Monitoring
- Don't jump straight to complex applications
- Build observability first
- You'll thank yourself when things break
Docker Compose is Your Friend
- Learn it early, use it often
- Makes complex setups simple
- Infrastructure as code from day one
One Service at a Time
- Don't try to deploy everything at once
- Get one thing working perfectly
- Then add the next piece
Conclusion
Building a home lab teaches you so much about infrastructure, containerization, and monitoring. Day 1 might seem simple—just three services in Docker Compose—but it's the foundation for everything that comes next.
The key takeaway: start simple, measure everything, and iterate. With the right monitoring in place from Day 1, you'll have the visibility needed to make informed decisions about what to build next.
Looking at my Grafana dashboard now, seeing all those metrics flowing in real-time, I know this foundation will serve me well as I continue building out the homelab.
Tomorrow: Adding Git server webhooks and creating my first automated CI/CD pipeline!