Monitoring isn't just about knowing when things break; it's about understanding how your systems behave under real-world conditions and making data-driven decisions before issues impact users. In this comprehensive guide, I will walk you through building a robust monitoring solution for Express.js applications using the industry-standard Prometheus ecosystem.
Why Monitoring Matters More Than You Think
During my journey into DevOps practices, I discovered that monitoring is often treated as an afterthought: teams build amazing applications, deploy them to production through a CI/CD pipeline, and only then set up monitoring as a reactive measure. I believe that on the path to becoming a DevOps engineer, the next critical step after learning containerization should be gaining hands-on experience setting up monitoring with open-source toolkits for system optimization.
This approach transforms monitoring from a reactive necessity into a proactive strategy that enables better system understanding and performance optimization.
The Monitoring Stack Architecture
I chose the Prometheus ecosystem because it has become the de facto standard for container-native monitoring. Here's why this combination works so well:
Prometheus: The Storage for Time-Series Data
Prometheus serves as the storage layer for time-series data: it scrapes metrics from configured targets over HTTP, and it also exposes its own metrics at http://localhost:9090/metrics (which it scrapes itself in this setup). The time-series database design excels at storing metrics with timestamps, enabling powerful queries and aggregations. When you need to understand how your application performed during last week's traffic spike, Prometheus gives you that historical context.
In your `compose.monitoring.yml` file, you can configure data retention by adding `--storage.tsdb.retention.time=30d` to the `command` section to maintain 30 days of historical data for trend analysis.
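For example, the Prometheus service's `command` section might look like this (the retention flag is the optional addition; everything else matches the Compose file built later in this guide):

```yaml
prometheus:
  image: prom/prometheus
  command:
    - --config.file=/etc/prometheus/prometheus.yml
    - --storage.tsdb.retention.time=30d   # keep 30 days of metrics
```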
Node Exporter: The Metrics Collector
Node Exporter bridges the gap between application metrics and infrastructure health. It collects and exposes hardware statistics—CPU usage, memory consumption, disk I/O, and network activity—in Prometheus format. This integration means you can correlate application slowdowns with system resource constraints, making troubleshooting significantly more effective.
The collector operates by reading system files and translating them into Prometheus-compatible metrics, providing comprehensive visibility into your infrastructure's health.
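To give a sense of that format, here are a few illustrative lines of the kind of output Node Exporter serves at its `/metrics` endpoint on a Linux host (the metric names are standard; the values are made up for illustration):

```
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 81432.27
node_cpu_seconds_total{cpu="0",mode="user"} 912.44
# HELP node_memory_MemAvailable_bytes Memory information field MemAvailable_bytes.
# TYPE node_memory_MemAvailable_bytes gauge
node_memory_MemAvailable_bytes 6.442450944e+09
```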
Grafana: From Data to Insights
Raw metrics are meaningless without proper visualization. Grafana transforms Prometheus data into meaningful dashboards that tell the story of your application's health. The ability to create alerts based on metric thresholds means your monitoring system becomes proactive rather than reactive.
With Grafana's extensive dashboard library, you can leverage pre-built visualizations or create custom dashboards tailored to your specific monitoring needs.
Implementation
1. Repository Setup
```shell
# Clone the project repository
git clone https://github.com/noruwa03/express-js-nginx-lb
cd express-js-nginx-lb
```
2. Application Configuration
```shell
# Navigate to application directory
cd app

# Install Node.js dependencies
npm install
```
3. Environment Configuration
Create a `.env` file in the `app/` directory with the following variables:
```shell
# Port
PORT=8080

# Database Configuration for Neon DB or AWS RDS
DB_CONN_LINK=postgresql://username:password@host:port/database
```
4. Local Development (Optional)
```shell
# Start development server with hot reload to test
npm run dev
```
5. Production Deployment
```shell
# Return to project root
cd ..
```

Add a name to `compose.yml`:

```yaml
name: monitoring-system
```

In `compose.yml`, change the ports of the nginx service from `3000:80` to `4000:80` (port 3000 will be used by the Grafana service):

```yaml
nginx:
  ports:
    - 4000:80
```

Add an `include` option for `compose.monitoring.yml`:

```yaml
include:
  - compose.monitoring.yml
```
Create a `compose.monitoring.yml` file:
```yaml
services:
  node-exporter:
    image: prom/node-exporter
    container_name: node-exporter
    command:
      - --path.procfs=/host/proc
      - --path.rootfs=/rootfs
      - --path.sysfs=/host/sys
      - --collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc|run/user)($|/)
      - --collector.filesystem.fs-types-exclude=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    ports:
      - 9100:9100
    networks:
      - monitoring

  prometheus:
    image: prom/prometheus
    container_name: prometheus
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      # - --storage.tsdb.retention.time=30d
    networks:
      - monitoring
    depends_on:
      - node-exporter

  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
    networks:
      - monitoring
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin
      - GF_SERVER_ROOT_URL=http://localhost:3000/grafana/
      - GF_SERVER_SERVE_FROM_SUB_PATH=true
    volumes:
      - grafana-storage:/var/lib/grafana
    depends_on:
      - prometheus

networks:
  monitoring:
    driver: bridge

volumes:
  grafana-storage:
```
Add a `monitoring/` directory and create a `prometheus.yml` file inside it (the filename must match the path mounted into the Prometheus container):
```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: node-exporter
    static_configs:
      - targets:
          - node-exporter:9100
  - job_name: prometheus
    static_configs:
      - targets:
          - prometheus:9090
```
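If you later instrument the Express app itself (for example with the `prom-client` package) so that it exposes its own `/metrics` endpoint, you could add a third scrape job. The job name and target below are hypothetical and assume the app's Compose service is reachable as `app` on port 8080:

```yaml
scrape_configs:
  # ... existing node-exporter and prometheus jobs ...
  - job_name: express-app        # hypothetical job for the instrumented app
    static_configs:
      - targets:
          - app:8080             # assumes a service named "app" exposing /metrics
```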
Architecture Overview
Application Structure (Before Monitoring)
```
express-js-nginx-lb/
├── app/              # Express app source code
├── nginx/            # Load balancer config
├── compose.yml       # Core application services
└── README.md
```
Enhanced Structure (With Monitoring Stack)
```
express-js-nginx-lb/
├── app/                      # [Updated] .env file added with PORT and DB_CONN_LINK
├── nginx/                    # [Unchanged] Load balancer config
├── monitoring/               # [New] Monitoring configurations
│   └── prometheus.yml        # Metrics collection rules
├── compose.monitoring.yml    # [New] Monitoring stack services
├── compose.yml               # [Updated] Core application services
└── README.md
```
```shell
# Launch the complete monitoring stack
docker compose up
```
Accessing Your Monitoring Stack
Once your stack is running, you can access the different components through these endpoints:
Prometheus Query Interface
Access Prometheus at http://localhost:9090/query to explore your metrics and run PromQL queries. The interface allows you to:
- Execute queries to retrieve specific metrics
- Visualize data trends over time
- Monitor the health of your scrape targets
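For example, two common PromQL queries against standard Node Exporter metrics on Linux hosts:

```promql
# Average CPU utilization (%) over the last 5 minutes
100 - (avg(rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Available memory as a percentage of total
100 * node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes
```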
Prometheus Targets Monitoring
Navigate to the targets page at http://localhost:9090/targets to verify that Prometheus is successfully scraping metrics from Node Exporter and the other configured endpoints. This view shows the status of each target and helps troubleshoot connectivity issues.
Grafana Dashboard Configuration
Initial Login
Access Grafana at http://localhost:3000 using the default credentials:
- Username: admin
- Password: admin
These credentials are configured in your Docker Compose file:
```yaml
environment:
  - GF_SECURITY_ADMIN_USER=admin
  - GF_SECURITY_ADMIN_PASSWORD=admin
```
Adding Prometheus as a Data Source
- Navigate to Configuration → Data Sources
- Click "Add data source"
- Select Prometheus
- Set the URL to `http://prometheus:9090` (using the Docker service name)
- Click "Save & Test" to verify the connection
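As an alternative to clicking through the UI, Grafana can provision the data source from a file at startup. A minimal sketch using Grafana's data source provisioning format — this file would need to be mounted into the container at `/etc/grafana/provisioning/datasources/`, a volume mapping not included in the Compose file above:

```yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```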
Importing Pre-built Dashboards
Grafana's community provides excellent pre-built dashboards:
- Visit https://grafana.com/grafana/dashboards/
- Search for "Node Exporter Full" dashboard
- Copy the dashboard ID (1860)
- In Grafana, navigate to Dashboards → Import
- Paste the ID and click "Load"
- Select Prometheus as your data source
- Click "Import" to complete the setup
This dashboard provides comprehensive system monitoring including CPU, memory, disk, and network metrics with professional visualizations.
Key Takeaway
The combination of Prometheus, Node Exporter, and Grafana provides a solid foundation that scales from development environments to enterprise production systems. By implementing this stack early in your development process, you build reliability practices into your application lifecycle rather than retrofitting them later.
Future Enhancements
Advanced Monitoring
Implement container-specific monitoring using cAdvisor for granular insights into Docker container resource usage, complementing the current system-level monitoring approach.
Centralized Logging
Integrate OpenTelemetry or Promtail as the log collector and Loki for log aggregation, then enhance Grafana dashboards with log correlation capabilities for comprehensive observability.
Distributed Tracing
Add Jaeger or Zipkin instrumentation to trace request flows through the architecture, with Tempo as the trace backend and Grafana for visualization, enabling detailed performance analysis and bottleneck identification across service boundaries.
Production Deployment
Implement GitHub Actions CI/CD pipeline for automated testing, building, and deployment to cloud infrastructure (AWS ECS, Google Cloud Run, or Azure Container Instances).
You can find the full project for this article on GitHub.