In this tutorial, we'll build a tiny, production-style Go service that you can deploy, observe, and ship via CI/CD.
💡 Why this project?
- Real-world skills: containerization, observability, CI/CD, and documentation.
- Interview-friendly: shows you understand build → ship → monitor.
- Lightweight: a small Go service with `/health` and `/metrics` that we can extend later.
🎯 What we'll build
A Go service exposing:
- `GET /health`: returns `{"status":"OK"}`
- `GET /metrics`: Prometheus metrics (custom counter & latency histogram)

Plus:
- A Dockerized app pushed to DockerHub
- A Prometheus + Grafana stack to scrape & visualize metrics
- A GitHub Actions pipeline to test, build, and publish the image
⚙️ Prereqs:
- Docker & Docker Compose.
- GitHub account (for CI/CD).
- DockerHub account (to publish the image).
- Go 1.24.x
🧪 Run it now
(Replace negin007 with your DockerHub username if you fork this.)
docker pull negin007/mini-monitoring-app:latest
docker run -d -p 8080:8080 negin007/mini-monitoring-app:latest
Test it:
curl http://localhost:8080/health
curl http://localhost:8080/metrics
🧱 Project structure
mini-monitoring-app/
├── main.go
├── go.mod
├── Dockerfile
├── .dockerignore
├── docker-compose.yml
├── prometheus.yml
├── .github/
│   └── workflows/ci.yml
└── README.md
🪜 The Steps
1. Write the Go service
A minimal `main.go` with `/health` and `/metrics` endpoints. (Full file: main.go)
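A minimal sketch of what that `main.go` might contain, assuming the standard `promhttp` handler for the metrics endpoint (handler names here are illustrative; the full file in the repo is the reference):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// healthHandler answers GET /health with a small JSON body.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "OK"})
}

func main() {
	http.HandleFunc("/health", healthHandler)
	http.Handle("/metrics", promhttp.Handler()) // Prometheus exposition endpoint
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```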
2. Instrumentation
We expose two custom metrics:
- Counter `health_requests_total` → ever-increasing count of health checks.
- Histogram `response_latency_seconds` → measures latency in buckets.
Why a histogram? Because you can calculate percentiles like P95 using Prometheus:
histogram_quantile(0.95, sum(rate(response_latency_seconds_bucket[1m])) by (le))
- `rate(...[1m])` → per-second rate over 1 minute
- `sum(...) by (le)` → aggregate all instances by bucket (le = "less than or equal")
- `histogram_quantile(0.95, ...)` → compute the 95th percentile latency (P95)
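A sketch of how these two metrics might be declared and recorded with `prometheus/client_golang` (variable names and the default buckets are assumptions; the actual instrumentation lives in `main.go`):

```go
package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// health_requests_total: ever-increasing count of health checks.
	healthRequests = promauto.NewCounter(prometheus.CounterOpts{
		Name: "health_requests_total",
		Help: "Total number of health check requests.",
	})

	// response_latency_seconds: latency observations, bucketed so Prometheus
	// can compute quantiles like P95 on the server side.
	responseLatency = promauto.NewHistogram(prometheus.HistogramOpts{
		Name:    "response_latency_seconds",
		Help:    "Request latency in seconds.",
		Buckets: prometheus.DefBuckets, // default buckets; tune to your latency range
	})
)

// instrumentedHealth records both metrics around the health check.
func instrumentedHealth(w http.ResponseWriter, r *http.Request) {
	start := time.Now()
	healthRequests.Inc()
	w.Header().Set("Content-Type", "application/json")
	w.Write([]byte(`{"status":"OK"}`))
	responseLatency.Observe(time.Since(start).Seconds())
}
```

Because `promauto` registers the metrics with the default registry, the `promhttp` handler on `/metrics` exposes them without any extra wiring.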
3. Dockerize the service
Dockerfile: (Full file: Dockerfile)
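As a rough guide, a multi-stage Dockerfile for a Go service tends to look like this (base image tags and paths are assumptions; the Dockerfile in the repo is authoritative):

```dockerfile
# Build stage: compile a static binary with the Go toolchain
FROM golang:1.24 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /mini-monitoring-app .

# Runtime stage: ship only the binary in a small base image
FROM alpine:3.20
COPY --from=builder /mini-monitoring-app /usr/local/bin/mini-monitoring-app
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/mini-monitoring-app"]
```

The multi-stage build keeps the final image small because the Go toolchain never leaves the builder stage.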
Build & run locally:
docker build -t yourname/mini-monitoring-app:local .
docker run -p 8080:8080 yourname/mini-monitoring-app:local
4. Validate the endpoints locally:
curl http://localhost:8080/health
curl http://localhost:8080/metrics
The health endpoint should return {"status":"OK"}.
5. Add Monitoring Stack (Prometheus + Grafana)
prometheus.yml config:
global:
  scrape_interval: 5s
scrape_configs:
  - job_name: "mini-monitoring-app"
    static_configs:
      - targets: ["app:8080"]
docker-compose.yml spins up the app + Prometheus + Grafana.
(Full files: prometheus.yml, docker-compose.yml)
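For orientation, a minimal `docker-compose.yml` for the three services could look roughly like this (image tags, ports, and the volume mount are assumptions; use the full file from the repo):

```yaml
services:
  app:
    build: .        # builds the Dockerfile above; the service name matches the "app:8080" scrape target
    ports:
      - "8080:8080"

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
```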
Run all services with docker-compose up -d, then open:
- App → http://localhost:8080
- Prometheus → http://localhost:9090 (check Status → Targets)
- Grafana → http://localhost:3000 (login admin / admin)
6. Build a Grafana dashboard
Two common panel types:
- Graph/Time series → trends over time (great for latency/throughput)
- Gauge/Stat → current value or single number (great for "count now")
Example panels:
- Stat: health_requests_total (total health checks)
- Time series (P95 latency):
histogram_quantile(0.95, sum(rate(response_latency_seconds_bucket[1m])) by (le))
7. CI/CD with GitHub Actions
Workflow `.github/workflows/ci.yml`:
- Runs Go tests
- Builds Docker image
- Pushes image to DockerHub
Required secrets (Repo → Settings → Secrets → Actions):
- DOCKERHUB_USERNAME → your DockerHub username
- DOCKERHUB_TOKEN → DockerHub access token (Read/Write)
(Full file: ci.yml)
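For reference, a workflow covering those three stages might look roughly like this (action versions, the branch name, and the image tag are assumptions; the `ci.yml` in the repo is what actually runs):

```yaml
name: CI

on:
  push:
    branches: [main]

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-go@v5
        with:
          go-version: "1.24"

      - name: Run tests
        run: go test ./...

      - name: Log in to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/mini-monitoring-app:latest
```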
✅ Conclusion
This tiny monitoring app shows how a real-world service can be built, observed, and shipped with modern practices:
- Go for a lightweight service
- Prometheus + Grafana for observability
- Docker & GitHub Actions for CI/CD automation
🔗 Repo: Mini-Monitoring-App
Thank you for reading! 🙏