Introduction
When running load tests, we often get numbers like requests per second or average latency, but that doesn't tell us how our system is actually behaving inside.
So I wanted to bring Prometheus observability and K6 performance testing together: to visualize live latency, error rate, CPU usage, and throughput in real time while load tests are running.
That's how Prometheus-K6-Fusion was born: a simple yet powerful open-source stack combining Go, Prometheus, K6, and Grafana.
What is Prometheus-K6-Fusion?
prometheus-k6-fusion is a lightweight Go-based API server that provides CRUD operations for sample data objects and exposes Prometheus metrics.
It's an easy-to-start observability and performance-testing lab.
It includes:
- A Go-based API instrumented with Prometheus metrics
- K6 load test scripts that generate real traffic
- Prometheus to scrape the metrics
- A Grafana dashboard to visualize everything in real time
In short, it's a single repo that lets you see how load affects latency, error rate, and resource usage instantly.
Target Audience
- Developers learning Prometheus integration in Go.
- Teams exploring observability setups with Prometheus and Grafana.
- People practicing performance testing with k6.
- Interview/demo projects to showcase system design, metrics, and automation skills.
Why It's Useful
- Provides a ready-to-run environment for:
  - CRUD REST API (in Go)
  - Prometheus metrics exposure
  - k6 traffic generation
- Ideal for learning, teaching, or demonstrating:
  - Metrics instrumentation best practices
  - Monitoring pipelines
  - Performance & load testing workflows
Components at a glance:
- Prometric (Go API): exposes Prometheus metrics like `http_requests_total`, latency histograms, and memory usage
- K6: generates load (GET, POST, DELETE) to the API
- Prometheus: scrapes the `/metrics` endpoint every few seconds
- Grafana: displays dashboards that correlate load, latency, and system behavior
Prometric
Prometric = Prometheus + Metric
This provides CRUD operations for Person objects. It uses an in-memory database and exposes Prometheus metrics for observability.
Features
- RESTful API for managing Person objects (Create, Read, Update, Delete)
- In-memory storage (no external database required)
- Built-in Prometheus metrics for monitoring
- Runs on port :7080 by default
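To give a feel for what the in-memory storage behind these features might look like, here is a minimal Go sketch. The `Person` fields, type names, and package layout are illustrative assumptions, not the project's actual code.

```go
package store

import "sync"

// Person is an illustrative shape for the API's data objects;
// the real project may use different fields (assumption).
type Person struct {
	ID    string `json:"id"`
	Name  string `json:"name"`
	Email string `json:"email"`
}

// InMemoryStore keeps Person records in a map guarded by a mutex,
// so no external database is required.
type InMemoryStore struct {
	mu      sync.RWMutex
	persons map[string]Person
}

func NewInMemoryStore() *InMemoryStore {
	return &InMemoryStore{persons: make(map[string]Person)}
}

// Create inserts or overwrites a Person keyed by its ID.
func (s *InMemoryStore) Create(p Person) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.persons[p.ID] = p
}

// Get returns a Person and whether it was found.
func (s *InMemoryStore) Get(id string) (Person, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	p, ok := s.persons[id]
	return p, ok
}
```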
API Endpoints
| Method | Endpoint | Description |
|---|---|---|
| GET | `/person/list` | List all persons |
| GET | `/persons/{id}` | Get a specific person |
| POST | `/person` | Create a new person |
| PUT | `/person/{id}` | Update an existing person |
| DELETE | `/person/{id}` | Delete a person |
| GET | `/metrics` | Prometheus metrics endpoint |
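As a rough illustration of how these routes and the `/metrics` endpoint could be wired together, here is a sketch using the standard library mux and prometheus/client_golang. The handler names and routing style are assumptions; the real server may use a different router.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// notImplemented is a stub so the sketch compiles; the real project
// wires actual CRUD handlers here (placeholder, not the project's code).
func notImplemented(w http.ResponseWriter, r *http.Request) {
	http.Error(w, "not implemented in this sketch", http.StatusNotImplemented)
}

func main() {
	mux := http.NewServeMux()

	// CRUD endpoints from the table above; method handling is left
	// to the individual handlers in this sketch.
	mux.HandleFunc("/person/list", notImplemented) // GET: list all persons
	mux.HandleFunc("/persons/", notImplemented)    // GET: /persons/{id}
	mux.HandleFunc("/person", notImplemented)      // POST: create a person
	mux.HandleFunc("/person/", notImplemented)     // PUT/DELETE: /person/{id}

	// Prometheus metrics endpoint, served by the official Go client.
	mux.Handle("/metrics", promhttp.Handler())

	log.Fatal(http.ListenAndServe(":7080", mux))
}
```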
K6 Script
I use Grafana k6 to generate traffic against the Prometric API. The k6 script demonstrates a simple scenario that exercises the CRUD endpoints for Person objects.
The script does the following:
- Creates about 50K objects (i.e. Person records) in ~20 minutes (50000 iterations shared among 50 VUs, maxDuration: 10m).
- Gets a Person by random ID for 10 minutes (20.00 iterations/s for 10m0s, maxVUs: 10).
- Gets the Person list for 10 minutes (10.00 iterations/s for 10m0s, maxVUs: 5).
- Updates Persons (5.00 iterations/s for 2m0s, maxVUs: 5).
- Deletes about 1500 Persons at random within 10 minutes (1500 iterations shared among 2 VUs, maxDuration: 10m0s).
These iterations generate enough Prometheus metrics to meaningfully explore Prometheus and the Grafana dashboard.
Metrics Exposed
| Metric | Type | Description |
|---|---|---|
| `http_requests_total` | Counter | Total HTTP requests processed |
| `http_requests_in_progress` | Gauge | Active requests being handled |
| `http_request_duration_seconds` | Histogram | Request latency |
| `person_store_count` | Gauge | Person records in memory |
| `person_created_total` | Counter | Successful creations |
| `person_deleted_total` | Counter | Deletions |
| `person_not_found_total` | Counter | Failed lookups |
| `person_payload_size_bytes` | Histogram | POST payload size |
| `app_cpu_usage_percent` | Gauge | CPU usage (%) |
| `app_memory_usage_megabytes` | Gauge | Memory usage (MB) |
These cover both application-level and system-level observability.
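To show how metrics like these are typically registered and updated in Go, here is a minimal sketch built on prometheus/client_golang's promauto package. The label names and the middleware shape are assumptions rather than Prometric's actual implementation.

```go
package metrics

import (
	"net/http"
	"strconv"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// A few of the metrics from the table above, declared via promauto so they
// auto-register with the default registry. Label names are assumptions.
var (
	requestsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Total HTTP requests processed",
	}, []string{"method", "path", "status"})

	requestsInProgress = promauto.NewGauge(prometheus.GaugeOpts{
		Name: "http_requests_in_progress",
		Help: "Active requests being handled",
	})

	requestDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds",
		Help:    "Request latency",
		Buckets: prometheus.DefBuckets,
	}, []string{"method", "path"})
)

// statusRecorder captures the response status code for labelling.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// Instrument wraps a handler and records the request metrics above.
func Instrument(path string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		requestsInProgress.Inc()
		defer requestsInProgress.Dec()

		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, r)

		requestDuration.WithLabelValues(r.Method, path).Observe(time.Since(start).Seconds())
		requestsTotal.WithLabelValues(r.Method, path, strconv.Itoa(rec.status)).Inc()
	})
}
```

Wrapping each handler with something like `Instrument(...)` is one common way per-endpoint counters and latency histograms end up in the `/metrics` output that Prometheus scrapes.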
Quick Start
Run Locally
- Clone the repo and run the app
$ git clone https://github.com/peek8/prometheus-k6-fusion.git
$ cd prometheus-k6-fusion
$ go run main.go
The API server will start on http://localhost:7080. You can use tools like curl or Postman to interact with the endpoints. Prometheus metrics are available at http://localhost:7080/metrics.
- Install Grafana k6 on your local machine and run the k6 script from the repo:
$ k6 run ./scripts/k6-scripts.js
Now, if you hit the metrics endpoint again, you will see the metric values changing as the test runs.
Use Docker
- Run the API server first:
$ docker run \
--rm -p 7080:7080 \
ghcr.io/peek8/prometric:latest
The API server will start on http://localhost:7080.
- Run the k6 script with the grafana/k6 image:
$ docker run -i --rm \
-e BASE_URL=http://host.docker.internal:7080 \
grafana/k6:latest run - < ./scripts/k6-scripts.js
If you are using Podman, use BASE_URL=http://host.containers.internal:7080 instead.
Prometheus and Grafana
Run Prometheus
Run Prometheus using the prometheus.yml file:
$ docker run \
-p 9090:9090 \
-v ./prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus
N.B.: If you are using Podman, use host.containers.internal as the target in the prometheus.yml file, i.e.:
targets: ["host.containers.internal:7080"]
Run Grafana
Run Grafana using Docker:
$ docker run -d -p 3000:3000 grafana/grafana
Grafana will then be available at http://localhost:3000; use admin:admin to log in for the first time.
Grafana Dashboard
You can build a nice Grafana dashboard from these metrics using the ready-to-import Grafana JSON dashboard.
In that JSON the datasource name is prometric-k6. If you already have a Prometheus datasource, use its name in the JSON file, or create one by running grafana-datasource.sh, which uses admin/admin as credentials; change these to your own username and password.