What is Grafana k6?
Grafana k6 (commonly just called k6) is an open-source, extensible load testing tool built by Grafana Labs, designed with QA engineers, SDETs, and developers in mind. You write performance test scripts in JavaScript or TypeScript and run them against your APIs.
It is designed for:
- QA Engineers and SDETs who need reliable, repeatable load scenarios
- Developers who want to write tests as code
- SREs who want to validate SLOs before and after deployments
k6 is built in Go, which means the test runner itself is blazing fast and extremely memory-efficient — it can simulate thousands of virtual users on a single machine without breaking a sweat.
Why k6 Over Other Tools?
JMeter has been the industry default for years. But the ecosystem has moved on. Here's why k6 is the modern choice:
| Feature | k6 | JMeter | Locust |
|---|---|---|---|
| Script Language | JavaScript / TypeScript | XML (GUI) | Python |
| Resource Efficiency | Very High (Go runtime) | Low (JVM overhead) | Medium |
| CI/CD Integration | Native | Plugins required | Manual |
| Code Reusability | ES6 modules, imports | Limited | Medium |
| Built-in Metrics | Rich, structured | Verbose | Basic |
| Threshold/Pass-Fail | First-class | Plugin-based | Manual |
| HTML/Cloud Reports | Built-in + plugins | Plugins | Manual |
| Browser Testing | Yes (k6 browser API) | Via WebDriver | No |
Key Advantages of k6
1. Scripted in JavaScript — No proprietary DSL. If you know JS, you're ready. Tests live in your codebase alongside unit and e2e tests.
2. Extremely resource-efficient — k6 is written in Go. Unlike JMeter (which runs on the JVM), a single instance of k6 can run 30,000 to 40,000 virtual users depending on the available resources.
3. Built-in thresholds — You define pass/fail criteria directly in your test script. k6 exits with a non-zero code if thresholds are violated, making it CI/CD-native.
4. Real-time web dashboard — Since v0.49.0, k6 ships with a built-in live web dashboard. No extra tooling needed for real-time monitoring.
5. Grafana ecosystem — k6 integrates natively with Grafana dashboards, Prometheus, InfluxDB, and Grafana Cloud for enterprise-grade observability.
Installation
k6 has no runtime dependencies. There's no JVM, no Node.js — just a single binary.
Windows
Via Chocolatey (recommended):
choco install k6
Via winget:
winget install k6 --source winget
Or download the installer directly from the k6 GitHub Releases page and run the .msi file.
macOS
Via Homebrew (recommended):
brew install k6
Verify installation:
k6 version
# k6 v1.x.x (go1.x.x, ...)
Project Folder Structure
k6 tests are just JavaScript files — but a clean structure makes your test suite maintainable and team-friendly. k6 does not enforce a mandatory folder layout, but Grafana recommends a modular, organized approach, particularly for complex projects. The following structure works well:
my-k6-project/
│
├── config/
│   ├── smoke.json          # Smoke test options (1 VU, 1 iteration)
│   ├── load.json           # Normal load test options
│   └── stress.json         # Stress test options (high VUs)
│
├── helpers/
│   ├── auth.js             # Reusable login / token helpers
│   ├── request.js          # Base HTTP request wrappers
│   └── utils.js            # Utility functions (random data, etc.)
│
├── scenarios/
│   ├── login-flow.js       # End-to-end login scenario
│   └── checkout-flow.js    # End-to-end checkout scenario
│
├── reports/                # Auto-generated HTML/JSON reports go here
│
├── .env                    # Environment variables (BASE_URL, tokens, etc.)
├── main.js                 # Main entry point — imports & runs scenarios
└── README.md
What Each Folder/File Does
- config/ — Separates test configuration (VUs, duration, stages) from test code. Switch between smoke, load, and stress tests by just changing the config JSON, not the scripts.
- helpers/ — Reusable utility code: authentication flows, HTTP wrappers, random data generators. Import these in your scenarios to keep things DRY.
- scenarios/ — Each file represents a meaningful user journey (login, checkout, search). These contain the actual test logic.
- reports/ — Where generated HTML and JSON output lands after a test run.
- main.js — The orchestrator. It imports scenarios and wires them up with configuration.
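The utils.js file mentioned above might hold small data generators for test payloads. A minimal sketch — the function names here are illustrative, not part of any k6 API:

```javascript
// helpers/utils.js — hypothetical data generators for test payloads

// Random alphanumeric string of a given length
export function randomString(length = 8) {
  const chars = 'abcdefghijklmnopqrstuvwxyz0123456789';
  let out = '';
  for (let i = 0; i < length; i++) {
    out += chars[Math.floor(Math.random() * chars.length)];
  }
  return out;
}

// Unique-looking email for signup/login scenarios
export function randomEmail(domain = 'example.com') {
  return `user_${randomString(10)}@${domain}`;
}

// Random integer in [min, max] — handy for ids and quantities
export function randomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
```

Generating fresh data per iteration avoids cache hits and unique-constraint collisions that make load test results look unrealistically good.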
Configuration & Utilities
Environment Variables
k6 reads environment variables via __ENV. This lets you keep base URLs and secrets out of your scripts:
// main.js
export const BASE_URL = __ENV.BASE_URL || 'https://api.staging.example.com';
Set it when running the test:
BASE_URL=https://api.prod.example.com k6 run main.js
Or use a config JSON:
// config/load.json
{
  "vus": 50,
  "duration": "5m",
  "thresholds": {
    "http_req_duration": ["p(95)<500"],
    "http_req_failed": ["rate<0.01"]
  }
}
Load it at runtime:
k6 run --config config/load.json main.js
Load Configuration (Stages)
Stages let you ramp virtual users up and down, simulating realistic traffic patterns:
// main.js
export const options = {
  stages: [
    { duration: '1m', target: 20 }, // Ramp up to 20 VUs
    { duration: '3m', target: 20 }, // Hold at 20 VUs
    { duration: '1m', target: 50 }, // Spike to 50 VUs
    { duration: '2m', target: 50 }, // Hold the spike
    { duration: '1m', target: 0 },  // Ramp down
  ],
};
Reusable Helper — helpers/auth.js
// helpers/auth.js
import http from 'k6/http';
import { check } from 'k6';

export function getAuthToken(baseUrl, username, password) {
  const payload = JSON.stringify({ username, password });
  const params = { headers: { 'Content-Type': 'application/json' } };

  const res = http.post(`${baseUrl}/auth/login`, payload, params);

  check(res, {
    'login successful': (r) => r.status === 200,
  });

  return res.json('token');
}
Reusable Request Wrapper — helpers/request.js
// helpers/request.js
import http from 'k6/http';

export function get(url, token) {
  return http.get(url, {
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
  });
}

export function post(url, body, token) {
  return http.post(url, JSON.stringify(body), {
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
  });
}
A Complete Test Script
// scenarios/login-flow.js
import { check, sleep } from 'k6';
import { getAuthToken } from '../helpers/auth.js';
import { get } from '../helpers/request.js';

const BASE_URL = __ENV.BASE_URL || 'https://api.example.com';

export default function () {
  // Step 1: Login and get token
  const token = getAuthToken(BASE_URL, 'testuser', 'password123');

  // Step 2: Use token to call a protected endpoint
  const res = get(`${BASE_URL}/api/v1/profile`, token);

  check(res, {
    'profile status 200': (r) => r.status === 200,
    'has user id': (r) => r.json('id') !== undefined,
  });

  sleep(1); // Think time between iterations
}
HTML Reporter Setup
k6 offers two built-in ways to generate HTML reports, plus a popular community plugin.
Option 1 — Built-in Web Dashboard (Recommended since v0.49.0)
This is the officially supported approach from Grafana. No dependencies needed.
K6_WEB_DASHBOARD=true \
K6_WEB_DASHBOARD_EXPORT=reports/html-report.html \
k6 run main.js
While the test runs, open http://localhost:5665 in your browser to see real-time metrics. At the end, an HTML report is saved to reports/html-report.html.
Customize the port if needed:
K6_WEB_DASHBOARD=true \
K6_WEB_DASHBOARD_PORT=8080 \
K6_WEB_DASHBOARD_EXPORT=reports/html-report.html \
k6 run main.js
Option 2 — benc-uk/k6-reporter (Community Plugin)
This plugin produces a standalone HTML file with charts, check results, and threshold pass/fail indicators — great for sharing with stakeholders.
Add this to your test script:
// main.js
import { htmlReport } from 'https://raw.githubusercontent.com/benc-uk/k6-reporter/main/dist/bundle.js';
import { textSummary } from 'https://jslib.k6.io/k6-summary/0.1.0/index.js';

export { default } from './scenarios/login-flow.js';

export function handleSummary(data) {
  return {
    'reports/summary.html': htmlReport(data),
    stdout: textSummary(data, { indent: ' ', enableColors: true }),
  };
}
Run normally:
k6 run main.js
A reports/summary.html file is generated automatically after the run.
Running Your Tests
Basic Run
k6 run main.js
Specify VUs and Duration via CLI
k6 run --vus 20 --duration 2m main.js
Run with a Config File
k6 run --config config/load.json main.js
Pass Environment Variables
BASE_URL=https://staging.example.com k6 run main.js
Run with JSON Output
k6 run --out json=reports/results.json main.js
Multiple Outputs at Once
k6 run --out json=reports/results.json --out csv=reports/results.csv main.js
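The JSON output is newline-delimited: each line is either a Metric definition or a Point sample. A small Node.js post-processing script (illustrative, not part of k6) can extract whatever you need — shown here on inline sample lines; in practice you would read reports/results.json from disk:

```javascript
// Compute average http_req_duration from k6's NDJSON output.
// In practice: const lines = require('fs').readFileSync('reports/results.json', 'utf8').split('\n');
const lines = [
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2024-01-01T00:00:00Z","value":200}}',
  '{"type":"Point","metric":"http_req_duration","data":{"time":"2024-01-01T00:00:01Z","value":400}}',
  '{"type":"Point","metric":"http_reqs","data":{"time":"2024-01-01T00:00:01Z","value":1}}',
];

const durations = lines
  .filter((l) => l.trim().length > 0)   // NDJSON files often end with a blank line
  .map((l) => JSON.parse(l))
  .filter((p) => p.type === 'Point' && p.metric === 'http_req_duration')
  .map((p) => p.data.value);

const avg = durations.reduce((s, v) => s + v, 0) / durations.length;
console.log(`avg http_req_duration: ${avg}ms`); // 300ms for the sample lines above
```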
Test Types Reference
| Test Type | Purpose | Typical Config |
|---|---|---|
| Smoke | Sanity check — does it work at all? | 1 VU, 1-2 min |
| Load | Normal expected traffic | 50–100 VUs, 5–10 min |
| Stress | Push beyond normal limits | 200+ VUs, ramp up |
| Spike | Sudden traffic surge | 0 → 500 VUs instantly |
| Soak | Long-running reliability check | 50 VUs, 1–8 hours |
# Smoke
k6 run --config config/smoke.json main.js
# Load
k6 run --config config/load.json main.js
# Stress
k6 run --config config/stress.json main.js
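The smoke config referenced above can be tiny. A suggested starting point (the threshold choice is an assumption, not a k6 requirement):

```json
{
  "vus": 1,
  "iterations": 1,
  "thresholds": {
    "http_req_failed": ["rate==0"]
  }
}
```

With a single VU and iteration, any failure means the system is broken at rest — there is no point running a heavier test until the smoke test passes.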
Understanding the Output
After a test run, k6 prints a summary to the terminal. Here's what a real output looks like:
execution: local
script: main.js
✓ login successful
✓ profile status 200
checks.........................: 100.00% ✓ 14820 ✗ 0
http_req_duration..............: avg=220ms min=105ms med=195ms max=4.51s p(90)=360ms p(95)=430ms
http_req_failed................: 0.00% ✓ 0 ✗ 7410
http_reqs......................: 7410 24.7/s
vus............................: 1 min=1 max=50
Statistical Values Explained
| Stat | What it Means |
|---|---|
| avg | Arithmetic mean — useful overview but can be skewed by outliers |
| min | Fastest observed value — best-case scenario |
| max | Slowest observed value — worst-case, often an outlier |
| med | Median (50th percentile) — half of requests were faster than this |
| p(90) | 90% of requests completed within this time |
| p(95) | Industry standard SLO target |
| p(99) | Tail latency — how bad is the worst user experience? |
💡 Pro Tip: Always use p(95) or p(99) for your thresholds, not avg. Average hides slow outliers. If p(95) is 430ms, it means 95% of your users got a response within 430ms — that's the metric that reflects real user experience.
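To see why averages mislead, here is a plain-JavaScript illustration (not k6 code — k6 computes these for you; the nearest-rank percentile method here is a simplification):

```javascript
// Nearest-rank percentile: the value below which p% of samples fall.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[rank - 1];
}

// 19 fast responses plus one slow outlier (milliseconds)
const samples = [...Array(19).fill(100), 5000];

const avg = samples.reduce((s, v) => s + v, 0) / samples.length;
console.log(avg);                     // 345 — inflated by the single outlier
console.log(percentile(samples, 95)); // 100 — the typical user experience
console.log(percentile(samples, 99)); // 5000 — exposes the tail latency
```

The average (345ms) describes no actual request: most users saw 100ms, one saw 5 seconds. Percentiles separate the typical experience (p95) from the tail (p99).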
Key Metrics Explained
| Metric | Description |
|---|---|
| http_req_duration | Total time for the request. This is your main performance indicator. |
| http_req_waiting | Time waiting for the server's first byte (TTFB). Reflects server processing time. |
| http_req_connecting | Time to establish the TCP connection. High values indicate network issues. |
| http_req_tls_handshaking | TLS/SSL handshake time. Only relevant for HTTPS. |
| http_req_failed | Rate of failed requests (non-2xx/3xx responses by default). Keep this at 0% or very low. |
| http_reqs | Total number of requests made. |
| vus | Currently active virtual users. |
| iterations | Total number of times the default function was executed. |
| data_received / data_sent | Bandwidth — useful for spotting payload issues. |
| checks | Pass/fail count of your check() assertions. |
Success Criteria with Thresholds
Thresholds are k6's built-in pass/fail mechanism. You define what "passing" means, and k6 exits with code 1 if any threshold is breached — making it perfect for CI/CD pipelines.
Basic Thresholds
export const options = {
  thresholds: {
    // 95% of all requests must complete below 500ms
    http_req_duration: ['p(95)<500'],
    // 99% of successful requests (expected_response tag) must complete below 1000ms
    'http_req_duration{expected_response:true}': ['p(99)<1000'],
    // Less than 1% of requests can fail
    http_req_failed: ['rate<0.01'],
    // All checks must pass
    checks: ['rate==1.0'],
  },
};
Thresholds Per Endpoint (using Tags)
import http from 'k6/http';

const BASE_URL = __ENV.BASE_URL || 'https://api.example.com';

export const options = {
  thresholds: {
    'http_req_duration{endpoint:login}': ['p(95)<300'],
    'http_req_duration{endpoint:profile}': ['p(95)<200'],
  },
};

export default function () {
  const loginRes = http.post(
    `${BASE_URL}/auth/login`,
    JSON.stringify({ username: 'user', password: 'pass' }),
    { tags: { endpoint: 'login' } }
  );

  const profileRes = http.get(
    `${BASE_URL}/api/profile`,
    { tags: { endpoint: 'profile' } }
  );
}
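Thresholds can also abort a run early instead of just failing it at the end. k6 supports an object form with abortOnFail; the specific limits below are illustrative starting points, not recommendations from the k6 docs:

```javascript
export const options = {
  thresholds: {
    http_req_failed: [
      // Stop the whole test as soon as the error rate exceeds 5%,
      // but wait 30s before evaluating so startup noise doesn't kill the run.
      { threshold: 'rate<0.05', abortOnFail: true, delayAbortEval: '30s' },
    ],
  },
};
```

Aborting early is useful in stress tests: once the system is clearly failing, there is little value in hammering it for the remaining duration.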
Using Thresholds in CI/CD
# GitHub Actions example
- name: Run k6 load test
  run: k6 run --config config/load.json main.js
  # If thresholds fail, this step fails and the pipeline stops
Recommended Threshold Starting Points
| Metric | Suggested Threshold |
|---|---|
| http_req_duration p(95) | < 500ms for APIs, < 2000ms for pages |
| http_req_duration p(99) | < 1500ms for APIs |
| http_req_failed | < 0.01 (less than 1% errors) |
| checks | == 1.0 (100% of checks pass) |
Conclusion
Grafana k6 represents a significant shift in how engineering teams approach performance testing. By treating tests as code — written in JavaScript, versioned in Git, and embedded in CI/CD pipelines — k6 removes the friction that traditionally kept load testing siloed in QA teams or skipped altogether.
Quick summary of what we covered:
- k6 is modern and QA/SDET/developer-native — no Java, no XML, no GUI dependency
- Installation is a single binary — works on Windows, macOS, and Linux
- Project structure matters — separate config from code, keep helpers reusable
- Built-in HTML dashboard (since v0.49.0) gives real-time visual feedback with zero dependencies
- Focus on p(95) and p(99) over averages for realistic SLO validation
- Thresholds are your CI gate — define pass/fail criteria in code, automate quality enforcement
The most important step? Start small. A smoke test with 1 VU is infinitely better than no test at all.