Assignment
Build a CI Load Test for http-echo applications using Kind (Kubernetes in Docker) and the k6 load testing tool.
Solution Overview
This project implements a comprehensive load testing solution using:
- Kind: Creates a multi-node Kubernetes cluster for realistic testing environments
- NGINX Ingress Controller: Routes HTTP requests to different services based on hostnames
- k6: Performs load testing with detailed metrics and performance analysis
Architecture
```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  k6 Load Test   │────▶│  NGINX Ingress   │────▶│ http-echo apps  │
│                 │     │   Controller     │     │                 │
│ - foo.localhost │     │                  │     │ - foo service   │
│ - bar.localhost │     │  (Port 80/443)   │     │ - bar service   │
└─────────────────┘     └──────────────────┘     └─────────────────┘
                                 │
                                 ▼
                        ┌──────────────────┐
                        │   Kind Cluster   │
                        │                  │
                        │ - Control plane  │
                        │ - Worker node    │
                        └──────────────────┘
```
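To make the hostname-based routing concrete, here is a minimal sketch of what a manifest like `k8s/http-echo-foo.yaml` could contain. The `hashicorp/http-echo:1.0` image and the `foo.localhost` hostname come from the project description; the resource names, replica count, and port are assumptions, and the actual files in the repository may differ.

```yaml
# Hypothetical manifest for the foo service (bar is analogous with -text=bar)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: http-echo
        image: hashicorp/http-echo:1.0
        args: ["-text=foo", "-listen=:5678"]  # http-echo's default port is 5678
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: foo
  ports:
  - port: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
spec:
  ingressClassName: nginx
  rules:
  - host: foo.localhost      # routing is based on the request's Host header
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo
            port:
              number: 5678
```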
Project Structure
```
kind-http-echo-test/
├── README.md                  # This documentation
├── kind-config.yaml           # Kind cluster configuration
├── k8s/                       # Kubernetes manifests
│   ├── http-echo-foo.yaml     # Foo service deployment
│   └── http-echo-bar.yaml     # Bar service deployment
└── loadtest/                  # Load testing scripts
    └── load-test.js           # k6 load test script
```
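As an illustration, `kind-config.yaml` might look like the following. This is a sketch based on the prompts described later in this document (two nodes, a control-plane reserved for the ingress controller, and host ports mapped so services are reachable on localhost); the actual file in the repository may differ.

```shell
# Write a hypothetical kind-config.yaml: two nodes, with the control-plane
# labeled/tainted for the ingress controller and ports 80/443 mapped to the host.
cat > kind-config.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
        register-with-taints: "node-role.kubernetes.io/control-plane:NoSchedule"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
EOF
```

The `extraPortMappings` entries are what let k6 reach the cluster via `http://foo.localhost` and `http://bar.localhost` without port-forwarding.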
Prerequisites
Ensure you have the following tools installed in your CI environment:
- Docker (required by Kind to run cluster nodes as containers)
- Kind
- kubectl
- k6
CI Workflow
This project is designed for automated CI/CD pipelines that trigger on each pull request to the default branch. The workflow consists of four main stages executed sequentially:
Stage 1: Environment Setup
Purpose: Prepare the CI runner with all necessary tools and dependencies, supporting both AMD64 and ARM64 agents.
Key Activities:
- Checkout the source code from the repository
- Install k6 load testing tool via package manager
- Download and install Kind for local Kubernetes clusters
- Set up kubectl for cluster management
- Configure necessary permissions and environment variables
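In a GitHub Actions pipeline, the setup stage might look like the fragment below. Step names, the Kind version, and the architecture handling are assumptions; the k6 install follows Grafana's documented Debian/Ubuntu repository instructions.

```yaml
# Hypothetical GitHub Actions setup steps
steps:
  - uses: actions/checkout@v4

  - name: Install kind (AMD64/ARM64 aware)
    run: |
      ARCH=$(uname -m); case "$ARCH" in aarch64|arm64) ARCH=arm64 ;; *) ARCH=amd64 ;; esac
      curl -Lo ./kind "https://kind.sigs.k8s.io/dl/v0.23.0/kind-linux-${ARCH}"
      chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind

  - name: Install kubectl
    run: |
      ARCH=$(uname -m); case "$ARCH" in aarch64|arm64) ARCH=arm64 ;; *) ARCH=amd64 ;; esac
      curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/${ARCH}/kubectl"
      chmod +x kubectl && sudo mv kubectl /usr/local/bin/

  - name: Install k6
    run: |
      sudo gpg --no-default-keyring --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
        --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
      echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" \
        | sudo tee /etc/apt/sources.list.d/k6.list
      sudo apt-get update && sudo apt-get install -y k6
```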
Stage 2: Cluster Provisioning
Purpose: Create and configure a complete Kubernetes environment for testing.
Key Activities:
- Create a multi-node Kind cluster using the project configuration
- Install NGINX Ingress Controller for request routing
- Wait for all control plane components to become ready
- Deploy the http-echo applications (foo and bar services)
- Verify all pods are running and services are accessible
- Configure ingress rules for hostname-based routing
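The provisioning activities above can be sketched as a single workflow step. The cluster defaults, manifest paths, and deployment names (`foo`, `bar`) are assumptions; the ingress-nginx manifest URL is the one the ingress-nginx project maintains specifically for Kind.

```yaml
# Hypothetical provisioning step
- name: Provision Kind cluster and deploy services
  run: |
    # Alias the test hostnames to loopback so curl and k6 can resolve them
    echo "127.0.0.1 foo.localhost bar.localhost" | sudo tee -a /etc/hosts

    kind create cluster --config kind-config.yaml

    # Install the NGINX Ingress Controller and wait for it to become ready
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
    kubectl wait --namespace ingress-nginx \
      --for=condition=ready pod \
      --selector=app.kubernetes.io/component=controller \
      --timeout=180s

    # Deploy both http-echo services and wait for rollout
    kubectl apply -f k8s/http-echo-foo.yaml -f k8s/http-echo-bar.yaml
    kubectl rollout status deployment/foo --timeout=90s
    kubectl rollout status deployment/bar --timeout=90s
```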
Stage 3: Load Testing
Purpose: Execute comprehensive load testing against the deployed services.
Key Activities:
- Verify endpoint availability with health checks
- Execute k6 load test script with predefined scenarios
- Monitor real-time performance metrics during testing
- Generate detailed test results in JSON format
- Track custom metrics for both foo and bar services
- Capture response times, error rates, and throughput data
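A step covering the health check and test execution could look like this; `--summary-export` is a standard k6 flag that writes the end-of-test summary as JSON (the file names follow the project structure above).

```yaml
# Hypothetical load-testing step
- name: Run k6 load test
  run: |
    # Smoke-check both endpoints before applying load
    curl -sSf http://foo.localhost/
    curl -sSf http://bar.localhost/

    # Execute the load test and export the summary as JSON
    k6 run --summary-export=loadtest-results.json loadtest/load-test.js
```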
Test Configuration:
- Ramp-up: 10s to reach target load
- Sustained load: 30s at peak virtual users
- Ramp-down: 10s to zero load
- Performance thresholds: P95 < 500ms, Error rate < 10%
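A minimal `load-test.js` matching the stages and thresholds above might be generated like this. Only the thresholds (P95 < 500ms, error rate < 10%) are stated explicitly in this document; the exact per-stage targets, the 50/50 host split, and the sleep interval are assumptions.

```shell
mkdir -p loadtest
cat > loadtest/load-test.js <<'EOF'
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '10s', target: 20 },  // ramp-up to peak
    { duration: '30s', target: 20 },  // sustained load
    { duration: '10s', target: 0 },   // ramp-down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'],  // P95 must stay under 500ms
    http_req_failed: ['rate<0.10'],    // error rate must stay under 10%
  },
};

export default function () {
  // Randomize traffic between the two hosts
  const host = Math.random() < 0.5 ? 'foo' : 'bar';
  const res = http.get(`http://${host}.localhost/`);
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(0.1);
}
EOF
```

If any threshold is crossed, `k6 run` exits non-zero, which is what lets the CI pipeline fail automatically in Stage 4.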
Stage 4: Results Analysis
Purpose: Validate performance requirements and clean up resources.
Key Activities:
- Parse JSON test results for key performance indicators
- Validate against predefined performance thresholds
- Generate human-readable test summary
- Fail the pipeline if performance criteria are not met
- Upload test artifacts for later analysis
- Clean up the Kind cluster to free resources
- Provide detailed feedback on test outcomes
Success Criteria:
- All requests complete successfully (< 10% error rate)
- Response times meet SLA requirements (P95 < 500ms)
- Both services perform within acceptable ranges
- No infrastructure errors during testing
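The Stage 4 gate can be sketched as follows. The JSON field names mirror the sample summary shown later in this document and are assumptions about the exported format; `python3` is used here only for JSON parsing, and the inline sample stands in for a real `loadtest-results.json`. Cleanup afterwards is a one-liner (`kind delete cluster`).

```shell
# Hypothetical results gate: parse the summary and fail the pipeline if the
# success criteria (P95 < 500ms, error rate < 10%) are not met.
# The sample JSON below stands in for the file k6 would have written.
cat > loadtest-results.json <<'EOF'
{"Failed Requests": "0.00%", "Request Duration": {"p95": "52.34ms"}}
EOF

python3 - <<'EOF'
import json, sys

results = json.load(open("loadtest-results.json"))
p95 = float(results["Request Duration"]["p95"].rstrip("ms"))
err = float(results["Failed Requests"].rstrip("%"))

ok = p95 < 500 and err < 10
print(f"P95={p95}ms error_rate={err}% -> {'PASS' if ok else 'FAIL'}")
sys.exit(0 if ok else 1)  # non-zero exit fails the CI job
EOF
```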
Load Testing Configuration
The load-test.js script provides comprehensive load testing capabilities with the following configuration:
Test Execution Stages
The load test follows a three-phase approach for realistic performance assessment:
| Phase | Duration | Virtual Users | Purpose |
|---|---|---|---|
| Ramp-up | 10 seconds | 0 → 10 users | Gradual load increase to target capacity |
| Sustained Load | 30 seconds | 20 users | Peak performance evaluation |
| Ramp-down | 10 seconds | 20 → 0 users | Graceful load reduction |
Performance Thresholds
Critical performance criteria that must be met for test success:
- Response Time: 95% of requests must complete under 500ms
- Error Rate: Must remain below 10%
- Service Availability: Both foo and bar services must be responsive
Custom Metrics & Monitoring
Advanced monitoring capabilities for detailed performance analysis:
Host-Specific Metrics
- Per-host Response Times: Individual performance tracking for the `foo` and `bar` services
- Load Distribution: Ensures balanced traffic across both deployments
- Service Health: Real-time availability monitoring
Real-Time Tracking
- Error Rate Monitoring: Continuous failure rate assessment
- Request Distribution: Balanced load verification across services
- Performance Degradation: Early detection of performance issues
Test Results & Output
Comprehensive reporting for performance analysis:
Console Output
- Real-time metrics display during test execution
- Live performance indicators and alerts
- Immediate feedback on test progress
Detailed Reports
- JSON Report: Machine-readable results saved as `loadtest-results.json`
- Per-Host Breakdown: Individual service performance analysis
- Threshold Validation: Pass/fail status for each performance criterion
Key Metrics Captured
- Request rate (requests/second)
- Response time percentiles (P90, P95, P99)
- Error distribution by service
- Host-specific performance characteristics
Monitoring and Metrics
k6 Metrics Collection
Comprehensive performance data captured during load testing:
- Request Rate: Requests per second across all endpoints
- Response Time: Average, minimum, maximum, P90, and P95 percentiles
- Error Rate: Percentage of failed requests with detailed error categorization
- Host-specific Metrics: Individual performance analysis per service (foo/bar)
Sample Test Results
Example output from a successful load testing execution:
```json
{
  "Total Requests": 450,
  "Requests/sec": "9.00",
  "Request Duration": {
    "avg": "25.34ms",
    "min": "10.12ms",
    "max": "89.45ms",
    "p90": "45.67ms",
    "p95": "52.34ms"
  },
  "Failed Requests": "0.00%",
  "Foo Host Duration": {
    "avg": "24.56ms",
    "p90": "44.23ms",
    "p95": "51.12ms"
  },
  "Bar Host Duration": {
    "avg": "26.12ms",
    "p90": "47.11ms",
    "p95": "53.56ms"
  }
}
```
Project Completion Notes
Development Experience
- GitHub Copilot Integration: Successfully leveraged GitHub Copilot (GPT-4.1) to generate code with the following prompts:
  - Create a Kind configuration for a Kubernetes cluster with 2 nodes, adding taints to the control-plane node to ensure only the ingress-nginx controller is deployed on it
  - Add a step for deploying an Ingress controller to handle incoming HTTP requests in the CI workflow
  - Create 2 http-echo deployments using the `hashicorp/http-echo:1.0` image, one serving a "bar" response and another serving a "foo" response. Configure cluster/ingress routing to send traffic for the "bar" hostname to the bar deployment and the "foo" hostname to the foo deployment on the local cluster (routing both http://foo.localhost and http://bar.localhost), where ingress rules are based on the request's Host header
  - Generate randomized traffic loads for both bar and foo hosts and capture load testing results using k6
  - Add a CI workflow step at the beginning to modify `/etc/hosts`, aliasing `foo.localhost` and `bar.localhost` to `127.0.0.1`
  - Create a CI workflow that triggers on each pull request to the master branch
- GitHub Copilot (Claude Sonnet 4) was used to write this document
- Kind Learning Curve: This was my first experience with Kind (Kubernetes in Docker). I spent considerable time understanding how Kind works, particularly the configuration required to expose services to localhost for external access and testing.
- Time Investment: The complete project took approximately five hours, including research, implementation, testing, and documentation.
Source: https://github.com/vumdao/kind-http-echo-test
Ref: Goodnotes Take Home Assignment - DevOps Engineering
Note: The solution did not meet the team's expectations.