Distributed JMeter Load Testing with Docker: Multi-Region Testing Guide

Rama Krishna Reddy Arumalla

Learn how to build and run distributed load tests across multiple AWS regions using JMeter and Docker containers.


Table of Contents

  1. Introduction
  2. Architecture Overview
  3. Complete JMeter Test Script
  4. Dockerizing JMeter
  5. Local Distributed Testing
  6. AWS Multi-Region Deployment
  7. Test Data Generation
  8. Results Analysis
  9. Quick Start (5 Minutes)
  10. Customization Guide
  11. Troubleshooting

Introduction

Load testing is critical for understanding your application's performance under stress. But testing from a single location doesn't reveal how your system performs under realistic geographic load distribution.

This guide shows you how to:

  • Build a complete JMeter test script for a real-world Order Management System
  • Containerize JMeter with Docker for reproducible testing
  • Run distributed tests locally with 1 master coordinating 3 slave agents
  • Deploy to multiple AWS regions for geographic load distribution
  • Analyze results with detailed performance metrics

By the end, you'll have a production-ready load testing setup that scales from your laptop to a global infrastructure.

What You'll Build

Your Machine/AWS
├── JMeter Master (Orchestrator)
└── 3+ JMeter Slaves (Load Generators)
    ├── Region 1: us-east-1
    ├── Region 2: eu-west-1
    └── Region 3: ap-southeast-1

Expected outcome: 5,000+ requests per test run with detailed performance metrics.


Architecture Overview

Traditional vs. Distributed Load Testing

TRADITIONAL:
┌─────────────────────┐
│   Single Machine    │
│  + JMeter Master    │
│  + All Load Threads │
└─────────────────────┘
Problem: Single point of failure, network bottleneck

DISTRIBUTED:
┌──────────────┐
│   Master     │─ Coordinates
└──────────────┘
      ├─ Slave 1 (100 threads)
      ├─ Slave 2 (100 threads)
      └─ Slave 3 (100 threads)
Result: 300 concurrent users, geographic distribution

RMI Communication

JMeter uses Java RMI (Remote Method Invocation) for master-slave communication:

  • Master sends test plan to slaves
  • Slaves execute requests and send back results
  • Master aggregates metrics
  • Results preserved for analysis
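Stripped of Docker, a distributed run boils down to two commands; the containers built later in this guide essentially wrap these. The hostnames below are examples, and RMI SSL is disabled for simplicity (JMeter 4.0+ enables it by default):

# On each load generator ("slave"): start the JMeter RMI server
jmeter-server -Dserver.rmi.ssl.disable=true -Jserver.rmi.localport=50500

# On the master: run the plan in non-GUI mode against all remote hosts
jmeter -n -t oms-load-test.jmx \
  -R slave-1:50500,slave-2:50500,slave-3:50500 \
  -Dserver.rmi.ssl.disable=true \
  -l results.jtl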

Complete JMeter Test Script

Order Management System Example

Here's a production-ready test script covering complete CRUD operations:

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="5.0" jmeter="5.5">
  <hashTree>
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan"
              testname="Order Management System - Load Test" enabled="true">
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments"
                   guiclass="ArgumentsPanel" testclass="Arguments"
                   testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments">
          <elementProp name="BASE_URL" elementType="Argument">
            <stringProp name="Argument.name">BASE_URL</stringProp>
            <stringProp name="Argument.value">${__property(api.base.url,http://localhost:8080/api)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="REGION" elementType="Argument">
            <stringProp name="Argument.name">REGION</stringProp>
            <stringProp name="Argument.value">${__property(region,us-east-1)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
          <elementProp name="AUTH_TOKEN" elementType="Argument">
            <stringProp name="Argument.name">AUTH_TOKEN</stringProp>
            <stringProp name="Argument.value">${__property(auth.token,demo-token-123)}</stringProp>
            <stringProp name="Argument.metadata">=</stringProp>
          </elementProp>
        </collectionProp>
      </elementProp>
    </TestPlan>
    <hashTree>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup"
                   testname="Order Management - Thread Group" enabled="true">
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController"
                     guiclass="LoopControlPanel" testclass="LoopController"
                     testname="Loop Controller" enabled="true">
          <booleanProp name="LoopController.continue_forever">false</booleanProp>
          <stringProp name="LoopController.loops">10</stringProp>
        </elementProp>
        <stringProp name="ThreadGroup.num_threads">${__property(num.threads,10)}</stringProp>
        <stringProp name="ThreadGroup.ramp_time">${__property(ramp.time,30)}</stringProp>
        <booleanProp name="ThreadGroup.scheduler">false</booleanProp>
      </ThreadGroup>
      <hashTree>
        <ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement"
                           testname="HTTP Request Defaults" enabled="true">
          <stringProp name="HTTPSampler.domain">${BASE_URL}</stringProp>
          <stringProp name="HTTPSampler.protocol">http</stringProp>
          <stringProp name="HTTPSampler.concurrentPool">4</stringProp>
          <stringProp name="HTTPSampler.connect_timeout">20000</stringProp>
          <stringProp name="HTTPSampler.response_timeout">20000</stringProp>
        </ConfigTestElement>
        <hashTree/>

        <HeaderManager guiclass="HeaderPanel" testclass="HeaderManager"
                       testname="HTTP Header Manager" enabled="true">
          <collectionProp name="HeaderManager.headers">
            <elementProp name="Authorization" elementType="Header">
              <stringProp name="Header.name">Authorization</stringProp>
              <stringProp name="Header.value">Bearer ${AUTH_TOKEN}</stringProp>
            </elementProp>
            <elementProp name="Content-Type" elementType="Header">
              <stringProp name="Header.name">Content-Type</stringProp>
              <stringProp name="Header.value">application/json</stringProp>
            </elementProp>
            <elementProp name="X-Region" elementType="Header">
              <stringProp name="Header.name">X-Region</stringProp>
              <stringProp name="Header.value">${REGION}</stringProp>
            </elementProp>
            <elementProp name="X-Request-ID" elementType="Header">
              <stringProp name="Header.name">X-Request-ID</stringProp>
              <stringProp name="Header.value">${__uuid()}</stringProp>
            </elementProp>
          </collectionProp>
        </HeaderManager>
        <hashTree/>

        <!-- 1. CREATE ORDER -->
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                          testname="1. Create Order" enabled="true">
          <stringProp name="HTTPSampler.path">/orders</stringProp>
          <stringProp name="HTTPSampler.method">POST</stringProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments"
                        guiclass="HTTPArgumentsPanel" testclass="Arguments"
                        testname="Arguments" enabled="true">
            <collectionProp name="Arguments.arguments">
              <elementProp name="" elementType="HTTPArgument">
                <booleanProp name="HTTPArgument.always_encode">false</booleanProp>
                <stringProp name="Argument.name"></stringProp>
                <stringProp name="Argument.value">{
  "userId": "USER${__Random(1,1000)}",
  "productId": "PROD${__Random(1,10)}",
  "quantity": ${__Random(1,5)},
  "region": "${REGION}",
  "customerName": "Customer${__Random(1,1000)}",
  "email": "customer${__Random(1,1000)}@example.com",
  "shippingAddress": "123 Main St, ${REGION}",
  "paymentMethod": "credit_card",
  "currency": "USD"
}</stringProp>
                <booleanProp name="HTTPArgument.use_file">false</booleanProp>
              </elementProp>
            </collectionProp>
          </elementProp>
        </HTTPSamplerProxy>
        <hashTree>
          <ResponseAssertion guiclass="AssertionGui" testclass="ResponseAssertion"
                             testname="Assert Status Code 200-201" enabled="true">
            <collectionProp name="Asserion.test_strings">
              <stringProp name="49587">201</stringProp>
              <stringProp name="49586">200</stringProp>
            </collectionProp>
            <stringProp name="Assertion.test_field">Assertion.response_code</stringProp>
            <!-- 33 = Matches + OR, so either 200 or 201 passes -->
            <stringProp name="Assertion.test_type">33</stringProp>
          </ResponseAssertion>
          <hashTree/>

          <JSR223PostProcessor guiclass="TestBeanGUI" testclass="JSR223PostProcessor"
                               testname="Extract Order ID" enabled="true">
            <stringProp name="script">
try {
  def response = prev.getResponseDataAsString();
  def json = new groovy.json.JsonSlurper().parseText(response);
  vars.put("ORDER_ID", json.orderId.toString());
  prev.setSuccessful(true);
} catch (Exception e) {
  log.error("Error extracting order ID", e);
  prev.setSuccessful(false);
}
            </stringProp>
            <stringProp name="scriptLanguage">groovy</stringProp>
          </JSR223PostProcessor>
          <hashTree/>
        </hashTree>

        <!-- 2. GET ORDER DETAILS -->
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                          testname="2. Get Order Details" enabled="true">
          <stringProp name="HTTPSampler.path">/orders/${ORDER_ID}</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
        </HTTPSamplerProxy>
        <hashTree>
          <ResponseAssertion guiclass="AssertionGui" testclass="ResponseAssertion"
                             testname="Assert Status 200" enabled="true">
            <collectionProp name="Asserion.test_strings">
              <stringProp name="49586">200</stringProp>
            </collectionProp>
            <stringProp name="Assertion.test_field">Assertion.response_code</stringProp>
            <stringProp name="Assertion.test_type">1</stringProp>
          </ResponseAssertion>
          <hashTree/>
        </hashTree>

        <!-- 3. UPDATE ORDER STATUS -->
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                          testname="3. Update Order Status" enabled="true">
          <stringProp name="HTTPSampler.path">/orders/${ORDER_ID}</stringProp>
          <stringProp name="HTTPSampler.method">PUT</stringProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments"
                        guiclass="HTTPArgumentsPanel" testclass="Arguments"
                        testname="Arguments" enabled="true">
            <collectionProp name="Arguments.arguments">
              <elementProp name="" elementType="HTTPArgument">
                <stringProp name="Argument.value">{
  "status": "shipped",
  "trackingNumber": "TRACK${__Random(100000,999999)}",
  "estimatedDelivery": "${__time(yyyy-MM-dd)}"
}</stringProp>
                <booleanProp name="HTTPArgument.use_file">false</booleanProp>
              </elementProp>
            </collectionProp>
          </elementProp>
        </HTTPSamplerProxy>
        <hashTree>
          <ResponseAssertion guiclass="AssertionGui" testclass="ResponseAssertion"
                             testname="Assert Status 200" enabled="true">
            <collectionProp name="Asserion.test_strings">
              <stringProp name="49586">200</stringProp>
            </collectionProp>
            <stringProp name="Assertion.test_field">Assertion.response_code</stringProp>
            <stringProp name="Assertion.test_type">1</stringProp>
          </ResponseAssertion>
          <hashTree/>
        </hashTree>

        <!-- 4. LIST ORDERS -->
        <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                          testname="4. List Orders" enabled="true">
          <stringProp name="HTTPSampler.path">/orders</stringProp>
          <stringProp name="HTTPSampler.method">GET</stringProp>
          <elementProp name="HTTPsampler.Arguments" elementType="Arguments"
                        guiclass="HTTPArgumentsPanel" testclass="Arguments"
                        testname="Arguments" enabled="true">
            <collectionProp name="Arguments.arguments">
              <elementProp name="page" elementType="Argument">
                <stringProp name="Argument.name">page</stringProp>
                <stringProp name="Argument.value">1</stringProp>
              </elementProp>
              <elementProp name="limit" elementType="Argument">
                <stringProp name="Argument.name">limit</stringProp>
                <stringProp name="Argument.value">20</stringProp>
              </elementProp>
              <elementProp name="region" elementType="Argument">
                <stringProp name="Argument.name">region</stringProp>
                <stringProp name="Argument.value">${REGION}</stringProp>
              </elementProp>
            </collectionProp>
          </elementProp>
        </HTTPSamplerProxy>
        <hashTree/>

        <!-- 5. CANCEL ORDER (30% chance) -->
        <IfController guiclass="IfControllerPanel" testclass="IfController"
                      testname="If Random - Cancel Order" enabled="true">
          <stringProp name="IfController.condition">${__javaScript(Math.random() < 0.3)}</stringProp>
          <booleanProp name="IfController.useExpression">true</booleanProp>
        </IfController>
        <hashTree>
          <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy"
                            testname="5. Cancel Order" enabled="true">
            <stringProp name="HTTPSampler.path">/orders/${ORDER_ID}</stringProp>
            <stringProp name="HTTPSampler.method">DELETE</stringProp>
          </HTTPSamplerProxy>
          <hashTree/>
        </hashTree>

        <!-- LISTENERS -->
        <ResultCollector guiclass="SummaryReport" testclass="ResultCollector"
                         testname="Summary Report" enabled="true"/>
        <hashTree/>
      </hashTree>
    </hashTree>
  </hashTree>
</jmeterTestPlan>

Key Features:

  • Dynamic Payload: Each request generates random user IDs, products, and quantities
  • Request Correlation: Order ID from POST response used in subsequent requests
  • Regional Headers: Includes X-Region and X-Request-ID for tracking
  • Response Assertions: Validates HTTP status codes
  • Conditional Logic: 30% of users cancel orders (realistic scenario)
  • Multi-threaded: Configurable thread count, ramp-up time, iterations
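Because every setting is read from a JMeter property (via __P), you can exercise the same plan standalone, outside Docker, and override any of them on the command line; -J sets a property for a local run, while -G would push it to remote slaves. The staging URL and token below are placeholders:

# Local, non-GUI run of the same plan with property overrides
jmeter -n -t oms-load-test.jmx \
  -Japi.base.url=https://staging-api.example.com/api \
  -Jauth.token=REPLACE_ME \
  -Jnum.threads=25 \
  -Jramp.time=60 \
  -Jregion=eu-west-1 \
  -l results.jtl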

Dockerizing JMeter

Dockerfile (Multi-Stage Build)

# Stage 1: Builder
FROM openjdk:11-jre-slim as builder

ARG JMETER_VERSION=5.5
ARG JMETER_HOME=/opt/apache-jmeter

RUN apt-get update && apt-get install -y wget unzip && rm -rf /var/lib/apt/lists/*

RUN mkdir -p ${JMETER_HOME} && \
    wget -q https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.zip && \
    unzip -q apache-jmeter-${JMETER_VERSION}.zip && \
    mv apache-jmeter-${JMETER_VERSION}/* ${JMETER_HOME} && \
    rm -rf apache-jmeter-${JMETER_VERSION}.zip apache-jmeter-${JMETER_VERSION}

# Stage 2: Runtime
FROM openjdk:11-jre-slim

ARG JMETER_HOME=/opt/apache-jmeter

COPY --from=builder ${JMETER_HOME} ${JMETER_HOME}

ENV JMETER_HOME=${JMETER_HOME} \
    PATH=${JMETER_HOME}/bin:$PATH \
    JMETER_LANGUAGE=en \
    HEAP=-Xmx1g \
    NEW=-Xmn512m

RUN mkdir -p /jmeter/tests /jmeter/results /jmeter/logs

COPY docker-entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD jmeter --version || exit 1

WORKDIR /jmeter

ENTRYPOINT ["/entrypoint.sh"]

EXPOSE 50500 50501

CMD ["--help"]

Why Multi-Stage?

  • Builder stage: Downloads and extracts JMeter (temporary, discarded)
  • Runtime stage: Contains only JMeter binary (~850MB final image)
  • Result: Smaller, cleaner image without build artifacts
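The Dockerfile copies a docker-entrypoint.sh that isn't listed in this post. As a rough sketch, assuming it dispatches on a first argument of master or server (which is what the Compose file below passes), it might look something like this:

#!/bin/bash
# Hypothetical docker-entrypoint.sh -- illustrative only, not the author's script.
set -e

MODE="$1"; shift || true
RESULTS_DIR="/jmeter/results/$(date +%Y%m%d_%H%M%S)"

case "$MODE" in
  master)
    TEST_PLAN="$1"
    mkdir -p "${RESULTS_DIR}"
    # Non-GUI run against the slaves, pushing properties to them with -G
    exec jmeter -n -t "${TEST_PLAN}" \
      -R "${REMOTE_HOSTS}" \
      -Gapi.base.url="${API_BASE_URL}" \
      -Gauth.token="${AUTH_TOKEN}" \
      -Gnum.threads="${NUM_THREADS}" \
      -Gramp.time="${RAMP_TIME}" \
      -Gregion="${REGION}" \
      -Dserver.rmi.ssl.disable=true \
      -l "${RESULTS_DIR}/results.jtl" \
      -e -o "${RESULTS_DIR}/html-report"
    ;;
  server)
    # Slave mode: start the RMI server and wait for the master
    exec jmeter-server \
      -Dserver.rmi.ssl.disable=true \
      -Djava.rmi.server.hostname="$(hostname)" \
      -Jserver.rmi.localport=50500
    ;;
  *)
    exec jmeter "$MODE" "$@"
    ;;
esac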

Local Distributed Testing

Docker Compose Setup

version: '3.8'

services:
  jmeter-master:
    build:
      context: .
      dockerfile: Dockerfile
    image: jmeter:latest
    container_name: jmeter-master
    hostname: jmeter-master
    networks:
      - jmeter-network
    ports:
      - "50500:50500"
    environment:
      HEAP: "-Xmx2g"
      NEW: "-Xmn1g"
      RMI_BIND_ADDRESS: "0.0.0.0"
      API_BASE_URL: "http://localhost:8080/api"
      REGION: "us-east-1"
      NUM_THREADS: "50"
      RAMP_TIME: "60"
      REMOTE_HOSTS: "jmeter-slave-1:50500,jmeter-slave-2:50500,jmeter-slave-3:50500"
    volumes:
      - ./tests:/jmeter/tests:ro
      - ./results:/jmeter/results:rw
    command: master /jmeter/tests/oms-load-test.jmx
    depends_on:
      - jmeter-slave-1
      - jmeter-slave-2
      - jmeter-slave-3
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2.5G

  jmeter-slave-1:
    image: jmeter:latest
    container_name: jmeter-slave-1
    hostname: jmeter-slave-1
    networks:
      - jmeter-network
    ports:
      - "50502:50500"
    environment:
      HEAP: "-Xmx1g"
      RMI_BIND_ADDRESS: "0.0.0.0"
      REGION: "us-east-1"
    command: server
    deploy:
      resources:
        limits:
          cpus: '1.5'
          memory: 1.5G

  jmeter-slave-2:
    image: jmeter:latest
    container_name: jmeter-slave-2
    hostname: jmeter-slave-2
    networks:
      - jmeter-network
    ports:
      - "50503:50500"
    environment:
      HEAP: "-Xmx1g"
      RMI_BIND_ADDRESS: "0.0.0.0"
      REGION: "eu-west-1"
    command: server
    deploy:
      resources:
        limits:
          cpus: '1.5'
          memory: 1.5G

  jmeter-slave-3:
    image: jmeter:latest
    container_name: jmeter-slave-3
    hostname: jmeter-slave-3
    networks:
      - jmeter-network
    ports:
      - "50504:50500"
    environment:
      HEAP: "-Xmx1g"
      RMI_BIND_ADDRESS: "0.0.0.0"
      REGION: "ap-southeast-1"
    command: server
    deploy:
      resources:
        limits:
          cpus: '1.5'
          memory: 1.5G

networks:
  jmeter-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/16

Deployment Configuration:

  • Master: 2GB heap, 2 CPU cores, orchestrates test
  • Slaves: 1GB heap each, 1.5 CPU cores, generate load
  • Network: Custom bridge for secure RMI communication
  • Ports: 50500 (RMI), 50502-50504 (external access)
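One way to run this stack step by step is to bring the slaves up first, confirm each jmeter-server is listening, and only then start the master:

# Start the slaves and wait until each one reports its RMI endpoint
docker-compose up -d jmeter-slave-1 jmeter-slave-2 jmeter-slave-3
docker-compose logs jmeter-slave-1 | grep "Created remote object"

# Run the master in the foreground; it exits when the test finishes
docker-compose up jmeter-master

# Results land on the host through the ./results volume mount
ls results/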

AWS Multi-Region Deployment

Bash Deployment Script

#!/bin/bash
set -e

REGIONS=${1:-us-east-1,eu-west-1}
INSTANCE_TYPE=${2:-t3.large}
SECURITY_GROUP_NAME="jmeter-sg"

# Function: Create security group
create_security_group() {
    local region=$1
    echo "[INFO] Creating security group in $region..."

    local sg_id=$(aws ec2 create-security-group \
        --group-name "$SECURITY_GROUP_NAME" \
        --description "JMeter distributed load testing" \
        --region "$region" \
        --query 'GroupId' \
        --output text)

    # RMI port
    aws ec2 authorize-security-group-ingress \
        --group-id "$sg_id" \
        --protocol tcp \
        --port 50500 \
        --cidr 0.0.0.0/0 \
        --region "$region" > /dev/null

    # SSH port
    aws ec2 authorize-security-group-ingress \
        --group-id "$sg_id" \
        --protocol tcp \
        --port 22 \
        --cidr 0.0.0.0/0 \
        --region "$region" > /dev/null

    echo "$sg_id"
}

# Function: Launch instances
launch_instances() {
    local region=$1
    local instance_count=$2

    echo "[INFO] Launching $instance_count instances in $region..."

    local ami=$(aws ec2 describe-images \
        --owners 099720109477 \
        --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04*" \
        --region "$region" \
        --query 'Images | sort_by(@, &CreationDate) | [-1].ImageId' \
        --output text)

    local response=$(aws ec2 run-instances \
        --image-id "$ami" \
        --instance-type "$INSTANCE_TYPE" \
        --key-name jmeter-test-key \
        --security-group-ids "$(create_security_group $region)" \
        --count "$instance_count" \
        --region "$region" \
        --query 'Instances[*].InstanceId' \
        --output text)

    echo "[SUCCESS] Instances: $response"
}

# Deploy to each region
IFS=',' read -ra REGIONS_ARRAY <<< "$REGIONS"
for region in "${REGIONS_ARRAY[@]}"; do
    launch_instances "${region// /}" 3  # 1 master + 2 slaves
done

Deployment Steps:

  1. Create security groups (RMI port 50500 + SSH)
  2. Get latest Ubuntu AMI for each region
  3. Launch EC2 instances
  4. Docker Compose pulls from ECR
  5. JMeter containers start automatically
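The script above covers steps 1-3. Steps 4-5 happen on the instances themselves; a hypothetical user-data/bootstrap script for them could look like the sketch below (account ID, region, and repository name are placeholders, and the master node would run master <test plan> instead of server):

#!/bin/bash
# Hypothetical EC2 bootstrap -- illustrative only; adapt to your ECR repository.
set -e

# Install Docker and the AWS CLI on Ubuntu 20.04
apt-get update && apt-get install -y docker.io awscli
systemctl enable --now docker

# Authenticate against ECR and pull the JMeter image built earlier
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/jmeter:latest

# Start this node as a slave (load generator)
docker run -d --name jmeter-slave -p 50500:50500 \
  -e REGION=us-east-1 \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/jmeter:latest server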

Test Data Generation

Python Data Generator

#!/usr/bin/env python3
"""Generate realistic test data for load testing"""

import argparse
import csv
import json
import random
from datetime import datetime, timedelta

class OrderDataGenerator:
    USERS = [f"USER{i:05d}" for i in range(1, 501)]
    PRODUCTS = [f"PROD{i:03d}" for i in range(1, 51)]
    REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "eu-central-1", "ap-southeast-1"]

    def __init__(self, num_records=100):
        self.num_records = num_records
        self.generated_ids = set()

    def generate_order(self, index):
        return {
            "orderId": f"ORD{random.randint(100000, 999999)}",
            "userId": random.choice(self.USERS),
            "productId": random.choice(self.PRODUCTS),
            "quantity": random.randint(1, 10),
            "unitPrice": round(random.uniform(10.0, 1000.0), 2),
            "region": random.choice(self.REGIONS),
            "email": f"customer{random.randint(1, 1000)}@example.com",
            "orderDate": (datetime.now() - timedelta(days=random.randint(0, 30))).isoformat(),
        }

    def generate_csv(self, output_file):
        fieldnames = ["orderId", "userId", "productId", "quantity",
                      "unitPrice", "region", "email", "orderDate"]

        with open(output_file, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()

            for i in range(self.num_records):
                writer.writerow(self.generate_order(i))
                if (i + 1) % 100 == 0:
                    print(f"Generated {i + 1} records...")

        print(f"✓ Generated {self.num_records} records to {output_file}")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Generate test data")
    parser.add_argument("--output", "-o", required=True, help="Output file")
    parser.add_argument("--records", "-r", type=int, default=100, help="Number of records")

    args = parser.parse_args()

    generator = OrderDataGenerator(num_records=args.records)
    generator.generate_csv(args.output)

Usage:

# Generate 10,000 records
python3 generate-test-data.py --output orders.csv --records 10000

Results Analysis

Python Analysis Script

#!/usr/bin/env python3
"""Analyze JMeter JTL test results"""

import csv
import json
import sys
from collections import defaultdict
from statistics import mean, median

class JTLAnalyzer:
    def __init__(self, file_path):
        self.file_path = file_path
        self.samples = []

    def load_jtl(self):
        with open(self.file_path, "r") as f:
            reader = csv.DictReader(f)
            for row in reader:
                self.samples.append({
                    "label": row.get("label", ""),
                    "elapsed": int(row.get("elapsed", 0)),
                    "latency": int(row.get("Latency", 0)),
                    "bytes": int(row.get("bytes", 0)),
                    "responseCode": row.get("responseCode", ""),
                    "success": row.get("success", "true").lower() == "true",
                })

    def analyze(self):
        elapsed_times = [s["elapsed"] for s in self.samples]

        results = {
            "summary": {
                "totalSamples": len(self.samples),
                "successful": sum(1 for s in self.samples if s["success"]),
                "failed": sum(1 for s in self.samples if not s["success"]),
            },
            "responseTime": {
                "min": min(elapsed_times),
                "max": max(elapsed_times),
                "mean": round(mean(elapsed_times)),
                "median": round(median(elapsed_times)),
                "p95": round(sorted(elapsed_times)[int(len(elapsed_times) * 0.95)]),
            },
            "httpCodes": defaultdict(int),
        }

        for sample in self.samples:
            results["httpCodes"][sample["responseCode"]] += 1

        return results

    def print_summary(self, results):
        print("\n" + "="*70)
        print("📊 JMeter Test Results")
        print("="*70)

        summary = results["summary"]
        print(f"\nTotal Samples:   {summary['totalSamples']}")
        print(f"Successful:      {summary['successful']}")
        print(f"Failed:          {summary['failed']}")

        timing = results["responseTime"]
        print(f"\nResponse Time (ms):")
        print(f"  Min:    {timing['min']}")
        print(f"  Max:    {timing['max']}")
        print(f"  Mean:   {timing['mean']}")
        print(f"  Median: {timing['median']}")
        print(f"  P95:    {timing['p95']}")

        print(f"\nHTTP Codes:")
        for code, count in sorted(results["httpCodes"].items()):
            print(f"  {code}: {count}")

if __name__ == "__main__":
    analyzer = JTLAnalyzer(sys.argv[1])
    analyzer.load_jtl()
    results = analyzer.analyze()
    analyzer.print_summary(results)

    # Export to JSON
    if len(sys.argv) > 2:
        with open(sys.argv[2], "w") as f:
            json.dump(results, f, indent=2)
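The script takes the JTL path and an optional second argument for a JSON export of the same summary; substitute the actual timestamped results folder:

python3 analyze-results.py results/<timestamp>/results.jtl results/<timestamp>/summary.json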

Expected Output:

======================================================================
📊 JMeter Test Results
======================================================================

Total Samples:   5000
Successful:      4950
Failed:          50

Response Time (ms):
  Min:    45
  Max:    2543
  Mean:   245
  Median: 198
  P95:    687

HTTP Codes:
  200: 4500
  201: 400
  500: 100

Quick Start (5 Minutes)

Step 1: Build (2 minutes)

cd /path/to/project
docker build -t jmeter:latest -f Dockerfile .

Step 2: Run (2 minutes)

docker-compose up

Step 3: Analyze (1 minute)

python3 analyze-results.py results/*/results.jtl

Output:

  • Test runs automatically
  • Results saved to results/YYYYMMDD_HHMMSS/
  • HTML report at results/*/html-report/index.html
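If you ever need to (re)build the HTML dashboard from an existing JTL file, JMeter can do that directly; the output folder must be empty or not yet exist:

jmeter -g results/<timestamp>/results.jtl -o results/<timestamp>/html-report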

Customization Guide

Modify for Your API

  1. Open test script in JMeter GUI:
jmeter -t oms-load-test.jmx
  2. Modify HTTP Samplers:

    • Change domain (API_BASE_URL)
    • Update paths and methods
    • Adjust payload structure
  3. Update assertions:

    • Change expected response codes
    • Add custom response validation
  4. Configure load:

    • Adjust thread count
    • Change ramp-up time
    • Modify loop iterations

Change Target API

# In docker-compose.yml, update:
environment:
  API_BASE_URL: "https://your-api.example.com"
  AUTH_TOKEN: "your-real-token"
  NUM_THREADS: "100"

Deploy Custom Configuration

# Override variables at runtime
API_BASE_URL=https://api.yourcompany.com \
NUM_THREADS=100 \
REGION=eu-west-1 \
docker-compose up

Troubleshooting

Docker Build Fails

# Clear cache and rebuild
docker system prune -a
docker build --no-cache -t jmeter:latest .

RMI Connection Issues

# Check RMI logs
docker logs jmeter-master | grep -i rmi

# Verify ports
docker port jmeter-master

# Restart services
docker-compose restart

Out of Memory

# Increase heap in docker-compose.yml
environment:
  HEAP: "-Xmx2g"      # Increase from 1g
  NEW: "-Xmn1g"       # Increase from 512m

Test Script Not Found

# Verify volume mount
docker inspect jmeter-master | grep -A 5 Mounts

# Check file exists
docker exec jmeter-master ls -la /jmeter/tests/

# Use absolute path
docker run -v $(pwd)/tests:/jmeter/tests:ro jmeter:latest ...

Summary

You now have a complete, production-ready distributed load testing setup:

  • JMeter test script for Order Management System
  • Docker containerization for reproducibility
  • Local orchestration with 1 master + 3 slaves
  • AWS deployment automation for multi-region testing
  • Data generation scripts
  • Results analysis engine

Key Metrics Generated:

  • Response times (min, max, mean, median, P95, P99)
  • Throughput (requests/second)
  • Success/failure rates
  • Per-endpoint statistics
  • Geographic distribution analysis

Next Steps:

  1. Run locally first (docker-compose up)
  2. Analyze results (python3 analyze-results.py)
  3. Customize for your API
  4. Deploy to AWS (./deploy-aws.sh)
  5. Share results with your team

This setup works for any REST API and scales from your laptop to global infrastructure!
