Patoliya Infotech

Performance Testing PHP Applications: Load Testing with K6 and Artillery

TL;DR: Load testing is not optional for production PHP applications. This guide covers two of the best modern tools, K6 and Artillery, with real scripts, PHP-specific gotchas, CI/CD integration, and a framework for interpreting results so you know exactly when your app is ready (and when it isn't).

Why PHP Apps Need Load Testing

PHP powers a staggering portion of the web, from WordPress blogs to enterprise Laravel APIs to high-traffic e-commerce platforms. Yet performance testing is consistently one of the most skipped steps in PHP development workflows.

The consequences are predictable: a product launch doubles traffic, the database connection pool saturates, opcache fills up, and suddenly a healthy-looking application is returning 502s.

Load testing answers questions your unit tests and code reviews never can:

  • How many concurrent users can my app handle before response times degrade?
  • Where exactly does it break: PHP-FPM, MySQL, Redis, or the application code itself?
  • Does my session handling degrade under concurrent writes?
  • Does my Laravel queue back up under spike traffic?

If you're building custom PHP applications for production use, load testing isn't a one-time checkbox; it's a recurring discipline tied to every major feature release.

Performance Testing Concepts Refresher

Before diving into tools, let's align on terminology:

| Term | Definition |
|---|---|
| Load Test | Simulate expected traffic to measure normal behaviour |
| Stress Test | Push beyond expected limits to find the breaking point |
| Spike Test | Sudden burst of traffic; simulates a flash sale or viral event |
| Soak Test | Sustained load over hours; finds memory leaks and resource exhaustion |
| VU (Virtual User) | A simulated user executing your test script concurrently |
| RPS (Requests/sec) | Throughput: how many HTTP requests your app handles per second |
| p95 / p99 latency | 95th / 99th percentile response time; the tail that users actually feel |
| Error rate | % of requests that returned 4xx/5xx; your availability signal |

The metric that matters most in practice: p99 latency. Your average response time can look great while 1% of users wait 8 seconds. In a Laravel API serving 10,000 RPM, that's 100 frustrated users every minute.
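To make that concrete, here is a small standalone sketch (plain Node, not a k6 script; the latency numbers are invented) showing how an average can look healthy while the tail is terrible:

```javascript
// percentiles.js - why averages hide the tail (plain Node, not a k6 script)

// Nearest-rank percentile: value at index ceil(p * n) - 1 of the sorted sample
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.ceil(p * sorted.length) - 1];
}

// 98 fast requests at 100ms, 2 stragglers at 8000ms (invented numbers)
const latencies = [...Array(98).fill(100), 8000, 8000];

const avg = latencies.reduce((sum, v) => sum + v, 0) / latencies.length;

console.log(`avg: ${avg}ms`);                          // 258ms - looks fine
console.log(`p95: ${percentile(latencies, 0.95)}ms`);  // 100ms - still fine
console.log(`p99: ${percentile(latencies, 0.99)}ms`);  // 8000ms - 1 in 50 users suffers
```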

Tool Overview: K6 vs Artillery

Both tools are excellent. Here's how they differ:

| Feature | K6 | Artillery |
|---|---|---|
| Script language | JavaScript (ES6+) | YAML + optional JS processors |
| Protocol support | HTTP, WebSocket, gRPC | HTTP, WebSocket, Socket.io |
| Learning curve | Low–Medium | Very Low (YAML) |
| Real browser simulation | ✅ (k6 browser) | — |
| Cloud execution | ✅ Grafana Cloud k6 | ✅ Artillery Cloud |
| CI/CD integration | ✅ Excellent | ✅ Excellent |
| Output / metrics | Rich, Prometheus-compatible | JSON, HTML reports |
| Best for | API & complex flow testing | Quick HTTP load tests, microservices |
| License | AGPL-3 (OSS) + commercial | MPL-2 (OSS) + commercial |

Rule of thumb:

  • Reach for K6 when you need programmable, conditional logic, thresholds as code, and CI gates.
  • Reach for Artillery when you want YAML-driven speed, readable scenario configs, and simple team onboarding.

Many teams use both: Artillery for quick smoke tests, K6 for authoritative load runs in CI.

These tools integrate cleanly into DevOps pipelines, making performance a first-class deployment gate rather than an afterthought.

Setting Up Your Test Environment

Critical rule: Never load test production.

Always test against a staging environment that mirrors production as closely as possible:

✅ Same PHP version (match your production FPM config)
✅ Same web server (Nginx/Apache + PHP-FPM pool settings)
✅ Same database engine + approximate data volume
✅ Same caching layer (Redis / Memcached / OPcache enabled)
✅ CDN bypassed (test the origin, not CDN-cached responses)
✅ Network latency similar to real users (add artificial latency if testing locally)

PHP-FPM configuration to note before testing:

; /etc/php/8.x/fpm/pool.d/www.conf
pm = dynamic
pm.max_children = 50        ; This is your hard concurrency ceiling
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 35
pm.max_requests = 500       ; Restart workers after N requests (memory leak mitigation)

Your load test should drive concurrency up to pm.max_children to observe exactly what happens when the pool is exhausted; that's the most important number to know.
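A back-of-envelope way to sanity-check that ceiling before testing (a sketch; every number below is an assumption — measure your own workers' memory on your staging box):

```javascript
// fpm-sizing.js - back-of-envelope pm.max_children sizing
// (sketch; every number here is an assumption - measure your own workers)

// max_children ≈ RAM left for PHP-FPM / average memory per worker
function maxChildren(ramForPhpMb, avgWorkerMb) {
  return Math.floor(ramForPhpMb / avgWorkerMb);
}

// Rough throughput ceiling of the pool: workers / avg request duration
function rpsCeiling(workers, avgRequestSeconds) {
  return Math.round(workers / avgRequestSeconds);
}

// Example: 4 GB box, ~1 GB reserved for OS/MySQL/Redis, ~60 MB per worker
const workers = maxChildren(4096 - 1024, 60);
console.log(`pm.max_children ≈ ${workers}`);  // ≈ 51

// At a 200ms average response time, the pool saturates around:
console.log(`throughput ceiling ≈ ${rpsCeiling(workers, 0.2)} req/s`);  // ≈ 255
```

If your load test shows failures well below that calculated ceiling, the bottleneck is elsewhere (usually the database).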

Configuring staging environments that accurately mirror production is a core part of robust software testing and QA practices.

Load Testing with K6

Installation

# macOS
brew install k6

# Ubuntu / Debian
sudo gpg --no-default-keyring \
  --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
  --keyserver hkp://keyserver.ubuntu.com:80 \
  --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69

echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" \
  | sudo tee /etc/apt/sources.list.d/k6.list

sudo apt-get update && sudo apt-get install k6

# Docker (no install required)
docker run --rm -i grafana/k6 run - <script.js

Your First K6 Script

// basic-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

// Test options, 10 VUs for 30 seconds
export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  const res = http.get('https://staging.yourphpapp.com/api/products');

  // Assertions, failures increment k6's error counter
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
    'body contains data': (r) => r.body.includes('"data"'),
  });

  sleep(1); // Think time between requests, simulates real user behaviour
}

Run it:

k6 run basic-load-test.js

Output snippet:

✓ status is 200
✓ response time < 500ms
✓ body contains data

http_req_duration............: avg=143ms  min=89ms  med=131ms  max=487ms  p(90)=201ms  p(95)=289ms
http_req_failed..............: 0.00%  ✓ 0      ✗ 300
iterations...................: 300    10.0/s
vus..........................: 10     min=10   max=10

Virtual Users & Stages

Real traffic doesn't spike instantly. Model realistic ramp-up patterns:

// staged-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 20 },   // Ramp up to 20 VUs over 2 minutes
    { duration: '5m', target: 20 },   // Hold at 20 VUs for 5 minutes
    { duration: '2m', target: 50 },   // Ramp to 50 VUs - stress territory
    { duration: '5m', target: 50 },   // Hold at stress level
    { duration: '2m', target: 0 },    // Ramp down - watch for recovery
  ],
};

export default function () {
  const res = http.get('https://staging.yourphpapp.com/');
  check(res, { 'status 200': (r) => r.status === 200 });
  sleep(Math.random() * 3 + 1); // Random 1-4s think time
}

The ramp-down phase is underrated. Watch whether your app recovers cleanly as load drops. PHP-FPM workers that don't release database connections properly will hold locks even after VUs go to zero.

Thresholds & SLOs

K6 thresholds turn your performance requirements into pass/fail CI gates:

export const options = {
  stages: [
    { duration: '3m', target: 100 },
    { duration: '5m', target: 100 },
    { duration: '2m', target: 0 },
  ],
  thresholds: {
    // 95% of requests must complete in under 400ms, and the 99th
    // percentile must stay under 1 second. Note: one key per metric -
    // a duplicate 'http_req_duration' key would silently overwrite the first.
    'http_req_duration': ['p(95)<400', 'p(99)<1000'],

    // Less than 1% of requests can fail
    'http_req_failed': ['rate<0.01'],

    // Specific URL thresholds using tags
    'http_req_duration{url:https://staging.yourphpapp.com/api/checkout}': ['p(95)<600'],
  },
};
};

If any threshold is breached, k6 run exits with a non-zero code - perfect for failing a CI build.

Testing PHP Auth Flows

Most PHP apps require authentication. Here's a realistic Laravel Sanctum / session-based login flow:

// auth-flow-test.js
import http from 'k6/http';
import { check, group, sleep } from 'k6';

const BASE_URL = 'https://staging.yourphpapp.com';

export const options = {
  stages: [
    { duration: '1m', target: 30 },
    { duration: '3m', target: 30 },
    { duration: '1m', target: 0 },
  ],
  thresholds: {
    'http_req_duration{group:::Login}': ['p(95)<800'],
    'http_req_duration{group:::Dashboard}': ['p(95)<400'],
    'http_req_failed': ['rate<0.02'],
  },
};

export default function () {
  let authToken;

  group('Login', () => {
    // Step 1: Get CSRF token (Laravel requires this)
    const csrfRes = http.get(`${BASE_URL}/sanctum/csrf-cookie`);
    const csrfToken = csrfRes.cookies['XSRF-TOKEN']
      ? decodeURIComponent(csrfRes.cookies['XSRF-TOKEN'][0].value)
      : '';

    // Step 2: Authenticate
    const loginRes = http.post(
      `${BASE_URL}/api/login`,
      JSON.stringify({
        email: `testuser${__VU}@example.com`,  // __VU = virtual user ID
        password: 'password',
      }),
      {
        headers: {
          'Content-Type': 'application/json',
          'X-XSRF-TOKEN': csrfToken,
          'Accept': 'application/json',
        },
      }
    );

    check(loginRes, {
      'login successful': (r) => r.status === 200,
      'token returned': (r) => JSON.parse(r.body).token !== undefined,
    });

    authToken = JSON.parse(loginRes.body).token;
  });

  sleep(1);

  group('Dashboard', () => {
    const dashRes = http.get(`${BASE_URL}/api/dashboard`, {
      headers: {
        'Authorization': `Bearer ${authToken}`,
        'Accept': 'application/json',
      },
    });

    check(dashRes, {
      'dashboard loads': (r) => r.status === 200,
      'data present': (r) => JSON.parse(r.body).data !== undefined,
    });
  });

  sleep(2);
}

Testing PHP APIs with Dynamic Data

Using K6's SharedArray for realistic, parameterised test data:

// parameterised-test.js
import http from 'k6/http';
import { SharedArray } from 'k6/data';
import { check, sleep } from 'k6';

// Loaded once, shared across all VUs (memory-efficient)
const users = new SharedArray('users', function () {
  return JSON.parse(open('./test-users.json')); // Array of {email, password}
});

const products = new SharedArray('products', function () {
  return JSON.parse(open('./test-products.json')); // Array of product IDs
});

export const options = {
  vus: 50,
  duration: '5m',
  thresholds: {
    'http_req_duration': ['p(95)<500'],
    'http_req_failed': ['rate<0.01'],
  },
};

export default function () {
  // Pick a user based on VU index (ensures each VU uses a consistent user)
  const user = users[__VU % users.length];
  const product = products[Math.floor(Math.random() * products.length)];

  // Authenticate
  const loginRes = http.post(
    'https://staging.yourphpapp.com/api/login',
    JSON.stringify({ email: user.email, password: user.password }),
    { headers: { 'Content-Type': 'application/json' } }
  );

  const token = JSON.parse(loginRes.body).token;

  // Browse product
  http.get(`https://staging.yourphpapp.com/api/products/${product.id}`, {
    headers: { 'Authorization': `Bearer ${token}` },
  });

  // Add to cart
  const cartRes = http.post(
    'https://staging.yourphpapp.com/api/cart',
    JSON.stringify({ product_id: product.id, quantity: 1 }),
    {
      headers: {
        'Authorization': `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
    }
  );

  check(cartRes, { 'added to cart': (r) => r.status === 201 });

  sleep(2);
}

Load Testing with Artillery

Installation & YAML Config

npm install -g artillery
artillery version  # Verify install

A basic Artillery config for a PHP application:

# basic-load-test.yml
config:
  target: "https://staging.yourphpapp.com"
  phases:
    - duration: 60      # seconds
      arrivalRate: 5    # new users per second
      name: "Warm-up"
    - duration: 120
      arrivalRate: 20
      name: "Sustained Load"
    - duration: 60
      arrivalRate: 50
      name: "Peak Spike"
  defaults:
    headers:
      Accept: "application/json"
      Content-Type: "application/json"

scenarios:
  - name: "Browse product catalogue"
    flow:
      - get:
          url: "/api/products"
          expect:
            - statusCode: 200
      - think: 2
      - get:
          url: "/api/products/{{ productId }}"
          expect:
            - statusCode: 200
            - contentType: "application/json"

Run it:

artillery run basic-load-test.yml
# With HTML report
artillery run --output results.json basic-load-test.yml
artillery report results.json

Scenarios & Phases

Artillery's multi-scenario config is ideal for modelling realistic traffic mixes:

# realistic-traffic-mix.yml
config:
  target: "https://staging.yourphpapp.com"
  phases:
    - duration: 300
      arrivalRate: 30
      name: "Normal traffic"
  processor: "./processors.js"  # Custom JS logic

scenarios:
  # 60% of traffic: anonymous browsing
  - name: "Anonymous Browse"
    weight: 60
    flow:
      - get:
          url: "/api/products?page={{ $randomInt(1, 10) }}"
      - think: 3
      - get:
          url: "/api/products/{{ $randomInt(1, 500) }}"
      - think: 2

  # 30% of traffic: authenticated users
  - name: "Authenticated User Flow"
    weight: 30
    flow:
      - post:
          url: "/api/login"
          json:
            email: "{{ email }}"
            password: "{{ password }}"
          capture:
            - json: "$.token"
              as: "authToken"
      - get:
          url: "/api/account/orders"
          headers:
            Authorization: "Bearer {{ authToken }}"
      - think: 5

  # 10% of traffic: checkout flow
  - name: "Checkout"
    weight: 10
    flow:
      - post:
          url: "/api/login"
          json:
            email: "{{ email }}"
            password: "{{ password }}"
          capture:
            - json: "$.token"
              as: "authToken"
      - post:
          url: "/api/cart"
          headers:
            Authorization: "Bearer {{ authToken }}"
          json:
            product_id: "{{ $randomInt(1, 100) }}"
            quantity: 1
      - post:
          url: "/api/checkout"
          headers:
            Authorization: "Bearer {{ authToken }}"
          json:
            payment_method: "card"
          expect:
            - statusCode: 201

Custom Processors (Node.js)

For dynamic data generation and complex logic, Artillery supports JS processor functions:

// processors.js
const { faker } = require('@faker-js/faker');

// Called before each scenario
function generateUserData(context, events, done) {
  context.vars.email = faker.internet.email();
  context.vars.password = 'test_password_123';
  context.vars.productId = faker.number.int({ min: 1, max: 500 });
  return done();
}

// Custom response validator
function validateProductResponse(context, response, context2, events, done) {
  const body = JSON.parse(response.body);

  if (!body.data || !body.data.id) {
    events.emit('counter', 'invalid_product_response', 1);
  }

  return done();
}

module.exports = { generateUserData, validateProductResponse };
# In your YAML config:
config:
  processor: "./processors.js"

scenarios:
  - name: "Dynamic User Test"
    flow:
      - function: "generateUserData"
      - post:
          url: "/api/register"
          json:
            email: "{{ email }}"
            password: "{{ password }}"

Testing Laravel / Symfony Routes

A targeted test for a Laravel e-commerce API, covering the critical path:

# laravel-ecommerce-test.yml
config:
  target: "https://staging.yourphpapp.com"
  phases:
    - duration: 60
      arrivalRate: 2
      name: "Warm up PHP-FPM pool"
    - duration: 300
      arrivalRate: 25
      name: "Production simulation"
    - duration: 120
      arrivalRate: 75
      name: "Black Friday spike"
    - duration: 60
      arrivalRate: 2
      name: "Recovery observation"
  http:
    timeout: 10            # Fail requests > 10s
    pool: 100              # HTTP connection pool size
  ensure:
    p99: 2000              # Fail test if p99 > 2000ms
    maxErrorRate: 1        # Fail test if error rate > 1%

scenarios:
  - name: "Product Search  PDP  Cart  Checkout"
    flow:
      # Search
      - get:
          url: "/api/search?q=laptop&per_page=20"
          capture:
            - json: "$.data[0].id"
              as: "productId"

      # Product Detail Page
      - get:
          url: "/api/products/{{ productId }}"

      # Login
      - post:
          url: "/api/login"
          json:
            email: "loadtest_{{ $loopCount }}@example.com"
            password: "password"
          capture:
            - json: "$.token"
              as: "token"

      # Add to Cart
      - post:
          url: "/api/cart/items"
          headers:
            Authorization: "Bearer {{ token }}"
          json:
            product_id: "{{ productId }}"
            quantity: 1
          expect:
            - statusCode: 201

      # Checkout
      - post:
          url: "/api/orders"
          headers:
            Authorization: "Bearer {{ token }}"
          json:
            shipping_address_id: 1
            payment_nonce: "fake-valid-nonce"
          expect:
            - statusCode: 201

PHP-Specific Optimisation Targets

When your load test reveals problems, here's where to look first in a PHP stack:

1. PHP-FPM Pool Exhaustion

Symptom: Latency climbs sharply, then requests start failing with 502/504 as load increases.

# Monitor FPM pool status in real time during load test
watch -n 1 'curl -s http://localhost/fpm-status | grep -E "active|idle|max"'

Fix:

; Increase max_children (but watch RAM, each PHP worker ~30–80MB)
pm.max_children = 100

; Enable slow log to find expensive scripts
slowlog = /var/log/php-fpm-slow.log
request_slowlog_timeout = 2s

2. OPcache Misconfiguration

Symptom: First requests after deploy are slow; warm requests are fine. OPcache fill events show up in metrics.

; php.ini - OPcache tuning for production
opcache.enable = 1
opcache.memory_consumption = 256       ; MB - increase for large codebases
opcache.max_accelerated_files = 20000  ; Must exceed file count in your app
opcache.validate_timestamps = 0        ; Disable in production for max speed
opcache.revalidate_freq = 0
opcache.jit = 1255                     ; Enable JIT (PHP 8.x)
opcache.jit_buffer_size = 128M

3. Database Connection Pool Saturation

Symptom: Query times are fine in isolation but explode under concurrent load. SHOW PROCESSLIST shows hundreds of connections in Sleep state.

// config/database.php (Laravel)
'mysql' => [
    'driver'    => 'mysql',
    'host'      => env('DB_HOST'),
    // ... other config
    'options'   => [
        PDO::ATTR_PERSISTENT => true,  // Persistent connections - use with care
    ],
    'pool' => [
        'min' => 5,
        'max' => 50,  // Match your pm.max_children
    ],
],

Better yet, use a connection pooler like PgBouncer (PostgreSQL) or ProxySQL (MySQL) to decouple PHP-FPM workers from DB connections.

4. N+1 Queries Under Load

Symptom: Single-user response time is 80ms; 50-user response time is 4000ms. The scaling is non-linear.

// ❌ N+1 - 1 query for orders + N queries for each user
$orders = Order::all();
foreach ($orders as $order) {
    echo $order->user->name; // Executes a query per iteration
}

// ✅ Eager loading - 2 queries total regardless of count
$orders = Order::with('user')->get();

These are the kinds of scalability issues that come up repeatedly in PHP application development, especially in Laravel and CodeIgniter apps that don't enforce eager loading at the architectural level.

5. Session Handling Under Concurrency

Symptom: Logged-in users intermittently get logged out or see stale data under load.

PHP's default file-based sessions create write locks - one request blocks all others for the same session. At 50+ concurrent VUs sharing sessions, this serialises your application:

// config/session.php (Laravel) - Switch to Redis
'driver' => env('SESSION_DRIVER', 'redis'),

// .env
SESSION_DRIVER=redis
REDIS_HOST=127.0.0.1
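The cost of that lock is easy to estimate. A quick sketch (invented timings) of what serialisation does to a burst of parallel requests from a single logged-in user:

```javascript
// session-lock-math.js - what per-session write locks do to concurrency
// (illustrative arithmetic only; the timings are invented)

// File sessions: N concurrent requests for the SAME session run one at a
// time, so the unluckiest request waits for all the others.
function lockedWorstCaseMs(concurrentRequests, perRequestMs) {
  return concurrentRequests * perRequestMs;
}

// Lock-free session handling (e.g. Redis, or releasing the lock early
// with session_write_close()): the requests genuinely run in parallel.
function parallelWorstCaseMs(perRequestMs) {
  return perRequestMs;
}

// One dashboard firing 10 AJAX calls at once, 200ms of work each:
console.log(lockedWorstCaseMs(10, 200)); // 2000 - the last call waits 2 seconds
console.log(parallelWorstCaseMs(200));   // 200
```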

Redis for session storage is also a stepping stone to database and cloud transformation - moving stateful components out of the server and into scalable, distributed infrastructure.

Integrating Load Tests into CI/CD

GitHub Actions with K6

# .github/workflows/load-test.yml
name: Load Test

on:
  push:
    branches: [main]

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install K6
        run: |
          sudo gpg --no-default-keyring \
            --keyring /usr/share/keyrings/k6-archive-keyring.gpg \
            --keyserver hkp://keyserver.ubuntu.com:80 \
            --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
          echo "deb [signed-by=/usr/share/keyrings/k6-archive-keyring.gpg] https://dl.k6.io/deb stable main" \
            | sudo tee /etc/apt/sources.list.d/k6.list
          sudo apt-get update && sudo apt-get install k6

      - name: Run Load Test
        run: k6 run --out json=results.json ./tests/load/smoke-test.js
        env:
          BASE_URL: ${{ secrets.STAGING_URL }}
        # k6 exits non-zero if thresholds are breached → build fails ✅

      - name: Upload Results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: k6-results
          path: results.json

GitHub Actions with Artillery

# .github/workflows/artillery-test.yml
name: Artillery Load Test

on:
  pull_request:
    branches: [main]

jobs:
  artillery:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Install Artillery
        run: npm install -g artillery

      - name: Run Smoke Test
        run: |
          artillery run \
            --output artillery-results.json \
            ./tests/load/smoke-test.yml

      - name: Check Results
        run: |
          # Extract p99 from results and fail if > 1000ms
          # (path matches Artillery v2's JSON schema; v1 used .aggregate.latency.p99)
          P99=$(jq -r '.aggregate.summaries."http.response_time".p99' artillery-results.json)
          echo "p99 latency: ${P99}ms"
          if (( $(echo "$P99 > 1000" | bc -l) )); then
            echo "❌ p99 latency exceeded threshold"
            exit 1
          fi

      - name: Generate HTML Report
        if: always()
        run: artillery report artillery-results.json

      - name: Upload Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: artillery-report
          path: artillery-results.html

Embedding load tests as deployment gates is a hallmark of mature CI/CD pipeline practices - it transforms performance from a reactive investigation into a proactive quality standard.

Reading & Interpreting Results

K6 Output Decoded

http_req_duration............: avg=234ms min=89ms med=201ms max=2341ms p(90)=389ms p(95)=512ms p(99)=1204ms
| Metric | What It Tells You |
|---|---|
| avg | Overall average; useful but misleading if skewed |
| med (p50) | The "typical" user experience |
| p90 | 9 in 10 users see this or better |
| p95 | Your SLO target; 95% of users |
| p99 | Your tail; 1% of users still experience this |
| max | Single worst request; often an outlier but worth investigating |

Red flags in K6 output:

http_req_failed..............: 5.23% ✗ 156   ← > 1% means something is broken
http_req_duration............: p(95)=4200ms   ← Well above 500ms SLO
checks.....................: 78.45%            ← < 100% means assertions are failing
vus_max...................: 50                 ← Were all VUs actually running?

Artillery Report Metrics

Artillery's HTML report gives you:

  • Latency histogram - Distribution of response times
  • RPS over time - Did throughput hold steady or degrade?
  • Error count timeline - When did errors start?
  • Scenario completion rate - Did multi-step flows complete?

The key question for every load test result:

"Does the p99 latency grow linearly with load, or does it knee-bend exponentially at a specific VU count?"

A linear growth (p99 200ms → 400ms as VUs double) suggests normal queuing. An exponential knee (p99 200ms → 3000ms when going from 40 to 50 VUs) pinpoints your exact saturation point - usually PHP-FPM pool exhaustion or a DB connection limit.
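One way to automate that question is to scan the (VU, p99) pairs from successive runs and flag the first step where latency growth far outpaces load growth. A sketch (the sample run is invented for illustration):

```javascript
// knee-detect.js - find the saturation knee in (VUs, p99) measurements
// (sketch; the sample run below is invented for illustration)

// Flag the first step where p99 grows more than `factor` times faster
// than the load does - a crude but useful saturation signal.
function findKnee(samples, factor = 2) {
  for (let i = 1; i < samples.length; i++) {
    const loadGrowth = samples[i].vus / samples[i - 1].vus;
    const latencyGrowth = samples[i].p99 / samples[i - 1].p99;
    if (latencyGrowth > factor * loadGrowth) return samples[i].vus;
  }
  return null; // linear-ish across the whole range
}

const run = [
  { vus: 10, p99: 180 },
  { vus: 20, p99: 260 },  // 2x load, ~1.4x latency: normal queuing
  { vus: 40, p99: 420 },  // still roughly linear
  { vus: 50, p99: 3100 }, // 1.25x load, ~7.4x latency: saturation
];

console.log(findKnee(run)); // 50 - start profiling FPM and the DB at this level
```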

Common PHP Performance Bottlenecks (and Fixes)

| Symptom in Load Test | Root Cause | Fix |
|---|---|---|
| 502/504 at high VUs | PHP-FPM pool exhausted | Increase pm.max_children, add horizontal scaling |
| Slow first request after deploy | OPcache cold start | Pre-warm OPcache with opcache_compile_file() in deploy script |
| Non-linear latency growth | N+1 queries | Eager load with with() in Laravel / Doctrine |
| Memory growth over soak test | Memory leak in long-lived code | Lower pm.max_requests, profile with Blackfire |
| Session errors under concurrency | File-based session locking | Migrate to Redis sessions |
| Sporadic 500s under load | Unhandled exceptions in queue workers | Add retry logic + dead-letter queues |
| Slow API under concurrency | Missing DB indexes | Run EXPLAIN on slow queries, add composite indexes |
| High CPU, low latency | Inefficient PHP code | Profile with Xdebug / Blackfire, look at hot loops |

Systematically addressing these bottlenecks is part of what makes custom software development built for scale different from code written just to pass functional tests.

Conclusion & Further Reading

Load testing is the closest thing you have to a crystal ball for production. The tools have never been better: K6 gives you the programmability of a full test framework, and Artillery gives you YAML-driven simplicity for fast iteration. Used together - and integrated into your CI/CD pipeline - they transform performance from a reactive emergency into a measurable, manageable quality attribute.

Quick recap:

┌──────────────────────────────────────────────────────────────────┐
│  Goal                          │  Tool / Technique               │
├────────────────────────────────┼─────────────────────────────────┤
│  Quick smoke test              │  Artillery YAML                 │
│  Programmable load + gates     │  K6 with thresholds             │
│  Auth flows                    │  K6 groups + CSRF handling      │
│  Realistic traffic mix         │  Artillery weighted scenarios   │
│  CI/CD integration             │  GitHub Actions + exit codes    │
│  PHP-FPM tuning insight        │  fpm-status + slow log          │
│  DB bottleneck diagnosis       │  EXPLAIN + ProxySQL/PgBouncer   │
│  Memory leaks                  │  Soak test + Blackfire          │
└──────────────────────────────────────────────────────────────────┘

The process in four steps:

  1. Define SLOs first - p95 < 400ms, error rate < 1%. Without a target, any result is acceptable.
  2. Run baseline - What's the app like at 1 VU? That's your best case; latency only degrades from there.
  3. Ramp to break - Increase VUs until thresholds are breached. Note the exact breaking point.
  4. Fix, re-test, document - The test only has value if it drives a change and verifies it worked.
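Step 1 can even be made executable: encode the SLOs as data and evaluate any run against them. A sketch (the result object shape is a simplified assumption, not k6's or Artillery's actual output schema):

```javascript
// slo-check.js - SLOs as data, evaluated against summarised run results
// (sketch; the result shape is an assumption, not a real tool's schema)

const slos = { p95Ms: 400, maxErrorRate: 0.01 };

function evaluate(result, slos) {
  const failures = [];
  if (result.p95Ms > slos.p95Ms) {
    failures.push(`p95 ${result.p95Ms}ms exceeds ${slos.p95Ms}ms`);
  }
  if (result.errorRate > slos.maxErrorRate) {
    failures.push(`error rate ${result.errorRate} exceeds ${slos.maxErrorRate}`);
  }
  return failures;
}

const baseline = { p95Ms: 180, errorRate: 0 };     // step 2: 1-VU baseline
const loaded = { p95Ms: 512, errorRate: 0.004 };   // step 3: ramped run

console.log(evaluate(baseline, slos)); // [] - passes
console.log(evaluate(loaded, slos));   // one failure: p95 over budget
```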
