🚀 Shift Left Performance Testing in Spring Boot: Stability Through Control

Introduction

Performance testing often comes too late in the software development lifecycle: after the code is merged or deployed, or when something starts slowing down in production.

But what if performance testing doesn't have to wait until the end?
What if it could run right inside your Spring Boot CI/CD pipeline, every time the code changes?

That's the essence of Shift Left Performance Testing: bringing load and latency validation closer to developers.
And when you combine Gatling (for load simulation) with mocked dependencies (for stability), you get both speed and consistency in your performance results.

The Problem

One of the biggest challenges with API performance testing is uncontrolled variables:

  • External APIs fluctuate in response time
  • Databases may have inconsistent caching or data sizes
  • Network latency varies per environment

When these factors change, your test results become inconsistent.
You're left wondering: Is my API slow, or was it the dependency this time?

To catch genuine performance regressions, you need stable and repeatable test conditions, which means mocking what's not under test.

โš™๏ธ Setting the Ground for Controlled Perf Testing

To get predictable results, mock or simplify dependencies before running load tests.

Mock External APIs with WireMock

If your Spring Boot API calls other services, say for authentication, pricing, or inventory, mock those dependencies using WireMock or any other mocking framework.

WireMock example:

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import org.junit.jupiter.api.BeforeEach;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.wiremock.AutoConfigureWireMock;

@AutoConfigureWireMock(port = 8089) // provided by spring-cloud-contract-wiremock
@SpringBootTest
class ProductServiceIntegrationTest {

  @BeforeEach
  void setupMocks() {
    // Stub the downstream inventory call with fixed data and fixed latency
    stubFor(get(urlEqualTo("/inventory/123"))
      .willReturn(aResponse()
        .withFixedDelay(200)  // Simulate 200ms latency
        .withHeader("Content-Type", "application/json")
        .withBody("{\"available\": true}")));
  }
}

Now, every test run behaves exactly the same - same delay, same data, same outcome.
That's controlled performance testing.

Note: You can also use WireMock's recording feature to capture responses in files, so you don't have to stub large payloads inline.
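
For example, a stub can serve a pre-recorded payload from src/test/resources/__files instead of an inline body. Here is a minimal sketch, meant as a drop-in replacement for the stub above; the file name inventory-123.json is just an assumed example:

// Serve a recorded/large payload from src/test/resources/__files
// instead of embedding the JSON in the stub (the file name is hypothetical).
stubFor(get(urlEqualTo("/inventory/123"))
  .willReturn(aResponse()
    .withFixedDelay(200)  // Keep the same fixed latency
    .withHeader("Content-Type", "application/json")
    .withBodyFile("inventory-123.json")));  // Resolved relative to __files/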

Use H2 for Database Mocking

When your test focuses on the application or API layer, you don't always need a full production-grade database.

Using an in-memory database like H2 ensures consistency and isolation:

spring:
  datasource:
    url: jdbc:h2:mem:testdb
    driver-class-name: org.h2.Driver
    username: test
    password:
  jpa:
    hibernate:
      ddl-auto: update

You can preload the same dataset before each run for reproducibility.
It eliminates variability from query performance, network I/O, and DB load.
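
One way to preload data is to seed H2 at application startup so every run starts from an identical dataset. Below is a minimal sketch, assuming a hypothetical product table (created by Hibernate) and a dedicated perf profile:

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jdbc.core.JdbcTemplate;

// Sketch: fixed, repeatable seed data for performance runs.
// The table and column names are assumptions for illustration.
@Configuration
@Profile("perf")
class PerfDataSeeder {

  @Bean
  CommandLineRunner seedProducts(JdbcTemplate jdbc) {
    return args -> {
      for (int i = 1; i <= 1_000; i++) {
        jdbc.update("INSERT INTO product (id, name, price) VALUES (?, ?, ?)",
            i, "Product " + i, 9.99);
      }
    };
  }
}

Starting the app with this profile active gives every load test the same baseline; a data.sql script on the classpath is an equally valid approach.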

Gatling: Performance as Code

Once your environment is stable, define your performance tests in Gatling.

Maven Configuration for Gatling (Java DSL)

pom.xml

<project>
  <dependencies>
    <!-- Gatling bundle: pulls in the Java DSL, HTTP module, and HTML reporting -->
    <dependency>
      <groupId>io.gatling.highcharts</groupId>
      <artifactId>gatling-charts-highcharts</artifactId>
      <version>3.11.5</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <!-- Gatling Maven Plugin -->
      <plugin>
        <groupId>io.gatling</groupId>
        <artifactId>gatling-maven-plugin</artifactId>
        <version>4.9.2</version>
        <executions>
          <execution>
            <phase>verify</phase>
            <goals>
              <goal>test</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

Performance test

You can create a separate folder in the project structure to organize all of the performance tests. Treat performance tests just like production code.

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

public class ProductApiSimulation extends Simulation {

    // Define the HTTP protocol
    HttpProtocolBuilder httpProtocol = http
        .baseUrl("http://localhost:8080") // Base URL of your Spring Boot service
        .acceptHeader("application/json");

    // Define the scenario
    ScenarioBuilder scn = scenario("Get Products Scenario")
        .exec(
            http("Get All Products")
                .get("/api/products")
                .check(status().is(200))
        );

    {
        setUp(
            scn.injectOpen(
                rampUsers(50).during(2 * 60),            // ✅ Warm-up over 2 mins
                constantUsersPerSec(20).during(15 * 60)  // ✅ Sustain load for 15 mins
            )
        )
        .protocols(httpProtocol)
        .maxDuration(17 * 60)                            // ✅ Hard stop for safety
        // ✅ Assertions for automated performance gating
        .assertions(
            global().responseTime().percentile(95).lt(500),  // 95% of requests under 500ms
            global().successfulRequests().percent().gt(98.0) // Error rate below 2%
        );
    }
}

Code Explanation

  • rampUsers(50).during(2 * 60) → simulates a gradual ramp-up (50 users over 120 seconds).
  • constantUsersPerSec(20).during(15 * 60) → sustains the load for 15 minutes. That is a long window for a pipeline, so strike a balance between pipeline duration and enough load time to exercise the application meaningfully.
  • check(status().is(200)) → verifies each request returns HTTP 200.
  • Assertions → define performance thresholds. If a threshold is breached:
    • the test fails,
    • the pipeline breaks,
    • developers are notified before the code moves to higher environments.

To avoid a long pipeline run:

✅ Run short smoke performance tests (30–60 seconds) on every PR (see the parameterized sketch after the workflow trigger example below)
✅ Run long tests only on:

  • main branch
  • nightly builds
  • release candidates

GitHub Actions example

on:
  push:
    branches:
      - main
  schedule:
    - cron: "0 2 * * *"  # nightly
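
For the PR-level smoke runs mentioned above, one option is to parameterize the injection profile so the same simulation serves both purposes. Below is a minimal sketch; the PERF_* environment variable names and defaults are assumptions, to be set by the pipeline for the long runs:

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

// Sketch: one simulation, two profiles. Without the (hypothetical) PERF_* variables
// it defaults to a short smoke test suitable for PR pipelines.
public class ProductApiSmokeOrSoakSimulation extends Simulation {

    private static int envOrDefault(String name, int fallback) {
        String value = System.getenv(name);
        return value != null ? Integer.parseInt(value) : fallback;
    }

    int usersPerSec   = envOrDefault("PERF_USERS_PER_SEC", 2);
    int rampSeconds   = envOrDefault("PERF_RAMP_SECONDS", 10);
    int steadySeconds = envOrDefault("PERF_STEADY_SECONDS", 30);

    HttpProtocolBuilder httpProtocol = http
        .baseUrl("http://localhost:8080")
        .acceptHeader("application/json");

    ScenarioBuilder scn = scenario("Get Products Smoke/Soak")
        .exec(http("Get All Products").get("/api/products").check(status().is(200)));

    {
        setUp(scn.injectOpen(
                rampUsersPerSec(0).to(usersPerSec).during(rampSeconds),
                constantUsersPerSec(usersPerSec).during(steadySeconds)))
            .protocols(httpProtocol)
            .assertions(global().successfulRequests().percent().gt(98.0));
    }
}

A nightly or release job could then export, say, PERF_STEADY_SECONDS=900 before calling mvn gatling:test, while PR builds simply run with the short defaults.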

Why Assertions Matter

These assertions turn your performance test into a quality gate.
If response times exceed 500 ms or error rates go beyond 2%, the test fails automatically.

This instantly alerts developers that a recent change has caused a performance regression, before it ever reaches production.

No manual analysis, no post-deployment surprises.

Note: Define your performance thresholds and assertions based on realistic SLAs or baseline metrics, and review them periodically as your service evolves.
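
Gatling also lets you assert on an individual request, which is useful when one endpoint has a stricter SLA than the rest. A short sketch (the name passed to details(...) must match the request name used in the scenario):

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

// Sketch: combining a per-request assertion with global thresholds.
public class ProductApiAssertionSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http
        .baseUrl("http://localhost:8080")
        .acceptHeader("application/json");

    ScenarioBuilder scn = scenario("Get Products Scenario")
        .exec(http("Get All Products").get("/api/products").check(status().is(200)));

    {
        setUp(scn.injectOpen(atOnceUsers(10)))
            .protocols(httpProtocol)
            .assertions(
                global().responseTime().percentile(95).lt(500),            // overall p95 budget
                details("Get All Products").responseTime().max().lt(1000), // stricter cap for this endpoint
                global().successfulRequests().percent().gt(98.0)           // error budget of 2%
            );
    }
}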

🚀 Running the Test

Once configured, you can run performance tests using:

mvn gatling:test

This will:

  • Run the load against your Spring Boot API (which must already be running locally or in CI; mvn gatling:test does not start it)
  • Run the Gatling simulation (ProductApiSimulation.java)
  • Generate an HTML report under:
target/gatling/productapisimulation-<timestamp>/index.html
  • Fail the build automatically if any assertion is breached (e.g., high latency or low success rate)

🧩 Bringing It into the CI/CD Pipeline

The real power of Shift Left testing comes when performance tests run automatically in your pipeline, just like unit or integration tests.

Here's an example using GitHub Actions:

name: Performance Tests

on:
  push:
    branches:
      - "release/*"   # Run only on release branches to avoid long runs on every commit
      - "main"
  workflow_dispatch:  # Allow manual trigger

jobs:
  performance-test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          java-version: "17"
          distribution: "temurin"

      - name: Build Spring Boot app
        run: ./mvnw clean package -DskipTests

      - name: Start Spring Boot app
        # Gatling targets localhost:8080, so the API must be up before load starts.
        # Simple wait loop; adjust the health-check URL to your app.
        run: |
          nohup java -jar target/*.jar &
          for i in {1..30}; do curl -sf http://localhost:8080/api/products && break; sleep 2; done

      - name: Run Gatling performance tests
        run: ./mvnw gatling:test

This setup:
✅ Runs automatically on main or release branches
✅ Can be triggered manually for controlled environments
✅ Produces Gatling HTML reports that can be published as pipeline artifacts
✅ Fails the pipeline automatically if performance thresholds are breached

To keep pipelines lean, run these tests only on main, release, or nightly builds, not on every feature branch.

Why This Approach Works

When you combine stable mocks, assertions, and CI integration:

  • You get consistent metrics across builds
  • You isolate application performance from dependency noise
  • You catch regressions early with automatic thresholds
  • You build confidence without extending pipeline times unnecessarily

This is how teams move from reactive performance firefighting to proactive performance assurance.

๐Ÿ Wrapping Up

Shift-left performance testing isn't about running massive load tests earlier; it's about running smarter, smaller, more stable tests continuously.

By combining:

  • Spring Boot for your core service
  • WireMock for predictable external calls
  • H2 for stable DB interactions
  • Gatling for performance-as-code
  • Assertions to enforce performance budgets
  • CI/CD filters to run tests only where needed

You achieve repeatable, reliable, and developer-owned performance validation.

That's not just testing earlier; it's building a performance culture into the pipeline.

⚡ TL;DR

  • Shift left = move performance testing closer to code commits.
  • Mock dependencies (WireMock, H2) → get stable, repeatable results.
  • Use Gatling → define performance as code.
  • Add assertions → fail builds when thresholds break.
  • Configure CI/CD → run only on main/release branches.
  • Focus on early detection, not end-stage firefighting.

If you have reached this far, then I have made a satisfactory effort to keep you reading. Please leave a comment or share any corrections.

My Other Blogs:
