Why This Article Exists
When your microservice calls an external API, things will go wrong. The remote service goes down, your threads pile up, timeouts cascade, and suddenly your perfectly healthy application is choking because of someone else's outage.
This is exactly the problem that the Circuit Breaker pattern solves. And when you combine it with Retry, you get a resilient system that can recover from transient failures while protecting itself from prolonged outages.
When I looked through my public repositories for a good example of calling an external API with proper resilience handling, I couldn't find one. So I built a demo service that wraps the randomuser.me test API — a perfect sandbox for demonstrating how Circuit Breaker and Retry work together in practice. While the API itself is just a test sandbox, the resilience patterns we apply here are exactly what you'd use when integrating with any real external service (payment gateways, third-party data providers, partner APIs, etc.).
Full code is available on GitHub.
Target audience: This article is aimed at beginner and mid-level developers who integrate with external APIs and want to learn how to make those integrations resilient. If you've worked with Spring Boot but never added Circuit Breaker or Retry — this article is for you.
The Setup
The project is a stateless Spring Boot 3.5 proxy service using Java 21. The key components:
- RandomUserClient — the HTTP client that talks to the external API
- UserService — business logic and validation
- UserController — REST endpoints
- GlobalExceptionHandler — centralized error handling
The HTTP client uses Spring's modern RestClient (rather than the older RestTemplate, which is in maintenance mode):
@Configuration
public class RestClientConfig {

    @Bean
    public RestClient randomUserRestClient(
            RestClient.Builder builder,
            @Value("${randomuser.base-url}") String baseUrl,
            @Value("${randomuser.connect-timeout}") int connectTimeout,
            @Value("${randomuser.read-timeout}") int readTimeout) {
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setConnectTimeout(connectTimeout);
        factory.setReadTimeout(readTimeout);
        return builder
                .baseUrl(baseUrl)
                .requestFactory(factory)
                .build();
    }
}
With timeouts configured in application.properties:
randomuser.base-url=https://randomuser.me
randomuser.connect-timeout=2000
randomuser.read-timeout=3000
View RestClientConfig.java on GitHub
Adding Resilience4j
We use the BOM (Bill of Materials) for version consistency:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.github.resilience4j</groupId>
            <artifactId>resilience4j-bom</artifactId>
            <version>2.2.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>io.github.resilience4j</groupId>
        <artifactId>resilience4j-spring-boot3</artifactId>
    </dependency>
</dependencies>
The Circuit Breaker + Retry Combination
Here's the core question: in what order should Circuit Breaker and Retry wrap each other?
The correct answer is: Circuit Breaker wraps Retry.
Request → CircuitBreaker → Retry → Retry → Retry → HTTP call
Why? Because you want Retry to exhaust all its attempts first, and only then should the Circuit Breaker record the final outcome (success or failure). If it were the other way around, every individual retry attempt would count as a separate failure for the Circuit Breaker — which would trip it open far too quickly.
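The difference the two orderings make can be sketched in plain Java, with each pattern reduced to its essential bookkeeping. Everything below (class and method names included) is illustrative, not code from the repo or from Resilience4j:

```java
import java.util.function.Supplier;

public class WrapOrderDemo {
    static int breakerRecordedFailures = 0;

    // A call that always fails, so we can count what the "breaker" sees.
    static String flakyCall() { throw new RuntimeException("boom"); }

    // Retry reduced to its core: up to 3 attempts, rethrow the last failure.
    static Supplier<String> withRetry(Supplier<String> inner) {
        return () -> {
            RuntimeException last = null;
            for (int i = 0; i < 3; i++) {
                try { return inner.get(); } catch (RuntimeException ex) { last = ex; }
            }
            throw last;
        };
    }

    // Circuit breaker reduced to its bookkeeping: one recorded failure per call.
    static Supplier<String> withBreaker(Supplier<String> inner) {
        return () -> {
            try { return inner.get(); } catch (RuntimeException ex) {
                breakerRecordedFailures++;
                throw ex;
            }
        };
    }

    public static void main(String[] args) {
        // Correct order: breaker outside, retry inside.
        breakerRecordedFailures = 0;
        try { withBreaker(withRetry(WrapOrderDemo::flakyCall)).get(); } catch (RuntimeException ignored) { }
        System.out.println("breaker(retry(call)) failures recorded: " + breakerRecordedFailures); // 1

        // Wrong order: retry outside, so every attempt counts separately.
        breakerRecordedFailures = 0;
        try { withRetry(withBreaker(WrapOrderDemo::flakyCall)).get(); } catch (RuntimeException ignored) { }
        System.out.println("retry(breaker(call)) failures recorded: " + breakerRecordedFailures); // 3
    }
}
```

With the correct order, three exhausted attempts register as a single failure; with retry on the outside, the same outage registers three failures and fills a 10-call sliding window three times as fast.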
The Aspect Order Trap
Here's where it gets non-obvious. In Spring, the textual order of the @CircuitBreaker and @Retry annotations on your method does not determine which one is the outer decorator; nesting is controlled by aspect ordering.
You must configure this explicitly through properties:
# Lower order value = outer decorator
# CircuitBreaker(1) → Retry(2) → actual method call
resilience4j.circuitbreaker.circuitBreakerAspectOrder=1
resilience4j.retry.retryAspectOrder=2
The rule: the aspect with the lower aspectOrder value becomes the outer decorator, while higher values sit closer to the method. So CircuitBreaker with order 1 wraps Retry with order 2, meaning Retry is closer to the method — exactly what we want.
⚠️ Without explicit ordering, Resilience4j falls back on its default aspect order, which puts Retry as the outermost decorator — the opposite of what we want here. Always configure this in properties.
The Client Implementation
@Component
public class RandomUserClient {

    private final RestClient restClient;
    private static final Logger log = LoggerFactory.getLogger(RandomUserClient.class);

    public RandomUserClient(RestClient restClient) {
        this.restClient = restClient;
    }

    @Retry(name = "randomUserApi")
    @CircuitBreaker(name = "randomUserApi", fallbackMethod = "getUsersFallback")
    public UserResponseDto getUsers(int count) {
        try {
            return restClient.get()
                    .uri("/api/?results={count}", count)
                    .retrieve()
                    .body(UserResponseDto.class);
        } catch (HttpServerErrorException ex) {
            log.error("External API server error: {}", ex.getMessage());
            throw new ExternalApiException(502, "External API server error");
        } catch (ResourceAccessException ex) {
            log.warn("External API is unavailable: {}", ex.getMessage());
            throw new ExternalApiException(503, "External API is unavailable");
        }
    }

    private UserResponseDto getUsersFallback(int count, Exception ex) {
        log.warn("Circuit breaker fallback triggered: {}", ex.getMessage());
        throw new ExternalApiUnavailableException(
                "Random User API is currently unavailable");
    }
}
View RandomUserClient.java on GitHub
A few things to notice:
1. The fallback method signature must match. It takes the same parameters as the original method plus an Exception parameter. Getting this wrong produces a cryptic NoSuchMethodException at runtime.
2. The fallback throws, not returns. Returning a default/empty response would silently degrade the user experience. Instead, we throw ExternalApiUnavailableException, which the GlobalExceptionHandler maps to HTTP 503. This makes it explicit to the client that the service is degraded.
3. HttpClientErrorException is notably absent from the catch block. This is intentional — keep reading.
The 4xx vs 5xx Retry Problem
Here's a critical design decision: should you retry on 4xx (client) errors?
No. A 4xx error means the request itself is invalid. Retrying it will produce the same result every time. You should only retry on:
- 5xx errors (HttpServerErrorException) — the server had a transient problem
- Network errors (ResourceAccessException) — timeout, connection refused, etc.
We configure this explicitly in properties:
# Retry only on server errors and network issues
resilience4j.retry.instances.randomUserApi.retry-exceptions=\
org.springframework.web.client.HttpServerErrorException,\
org.springframework.web.client.ResourceAccessException,\
com.tarasantoniuk.random_user_api.feature.user.exception.ExternalApiException
# Never retry on client errors
resilience4j.retry.instances.randomUserApi.ignore-exceptions=\
org.springframework.web.client.HttpClientErrorException
By letting HttpClientErrorException propagate uncaught through the client method (no catch block for it), it skips Retry and goes straight to the GlobalExceptionHandler. One caveat: the Circuit Breaker counts every thrown exception as a failure by default, so give it the same exclusion via resilience4j.circuitbreaker.instances.randomUserApi.ignore-exceptions. With both in place, the Circuit Breaker won't count a 4xx as a failure, and Retry won't waste time on an unrecoverable error.
View full application.properties on GitHub
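The precedence the two lists express can be sketched as a simple predicate: an ignored exception is never retried, and with a retry list configured, only listed exceptions (and their subclasses) are retried. This is an illustration of the semantics, not Resilience4j's implementation, and the exception classes in the test are stand-ins for the Spring ones:

```java
import java.util.List;

// Sketch of the retry-exceptions / ignore-exceptions semantics:
// the ignore list wins, then the retry list is consulted, else no retry.
class RetryPolicySketch {
    private final List<Class<? extends Throwable>> retryOn;
    private final List<Class<? extends Throwable>> ignoreOn;

    RetryPolicySketch(List<Class<? extends Throwable>> retryOn,
                      List<Class<? extends Throwable>> ignoreOn) {
        this.retryOn = retryOn;
        this.ignoreOn = ignoreOn;
    }

    boolean shouldRetry(Throwable t) {
        // Ignored exceptions are never retried, even if they also match retryOn.
        if (ignoreOn.stream().anyMatch(c -> c.isInstance(t))) {
            return false;
        }
        // Otherwise retry only what the whitelist covers (subclasses included).
        return retryOn.stream().anyMatch(c -> c.isInstance(t));
    }
}
```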
Full Circuit Breaker Configuration
# Sliding window: last 10 calls
resilience4j.circuitbreaker.instances.randomUserApi.sliding-window-size=10
# Open circuit when 50% of calls fail
resilience4j.circuitbreaker.instances.randomUserApi.failure-rate-threshold=50
# Stay open for 30 seconds before transitioning to half-open
resilience4j.circuitbreaker.instances.randomUserApi.wait-duration-in-open-state=30s
# Allow 3 test calls in half-open state
resilience4j.circuitbreaker.instances.randomUserApi.permitted-number-of-calls-in-half-open-state=3
And the Retry configuration:
resilience4j.retry.instances.randomUserApi.max-attempts=3
resilience4j.retry.instances.randomUserApi.wait-duration=1s
This means: on a transient failure, Retry will attempt up to 3 calls with 1-second intervals. If all 3 fail, that counts as one failure for the Circuit Breaker. After 5 out of 10 such failures (50%), the circuit opens and all subsequent calls go straight to the fallback for 30 seconds.
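To make those numbers concrete, here's a toy count-based sliding window in plain Java. It's a sketch of the idea only: the real breaker also latches the OPEN state, times the open-to-half-open transition, and more. All names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy version of a count-based sliding window, mirroring
// sliding-window-size=10 and failure-rate-threshold=50 above.
class SlidingWindowSketch {
    private final int windowSize;
    private final double failureRateThreshold; // percent
    private final Deque<Boolean> outcomes = new ArrayDeque<>(); // true = failure
    private boolean open = false;

    SlidingWindowSketch(int windowSize, double failureRateThreshold) {
        this.windowSize = windowSize;
        this.failureRateThreshold = failureRateThreshold;
    }

    void record(boolean failure) {
        if (outcomes.size() == windowSize) {
            outcomes.removeFirst(); // slide: drop the oldest outcome
        }
        outcomes.addLast(failure);
        // Like Resilience4j, only evaluate once the window is full.
        if (outcomes.size() == windowSize) {
            long failures = outcomes.stream().filter(f -> f).count();
            open = (100.0 * failures / windowSize) >= failureRateThreshold;
        }
    }

    boolean isOpen() { return open; }
}
```

With a window of 10 and a 50% threshold, five recorded failures out of the last ten calls trip the sketch open, matching the walkthrough above.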
Monitoring with Spring Boot Actuator
You'll want to observe the circuit breaker state in production. Add the Actuator dependency and configure it:
management.health.circuitbreakers.enabled=true
management.endpoint.health.show-details=always
resilience4j.circuitbreaker.instances.randomUserApi.register-health-indicator=true
Now GET /actuator/health includes the circuit breaker state:
{
  "components": {
    "circuitBreakers": {
      "status": "UP",
      "details": {
        "randomUserApi": {
          "status": "UP",
          "details": {
            "failureRate": "-1.0%",
            "state": "CLOSED"
          }
        }
      }
    }
  }
}
When the circuit opens, the state changes to OPEN and the overall health status reflects the degradation.
Exception Hierarchy and Global Error Handling
The exception design is deliberate:
- ExternalApiException — wraps 5xx errors from the external API, carries the HTTP status code
- ExternalApiUnavailableException — thrown by the Circuit Breaker fallback when the circuit is open
- HttpClientErrorException — Spring's built-in exception for 4xx, propagated as-is
The GlobalExceptionHandler maps each to the appropriate HTTP response:
@RestControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(ExternalApiException.class)
    public ResponseEntity<ErrorResponse> handleExternalApiException(ExternalApiException ex) {
        HttpStatus status = HttpStatus.resolve(ex.getStatusCode());
        if (status == null) {
            status = HttpStatus.BAD_GATEWAY; // safe default for unknown codes
        }
        return ResponseEntity.status(status)
                .body(new ErrorResponse(status.value(), "External API error"));
    }

    @ExceptionHandler(ExternalApiUnavailableException.class)
    public ResponseEntity<ErrorResponse> handleExternalApiUnavailable(
            ExternalApiUnavailableException ex) {
        return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
                .body(new ErrorResponse(503, "External API is currently unavailable"));
    }

    @ExceptionHandler(HttpClientErrorException.class)
    public ResponseEntity<ErrorResponse> handleHttpClientError(
            HttpClientErrorException ex) {
        return ResponseEntity.status(ex.getStatusCode())
                .body(new ErrorResponse(
                        ex.getStatusCode().value(), "External API client error"));
    }
}
Note the use of HttpStatus.resolve() instead of HttpStatus.valueOf(). The difference? valueOf() throws an IllegalArgumentException on non-standard status codes (like 999), while resolve() safely returns null — which we handle with a fallback to BAD_GATEWAY.
View GlobalExceptionHandler.java on GitHub
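A plain-Java analogue makes the contrast visible without pulling in Spring. HttpStatus is a Spring class, so this sketch just mimics the two lookup behaviors with hypothetical names:

```java
import java.util.Map;

// Mimics HttpStatus.valueOf(int) vs HttpStatus.resolve(int):
// valueOf-style lookup throws on unknown codes, resolve-style returns null.
class StatusLookup {
    static final Map<Integer, String> KNOWN =
            Map.of(200, "OK", 502, "BAD_GATEWAY", 503, "SERVICE_UNAVAILABLE");

    static String valueOfStyle(int code) {
        String status = KNOWN.get(code);
        if (status == null) {
            // What you'd hit at runtime on an exotic code like 999.
            throw new IllegalArgumentException("No matching constant for [" + code + "]");
        }
        return status;
    }

    static String resolveStyle(int code) {
        return KNOWN.get(code); // null instead of throwing; the caller picks a default
    }
}
```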
Testing the Resilience Layer
Testing the service layer with Mockito — we verify that the client is called for valid inputs and that validation rejects invalid ones:
@ExtendWith(MockitoExtension.class)
class UserServiceTest {

    @Mock
    private RandomUserClient randomUserClient;

    @Test
    void getUsers_validCount_delegatesToClient() {
        UserService service = new UserService(randomUserClient, 5000);
        UserResponseDto expected = new UserResponseDto(List.of(), null);
        when(randomUserClient.getUsers(10)).thenReturn(expected);

        UserResponseDto result = service.getUsers(10);

        assertEquals(expected, result);
        verify(randomUserClient).getUsers(10);
    }

    @Test
    void getUsers_countExceedsMax_throwsIllegalArgument() {
        UserService service = new UserService(randomUserClient, 5000);

        assertThrows(IllegalArgumentException.class,
                () -> service.getUsers(5001));
        verifyNoInteractions(randomUserClient);
    }
}
The controller tests use @WebMvcTest with MockMvc to verify the full HTTP error mapping chain:
@WebMvcTest(UserController.class)
class UserControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockitoBean
    private UserService userService;

    @Test
    void getUsers_serviceUnavailable_returns503() throws Exception {
        when(userService.getUsers(50))
                .thenThrow(new ExternalApiUnavailableException("API unavailable"));

        mockMvc.perform(get("/api/users"))
                .andExpect(status().isServiceUnavailable())
                .andExpect(jsonPath("$.message")
                        .value("External API is currently unavailable"));
    }
}
View tests:
Common Pitfalls
1. AOP Proxy Bypass
@CircuitBreaker, @Retry, @Cacheable, and @Async are all AOP-based. If you call an annotated method from within the same class, the proxy is bypassed and the annotations are silently ignored. That's why @Retry and @CircuitBreaker live on RandomUserClient, not on UserService — the service calls the client through a Spring proxy.
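The mechanics can be demonstrated with a JDK dynamic proxy, which is one of the techniques Spring AOP itself uses. This is a generic sketch (hypothetical names, a counter standing in for the resilience aspect), not Spring code:

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

// Why self-invocation bypasses AOP: the decoration lives on the proxy,
// and `this.someMethod()` inside the target never goes through it.
interface Greeter {
    String greet();
    String greetTwice();
}

class GreeterImpl implements Greeter {
    public String greet() { return "hi"; }
    // Self-invocation: these greet() calls go directly to `this`, not the proxy.
    public String greetTwice() { return greet() + greet(); }
}

class ProxyDemo {
    static AtomicInteger intercepted = new AtomicInteger();

    static Greeter proxy(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (p, method, args) -> {
                    intercepted.incrementAndGet(); // stand-in for @Retry/@CircuitBreaker logic
                    return method.invoke(target, args);
                });
    }
}
```

Calling greetTwice() through the proxy intercepts only the outer call; the two inner greet() calls bypass it entirely, exactly like a @Retry method invoked from a sibling method in the same bean.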
2. Aspect Order Is Not Annotation Order
I initially assumed that writing @Retry before @CircuitBreaker on the method would make Retry the outer decorator. It doesn't. The annotation order on the method is irrelevant. You must configure aspectOrder properties.
3. Fallback Signature Mismatch
The fallback method must have the exact same parameters as the original method, plus one Exception (or Throwable) parameter at the end. Missing or extra parameters cause a runtime error — not a compile-time one.
4. @Max Cannot Reference Properties
I wanted to use @Max for validation with a configurable maximum value from properties. This doesn't work because annotation values must be compile-time constants. The solution: use @Min(1) with @Validated at the controller level for the lower bound, and a manual check with @Value-injected maxCount at the service level for the upper bound.
The Flow Visualized
Client Request
│
▼
UserController (validation: @Min(1))
│
▼
UserService (business validation: count ≤ maxCount)
│
▼
RandomUserClient
│
├── CircuitBreaker [CLOSED] ──► Retry (up to 3 attempts)
│ │
│ ├── Success → return response
│ ├── 5xx/timeout → retry...
│ └── All retries exhausted → CB records failure
│
├── CircuitBreaker [OPEN] ──► fallback → ExternalApiUnavailableException → 503
│
└── CircuitBreaker [HALF-OPEN] ──► allow 3 test calls
│
├── Enough succeed → CLOSED
└── Still failing → OPEN
Key Takeaways
- Circuit Breaker should wrap Retry, not the other way around. Configure this with aspectOrder properties — never rely on annotation placement.
- Don't retry 4xx errors. Use retry-exceptions and ignore-exceptions to be explicit about what's retryable.
- Fallbacks should fail loudly. Throwing an exception in the fallback (rather than returning empty data) keeps behavior transparent to the caller.
- Use HttpStatus.resolve() over HttpStatus.valueOf() to avoid crashes on non-standard status codes.
- Monitor your circuit breakers via Actuator health endpoints — a circuit stuck in OPEN state is something you want to know about.
- Be aware of AOP proxy limitations. Resilience4j annotations only work when the method is invoked through a Spring proxy (i.e., from another bean).
Resources
About the author:
Java Backend Developer with 19+ years of IT experience, building heavy backend financial applications.
Connect:
If you found this article helpful, please leave a reaction ❤️ and follow for more!