Allan Roberto

Outbox Pattern with Kafka and Hexagonal Architecture in Spring Boot

When building microservices, one of the most common consistency problems is this:

what happens if the database transaction succeeds, but the Kafka publish fails?

If your service saves business data first and publishes the event afterward, you can end up with an order stored in the database and no event in Kafka. That is exactly the kind of distributed inconsistency the Outbox Pattern is designed to avoid.

In this article, I’ll use the repository below as the case study:

sb-kafka-producer-sample

This project is not just a simple Kafka producer. Its README explicitly states that it uses PostgreSQL, Kafka, Flyway, and the outbox pattern in a hexagonal architecture. It also exposes an asynchronous POST /api/orders endpoint that stores the order and the outbox event in the same transaction, and then a scheduled publisher sends pending events to Kafka.


Why the Outbox Pattern matters

The classic anti-pattern looks like this:

[ REST API ]
    ↓
[ Save order in DB ]
    ↓
[ Publish event to Kafka ]

This looks simple, but it is unsafe. If the database commit succeeds and Kafka fails right after, your system state becomes inconsistent.
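To make the failure mode concrete, here is a small self-contained simulation (the names are hypothetical, not code from the repository): the "database" write commits, the "broker" publish throws, and the two stores end up disagreeing.

```java
import java.util.ArrayList;
import java.util.List;

// Simulates the save-then-publish anti-pattern with two in-memory stores.
class DualWriteDemo {
    final List<String> database = new ArrayList<>(); // committed orders
    final List<String> broker = new ArrayList<>();   // published events
    boolean brokerDown = false;

    void createOrder(String orderId) {
        database.add(orderId);            // step 1: DB commit succeeds
        if (brokerDown) {                 // step 2: Kafka publish fails afterwards
            throw new IllegalStateException("broker unavailable");
        }
        broker.add("ORDER_CREATED:" + orderId);
    }
}
```

With `brokerDown` set, the order exists in the database but no event ever reaches the broker, which is exactly the inconsistency the outbox pattern removes.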

The outbox pattern changes the flow to this:

[ REST API ]
    ↓
[ Save order ]
    ↓
[ Save outbox event ]  <-- same transaction
    ↓
[ Scheduled publisher reads pending outbox events ]
    ↓
[ Publish to Kafka ]
    ↓
[ Mark event as PUBLISHED or FAILED ]

That is exactly the strategy implemented by the repository. The order is persisted and the outbox event is saved together inside CreateOrderService, while OutboxPublisher later processes pending events.
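Since the project uses Flyway and PostgreSQL, the outbox table behind this flow is typically created by a migration along these lines. This is an illustrative sketch matching the fields of the domain model, not the repository's actual schema:

```sql
-- Illustrative outbox table for PostgreSQL (column names assumed, not taken from the repo)
CREATE TABLE outbox_event (
    id              UUID PRIMARY KEY,
    aggregate_type  VARCHAR(64)  NOT NULL,
    aggregate_id    VARCHAR(64)  NOT NULL,
    event_type      VARCHAR(64)  NOT NULL,
    payload         JSONB        NOT NULL,
    status          VARCHAR(16)  NOT NULL DEFAULT 'PENDING',
    created_at      TIMESTAMPTZ  NOT NULL,
    processed_at    TIMESTAMPTZ,
    error_message   TEXT
);

-- The publisher polls by status, so an index on (status, created_at) keeps the scan cheap.
CREATE INDEX idx_outbox_event_status ON outbox_event (status, created_at);
```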


Hexagonal Architecture in the repository

The repository README describes the project in hexagonal layers:

  • domain: business models and enums
  • application: ports, use cases, commands, and exceptions
  • adapter.in.web: REST controllers and validation
  • adapter.out.persistence: JPA entities, repositories, and persistence adapters
  • adapter.out.messaging: Kafka publishing and outbox polling
  • config: infrastructure configuration

That separation is important because Kafka does not leak into the use case layer. The application layer talks through ports, and the adapters implement them.

A simplified view looks like this:

domain/
application/
  port/
    in/
    out/
  usecase/
adapter/
  in/web/
  out/persistence/
  out/messaging/
config/

The core of the pattern: CreateOrderService

The most important class in this case study is CreateOrderService.

It:

  1. validates duplicated products,
  2. loads the user,
  3. loads the products,
  4. calculates totals,
  5. saves the order through OrderPersistencePort,
  6. creates an OutboxEvent with status PENDING,
  7. stores the event through OutboxEventPort.

Here is the heart of the implementation:

@Async("taskExecutor")
@Transactional
@Override
public CompletableFuture<Order> create(CreateOrderCommand command) {
    validateDuplicatedProducts(command.items());

    User user = loadOrderUserService.loadById(command.userId());
    Map<Long, Product> productMap = loadOrderProductsService.loadProductsByItems(command.items());

    List<OrderItem> items = command.items().stream()
        .map(item -> toOrderItem(item, productMap.get(item.productId())))
        .toList();

    BigDecimal totalAmount = items.stream()
        .map(OrderItem::lineTotal)
        .reduce(BigDecimal.ZERO, BigDecimal::add);

    Order savedOrder = orderPersistencePort.save(new Order(
        null,
        user.id(),
        user.name(),
        OrderStatus.CREATED,
        totalAmount,
        OffsetDateTime.now(),
        items
    ));

    outboxEventPort.save(new OutboxEvent(
        UUID.randomUUID(),
        "ORDER",
        savedOrder.id().toString(),
        "ORDER_CREATED",
        createPayload(toOrderCreatedEvent(savedOrder)),
        OutboxStatus.PENDING,
        OffsetDateTime.now(),
        null,
        null
    ));

    return CompletableFuture.completedFuture(savedOrder);
}

This is the key design choice: the use case does not publish directly to Kafka. It only persists the business state and the event description atomically.
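What @Transactional buys here can be sketched with a toy unit of work that buffers both writes and applies them all-or-nothing. This is a simulation of the guarantee, not Spring's actual transaction machinery:

```java
import java.util.ArrayList;
import java.util.List;

// Toy unit of work: the order write and the outbox write commit together
// or not at all, mimicking what @Transactional guarantees for the two saves.
class OutboxTransactionDemo {
    final List<String> orders = new ArrayList<>();
    final List<String> outbox = new ArrayList<>();

    void createOrderAtomically(String orderId, Runnable failurePoint) {
        List<String> stagedOrders = new ArrayList<>();
        List<String> stagedOutbox = new ArrayList<>();
        stagedOrders.add(orderId);
        failurePoint.run();                       // a crash here rolls everything back
        stagedOutbox.add("ORDER_CREATED:" + orderId);
        // commit: both writes become visible together
        orders.addAll(stagedOrders);
        outbox.addAll(stagedOutbox);
    }
}
```

If the process dies between the two saves, neither is visible; if both complete, the outbox row is guaranteed to exist whenever the order does.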


The Outbox model

The repository models the outbox entry as a domain record:

public record OutboxEvent(
    UUID id,
    String aggregateType,
    String aggregateId,
    String eventType,
    String payload,
    OutboxStatus status,
    OffsetDateTime createdAt,
    OffsetDateTime processedAt,
    String errorMessage
) { }

This is a solid design because it includes:

  • an event identifier,
  • aggregate metadata,
  • serialized payload,
  • lifecycle status,
  • processing timestamp,
  • failure message.

The application port for this workflow is also explicit:

public interface OutboxEventPort {
    OutboxEvent save(OutboxEvent event);
    List<OutboxEvent> findAll();
    List<OutboxEvent> findProcessableEvents(int limit);
    void markPublished(UUID eventId);
    void markFailed(UUID eventId, String errorMessage);
}

That keeps the application independent from JPA and database details.
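Because the port is plain Java, the application layer can be exercised against any implementation, not just JPA. A minimal in-memory adapter (an illustrative sketch with a trimmed-down event shape, not code from the repository) shows the contract:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Simplified in-memory stand-in for the persistence adapter.
// The real port works with the full OutboxEvent record; this keeps only
// the fields needed to demonstrate the status lifecycle.
enum Status { PENDING, PUBLISHED, FAILED }

class InMemoryOutbox {
    record Event(UUID id, String payload, Status status, String error) { }

    private final List<Event> events = new ArrayList<>();

    Event save(Event event) { events.add(event); return event; }

    List<Event> findProcessableEvents(int limit) {
        return events.stream()
            .filter(e -> e.status() == Status.PENDING)
            .limit(limit)
            .toList();
    }

    void markPublished(UUID id) { update(id, Status.PUBLISHED, null); }

    void markFailed(UUID id, String error) { update(id, Status.FAILED, error); }

    private void update(UUID id, Status status, String error) {
        events.replaceAll(e -> e.id().equals(id)
            ? new Event(e.id(), e.payload(), status, error) : e);
    }

    List<Event> findAll() { return List.copyOf(events); }
}
```

Swapping this for the JPA adapter requires no change in the use case, which is the whole point of the port.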


The publisher side

The scheduled publisher is implemented in OutboxPublisher.

It fetches processable events in batches, publishes them through OrderEventPublisherPort, and updates the outbox status accordingly:

@Scheduled(fixedDelayString = "${app.outbox.fixed-delay-ms}")
public void publishPendingEvents() {
    for (OutboxEvent event : outboxEventPort.findProcessableEvents(batchSize)) {
        try {
            orderEventPublisherPort.publish(event);
            outboxEventPort.markPublished(event.id());
        } catch (Exception exception) {
            LOGGER.warn("Failed to publish outbox event {}", event.id(), exception);
            outboxEventPort.markFailed(event.id(), truncate(exception.getMessage()));
        }
    }
}

That means the failure handling is also persisted, which is useful for observability and retries.
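The per-event try/catch also means one poison event does not block the rest of the batch. A stripped-down version of that loop (hypothetical names, not the repository's code) makes the behavior easy to see:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

// Minimal publisher loop: each event succeeds or fails independently,
// and the outcome is recorded per event, mirroring OutboxPublisher.
class BatchPublisherDemo {
    final Map<String, String> statuses = new LinkedHashMap<>(); // eventId -> outcome

    void publishBatch(Iterable<String> pendingIds, Consumer<String> publish) {
        for (String id : pendingIds) {
            try {
                publish.accept(id);
                statuses.put(id, "PUBLISHED");
            } catch (Exception e) {
                statuses.put(id, "FAILED: " + e.getMessage());
            }
        }
    }
}
```

If the middle event of a batch throws, the events before and after it still get published, and the failure is recorded for later inspection.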


Kafka adapter

The concrete Kafka adapter is KafkaOrderEventPublisher, which implements OrderEventPublisherPort.

It deserializes the outbox payload into a typed OrderCreatedEvent and sends it with:

  • topic from configuration,
  • key = aggregateId,
  • value = typed event.

@Component
public class KafkaOrderEventPublisher implements OrderEventPublisherPort {

    private final KafkaTemplate<String, OrderCreatedEvent> kafkaTemplate;
    private final ObjectMapper objectMapper;
    private final String topic;

    // Constructor wiring; the topic value comes from application configuration.
    public KafkaOrderEventPublisher(
            KafkaTemplate<String, OrderCreatedEvent> kafkaTemplate,
            ObjectMapper objectMapper,
            String topic) {
        this.kafkaTemplate = kafkaTemplate;
        this.objectMapper = objectMapper;
        this.topic = topic;
    }

    @Override
    public void publish(OutboxEvent event) {
        kafkaTemplate.send(topic, event.aggregateId(), deserialize(event.payload())).join();
    }

    private OrderCreatedEvent deserialize(String payload) {
        try {
            return objectMapper.readValue(payload, OrderCreatedEvent.class);
        } catch (JacksonException exception) {
            throw new IllegalStateException("Could not deserialize order outbox payload", exception);
        }
    }
}

Using aggregateId as the Kafka key is a good choice because it helps preserve ordering for messages of the same aggregate.
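That ordering guarantee comes from how Kafka maps keys to partitions: the key is hashed, so every record with the same key lands on the same partition and is consumed in order. A simplified sketch of the idea (Kafka's real default partitioner hashes the serialized key with murmur2, not hashCode):

```java
// Simplified key-based partitioning: the same key always maps to the same partition.
// Kafka's default partitioner uses a murmur2 hash of the serialized key bytes instead.
class KeyPartitioningDemo {
    static int partitionFor(String key, int partitionCount) {
        return Math.floorMod(key.hashCode(), partitionCount);
    }
}
```

So every event keyed by aggregateId "99" goes to one partition, and consumers see that aggregate's events in publish order.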


Why this implementation is good

What I like in this repository is that it avoids the common mistake of mixing transport concerns into the use case.

The responsibilities are clean:

  • CreateOrderService handles business orchestration and stores the outbox event.
  • OutboxPublisher handles scheduling and delivery lifecycle.
  • KafkaOrderEventPublisher handles Kafka-specific concerns.
  • OutboxEventPort and OrderEventPublisherPort keep the application layer isolated from infrastructure.

That is exactly how the outbox pattern should look in a hexagonal architecture.


Tests in the repository

One thing I appreciated is that the repository already includes the right kinds of tests.

1. Use case test

CreateOrderServiceTest verifies that:

  • the order is created,
  • total amount is calculated,
  • the outbox event is saved,
  • the payload contains a serialized OrderCreatedEvent,
  • duplicated products are rejected.

Example:

@Test
void shouldCreateOrderAndOutboxEvent() throws Exception {
    CreateOrderCommand command = new CreateOrderCommand(
        1L,
        List.of(
            new CreateOrderItemCommand(10L, 2),
            new CreateOrderItemCommand(20L, 1)
        )
    );

    when(loadOrderUserService.loadById(1L))
        .thenReturn(new User(1L, "User", "user@example.com"));

    when(loadOrderProductsService.loadProductsByItems(command.items()))
        .thenReturn(Map.of(
            10L, new Product(10L, "Notebook", new BigDecimal("12.50")),
            20L, new Product(20L, "Keyboard", new BigDecimal("45.90"))
        ));

    when(orderPersistencePort.save(any(Order.class)))
        .thenAnswer(invocation -> {
            Order order = invocation.getArgument(0, Order.class);
            return new Order(
                99L,
                order.userId(),
                order.userName(),
                OrderStatus.CREATED,
                order.totalAmount(),
                order.createdAt(),
                order.items()
            );
        });

    Order order = createOrderService.create(command).join();

    assertThat(order.id()).isEqualTo(99L);
    assertThat(order.totalAmount()).isEqualByComparingTo("70.90");

    ArgumentCaptor<OutboxEvent> captor = ArgumentCaptor.forClass(OutboxEvent.class);
    verify(outboxEventPort).save(captor.capture());

    OrderCreatedEvent event =
        new ObjectMapper().readValue(captor.getValue().payload(), OrderCreatedEvent.class);

    assertThat(event.orderId()).isEqualTo(99L);
    assertThat(event.totalAmount()).isEqualByComparingTo("70.90");
}

This is a strong unit test because it verifies not only that the port was called, but also that the serialized event payload is correct.


2. Kafka publisher unit test

KafkaOrderEventPublisherTest checks that the adapter:

  • deserializes the outbox payload,
  • publishes a typed OrderCreatedEvent,
  • uses the right topic and key.

@Test
void shouldDeserializeOutboxPayloadAndPublishTypedEvent() throws Exception {
    KafkaTemplate<String, OrderCreatedEvent> kafkaTemplate = mock(KafkaTemplate.class);
    ObjectMapper objectMapper = new ObjectMapper();

    KafkaOrderEventPublisher publisher =
        new KafkaOrderEventPublisher(kafkaTemplate, objectMapper, "orders.created");

    OrderCreatedEvent payload = new OrderCreatedEvent(
        99L,
        1L,
        "User",
        OrderStatus.CREATED,
        new BigDecimal("70.90"),
        OffsetDateTime.parse("2026-03-24T10:15:30Z"),
        List.of(new OrderCreatedItemEvent(
            10L, "Notebook", 2,
            new BigDecimal("12.50"),
            new BigDecimal("25.00")
        ))
    );

    OutboxEvent outboxEvent = new OutboxEvent(
        UUID.randomUUID(),
        "ORDER",
        "99",
        "ORDER_CREATED",
        objectMapper.writeValueAsString(payload),
        OutboxStatus.PENDING,
        OffsetDateTime.now(),
        null,
        null
    );

    when(kafkaTemplate.send(eq("orders.created"), eq("99"), eq(payload)))
        .thenReturn(CompletableFuture.completedFuture(null));

    publisher.publish(outboxEvent);

    verify(kafkaTemplate).send("orders.created", "99", payload);
}

This is a very good adapter test because it validates the contract between the outbox payload and Kafka publishing.


3. Persistence test

OutboxEventPersistenceAdapterTest verifies that:

  • an outbox event can be saved,
  • it is returned by findProcessableEvents,
  • it transitions to PUBLISHED after markPublished.

@Test
void shouldPersistAndTransitionOutboxEvents() {
    UUID id = UUID.fromString("11111111-1111-1111-1111-111111111111");

    outboxEventPersistenceAdapter.save(new OutboxEvent(
        id,
        "ORDER",
        "1",
        "ORDER_CREATED",
        "{\"orderId\":1}",
        OutboxStatus.PENDING,
        OffsetDateTime.parse("2026-03-24T10:15:30Z"),
        null,
        null
    ));

    assertThat(outboxEventPersistenceAdapter.findProcessableEvents(10)).hasSize(1);

    outboxEventPersistenceAdapter.markPublished(id);

    assertThat(outboxEventPersistenceAdapter.findAll())
        .singleElement()
        .extracting(OutboxEvent::status)
        .isEqualTo(OutboxStatus.PUBLISHED);
}

This test is important because the outbox pattern is not only about producing messages. It is also about tracking delivery state in a reliable way.


4. Kafka integration test

The most interesting test is OutboxPublisherKafkaIntegrationTest.

It:

  • inserts a pending outbox event,
  • invokes outboxPublisher.publishPendingEvents(),
  • consumes the record from Kafka,
  • verifies the payload,
  • verifies the outbox status became PUBLISHED.

@Test
void shouldPublishPendingOutboxEventToKafka() throws Exception {
    OrderCreatedEvent payload = new OrderCreatedEvent(
        99L,
        1L,
        "Default User",
        OrderStatus.CREATED,
        new BigDecimal("70.90"),
        OffsetDateTime.parse("2026-03-24T12:00:00Z"),
        List.of(new OrderCreatedItemEvent(
            10L,
            "Notebook",
            2,
            new BigDecimal("12.50"),
            new BigDecimal("25.00")
        ))
    );

    UUID eventId = UUID.randomUUID();

    outboxEventPort.save(new OutboxEvent(
        eventId,
        "ORDER",
        "99",
        "ORDER_CREATED",
        objectMapper.writeValueAsString(payload),
        OutboxStatus.PENDING,
        OffsetDateTime.now(),
        null,
        null
    ));

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProperties())) {
        consumer.subscribe(List.of(topic));

        outboxPublisher.publishPendingEvents();

        ConsumerRecord<String, String> publishedRecord = pollSingleRecord(consumer);

        assertThat(publishedRecord.key()).isEqualTo("99");

        OrderCreatedEvent publishedEvent =
            objectMapper.readValue(publishedRecord.value(), OrderCreatedEvent.class);

        assertThat(publishedEvent.orderId()).isEqualTo(99L);

        assertThat(outboxEventPort.findAll())
            .filteredOn(event -> event.id().equals(eventId))
            .singleElement()
            .extracting(OutboxEvent::status)
            .isEqualTo(OutboxStatus.PUBLISHED);
    }
}

This is the kind of test that gives confidence in the whole outbox flow end to end.


What this project teaches well

This repository is a strong example because it shows that the outbox pattern is not just “save event in a table.”

It also includes:

  • a dedicated outbox model,
  • clear application ports,
  • a scheduled publisher,
  • explicit status transitions,
  • Kafka integration,
  • persistence tests,
  • end-to-end integration coverage.

In other words, it demonstrates the pattern as an operational workflow, not just as a conceptual diagram.


Possible improvements

Even though the implementation is already solid, there are a few natural next steps:

  1. Add retry count and backoff metadata to the outbox table.
  2. Add dead-letter handling for repeatedly failing events.
  3. Consider CDC with Debezium instead of polling when throughput grows.
  4. Add observability metrics for pending, published, and failed events.
  5. Make idempotency explicit on the consumer side as well.

These are suggestions on top of an implementation that is already well-structured. The current code already provides a good baseline for production-minded design.
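For the first suggestion, the usual approach is to store an attempt counter on the outbox row and compute the next eligible time with exponential backoff. A sketch of the calculation (the names are hypothetical, not part of the repository):

```java
import java.time.Duration;
import java.time.OffsetDateTime;

// Exponential backoff: the delay doubles with each failed attempt, capped at a maximum,
// so a repeatedly failing event does not hammer the broker on every polling cycle.
class BackoffDemo {
    static Duration backoffFor(int attempts, Duration base, Duration max) {
        long multiplier = 1L << Math.min(attempts, 30); // 2^attempts, clamped to avoid overflow
        Duration delay = base.multipliedBy(multiplier);
        return delay.compareTo(max) > 0 ? max : delay;
    }

    static OffsetDateTime nextAttemptAt(OffsetDateTime now, int attempts) {
        return now.plus(backoffFor(attempts, Duration.ofSeconds(5), Duration.ofMinutes(10)));
    }
}
```

findProcessableEvents would then also filter on `next_attempt_at <= now`, and events exceeding a retry ceiling would move to the dead-letter path from suggestion 2.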


Final thoughts

The best thing about this repository is that it uses the outbox pattern the way it should be used in a hexagonal architecture:

  • the use case creates business data and the outbox entry in the same transaction,
  • the application layer depends on ports,
  • the Kafka adapter is isolated,
  • the publisher is responsible for delivery and state transition,
  • and the tests validate the behavior from unit to integration level.

That makes allanroberto18/sb-kafka-producer-sample a much better case study than a simple “producer sends a message” demo. It shows how to build a reliable event publication flow with Spring Boot, Kafka, and Hexagonal Architecture.


For the consumer side, see the follow-up: From Outbox to Email Delivery: Extending the Kafka Flow in Spring Boot
