Allan Roberto
From Outbox to Email Delivery: Extending the Kafka Flow in Spring Boot

In the previous article, I showed how to create an order, save an outbox event in the same transaction, and publish that event to Kafka using the outbox pattern.

This new step is the natural continuation of that flow.

Now the application does more than publish OrderCreatedEvent. It also consumes that event, prepares an invoice email, stores the email content, creates a new outbox event for email delivery, and dispatches the email asynchronously.

The full project is here: sb-kafka-producer-sample


Why not send the email directly in the consumer?

Because that makes the consumer responsible for too much:

  • reading from Kafka
  • loading data from the database
  • building the email
  • talking to SMTP
  • updating order state

That is where partial failure becomes painful.

For example:

  • the email is sent successfully
  • the order status update fails right after

Now the user has received the invoice, but your system still says it was not delivered.

So instead of sending the email directly inside the consumer, I split the process into two steps:

  1. The consumer prepares durable state.
  2. The outbox dispatcher performs the side effect.
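Before walking through the real code, the two-step split can be sketched in plain Java. Everything below is an illustrative in-memory model, not the project's actual classes: phase one only writes durable state, phase two performs the side effect.

```java
import java.util.*;

// Illustrative in-memory model of the two-phase split (not the real classes).
enum Status { PENDING, PUBLISHED }

class OutboxEvent {
  final UUID id = UUID.randomUUID();
  final UUID emailId;
  Status status = Status.PENDING;
  OutboxEvent(UUID emailId) { this.emailId = emailId; }
}

class TwoPhaseFlow {
  final Map<UUID, String> emails = new HashMap<>();   // durable email content
  final List<OutboxEvent> outbox = new ArrayList<>(); // durable delivery requests
  final List<String> smtpLog = new ArrayList<>();     // stands in for SMTP

  // Phase 1 (consumer): prepare durable state only, no side effects.
  UUID prepare(String renderedEmail) {
    UUID emailId = UUID.randomUUID();
    emails.put(emailId, renderedEmail);
    outbox.add(new OutboxEvent(emailId));
    return emailId;
  }

  // Phase 2 (dispatcher): perform the side effect, then mark the event.
  void dispatchPending() {
    for (OutboxEvent event : outbox) {
      if (event.status != Status.PENDING) continue;
      smtpLog.add(emails.get(event.emailId)); // the actual send
      event.status = Status.PUBLISHED;
    }
  }
}
```

The point of the sketch is that a crash between the two phases loses nothing: the pending outbox row survives and the dispatcher picks it up on the next run.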

Step 1: consume the Kafka event and generate the email request

The Kafka consumer is intentionally small:

@ConditionalOnProperty(value = "app.kafka.invoice-consumer-enabled", havingValue = "true", matchIfMissing = true)
@Component
public class OrderCreatedInvoiceConsumer {

  private final ObjectMapper objectMapper;
  private final CreateInvoiceEmailFromOrderCreatedEventService service;

  public OrderCreatedInvoiceConsumer(
      ObjectMapper objectMapper,
      CreateInvoiceEmailFromOrderCreatedEventService service
  ) {
    this.objectMapper = objectMapper;
    this.service = service;
  }

  @KafkaListener(
      topics = "${app.kafka.order-topic}",
      groupId = "${app.kafka.invoice-consumer-group-id}",
      autoStartup = "${app.kafka.invoice-consumer-enabled:true}"
  )
  public void consume(String payload) {
    service.create(deserialize(payload));
  }

  private OrderCreatedEvent deserialize(String payload) {
    try {
      return objectMapper.readValue(payload, OrderCreatedEvent.class);
    } catch (JacksonException exception) {
      throw new IllegalStateException("Could not deserialize consumed order event", exception);
    }
  }
}

This class only does adapter work:

  • listens to Kafka
  • deserializes the payload
  • delegates to the use case

That keeps the consumer easy to understand and easy to test.


Step 2: store the invoice email and create a new outbox event

Once the order event is consumed, the application service:

  • loads the user by userId
  • builds the invoice body
  • saves the email content
  • saves a new outbox event that references that email
@Transactional
public void create(OrderCreatedEvent event) {
  User user = loadOrderUserService.loadById(event.userId());

  InvoiceEmail invoiceEmail = invoiceEmailPort.save(new InvoiceEmail(
      UUID.randomUUID(),
      user.id(),
      event.orderId(),
      user.email(),
      "Invoice for order #" + event.orderId(),
      invoiceEmailBodyFactory.create(user, event),
      OffsetDateTime.now(),
      null
  ));

  outboxEventPort.save(new OutboxEvent(
      UUID.randomUUID(),
      OutboxAggregateType.INVOICE_EMAIL,
      invoiceEmail.id().toString(),
      OutboxEventType.EMAIL_INVOICE_REQUESTED,
      createPayload(new InvoiceEmailOutboxPayload(invoiceEmail.id())),
      OutboxStatus.PENDING,
      OffsetDateTime.now(),
      null,
      null
  ));
}

This was the key design choice for me.

Instead of putting the full email in the outbox payload, I store the rendered email in the database and keep only a reference in the outbox event.

That gives a few benefits:

  • the email content is auditable
  • retries do not need to rebuild the message
  • multiple emails per order are supported
  • the outbox payload stays small

Building an invoice-like email

The email body is generated separately, not inside the listener.

That was important because the email has business content:

  • user name
  • item list
  • quantities
  • total amount

In other words, this is not just a notification email. It behaves more like a simple invoice.

Keeping that rendering logic in its own component made the implementation cleaner and the tests more focused.
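As a rough sketch of what such a body factory can look like, here is a plain-Java version. The record names and fields are assumptions for illustration, not the project's real types:

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical types for illustration only.
record OrderItem(String name, int quantity, BigDecimal unitPrice) {}
record Customer(String name) {}

class InvoiceEmailBodyFactory {
  // Renders user name, item lines with quantities, and the total amount.
  String create(Customer customer, List<OrderItem> items) {
    StringBuilder body = new StringBuilder("Hello " + customer.name() + ",\n\n");
    BigDecimal total = BigDecimal.ZERO;
    for (OrderItem item : items) {
      BigDecimal line = item.unitPrice().multiply(BigDecimal.valueOf(item.quantity()));
      total = total.add(line);
      body.append(item.quantity()).append(" x ").append(item.name())
          .append(" = ").append(line).append("\n");
    }
    body.append("\nTotal: ").append(total);
    return body.toString();
  }
}
```

Because this component takes plain domain values and returns a string, it can be unit tested without Kafka, Spring, or a database in sight.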


Step 3: dispatch the email from the outbox

The second half of the flow happens in the outbox publisher.

It reads pending outbox events and routes each one to the correct dispatcher:

@Scheduled(fixedDelayString = "${app.outbox.fixed-delay-ms}")
public void publishPendingEvents() {
  for (OutboxEvent event : outboxEventPort.findProcessableEvents(batchSize, maxAttempts)) {
    try {
      resolveDispatcher(event.eventType()).dispatch(event);
      outboxEventPort.markPublished(event.id());
    } catch (Exception exception) {
      LOGGER.warn("Failed to publish outbox event {}", event.id(), exception);
      outboxEventPort.markFailed(event.id(), truncate(exception.getMessage()));
    }
  }
}

This is where the outbox became more useful than just "publish to Kafka".

Now it can also dispatch email-related events. The pattern became generic.
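The routing idea behind resolveDispatcher can be sketched as a map from event type to dispatcher. Apart from OutboxEventType, which appears in the article's code, the names here are illustrative:

```java
import java.util.*;

// Sketch of per-event-type routing; names are illustrative.
enum OutboxEventType { ORDER_CREATED, EMAIL_INVOICE_REQUESTED }

interface OutboxDispatcher {
  void dispatch(String payload);
}

class DispatcherRegistry {
  private final Map<OutboxEventType, OutboxDispatcher> byType;

  DispatcherRegistry(Map<OutboxEventType, OutboxDispatcher> byType) {
    this.byType = byType;
  }

  // Unknown event types fail loudly instead of being silently dropped.
  OutboxDispatcher resolve(OutboxEventType type) {
    OutboxDispatcher dispatcher = byType.get(type);
    if (dispatcher == null) {
      throw new IllegalStateException("No dispatcher for event type " + type);
    }
    return dispatcher;
  }
}
```

In Spring, the same effect falls out naturally by injecting all OutboxDispatcher beans as a list and indexing them by the event type they handle.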


Step 4: send the email and update the order

The email dispatcher is responsible for the final side effect:

@Transactional
@Override
public void dispatch(OutboxEvent event) {
  InvoiceEmailOutboxPayload payload = deserialize(event.payload());

  InvoiceEmail invoiceEmail = invoiceEmailPort.findById(payload.invoiceEmailId())
      .orElseThrow(() -> new NotFoundException(
          "Invoice email not found for id " + payload.invoiceEmailId()
      ));

  emailSenderPort.send(invoiceEmail);
  invoiceEmailPort.markSent(invoiceEmail.id());
  orderPersistencePort.updateStatus(invoiceEmail.orderId(), OrderStatus.INVOICE_DELIVERED);
}

That sequence matters.

The order is updated to INVOICE_DELIVERED only after the email is actually sent.

This avoids claiming the invoice was delivered when SMTP failed. The trade-off is at-least-once delivery: if the process crashes after the SMTP send but before markSent, the retry will send the email again. For an invoice, a rare duplicate is usually better than a silent loss.


Retry support

Email delivery is a good candidate for retry.

SMTP failures are often temporary, so I added retry control in the outbox through attempt_count plus a configurable max-attempts.

That gives the system a safer behavior:

  • retry transient failures
  • stop retrying forever when the event is clearly broken

Without that, the email dispatcher would either fail too early or retry endlessly.
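The gate that decides whether an event is still worth retrying can be sketched like this. It is simplified; the real project reads the status and attempt count from the outbox table:

```java
// Sketch of the retry gate; mirrors the attempt_count / max-attempts idea.
enum OutboxStatus { PENDING, FAILED, PUBLISHED }

class RetryPolicy {
  private final int maxAttempts;

  RetryPolicy(int maxAttempts) { this.maxAttempts = maxAttempts; }

  // An event is processable while it is not yet published and still has
  // attempts left; once attempts are exhausted, it stops being retried.
  boolean isProcessable(OutboxStatus status, int attemptCount) {
    return status != OutboxStatus.PUBLISHED && attemptCount < maxAttempts;
  }
}
```

In the scheduled publisher this check lives inside findProcessableEvents, so exhausted events simply stop showing up in the batch.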


Why I like this version better

The biggest improvement is not the email itself. It is the separation of responsibilities.

Now the flow is:

  • Kafka consumer receives the event
  • application service prepares durable email state
  • outbox stores the delivery request
  • dispatcher sends the email
  • order status changes only after success

That is a much safer model than sending email directly from the listener.

It also keeps the code more aligned with hexagonal architecture:

  • adapters receive and send data
  • use cases coordinate business rules
  • side effects are isolated behind ports
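As a minimal illustration of the last point: a port is just an interface the use case depends on, and a fake adapter stands in for SMTP in tests. The names below are assumptions, not the project's real interfaces:

```java
import java.util.*;

// Hypothetical port: the use case sees only this interface,
// while SMTP details live in an adapter behind it.
interface EmailSenderPort {
  void send(String to, String subject, String body);
}

// A recording fake is enough to test the use case without SMTP.
class RecordingEmailSender implements EmailSenderPort {
  final List<String> sent = new ArrayList<>();

  @Override
  public void send(String to, String subject, String body) {
    sent.add(to + " | " + subject);
  }
}
```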

Final thoughts

The first article was about producing events reliably.

This second step is about consuming them without turning the consumer into a fragile orchestration class.

For me, the main lesson is simple:

when a flow touches Kafka, database state, and an external service like SMTP, it is worth splitting preparation from delivery.

That small design decision makes failures easier to handle and the behavior much more predictable.

If you want to see the full implementation, here is the repository again: sb-kafka-producer-sample
