ECS, Lambda, or EC2? How Hexagonal Architecture Made the Choice Irrelevant

You start a project locally. Everything runs smoothly, tests are green, and infrastructure feels like a “later” problem. Then “later” arrives — bringing deadlines, security compliance, and platform changes.

On a recent project, a Kafka-driven Java service went through three major infrastructure pivots before hitting production: containers, serverless, and finally classic EC2. The service was designed to generate business documents and call on-premises APIs.

The only reason I could pivot the project without a full rewrite was strict adherence to Hexagonal Architecture.

Here is the story of how that structure absorbed the chaos.

The Use Case: Kafka In, Legacy Out

On paper, the functional requirement was deceptively simple:

  1. Consume events from a Kafka topic.
  2. Apply routing and validation rules.
  3. Generate a business document.
  4. Call an on-premises API to update downstream processes.

Locally, it was just a Spring Boot app with some JSON and a few services.

I focused purely on the domain model and boundaries, ignoring whether the entry point would eventually be a Lambda handler, a Kafka listener, or a container.
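To make that boundary concrete, here is a minimal sketch of what the inbound domain event could look like. The field names are illustrative assumptions, not the production model:

// domain/BusinessRequestEvent.java — a sketch; field names are assumed
import java.util.Map;
import java.util.Objects;

public record BusinessRequestEvent(String requestId, String documentType, Map<String, String> payload) {
    public BusinessRequestEvent {
        // Validation belongs to the domain, not to whichever adapter delivered the bytes
        Objects.requireNonNull(requestId, "requestId is required");
        Objects.requireNonNull(documentType, "documentType is required");
    }
}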

Act I — The Container Hype (ECS)

Initially, the plan was to use Amazon ECS. It was the exciting option: containerize the app, push it, and run it in a managed cluster.

But there was a hidden constraint. While ECS was trendy among delivery teams, it had not yet been approved as an official standard by our security and compliance department. This meant:

  • Extra validation steps.
  • Uncertain timelines.
  • A high risk of a “No-Go” decision right before launch.

For a project under strict delivery pressure, betting on a platform still awaiting approval was a gamble we couldn’t afford. I had to pivot.

Act II — The Serverless Promise (Lambda + Kafka Connector)

The logical plan B was Serverless. Infrastructure teams wanted to avoid OS patching, and AWS Lambda fit that bill perfectly.

The architecture seemed elegant:

  • Messages arrive in Kafka (Confluent).
  • A Kafka connector pushes them to a Lambda trigger.
  • The Lambda processes the event, generates the document, calls the API, and vanishes.

Java on Lambda: Debunking Myths

Despite skepticism about running Java on Lambda (cold starts, heavy runtime), I leveraged a modern stack: Java 21 + Spring Boot 3. I used Virtual Threads for I/O-bound efficiency and SnapStart to reduce cold start latency.
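SnapStart is enabled in the function's deployment configuration rather than in Java, but the virtual-thread pattern is easy to illustrate. A toy sketch of the idea, not the service's actual code:

// VirtualThreadSketch.java — illustrative only
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class VirtualThreadSketch {
    public static void main(String[] args) {
        // Java 21: each blocking I/O task gets its own cheap virtual thread
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List.of("doc-1", "doc-2", "doc-3").forEach(id ->
                    executor.submit(() -> {
                        // Stand-in for a blocking call to the on-prem API
                        System.out.println("processed " + id + " on " + Thread.currentThread());
                    }));
        } // close() waits for the submitted tasks to complete
    }
}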

Technically, it worked. Locally and in non-prod, the Lambda accepted payloads, mapped them to domain objects, and executed the business logic perfectly.

Then organizational reality hit.

The Compliance Wall

Just before go-live, a new constraint dropped: the specific Confluent connector required to trigger the Lambda was not qualified for production.

The consequences were immediate:

  • The push model (Connector → Lambda) was banned.
  • The service had to consume directly from Kafka.
  • Serverless was effectively dead for this release.

I needed a third option. Fast.

Act III — Landing on EC2 (And Why It Didn’t Hurt)

With two options off the table, we turned to the most battle-tested solution available: a classic EC2 instance running a Spring Boot application.

In a tightly coupled architecture, this would have required major surgery: rewriting entry points, refactoring message parsing, and risking regression in the business logic.

But for us? The impact was trivial.

The Real Hero: Hexagonal Architecture

Because I had structured the service around clear Hexagonal principles, the project looked roughly like this (reconstructed from the package names in the snippets below):

src/main/java
├── domain/                     // business model and rules, framework-free
├── application/port/in/        // inbound ports (ProcessRequestUseCase)
└── infrastructure/
    ├── lambda/                 // driving adapter for AWS Lambda
    └── kafka/                  // driving adapter for the Kafka listener

The only thing that changed across our three AWS pivots was the Driving Adapter (the left side).

The Stable Center

The use case remained untouched:

// application/port/in/ProcessRequestUseCase.java
public interface ProcessRequestUseCase {
    void process(BusinessRequestEvent event);
}

The domain model never knew if the data came from a Lambda JSON payload or a Kafka ConsumerRecord.
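The driven side isn't shown in this post, but in this style it might look like the sketch below. The names (DocumentGeneratorPort, LegacyNotifierPort, ProcessRequestService) are assumptions for the example, not the production code:

// application/port/out/DocumentGeneratorPort.java — assumed name
public interface DocumentGeneratorPort {
    GeneratedDocument generate(BusinessRequestEvent event);
}

// application/port/out/LegacyNotifierPort.java — assumed name
public interface LegacyNotifierPort {
    void notifyDownstream(GeneratedDocument document);
}

// domain/GeneratedDocument.java — assumed name
public record GeneratedDocument(String id, byte[] content) {}

// application/service/ProcessRequestService.java — assumed name
public class ProcessRequestService implements ProcessRequestUseCase {

    private final DocumentGeneratorPort documents;
    private final LegacyNotifierPort legacy;

    public ProcessRequestService(DocumentGeneratorPort documents, LegacyNotifierPort legacy) {
        this.documents = documents;
        this.legacy = legacy;
    }

    @Override
    public void process(BusinessRequestEvent event) {
        GeneratedDocument doc = documents.generate(event);  // generate the business document
        legacy.notifyDownstream(doc);                       // update downstream via the on-prem API
    }
}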

Adapter 1 — The Lambda Approach (Abandoned)

When we targeted Lambda, our entry point looked like this:

// infrastructure/lambda/LambdaHandler.java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

public class LambdaHandler implements RequestHandler<Map<String, Object>, String> {

    private final ProcessRequestUseCase useCase;

    public LambdaHandler(ProcessRequestUseCase useCase) {
        this.useCase = useCase;
    }

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        // Adapt JSON -> Domain; map(...) holds the translation logic (omitted here)
        BusinessRequestEvent domainEvent = map(event);
        useCase.process(domainEvent);
        return "OK";
    }
}

Adapter 2 — The EC2 Approach (Final Production)

When we switched to EC2, we simply swapped in a Spring Kafka listener:

// infrastructure/kafka/KafkaConsumerListener.java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumerListener {

    private final ProcessRequestUseCase useCase;

    public KafkaConsumerListener(ProcessRequestUseCase useCase) {
        this.useCase = useCase;
    }

    @KafkaListener(topics = "${app.kafka.topic}", groupId = "${app.kafka.group}")
    public void onMessage(ConsumerRecord<String, String> record) {
        // Adapt Kafka Record -> Domain; map(...) holds the translation logic (omitted here)
        BusinessRequestEvent domainEvent = map(record.value());
        useCase.process(domainEvent);
    }
}
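Either adapter is just a thin shell; the use case behind it is wired once with ordinary Spring configuration. A sketch, reusing the illustrative names from above:

// infrastructure/config/UseCaseConfig.java — assumed names
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UseCaseConfig {

    // The same bean serves the Lambda handler or the Kafka listener unchanged
    @Bean
    ProcessRequestUseCase processRequestUseCase(DocumentGeneratorPort documents,
                                                LegacyNotifierPort legacy) {
        return new ProcessRequestService(documents, legacy);
    }
}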

What changed?

  • Infrastructure concerns (annotations, configuration, scaling).

What stayed the same?

  • The Domain Model.
  • The Use Case API.
  • The entire business logic.

This decoupling is precisely what allowed the service to survive three infrastructure decisions without rewriting a single line of business code.
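A side effect of that decoupling: the core can be tested with in-memory fakes, no Kafka broker or Lambda runtime required. A sketch, again using the assumed names from the earlier snippets:

// ProcessRequestServiceTest.java — illustrative test against the sketched core
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Map;
import org.junit.jupiter.api.Test;

class ProcessRequestServiceTest {

    @Test
    void generatedDocumentReachesTheLegacySystem() {
        RecordingNotifier legacy = new RecordingNotifier();
        // The document generator is stubbed with a lambda — it is a single-method port
        ProcessRequestUseCase useCase = new ProcessRequestService(
                event -> new GeneratedDocument(event.requestId(), new byte[0]), legacy);

        useCase.process(new BusinessRequestEvent("req-42", "INVOICE", Map.of()));

        assertEquals("req-42", legacy.lastDocument.id());
    }

    // Minimal in-memory fake of the driven port
    static class RecordingNotifier implements LegacyNotifierPort {
        GeneratedDocument lastDocument;

        @Override
        public void notifyDownstream(GeneratedDocument document) {
            this.lastDocument = document;
        }
    }
}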

The Takeaway

The story isn’t about “Containers vs. Serverless vs. EC2.” Those are implementation details that will inevitably change based on cost, trends, and governance.

The real lesson is:

  1. Infrastructure is volatile — internal standards and compliance rules are moving targets.
  2. Business logic should be stable — it shouldn’t break because you changed a compute platform.
  3. Architecture buys you options — the freedom to pivot without panic.

By keeping the domain pure and the adapters thin, I absorbed an ECS experiment, a Serverless attempt, and an EC2 fallback.

Infrastructure decisions will keep changing. Good architecture is what lets you sleep at night when they do.
