Dariusz Gafka

Implementing Event-Driven Architecture in PHP

Traditional service integration moves routing logic outside the application’s code.
Message brokers, cloud messaging services, and stream-processing topologies become the place where business-critical flows are defined.

However, this comes at a cost: it makes our endpoints dumb.

Dumb Endpoints

We often agree to move routing logic outside the application because it promises simplicity or speed. It looks simpler because we seemingly no longer need to handle routing ourselves. From a developer’s perspective, we just receive and process a message, while routing happens “somewhere else”, outside the code.

When important logic is pushed outside the application, the code becomes unaware of the integrations it depends on. This makes changes harder to test and verify. It also lowers confidence when making changes, because modifying something outside the application is always riskier than changing code we fully own and can easily cover with tests.

When we follow the dumb endpoints approach, where the application is unaware of routing logic, we eventually end up in a situation where:

  1. Knowledge becomes fragmented — Only a few people truly understand the full setup and configuration that lives outside the applications being integrated.

  2. Testing becomes painful — It is no longer easy to test behavior using automated application-level tests. Changes often require modifying external configurations, where testing and verifying correctness is much harder.

  3. Changes become risky — When changes cannot be easily verified, confidence drops. This slows development and often leads to more bugs and production issues.

The state of the architecture is often accepted as it is, and the problems created by dumb endpoints are pushed onto developers. This often leads to situations where more “control” is introduced to prevent further issues — for example, by adding gatekeepers who must review and approve every change.
Ironically, this all starts with the promise of speed and simplicity, offered as a trade-off for moving integration logic outside the application.

However, we can achieve both speed and simplicity while keeping integrations under the control of the application itself. There is no trade-off required. To do this, we need to follow a different approach — one where endpoints are no longer dumb, but become smart.

Smart Endpoints - Dumb Pipes

This leads us to the Smart Endpoints, Dumb Pipes approach.
It reverses the direction of responsibility — instead of moving logic outward, we move it back inward. Applications are no longer dumb. They become smart and decide where messages should go and where they should be consumed from. In this model, the application itself fully controls the integration.

To achieve smart endpoints, we need to build routing logic inside our applications. This means using clear abstractions that allow us to orchestrate message flow within the code rather than in external configuration.

To make this possible, messaging needs to be a first-class citizen in our applications.
The messaging abstraction should provide routing capabilities that we can configure as needed and fully test from within the application. Ideally, this abstraction should be decoupled, meaning we are not forced to implement or extend any framework-specific classes.
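
As a minimal sketch of what this looks like with Ecotone (the NotificationSender class, the SendNotification command, and the notification.send routing key are hypothetical names used only for illustration), a handler is a plain PHP class whose routing is declared with an attribute:

use Ecotone\Modelling\Attribute\CommandHandler;

// A plain PHP class: no framework base class or interface is required.
// The routing key lives next to the code, so it can be covered by tests.
final class NotificationSender
{
    #[CommandHandler("notification.send")]
    public function send(SendNotification $command): void
    {
        // business logic for sending the notification
    }
}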

Enterprise Integration Patterns is a great book that defines a set of abstractions for building messaging systems at the programming-language level. I brought these patterns to life in the Ecotone Framework for PHP.
In the next section, we will explore how to build integrations between applications using a higher-level abstraction built on top of these patterns — the Service Map.

Service Map

Now that we’ve established that smart endpoints keep routing logic inside the application and provide messaging capabilities directly within the programming language, let’s explore how applications can actually be integrated.
To do this, we will look at one of Ecotone’s features — the Service Map.

The Service Map is exactly what it sounds like: a map of integrated applications (services) and the pipes (channels) through which they communicate. Here's how to set it up:

#[ServiceContext]
public function serviceMap(): DistributedServiceMap
{
    return DistributedServiceMap::initialize()
        ->withCommandMapping(
            targetServiceName: "ticketService",
            channelName: "ticket_commands"
        );
}

This configuration says: "When sending Commands to the Ticket Service, use the ticket_commands channel (pipe)."

The routing is done at the Application level, not the Message Broker level. This means that we control the process from within the codebase we own, and can easily cover that with tests.

Command routing

This configuration is for sending Commands; for Events, we use Event Mapping:

#[ServiceContext]
public function serviceMap(): DistributedServiceMap
{
    return DistributedServiceMap::initialize()
        ->withEventMapping(
            channelName: "ticket_events",
            subscriptionKeys: ["user.*"],
        );
}

This configuration says: "When publishing Events whose routing key starts with user, use the ticket_events channel (pipe)."

Event Mapping allows us to publish Events to a specific Channel (Pipe) based on subscription keys.

Event routing

We can, of course, have multiple subscriptions to broadcast events to different Services.

Application Code

With the map configured, publishing is straightforward.

Sending side

For Commands, we target a specific service:

public function onUserRegistered(
    string $userId, 
    DistributedBus $distributedBus
): void {
    $distributedBus->convertAndSendCommand(
        targetServiceName: "ticketService",
        routingKey: "ticket.create",
        command: new CreateTicket($userId, "Welcome!")
    );
}

For Events that multiple services might care about, we publish without a target:

$distributedBus->convertAndPublishEvent(
    routingKey: "user.registered",
    event: new UserRegistered($userId)
);

Ecotone makes this part of the API: Commands are sent to a single service, while Events can be delivered to many services.
The Service Map automatically handles routing based on subscription keys.
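
To make the subscription-key matching concrete, here is a small sketch building on the mapping above (the InvoicePaid event and the billing.invoice.paid routing key are hypothetical):

// With subscriptionKeys: ["user.*"] mapped to the ticket_events channel:

$distributedBus->convertAndPublishEvent(
    routingKey: "user.registered",      // matches "user.*", delivered through ticket_events
    event: new UserRegistered($userId)
);

$distributedBus->convertAndPublishEvent(
    routingKey: "billing.invoice.paid", // does not match "user.*", not routed to ticket_events
    event: new InvoicePaid($invoiceId)
);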

Receiving side

On the receiving side, we mark handlers as distributed to accept external messages:

#[Distributed]
#[CommandHandler("ticket.create")]
public function createTicket(CreateTicket $command): void
{
    // Create the ticket
}

#[Distributed]
#[EventHandler("user.registered")]
public function onUserRegistered(UserRegistered $event): void
{
    // React to user registration
}

The #[Distributed] attribute makes it explicit that these handlers can receive messages from other services. This clarity prevents accidental breaking changes.

I mentioned earlier that this approach does not require sacrificing speed.
We are not building our own integration infrastructure from scratch — instead, we reuse existing systems.

The key idea is to keep the logic inside the application and treat pipes (channels) as simple transport.
The channel’s only responsibility is to move messages, not to act as the “mastermind” of orchestration.

We have two message channels (pipes): ticket_commands and ticket_events.
With the Service Map approach, we can define their implementations in a way that fits our needs, without being tightly coupled to a specific message broker.

This means we can choose — and later switch — the underlying technology if needed.
For example, we might decide to use Amazon SQS or RabbitMQ-based channels:

#[ServiceContext]
public function channels()
{
    return [
        // Amazon SQS Message Channel
        SqsBackedMessageChannelBuilder::create("ticket_events"),
        // RabbitMQ Message Channel
        AmqpBackedMessageChannelBuilder::create("ticket_commands"),
    ];
}

Defining a Channel is enough for Ecotone to automatically register a Message Consumer for us. From that point on, we can start consuming messages right away:

bin/console ecotone:run ticket_commands
bin/console ecotone:run ticket_events

Streaming Channels

The Service Map works regardless of whether we use queue-based brokers or streaming platforms under the hood.
When using streaming platforms, we gain additional capabilities thanks to their non-destructive nature, which I described in a previous blog post.
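
As a sketch, assuming the ecotone/kafka package and its KafkaMessageChannelBuilder, a streaming channel is declared the same way as the queue-backed channels shown earlier:

#[ServiceContext]
public function streamingChannels()
{
    return [
        // Kafka-backed Message Channel: messages stay in the stream,
        // so multiple services can consume from the same channel
        KafkaMessageChannelBuilder::create("user_events"),
    ];
}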

In the case of queue-based solutions, we can push messages to each channel as part of the publishing process:

#[ServiceContext]
public function serviceMap(): DistributedServiceMap
{
    return DistributedServiceMap::initialize()
        ->withEventMapping(
            channelName: "ticket_events",
            subscriptionKeys: ["user.*"],
        )
        ->withEventMapping(
            channelName: "order_events",
            subscriptionKeys: ["user.*"],
        );
}

Queue-based Event publishing

When using Kafka or RabbitMQ streaming channels, we can push messages to a single channel, from which multiple services can consume:

#[ServiceContext]
public function serviceMap(): DistributedServiceMap
{
    return DistributedServiceMap::initialize()
        ->withEventMapping(
            channelName: "user_events",
            subscriptionKeys: ["user.*"],
        );
}

Streaming based Event Publishing

Ecotone provides different Message Channel integrations:

  • Streaming Channels: Kafka and RabbitMQ
  • Queue Channels: RabbitMQ, Amazon SQS, Redis, Database Channels, Symfony Messenger, Laravel Queues
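
Because the Service Map only refers to channel names, swapping the transport does not touch the routing configuration. As a hedged sketch, assuming the ecotone/dbal package, the same channels could be backed by the database instead:

#[ServiceContext]
public function channels()
{
    return [
        // Database-backed Message Channels: the Service Map stays untouched,
        // only the channel implementation changes
        DbalBackedMessageChannelBuilder::create("ticket_events"),
        DbalBackedMessageChannelBuilder::create("ticket_commands"),
    ];
}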

Decoupled Data Models

All communication happens through defined routing keys, whether the message is a Command or an Event. This is intentional and helps keep applications decoupled from each other.

As a result, each application can use models that fit its own needs and include only the data that is truly meaningful from an integration perspective.

// Publisher sends this
$distributedBus->convertAndPublishEvent(
    routingKey: "user.billing.changed",
    event: new BillingDetailsChanged($userId, $newAddress)
);

// The consumer can use a different model
#[Distributed]
#[EventHandler("user.billing.changed")]
public function handle(UserAddressUpdated $event): void
{
    // Different class, same routing key
}

Whether models are shared or not should be a project-level decision.
Ecotone does not force either approach, allowing teams to choose what works best for their specific context.
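
As a sketch of what "different class, same routing key" can look like in practice (the field names are hypothetical), each side keeps only the data it cares about:

// Publisher's model: full billing details
final class BillingDetailsChanged
{
    public function __construct(
        public readonly string $userId,
        public readonly string $newAddress,
    ) {}
}

// Consumer's model: only the fields this service actually needs
final class UserAddressUpdated
{
    public function __construct(
        public readonly string $userId,
    ) {}
}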

Testing Integrations

One of the core ideas I mentioned earlier is making integrations testable at the application level. With Ecotone’s Service Map, we can test integrations using in-memory channels or real integrations, all directly from the application code:

$messaging = EcotoneLite::bootstrapFlowTesting(
    [ServiceMapConfig::class],
    enableAsynchronousProcessing: [
        // Define which Channel implementation to use for this test
        SimpleMessageChannelBuilder::createQueueChannel("ticket_commands"),
    ]
);

$messaging->convertAndSendCommand(
    targetServiceName: "ticketService",
    routingKey: "ticket.create",
    command: new CreateTicket($userId, "Welcome!")
);

// Verify command landed in channel
$message = $messaging->getMessageChannel('ticket_commands')->receive();
$this->assertNotNull($message);

We can test the consumption side in the same way. Any kind of Service Map can be covered with automated tests to ensure that delivery happens as we expect.
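
By analogy with the command test above, the Event Mapping side can be verified as well; this sketch assumes the event mapping shown earlier is part of ServiceMapConfig:

$messaging = EcotoneLite::bootstrapFlowTesting(
    [ServiceMapConfig::class],
    enableAsynchronousProcessing: [
        SimpleMessageChannelBuilder::createQueueChannel("ticket_events"),
    ]
);

$messaging->convertAndPublishEvent(
    routingKey: "user.registered",
    event: new UserRegistered($userId)
);

// Verify the event was routed to the channel subscribed with "user.*"
$this->assertNotNull(
    $messaging->getMessageChannel('ticket_events')->receive()
);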

Other Supporting Features

We have covered the core part of integration; beyond that, Ecotone provides many more features that help ensure integrations work as expected. You may consider exploring:

  • Outbox pattern: For transactional consistency
  • Dead letter queues: For failed message handling
  • Message priorities: For urgent processing
  • Scheduled messages: For delayed delivery

For these, take a look at Ecotone's documentation.

Summary

Choosing the Smart Endpoints, Dumb Pipes architecture allows us to take full control of the integration process and keep things simple, testable, and easy to verify for everyone.
The goal is to keep integration logic close to where it is actually used. This helps maintain shared knowledge and a clear understanding of how the system behaves as it evolves.

You can read more about the Service Map in Ecotone's documentation.

Whether you choose Ecotone to deliver this approach or build it yourself, feel free to join Ecotone's community channel to discuss different approaches and share your experiences.
