Kasia Ryniak

Migrating an ERP-Driven Storefront Using a Message-Broker Architecture (RabbitMQ)

Modern e-commerce platforms increasingly rely on modular, API-driven components. ERPs… do not. They are deterministic, slow-moving systems built around the idea that consistency matters more than speed.

Recently, while migrating a custom storefront for one of our clients to Solidus, a complete open-source e-commerce solution built with Ruby on Rails, we faced this architectural tension head-on: how do you build a fast, flexible, customer-facing storefront when the ERP must remain the single source of truth for products, stock, and order states?

The problem here was ensuring data consistency between the [Solidus](https://solidus.io/) storefront and the ERP system. It was clear that event-driven, two-way communication between them had to be established; the question was how to implement it in a way that would be efficient, maintainable, scalable, and fault-tolerant - or, to put it shortly, how to ensure that the communication meets the standards of a modern, well-engineered system.

Our answer to that problem was to design the integration around a message-broker-centric architecture. The message broker of choice in this particular case was RabbitMQ, chosen mainly for its ease of integration.

Why message brokers?

Efficiency

Message brokers are software components built to handle large amounts of data, packaged into isolated units called “messages”. Messages can be published to the broker, or consumed from it by the services that are subscribed to it. This is the core idea of message brokers - they act as a middleman between systems, each of which can either publish data to the broker or consume data from it.

Diagram illustrating data flow between clients via message broker.
One client publishes data to a message broker queue. The client named “Consumer” is subscribed to that specific queue, so whenever a message (data package) is published to it, that client will “consume” (receive) the data.
Note that publish/subscribe/consume is RabbitMQ-specific nomenclature. Other message brokers name these actions differently but work similarly in principle.

This architecture pattern facilitates event-driven communication between systems - whenever an event occurs that the other system should know about (e.g. a stock movement on particular products on the ERP side, or a user placing an order on the storefront side), all that is needed is to publish the pertinent message to the broker and ensure that the other side is subscribed to it.
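To make this concrete, here is a minimal publishing sketch in Ruby using the Bunny gem (a widely used RabbitMQ client for Ruby); the connection details, queue name, and payload shape are illustrative assumptions, not our production code:

```ruby
require "bunny"
require "json"
require "time"

# Connect to the broker (host and virtual host are placeholders).
connection = Bunny.new(host: "localhost", vhost: "/storefront")
connection.start

channel = connection.create_channel

# Declaring the queue is idempotent - it is only created if it does not exist yet.
queue = channel.queue("stock_updates", durable: true)

# Publish a stock-movement event so the subscribed system can react to it.
event = { sku: "SHIRT-001", quantity: 42, occurred_at: Time.now.utc.iso8601 }
queue.publish(event.to_json, persistent: true, content_type: "application/json")

connection.close
```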

Subscription to a queue is defined on the consumer's side. With a properly subscribed client, the message can be consumed and processed accordingly. In our case we used this mechanism to update each system's internal state right after an update message was sent to the pertinent queue.
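On the consuming side, a subscription can look roughly like this (again a Bunny-based sketch; `update_stock_level` is a hypothetical method standing in for whatever actually updates the system's internal state):

```ruby
require "bunny"
require "json"

connection = Bunny.new(host: "localhost", vhost: "/storefront")
connection.start

channel = connection.create_channel
queue = channel.queue("stock_updates", durable: true)

# Block on the subscription and process every message delivered to the queue.
queue.subscribe(block: true, manual_ack: true) do |delivery_info, _properties, payload|
  event = JSON.parse(payload)

  # Hypothetical domain call - e.g. adjusting stock levels on the storefront side.
  update_stock_level(sku: event["sku"], quantity: event["quantity"])

  # Acknowledge only after the internal state has been updated successfully.
  channel.ack(delivery_info.delivery_tag)
end
```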

This built-in ease of implementing event-driven communication is a big part of what makes message brokers a state-of-the-art solution for efficient two-way communication between systems.

Maintainability

Message brokers allow easy logical division of data streams (via queues, or - on a higher level - virtual hosts). This makes it possible to integrate numerous systems with one message broker while keeping their messages logically separated, and it keeps the setup easy to extend - new services or systems can be integrated simply by creating new queues or virtual hosts.

Diagram illustrating the logical division of data streams facilitated by message brokers

Virtual hosts serve as logical environments inside a broker that provide isolation between systems sharing the broker infrastructure. Queues serve as logical data buckets, isolating data between services/components within one system.

The diagram above illustrates an example architecture using mechanisms for logical separation of data streams provided by message brokers:

  • Virtual hosts - ideal for isolating data streams between systems that share a broker’s infrastructure. For example, one ERP system can be connected to two separate stores via a single broker instance with two dedicated virtual hosts; this ensures that one store cannot access the data intended for the other.

  • Queues - logical containers for messages of a specific type (e.g., product updates or new post comments). They act as the data buckets that subscribers connect to, and they separate data between components within a system (e.g., two services for product updates and user updates connected to two separate queues, “products” and “users”).

This logical division of data streams into virtual hosts and queues simplifies system organization and isolation, making it easier to maintain and extend complex projects.
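To make this separation concrete, here is a sketch of how a single broker instance could serve two stores through dedicated virtual hosts, with per-concern queues inside each of them (the vhost and queue names are made up for illustration):

```ruby
require "bunny"

# One broker instance, two virtual hosts - one per store.
store_a = Bunny.new(host: "broker.internal", vhost: "/store_a").tap(&:start)
store_b = Bunny.new(host: "broker.internal", vhost: "/store_b").tap(&:start)

# Inside each virtual host, queues separate the data streams by concern.
[store_a, store_b].each do |connection|
  channel = connection.create_channel
  channel.queue("products", durable: true) # product updates
  channel.queue("users", durable: true)    # user updates
end
```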

Another upside of message broker queues, in terms of data flow, is that they implement a FIFO (First In, First Out) mechanism, ensuring that messages are processed in the order they were received. This provides predictable and consistent data processing, prevents race conditions, and helps maintain the logical sequence of events across distributed systems.

There is an additional maintainability advantage of a message broker architecture, appreciated by any pragmatic software developer - namely the wide range of existing libraries and client integrations. RabbitMQ, for instance, has client libraries for virtually every popular web framework and programming language, offering seamless connectivity and simplifying development across diverse tech stacks.

Scalability

Message brokers scale well both horizontally (more instances of the broker service) and vertically (more resources dedicated to one instance), which makes them a good choice for dynamic, growing systems.

Horizontal scaling needs to be combined with a load-balancing layer or supported by the specific message broker via cluster-awareness mechanisms (so that clients can connect to any broker instance and messages are correctly routed internally within the cluster). Obviously, this comes with additional complexity on the configuration side. That being said, in most systems vertical scaling will be more than enough, as message brokers are highly efficient at message processing and can generally withstand heavy throughput on a single instance.

The aforementioned mechanisms for logical separation also make it easy to set up mirrored services - which was exactly our case, as the project required three separate instances of the e-commerce storefront, all connected to the same ERP. We simply created a new virtual host inside the RabbitMQ instance for each storefront and connected the mirrored web server instances to it. Since a client's attempt to publish to or subscribe from a queue is enough to declare that queue, just connecting a mirrored service (with the shared codebase) automatically takes care of the queue structure definition.
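In practice, the mirrored storefront instances could share the exact same integration code and differ only in configuration - roughly along these lines (the environment variable names are assumptions for illustration):

```ruby
require "bunny"

# Each mirrored storefront points at its own virtual host via configuration only.
connection = Bunny.new(
  host: ENV.fetch("RABBITMQ_HOST", "localhost"),
  vhost: ENV.fetch("RABBITMQ_VHOST") # e.g. "/store_a", "/store_b", "/store_c"
)
connection.start

channel = connection.create_channel

# Queue declaration is idempotent, so simply booting the service
# (re)creates the expected queue structure inside its virtual host.
%w[stock_updates order_events].each { |name| channel.queue(name, durable: true) }
```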

Fault tolerance

Message brokers keep messages in their internal storage until they are successfully consumed by the designated service. This improves the fault tolerance of the communication - messages do not get lost (e.g. when the target service is temporarily down). Most message brokers also support message persistence, so that messages survive even a failure or temporary outage of the broker itself, as well as consumption acknowledgment mechanisms that let consumers control whether or not a given message should be removed from the queue - which can be used, for example, to keep messages that caused the consumer to crash while processing them.

In traditional flows based on direct API calls or webhooks, if the receiving service is unavailable, the request simply fails, and the data may be lost unless additional retry logic is implemented. In contrast, message brokers buffer and persist messages, allowing the consumer to process them later, once it’s back online. This makes them a far more robust solution for asynchronous, fault-tolerant communication between distributed systems.
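Here is a rough sketch of how these mechanisms combine on the consumer side - persistent messages plus manual acknowledgments, with failed messages returned to the queue (the error handling is simplified, requeueing is just one possible policy alongside dead-letter queues, and `process_order_event` is a hypothetical method):

```ruby
require "bunny"
require "json"

connection = Bunny.new(host: "localhost", vhost: "/storefront")
connection.start

channel = connection.create_channel
queue = channel.queue("order_events", durable: true)

queue.subscribe(block: true, manual_ack: true) do |delivery_info, _properties, payload|
  begin
    process_order_event(JSON.parse(payload)) # hypothetical processing method
    channel.ack(delivery_info.delivery_tag)  # remove the message only on success
  rescue StandardError => e
    warn "failed to process message: #{e.message}"
    # Leave the message for a retry (or route it to a dead-letter queue instead).
    channel.nack(delivery_info.delivery_tag, false, true)
  end
end
```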

Lessons Learned (for teams building ERP-driven e-commerce systems)

This project reinforced a pattern we see frequently in ERP-centric e-commerce environments: the technical challenges are rarely about the tools themselves. They stem from how systems think, how they communicate, and how teams design the boundaries between them. Below are our lessons:

1. Event models are more important than queues

A message broker won’t save a poorly designed event contract. Invest time in defining the following (see the example payload after this list):

  • event schema
  • versioning
  • idempotency
  • domain boundaries
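For example, an event published by the ERP could carry an explicit schema version and an idempotency key, so consumers can safely skip duplicates and handle older versions - this payload shape is purely illustrative, not a schema from our project:

```ruby
# A hypothetical stock-movement event contract.
event = {
  event: "stock.updated",
  version: 2,                                   # schema version, bumped on breaking changes
  idempotency_key: "erp-stock-8841-2024-05-01", # lets consumers skip duplicates safely
  payload: {
    sku: "SHIRT-001",
    quantity: 42
  }
}
```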

2. Monitoring is non-negotiable

Most issues in distributed systems aren’t failures but rather slowdowns. RabbitMQ metrics let us spot (a minimal backlog check is sketched after this list):

  • backlog accumulation
  • slow consumers
  • malformed messages
  • retries and dead-letter queues
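RabbitMQ's management plugin exposes these numbers over an HTTP API, which makes ad-hoc checks and simple alerting straightforward - a minimal backlog check could look like this (host, credentials, queue name, and threshold are placeholders):

```ruby
require "net/http"
require "json"

# Query the RabbitMQ management API for a single queue ("%2F" is the default "/" vhost).
uri = URI("http://broker.internal:15672/api/queues/%2F/stock_updates")
request = Net::HTTP::Get.new(uri)
request.basic_auth("guest", "guest")

response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
stats = JSON.parse(response.body)

# "messages_ready" is the backlog waiting for consumers; alert if it keeps growing.
backlog = stats["messages_ready"]
warn "backlog on stock_updates: #{backlog} messages" if backlog > 1_000
```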

3. ERP constraints should drive architectural decisions, not vice versa

ERPs are deterministic, linear, often slow, and sometimes unavailable. Storefronts are fast, parallel, and customer-facing. The message broker acts as a translator between these fundamentally different worlds. Designing with this asymmetry in mind (instead of pretending both systems operate at the same tempo) leads to more stable and predictable commerce operations.

4. Broker-backed integration accelerates future modernization

Once event-driven messaging is in place, you can:

  • add new microservices
  • replace parts of the storefront
  • extend ERP responsibilities
  • integrate PIM, OMS, CRM without rewriting the whole integration layer.

This is why message brokers are becoming standard in composable commerce architecture.

Conclusion

Migrating to a Solidus storefront and connecting it tightly to an existing ERP taught us, once again, that message brokers can be a foundational architectural element for e-commerce systems where reliability, extensibility, and operational resilience matter.

And if the ERP is the system of record, as it often is, treating the message broker as the backbone of your data synchronization is a sustainable approach that, in our experience, consistently survives real-world operational complexity.
