Viacheslav Zinovev

Navigating Microservices Communication: Patterns, Performance, and Technology Choices

Microservices, with their distributed architecture, introduce a significant change in how services communicate. To understand this change, we need to distinguish between calls made within a single process (in-process) and calls made across a network between separate processes (inter-process). Although treating them as the same may seem tempting, oversimplifying this difference can create major challenges in the microservices world. This article will explore these distinctions and their significant impact on microservice interactions.

Performance Considerations

One of the most important differences between in-process and inter-process calls is how they perform. In-process calls are ordinary function invocations that the compiler and runtime can optimize heavily, so their overhead is negligible. Inter-process calls, by contrast, have to send packets across a network, which adds latency that is orders of magnitude higher and can become a real slowdown.

This performance gap shapes API design. An interface that works well within a single process may perform poorly once it sits between microservices. For example, a loop that makes hundreds of fine-grained calls is harmless in-process, but across microservices it turns into hundreds of network round trips.

When passing parameters within a single process, developers often use references, avoiding the need for data copying. In inter-process communication, data must be serialized, necessitating careful consideration of data size and serialization methods.
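
To make this concrete, here is a minimal Python sketch (the order payload and the stock-reservation function are purely hypothetical): in-process the callee simply receives a reference, while an inter-process call has to serialize the same data before it can cross the network, so payload size and encoding cost suddenly matter.

```python
import json

order = {
    "id": "42",
    "items": [{"sku": "ABC-1", "qty": 2}, {"sku": "XYZ-9", "qty": 1}],
}

# In-process: the callee receives a reference; nothing is copied or encoded.
def reserve_stock_in_process(order_ref: dict) -> None:
    ...  # works directly on the same object in memory

# Inter-process: the payload must be serialized before it crosses the network,
# so its size and encoding cost are now part of every call.
payload = json.dumps(order).encode("utf-8")
print(f"bytes on the wire: {len(payload)}")
```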

Developers need to be aware of these differences. Trying to hide network calls or ignoring performance considerations can lead to performance issues later on. Finding the right balance between abstraction and visibility is important to ensure efficient interactions between services.

Adapting to Changing Interfaces

Modifying an interface within a single process is easy because the code that implements the interface and the code that uses it live in the same deployable unit. In a microservices architecture, however, where services are separate entities, making a backward-incompatible change to an interface is more involved: you either update every microservice that uses the interface in lockstep or devise a phased rollout strategy, such as the one sketched below.
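
As an illustration, here is a hedged sketch of one phased-rollout approach using Flask, exposing the old and new contracts side by side; the /v1 and /v2 customer endpoints and their fields are purely hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# v1 keeps serving existing consumers unchanged while the rollout is in progress.
@app.route("/v1/customers/<customer_id>")
def get_customer_v1(customer_id):
    return jsonify({"id": customer_id, "name": "Ada Lovelace"})

# v2 carries the breaking change (split name fields); consumers migrate at their
# own pace, and v1 is retired only once nothing calls it any more.
@app.route("/v2/customers/<customer_id>")
def get_customer_v2(customer_id):
    return jsonify({"id": customer_id, "first_name": "Ada", "last_name": "Lovelace"})
```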

Navigating Error Handling Challenges

Dealing with errors within a single process is usually simple: you can categorize them as expected or catastrophic. In distributed systems, however, error management becomes more complex, because many failures are outside your control, such as network disruptions or unavailable microservices. Inter-process communication can fail in five different ways:

  1. Crash Failure: The server crashes and needs to be restarted.
  2. Omission Failure: You send a request but don't get a response, or downstream services stop working.
  3. Timing Failure: Events happen too early or too late in the communication flow.
  4. Response Failure: You get a response, but it's either wrong or incomplete.
  5. Arbitrary Failure (Byzantine): Something goes wrong, but the parties involved can't agree on what exactly happened.

Transient errors are common in distributed systems, which highlights the need for a solid framework for reporting errors and guiding client actions. HTTP's status code ranges (4xx for client errors, 5xx for server issues) show how valuable such a framework is for building resilient systems, even if HTTP isn't the chosen communication protocol for your microservices.
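
As a rough illustration of how such a framework guides client behaviour, here is a small Python sketch using the requests library (the retry count and backoff values are arbitrary): it retries network failures and 5xx responses, which may be transient, but gives up immediately on 4xx responses, where retrying will not change the outcome.

```python
import time
import requests

def call_with_retries(url: str, attempts: int = 3, backoff: float = 0.5) -> requests.Response:
    """Retry transient failures; surface everything else to the caller."""
    for attempt in range(1, attempts + 1):
        try:
            response = requests.get(url, timeout=2)
        except requests.RequestException:
            pass  # network-level failure (crash/omission): treat as transient and retry
        else:
            if response.status_code < 500:
                return response  # 2xx/3xx/4xx: retrying will not change the outcome
            # 5xx: the downstream service had a problem; a later attempt may succeed
        time.sleep(backoff * attempt)
    raise RuntimeError(f"{url} still failing after {attempts} attempts")
```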

Technology Choices for Inter-Process Communication: A Plethora of Options

The world of inter-process communication provides many technology options, which can sometimes make it difficult to decide. People often choose technologies they are familiar with or that are currently popular, but this can result in using the wrong solutions. The important thing is to first determine your communication style and then choose the right technology.

Styles of Microservice Communication

The model shown below helps us understand different communication styles in microservice architectures. While it may not cover every detail of inter-process communication, it gives a general overview of the common communication styles in the microservices world.

*(Figure: a model of inter-microservice communication styles)*

The main communication styles in this model are:

  1. Synchronous Blocking: In this style, a microservice makes a call to another microservice and waits for a response before continuing its operation.
  2. Asynchronous Nonblocking: Here, the microservice making the call can continue its processing independently, whether it gets a response or not.
  3. Request-Response: A microservice sends a request to another microservice and expects a response that tells it the result.
  4. Event-Driven: Microservices send events that other microservices consume and react to, without the sending microservice knowing which microservices are consuming its events.
  5. Common Data: Although less common, microservices can work together by sharing a common data source.

The choice of communication style depends on factors such as the microservices context, the need for reliable communication, acceptable latency levels, and communication volume. Usually, the decision-making process starts with choosing between request-response or event-driven collaboration. If request-response is chosen, both synchronous and asynchronous approaches can work. However, for event-driven collaboration, only nonblocking asynchronous options are available.

Choosing the right communication technology involves more than just considering communication styles. Other factors like low-latency requirements, security concerns, and scalability needs should also be taken into account. It's important to consider the specific requirements and limitations of your problem space when making technology choices.

It's worth noting that a comprehensive microservice architecture often combines different collaboration styles. Some interactions naturally fit request-response, while others work better with event-driven communication. In fact, it's common for a single microservice to support multiple forms of collaboration.

With these ideas in mind, let's dive deeper into the various communication styles.

Pattern: Synchronous Blocking

Synchronous blocking calls occur when a microservice initiates a request to another microservice and waits for it to complete before continuing. This communication style resembles linear code execution, where each step waits for the previous one, which is common in traditional programming.

While synchronous calls are familiar and simple, they have drawbacks. The main issue is temporal coupling, where the caller assumes the downstream microservice is available. If it's not, the call fails, forcing the caller to decide what to do next, like retrying or queuing.

Temporal coupling also affects responses, as they travel over the same connection. If the downstream service responds but the caller is unreachable, the response is lost. Additionally, slow or busy downstream services can cause delays, leading to performance problems and system disruptions. Synchronous calls are more susceptible to issues caused by downstream problems than asynchronous ones.

Synchronous blocking calls are suitable for simple microservices but problematic in long chains, where any issue can break the entire operation. They can also strain system resources. To address these challenges, consider offloading tasks, parallelizing, or exploring non-blocking communication patterns while preserving the workflow, as discussed in later sections.
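
A minimal sketch of the pattern, assuming a hypothetical payment-service endpoint and Python's requests library: the caller blocks until the downstream microservice answers or the timeout fires, and nothing else happens on this thread in the meantime.

```python
import requests

def place_order(order: dict) -> dict:
    # The caller blocks here until the downstream service answers (or times out);
    # if the payment service is down or slow, this thread makes no progress.
    response = requests.post(
        "http://payment-service/payments", json=order, timeout=3
    )
    response.raise_for_status()
    # Only after the response arrives can the next step run.
    return response.json()
```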

Pattern: Asynchronous Nonblocking

Asynchronous communication allows microservices to initiate calls over the network without obstructing the calling microservice's progress, enabling it to continue processing other tasks regardless of whether it receives a response. Within the realm of asynchronous communication in microservices, three common styles stand out:

  1. Communication through Common Data: In this style, an upstream microservice modifies shared data that one or more downstream microservices can subsequently access and utilize.
  2. Request-Response: A microservice dispatches a request to another microservice, which performs the action and provides a response once the task is complete. This style is apt for lengthy processes.
  3. Event-Driven Interaction: A microservice emits events, which are factual statements about events in its domain. Other microservices can listen for these events and respond accordingly.

The main advantage of non-blocking asynchronous communication is temporal decoupling: the microservices involved do not need to be reachable at the same moment the call is made. It is particularly well suited to long-running or time-consuming processes.

However, this approach introduces complexity and demands a choice among various styles of asynchronous communication, potentially leading to confusion.

Asynchronous communication shines for long-running processes and for long call chains that would be difficult to restructure. Which specific style of asynchronous communication to pick depends on each style's requirements and trade-offs.
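
As a small illustration of the nonblocking idea, here is a Python asyncio sketch; the notify_shipping coroutine is just a stand-in for a real network call (an async HTTP request or a broker publish).

```python
import asyncio

async def notify_shipping(order_id: str) -> str:
    # Stand-in for a real network call to another microservice.
    await asyncio.sleep(0.5)
    return f"shipping notified for {order_id}"

async def handle_order(order_id: str) -> None:
    # Fire the call without waiting on it...
    notification = asyncio.create_task(notify_shipping(order_id))
    # ...and keep doing useful work in the meantime.
    print("updating local order state")
    # The caller can pick up the result later, or not at all.
    print(await notification)

asyncio.run(handle_order("42"))
```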

Pattern: Communication Through Common Data

This pattern involves microservices sharing data by storing it in a common data repository. It's used when one microservice puts data in a specific place, and others access and use that data at their own pace.

To use this pattern, you need a persistent data storage system like a file system or distributed memory store. Other microservices often use polling to find new data. Two common types of data repositories are data lakes (for raw data) and data warehouses (for structured data, which requires microservices to understand the data format).

In this pattern, data flows in one direction: one microservice publishes data, and others consume it. By contrast, a shared database where multiple microservices both read and write the same data creates tight coupling between them.

The advantages of this pattern are simplicity and compatibility with widely used technologies, making it suitable for handling large amounts of data and for interoperability. However, disadvantages include the need for polling, potential disruptions from changes in the data storage, and reliance on the reliability of the data storage itself.

This pattern is useful in situations with technology limitations and scenarios involving a lot of data. Legacy systems can easily access data, and it's efficient for processing large data volumes.
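
Here is a deliberately simplified sketch of the pattern, using a shared directory as the common data store (a stand-in for a data lake bucket or mounted volume): the producer drops files, and the consumer polls for ones it has not yet seen.

```python
import json
import time
from pathlib import Path

SHARED_DIR = Path("shared_exports")  # stand-in for a data lake bucket or shared volume
SHARED_DIR.mkdir(exist_ok=True)

def publish(record: dict) -> None:
    # The producing microservice just drops a file; it never talks to consumers directly.
    (SHARED_DIR / f"{record['id']}.json").write_text(json.dumps(record))

def poll_for_new_records(seen: set[str]) -> list[dict]:
    # Consumers poll at their own pace and remember what they have already processed.
    fresh = []
    for path in sorted(SHARED_DIR.glob("*.json")):
        if path.name not in seen:
            fresh.append(json.loads(path.read_text()))
            seen.add(path.name)
    return fresh

publish({"id": "42", "status": "shipped"})

seen_files: set[str] = set()
while True:  # the consumer's polling loop
    for record in poll_for_new_records(seen_files):
        print("processing", record["id"])
    time.sleep(30)  # polling interval: a trade-off between freshness and load on the store
```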

Pattern: Request-Response Communication

In microservices, request-response communication involves one microservice sending a request to another and waiting for a response to determine the outcome. This can happen synchronously, causing blocking, or asynchronously, without blocking.

Request-response calls can be synchronous, where the sender blocks until the response arrives, or asynchronous, where requests and responses flow through a message broker and the response has to be routed back to the right caller.

Handling responses asynchronously can be tricky: the reply may arrive much later, and it may be received by a different instance of the calling microservice than the one that sent the request. Persisting the state associated with the original request in a database, so that whichever instance picks up the reply can continue the work, is one solution.

All request-response methods need timeout handling to avoid system blockage when waiting for responses that might not come. The specifics of implementing timeouts vary with technology.

When multiple calls are needed before processing, deciding whether to do them in parallel or sequentially is crucial. Parallel execution reduces latency, especially when dealing with multiple external services. Reactive extensions and async/await can help with concurrent calls.
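
For example, here is a sketch of issuing several request-response calls concurrently with asyncio and aiohttp; the service URLs are hypothetical, and aiohttp is just one possible async HTTP client.

```python
import asyncio
import aiohttp

SERVICES = {
    "stock": "http://stock-service/reservations/42",
    "loyalty": "http://loyalty-service/points/42",
    "shipping": "http://shipping-service/estimates/42",
}

async def fetch(session: aiohttp.ClientSession, name: str, url: str) -> tuple[str, dict]:
    async with session.get(url) as response:
        return name, await response.json()

async def gather_order_details() -> dict:
    timeout = aiohttp.ClientTimeout(total=2)  # never wait indefinitely for a response
    async with aiohttp.ClientSession(timeout=timeout) as session:
        # Issue the requests concurrently instead of one after another, so overall
        # latency is roughly that of the slowest call, not the sum of all of them.
        results = await asyncio.gather(
            *(fetch(session, name, url) for name, url in SERVICES.items())
        )
    return dict(results)

print(asyncio.run(gather_order_details()))
```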

Request-response communication is great when the result is crucial for further actions or when handling failures and retries. The choice between synchronous and asynchronous depends on specific use cases and trade-offs.

Pattern: Event-Driven Communication

Event-driven communication in microservices operates differently from traditional request-response models. Instead of one microservice directly instructing another, microservices emit events independently. These events may or may not be received by other microservices, creating an inherently asynchronous interaction.

An event represents something happening within the emitting microservice's domain, and it is emitted without knowledge of how other microservices will interpret or use it. This decentralizes responsibility, shifting it from the emitter to the recipients. Unlike request-response, where the sender dictates actions, events empower recipients to decide how to react, reducing coupling in collaboration.

This distribution of responsibility supports autonomous teams within organizations, simplifying the complexity of individual microservices. Events and messages are related but distinct concepts. Events convey statements about occurrences, while messages are asynchronous communication mechanisms. In event-driven collaboration, events are typically distributed using messages as the medium.

To implement event-driven communication, you need mechanisms for emitting events and for consumers to discover and process them. Message brokers like RabbitMQ can serve both roles, but they introduce development complexity. Alternatively, HTTP can propagate events, although it may not suit low-latency scenarios.
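
To ground this, here is a hedged sketch using the pika client for RabbitMQ; the order-events exchange and the event payload are illustrative, and in practice the emitting and consuming sides would live in different microservices.

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="order-events", exchange_type="fanout")

# Emitting side: publish a fact about the domain, with no idea who (if anyone) is listening.
event = {"type": "OrderPlaced", "order_id": "42"}
channel.basic_publish(exchange="order-events", routing_key="", body=json.dumps(event))

# Consuming side (normally a separate microservice): bind a queue and react to events.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="order-events", queue=result.method.queue)

def on_event(ch, method, properties, body):
    print("reacting to", json.loads(body))

channel.basic_consume(queue=result.method.queue, on_message_callback=on_event, auto_ack=True)
channel.start_consuming()
```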

Events can contain varying levels of information, from just an identifier to all necessary data, impacting coupling and event size. Event-driven communication excels when information needs to be broadcast, emphasizing loose coupling. However, it can introduce complexity and should be carefully considered for alignment with your specific use case.
