In the world of microservices, effective communication is essential. This article breaks the topic down into plain language: we'll look at the different ways services talk to each other, such as RPC, REST, GraphQL, and message brokers, how they share information, and how they manage change without breaking one another.
Choosing the Right Way
When microservices need to talk to each other, there are several ways to do it, and it isn't obvious which one is best. Options like SOAP, XML-RPC, REST, and gRPC are all available, and new choices keep appearing. Before we jump into the technical details, let's think about what we actually want from whichever method we pick.
1. Easy Compatibility
Pick a method that lets the interface evolve without breaking its consumers. Even small changes, like adding a new field, shouldn't disrupt the services using your microservice, and you should be able to verify compatibility before releasing a change.
2. Clear Interface
Make sure the microservice's interface is explicit, both for the people consuming it and for the developers maintaining it; otherwise it's easy to break compatibility by accident. An explicit schema, whether the technology mandates one or merely supports one, helps pin down exactly what a microservice promises to do.
3. Technology Flexibility
In the rapidly changing world of IT, it's essential to keep your options open. Microservices should be able to communicate without being tied to a particular technology. Don't lock yourself into technologies that limit your choices for implementing microservices.
4. User-Friendly Service
Ensure that using your microservice is easy for consumers. A well-structured microservice isn't much help if it's painful to call. Let clients choose their own technology, or provide a client library to smooth things over, but be aware that client libraries can couple consumers more tightly to your service.
5. Hide Technical Details
Don't make consumers depend on your internal workings. When consumers rely on your internal details, even small changes in your microservice can disrupt them. It's best to avoid technology that exposes these inner workings.
Technology Options
Remote Procedure Calls (RPC)
RPC is about making a call to a service located elsewhere. RPC frameworks vary: some, like SOAP and gRPC, use an explicit schema, which makes it easier to generate client and server code for different technology platforms. Others, like Java RMI, tie client and server to a single platform, limiting flexibility.
RPC often requires a particular way to format data, such as gRPC using protocol buffers. Some are associated with certain network protocols, while others offer more flexibility, like using TCP for reliability or UDP for faster communication.
RPC frameworks with explicit schemas make it easy to generate client code, but they may require clients to get hold of the schema out of band before making calls. Avro RPC is more flexible here, as it can send the full schema along with the payload.
RPC simplifies client-side code creation, making remote calls seem like local ones. However, it can limit platform options, blur the line between local and remote calls, and require careful consideration of network reliability in distributed systems.
RPC can also be brittle. Some implementations require clients and servers to be updated in lockstep for even minor changes, and shared request and response objects tend to accumulate fields that nobody dares remove.
Despite these challenges, modern implementations like gRPC offer excellent performance and ease of use, and gRPC is a strong choice when you control both the client and the server. When a wide variety of applications need to call your microservices, REST over HTTP may be a better fit, especially if you want to avoid clients having to compile against server-side schemas.
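To make the "remote call that reads like a local call" idea concrete, here is a minimal sketch using Go's standard net/rpc package rather than gRPC; the Calculator service, method, and port are invented for illustration.

```go
// Minimal RPC sketch with Go's standard net/rpc package (not gRPC).
// The Calculator type, Multiply method, and port are illustrative only.
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Args is the request payload shared by client and server.
type Args struct{ A, B int }

// Calculator's exported methods become remotely callable procedures.
type Calculator struct{}

// Multiply follows net/rpc's required shape: (args, reply pointer) error.
func (c *Calculator) Multiply(args Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	// Server side: register the object and accept connections.
	rpc.Register(new(Calculator))
	ln, err := net.Listen("tcp", "127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}
	go rpc.Accept(ln)

	// Client side: the call reads almost like a local method call,
	// but it crosses the network and can fail in ways a local call cannot.
	client, err := rpc.Dial("tcp", "127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}
	var product int
	if err := client.Call("Calculator.Multiply", Args{A: 6, B: 7}, &product); err != nil {
		log.Fatal(err)
	}
	fmt.Println("remote result:", product)
}
```

That one-line `client.Call` is exactly the convenience, and the trap, described above: the network is hidden behind it, which makes it easy to forget the network is there.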
REST
REST, short for Representational State Transfer, is a design style inspired by the web. It provides principles and rules that can be useful for microservices, especially when you want alternatives to RPC for service interfaces.
REST is based on the concept of resources: the things the service knows about, such as a Customer. How a resource is represented externally is decoupled from how it is stored internally. Clients request a representation of a resource and change it using standard HTTP verbs (GET, POST, PUT). HTTP also brings a large supporting ecosystem, including caching, which helps when handling large amounts of traffic, and well-understood security controls.
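As a rough sketch of what that looks like in practice, here is a tiny resource-oriented endpoint using Go's net/http; the /customers/ path and the Customer fields are invented for illustration.

```go
// Sketch: a resource-oriented HTTP endpoint. The external representation
// (Customer) is independent of however the service stores the data.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Customer struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

var store = map[string]Customer{
	"123": {ID: "123", Name: "Ada"},
}

func customerHandler(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Path[len("/customers/"):]

	switch r.Method {
	case http.MethodGet: // fetch the current representation
		c, ok := store[id]
		if !ok {
			http.NotFound(w, r)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(c)
	case http.MethodPut: // replace the representation
		var c Customer
		if err := json.NewDecoder(r.Body).Decode(&c); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		c.ID = id
		store[id] = c
		w.WriteHeader(http.StatusNoContent)
	default:
		w.WriteHeader(http.StatusMethodNotAllowed)
	}
}

func main() {
	http.HandleFunc("/customers/", customerHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```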
Despite its advantages, REST has challenges, like potential performance issues and HTTP overhead.
REST works well for synchronous request-response interfaces, caching, and sharing APIs with external parties. However, it may not be the best choice for general microservice-to-microservice communication, where more efficient communication methods might be preferable.
GraphQL
GraphQL has become popular because it lets a client define a query for exactly the data it needs, avoiding several round trips to gather the same information. That is a real benefit for resource-constrained client devices, and it removes the need to build custom server-side aggregation endpoints.
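For a feel of what that single query looks like from the client side, here is a sketch in Go; the endpoint URL and the customer/order fields are hypothetical.

```go
// Sketch: one GraphQL query replacing what might otherwise be several
// REST calls. The URL and field names are invented for illustration.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// The client asks for exactly the fields it needs, in one request.
	query := `{
	  customer(id: "123") {
	    name
	    recentOrders { id total }
	    loyaltyPoints
	  }
	}`

	body, _ := json.Marshal(map[string]string{"query": query})
	resp, err := http.Post("https://api.example.com/graphql", // hypothetical endpoint
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result map[string]any
	json.NewDecoder(resp.Body).Decode(&result)
	fmt.Println(result["data"])
}
```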
However, dynamic client queries can put a significant load on server resources, and caching in GraphQL is more complex than in typical REST-based HTTP APIs.
GraphQL is best suited for offering functionality to external clients at the system's edge. It can efficiently serve external APIs that require multiple calls to gather information. However, it complements general microservice-to-microservice communication rather than replacing it.
Message Brokers
Message brokers act as middlemen between processes, enabling communication between microservices. They are commonly used for asynchronous communication, providing guaranteed delivery and support for various types of messages.
Messages can represent requests, responses, or events. Instead of microservices directly communicating, a microservice sends a message to a broker, which ensures delivery.
There are two communication mechanisms in message brokers: queues (point-to-point) and topics (one-to-many). Queues are suitable for request/response scenarios, while topics work better for event-based collaboration.
Message brokers provide guaranteed delivery, holding on to a message until the receiving destination is available again. Some brokers also offer transactions on write, and a few even claim exactly-once delivery, though such guarantees are subtle and worth reading the fine print on.
Examples of message brokers include RabbitMQ, ActiveMQ, and Kafka. Some cloud providers offer managed message broker services like AWS's SQS and SNS.
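As an illustration of the point-to-point queue style, here is a sketch that publishes a message to RabbitMQ using the github.com/rabbitmq/amqp091-go client; the queue name, message body, and local broker address are assumptions.

```go
// Sketch: sending a message to a RabbitMQ queue (point-to-point style).
// Assumes a broker on localhost and invents the queue name and payload.
package main

import (
	"context"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Declare the queue; the broker holds the message until a consumer
	// (the receiving microservice) is available to pick it up.
	q, err := ch.QueueDeclare("order-requests", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	err = ch.PublishWithContext(context.Background(),
		"",     // default exchange
		q.Name, // routing key = queue name
		false, false,
		amqp.Publishing{
			ContentType: "application/json",
			Body:        []byte(`{"orderId":"o-42","action":"reserve-stock"}`),
		})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("message handed to the broker")
}
```

The sender's responsibility ends once the broker accepts the message; delivery to the downstream microservice is the broker's job.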
Serialization Formats
Textual Formats
Using standard textual formats, like JSON, offers a lot of flexibility, although some still prefer XML for its better tool support. JSON is favored for its compatibility with browsers and perceived simplicity, while Avro, which defines its schemas in JSON, is an interesting option when you want more structure. The choice of format comes down to your specific needs and preferences.
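To show why JSON is seen as simple, here is what a small payload looks like on the wire with Go's encoding/json; the Order type and its fields are illustrative.

```go
// Sketch: serializing a message as JSON with the standard library.
package main

import (
	"encoding/json"
	"fmt"
)

type Order struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
	Email  string  `json:"email,omitempty"` // optional: left off the wire when empty
}

func main() {
	payload, err := json.Marshal(Order{ID: "o-42", Amount: 19.99})
	if err != nil {
		panic(err)
	}
	// Human-readable output, easy to inspect in logs or a browser:
	// {"id":"o-42","amount":19.99}
	fmt.Println(string(payload))
}
```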
Binary Formats
Binary serialization formats are more compact on the wire and faster to encode and decode. Protocol buffers are the most commonly used in microservice-based communication, but formats like Simple Binary Encoding, Cap'n Proto, and FlatBuffers each have distinct advantages. Whether those advantages matter depends on your use case; they tend to pay off most in ultra-low-latency distributed systems.
Schemas
Schemas help define what information endpoints expose and accept. The choice of serialization format often dictates the schema technology. Some technologies require clear schemas, like SOAP or gRPC, while others make schema usage optional.
Schemas assist in identifying structural changes, while testing is necessary for understanding any semantic changes. Using an explicit schema is more beneficial than schemaless communication because it provides a clear understanding of the agreement between the client and server.
Avoiding Disruptive Changes
Expansion Changes
Start by adding new things to a microservice's agreement without removing anything else. For example, adding a new data field to a message should be fine if clients can handle such changes.
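Here is a small sketch of what an expansion change looks like from the consumer's side, assuming JSON payloads and invented field names: the producer starts sending a dateOfBirth field, and a client built against the old shape keeps working because Go's encoding/json simply skips fields it doesn't recognize.

```go
// Sketch: an expansion change. The producer added "dateOfBirth", but an
// old client decoding into CustomerV1 is unaffected.
package main

import (
	"encoding/json"
	"fmt"
)

// Old client-side view of the message: no date-of-birth field.
type CustomerV1 struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

func main() {
	// New payload produced by the upgraded microservice.
	payload := []byte(`{"id":"123","name":"Ada","dateOfBirth":"1815-12-10"}`)

	var c CustomerV1
	if err := json.Unmarshal(payload, &c); err != nil {
		panic(err)
	}
	// The old client still works; the extra field is simply ignored.
	fmt.Printf("%+v\n", c)
}
```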
Tolerant Reader
Avoid tightly binding client code to the microservice's interface. Use techniques like XPath to extract needed fields, making clients adaptable to field changes.
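Along the same lines, here is a tolerant-reader sketch in Go: instead of binding to the whole payload, the client declares only the two fields it cares about, so the rest of the document can grow or be reshuffled without breaking it (the field names are illustrative).

```go
// Sketch: a tolerant reader. The client declares only what it needs and
// ignores the rest of the payload.
package main

import (
	"encoding/json"
	"fmt"
)

// Only the fields this consumer cares about.
type emailDetails struct {
	Customer struct {
		Name  string `json:"name"`
		Email string `json:"email"`
	} `json:"customer"`
}

func main() {
	payload := []byte(`{
	  "customer": {"id": "123", "name": "Ada", "email": "ada@example.com"},
	  "shipping": {"address": "12 Engine St", "method": "express"},
	  "loyalty":  {"points": 90}
	}`)

	var d emailDetails
	if err := json.Unmarshal(payload, &d); err != nil {
		panic(err)
	}
	fmt.Println(d.Customer.Name, d.Customer.Email)
}
```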
Choosing the Right Technology
Select technologies that allow changes without disrupting clients. For instance, Protocol Buffers identify each field by number, so new fields can be added without affecting existing clients, as long as the existing field numbers are left alone.
Explicit Interface
An explicit schema makes it clear what a microservice's endpoints offer and accept, and gives both sides a shared reference point for spotting changes that would break compatibility.
Detecting Incompatible Changes Early
Use schema comparison tools, like Protolock, json-schema-diff-validator, and openapi-diff, to detect structural changes. Wired into a build pipeline, they can stop incompatible schemas from ever reaching production.
Managing Disruptive Changes
Lockstep Deployment
Deploy the new version of the microservice together with updated versions of all its consumers, so everything changes at once. It works, but it sacrifices the independent deployability that microservices are supposed to give you.
Coexisting Incompatible Microservice Versions
Run both versions of the service side by side, directing older consumers to the old version and newer consumers to the new one. The drawbacks are real, though: bug fixes may have to be made and deployed twice, and routing consumers to the right version adds complexity.
Emulating the Old Interface
Deploy a new service version that exposes both old and new endpoints, allowing consumers time to transition. Once all consumers have moved to the new endpoint, remove the old one.
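Here is a minimal sketch of emulating the old interface over HTTP: one deployment answers on both the old /v1 path and the new /v2 path until consumers have migrated. The paths and the name-splitting change are invented for illustration.

```go
// Sketch: one service exposing both the old and the new contract.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type customer struct {
	ID        string `json:"id"`
	FirstName string `json:"firstName"`
	LastName  string `json:"lastName"`
}

func lookup(id string) customer {
	return customer{ID: id, FirstName: "Ada", LastName: "Lovelace"}
}

func main() {
	// Old contract: a single "name" field, kept alive for existing consumers.
	http.HandleFunc("/v1/customers/", func(w http.ResponseWriter, r *http.Request) {
		c := lookup(r.URL.Path[len("/v1/customers/"):])
		json.NewEncoder(w).Encode(map[string]string{
			"id":   c.ID,
			"name": c.FirstName + " " + c.LastName,
		})
	})

	// New contract: split name fields; consumers move here over time,
	// after which the /v1 handler can be deleted.
	http.HandleFunc("/v2/customers/", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(lookup(r.URL.Path[len("/v2/customers/"):]))
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```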
The DRY Principle in a Microservice World
In software development, the "DRY" principle, which stands for "Don't Repeat Yourself," advises against duplicating things unnecessarily. Instead of copying code, it encourages creating reusable code. However, when it comes to microservices, sharing code between services can create problems.
Sharing code, such as logging libraries, is fine within a single microservice. But once shared code crosses service boundaries, it becomes a source of coupling: each microservice bundles its own version of the library, so a change to the shared code means updating and redeploying every microservice that uses it.
In practice you end up with many different versions of the same code running at once, and updating them all together requires a coordinated deployment, which is exactly the kind of lockstep change microservices are meant to avoid.
Client libraries are often used to make consuming a service easier, but when the same team writes both the server and its client library, logic that belongs in the server has a habit of leaking into the client. Amazon Web Services (AWS) illustrates a healthier pattern: the software development kits (SDKs) are a layer over the underlying API, and they are often developed by different teams or by the community, which keeps the two concerns separate.
Netflix uses client libraries for reliability and scalability, but this can introduce some coupling problems.
To handle client libraries well, it's important to keep the code for the transport protocol separate from the code related to the destination service. You should also decide whether to require using client libraries or allow different technology stacks to make calls to the API. Let the clients decide when to update their libraries to maintain the ability to release services independently.
Understanding Service Discovery
Using DNS for Service Discovery
Service discovery can start with simple DNS associations between names and IP addresses. For example, the Accounts microservice might be linked to "accounts.service.net." But updating these entries when you deploy new services can be tough.
You can create a naming convention for different environments, such as "accounts-uat.service.net."
More advanced approaches involve separate domain name servers for various environments, which can point to different hosts based on where you look them up. This works well in some cases but can be complex.
DNS has its advantages, but it can be hard to manage in an environment where hosts are frequently changed or updated. Tools like Consul can help with these challenges.
DNS entries have a "time to live" (TTL) that tells clients how long they may treat a cached entry as fresh, so when a service instance moves, consumers can keep resolving to a stale address until their cache expires. A common workaround is to have the DNS entry point at a load balancer and update the load balancer's pool of instances instead of the DNS record itself.
While DNS is widely supported, it might not be the best choice for all situations. For multiple instances of a host, DNS entries that resolve to load balancers can be a better option.
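For illustration, here is how a Go client might resolve a service by name, first with a plain host lookup and then with an SRV lookup that also returns ports. Both names are hypothetical, and the SRV query assumes something like Consul's DNS interface is answering.

```go
// Sketch: DNS-based service lookup from a client.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Plain A-record lookup: one name, whatever addresses it currently maps to.
	addrs, err := net.LookupHost("accounts.service.net") // hypothetical name
	if err == nil {
		fmt.Println("addresses:", addrs)
	}

	// SRV lookup: returns host *and* port for each instance of the service,
	// which suits environments with many short-lived instances.
	_, srvs, err := net.LookupSRV("accounts", "tcp", "service.consul") // hypothetical domain
	if err == nil {
		for _, s := range srvs {
			fmt.Printf("%s:%d\n", s.Target, s.Port)
		}
	}
}
```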
Dynamic Service Registries
Service discovery methods can go beyond DNS, especially in dynamic environments. Here are a few alternatives:
- ZooKeeper:
  - Originally made for the Hadoop project.
  - Offers a way to store information in a hierarchy.
  - Allows clients to add, change, and look up nodes.
  - Supports notifications for changes.
  - Used for configuration management and leader elections but might not be the best fit for dynamic service registration.
- Consul:
  - Supports configuration management and service discovery.
  - Has an HTTP interface for service discovery (see the sketch after this list).
  - Includes an integrated DNS server with SRV records.
  - Conducts health checks on nodes.
  - Works well with various technology stacks and has tools like consul-template for dynamic updates.
  - Can be used with Vault for managing secrets.
- etcd and Kubernetes:
  - Kubernetes uses etcd for managing configuration info.
  - Kubernetes handles service discovery for containerized workloads.
  - Matches metadata associated with pods to identify services.
  - Ideal for Kubernetes environments but might not be the best choice in mixed platforms.
- Rolling Your Own:
  - Create a custom system for service discovery.
  - One example is tagging AWS instances with metadata.
  - Use AWS APIs to find relevant machines.
  - This method allows for rich metadata association with instances.
  - Building your own system isn't recommended anymore due to the availability of mature service discovery tools.
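As promised above, here is a sketch of calling Consul's HTTP service-discovery interface directly. It assumes a Consul agent on localhost:8500 and a registered service named "accounts"; only a couple of the returned fields are decoded.

```go
// Sketch: asking Consul for the healthy instances of a service.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Subset of the fields Consul returns for each healthy instance.
// Service.Address may be empty when the instance inherits the node's address.
type entry struct {
	Service struct {
		Address string
		Port    int
	}
}

func main() {
	resp, err := http.Get("http://localhost:8500/v1/health/service/accounts?passing=true")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var entries []entry
	if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("accounts instance at %s:%d\n", e.Service.Address, e.Service.Port)
	}
}
```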
Choosing a service discovery method depends on your specific needs and environment. Each solution has its own advantages and considerations, so pick the one that best suits your requirements.