Advances in cloud computing have shown that a microservices architecture is an effective solution to the issues introduced by monolithic systems. The previous iteration of this idea was Service-Oriented Architecture (SOA), a term coined in the 1990s.
The general design pattern is not new, but the rise of highly scalable infrastructure-provisioning technologies (AWS, Azure, Google Cloud) has pushed it back into the mainstream. Leading companies have adopted microservices to decrease the compute costs of their systems.
There are three critical challenges with a microservices implementation. The first is immediate: where we previously passed objects around in memory, we now pass them across the wire. Sending information over the wire requires a significant amount of work. The typical journey involves:
- Serialize a request into a message that we can send as bits across the wire.
- Send the request via a network card.
- Translate the data into packets.
- Receive the packets on the other end.
- Deserialize and finally turn the data into an in-memory object the requester can use.
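The round trip above can be sketched in Go. A real gRPC service would serialize with protobuf; this minimal sketch uses the standard library's `encoding/json` instead, and the `Request` type is a made-up example.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Request is a hypothetical in-memory object we want to send over the wire.
type Request struct {
	UserID int    `json:"user_id"`
	Query  string `json:"query"`
}

// encodeRequest serializes the in-memory object into bytes for the wire.
func encodeRequest(r Request) ([]byte, error) {
	return json.Marshal(r)
}

// decodeRequest rebuilds an in-memory object on the receiving side.
func decodeRequest(b []byte) (Request, error) {
	var r Request
	err := json.Unmarshal(b, &r)
	return r, err
}

func main() {
	req := Request{UserID: 42, Query: "orders"}

	wire, err := encodeRequest(req) // these bytes cross the network as packets
	if err != nil {
		panic(err)
	}

	got, err := decodeRequest(wire) // the requester gets a usable object back
	if err != nil {
		panic(err)
	}
	fmt.Println(got.UserID, got.Query) // prints "42 orders"
}
```

Every hop in the system repeats this encode/decode pair, which is where the performance penalty discussed below comes from.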
All of this carries a performance penalty, and the work is repeated every time we send a message. Modern microservice implementations comprise many layers and operations, such as fan-out, where a single inbound request triggers calls to several downstream services. With each added layer, the cumulative cost of serialization and deserialization grows as the system evolves.
The second major challenge is network contention. When networks are unreliable and payloads are large, system performance suffers.
The third challenge arises from the increased volume of services in a given system. Current microservices systems often dedicate a single machine to a single service. Newer systems with a higher volume of services introduce two additional requirements: a single machine must run multiple services, and a single service must run across multiple machines.
gRPC => a collection of libraries and tools that allow us to create APIs (clients and servers) in many different languages. It relies on Protocol Buffers (protobufs), a strongly typed and binary-efficient mechanism for serializing and deserializing messages, with a syntax reminiscent of Go.
gRPC is a transport built on HTTP/2, giving it access to features like bidirectional streaming.
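A minimal protobuf definition might look like the following sketch; the service and message names are hypothetical. The `stream` keyword is what exposes the HTTP/2 bidirectional streaming mentioned above.

```proto
syntax = "proto3";

package orders.v1;

// Hypothetical request/response messages for illustration.
message GetOrderRequest {
  int64 user_id = 1;
}

message Order {
  int64 id = 1;
  string status = 2;
}

service OrderService {
  // Unary call: one request, one response.
  rpc GetOrder(GetOrderRequest) returns (Order);

  // Bidirectional stream over a single HTTP/2 connection.
  rpc WatchOrders(stream GetOrderRequest) returns (stream Order);
}
```

From this schema, the protobuf compiler generates the strongly typed client and server code for each target language.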
gRPC provides control over retries, flow control, and rate management: all things required for building a robust client.
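Retries, for example, are configured declaratively through gRPC's service config mechanism. The fragment below is a sketch; the service name matches the hypothetical example above, and the exact values are illustrative.

```json
{
  "methodConfig": [{
    "name": [{ "service": "orders.v1.OrderService" }],
    "retryPolicy": {
      "maxAttempts": 4,
      "initialBackoff": "0.1s",
      "maxBackoff": "1s",
      "backoffMultiplier": 2,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}
```

The client library applies this policy transparently, retrying failed calls with exponential backoff instead of forcing every caller to implement that logic by hand.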
The language of choice = Go. Go has a small runtime footprint, which matters when services are small and numerous. Every business has an economic incentive to minimize compute costs. Being able to run a process with as little memory and system overhead as possible means we can pack more processes into a finite amount of compute.
Next, I will explore the structure of a gRPC microservice.