Lochie

Reasons to adopt gRPC in existing systems

Microservices

Advances in cloud computing have shown that a microservices architecture is an effective answer to many of the issues introduced by monolithic systems. The previous iteration of this idea was SOA, or Service-Oriented Architecture, a term coined in the 90s.

The general design pattern is not new, but the rise of highly scalable infrastructure provisioning technologies (AWS, Azure, Google Cloud) has pushed it back into the mainstream. Leading companies have adopted microservices to decrease the compute costs of their systems.

Challenges with microservices

There are three critical challenges with a microservices implementation. The first is that where we previously passed objects around in memory, we now pass them across the wire. Sending information over the wire requires a significant amount of work. The typical journey involves:

  • Serialize a request into a message that we can send as bits across the wire.
  • Send the request via a network card.
  • Translate the data into packets.
  • Receive the packets on the other end.
  • Deserialize and finally turn the data back into an in-memory object the receiving service can use.

All of this carries a performance penalty, and we pay it every time we send a message. Modern microservice implementations comprise many layers and operations, such as fan-out, so the cost of serialization and deserialization compounds as the system evolves.
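
As a rough illustration, here is a minimal Go sketch of that round trip using plain JSON from the standard library; the `Order` type, its fields, and the payload are made up for the example.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Order is a hypothetical payload; any struct that crosses a service
// boundary has to go through the same marshal/unmarshal cycle.
type Order struct {
	CustomerID string   `json:"customer_id"`
	ItemIDs    []string `json:"item_ids"`
}

func main() {
	// Serialize: turn the in-memory object into bytes we can put on the wire.
	raw, err := json.Marshal(Order{CustomerID: "c-42", ItemIDs: []string{"a", "b"}})
	if err != nil {
		log.Fatal(err)
	}

	// In a real system, `raw` would now be written to a socket and travel
	// across the network as packets.

	// Deserialize: turn the received bytes back into an in-memory object.
	var out Order
	if err := json.Unmarshal(raw, &out); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("sent %d bytes, received an order for %s\n", len(raw), out.CustomerID)
}
```

Every hop in a fan-out pays this marshal/unmarshal cost, which is why the choice of serialization format matters.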

The second major challenge is network contention. When dealing with unreliable networks and large payloads, system performance suffers.

The third challenge arises from the increased number of services in a given system. Current microservice systems often dedicate a single machine to a single service. Newer systems with a higher volume of services have two requirements: a single machine must be able to run multiple services, and a single service must be able to run across multiple machines.

Answers to the challenges

gRPC => a collection of libraries and tools that let us build APIs (clients and servers) in many different languages. It relies on Protocol Buffers (protobufs), a strongly typed, binary-efficient mechanism for serializing and deserializing messages, with a syntax similar to Go.

gRPC uses HTTP/2 as its transport, giving it access to features like bidirectional streaming.
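
To make this concrete, here is a hypothetical `.proto` sketch; the `Orders` service and message names are made up for illustration, and `protoc` would generate the matching client and server stubs from it:

```protobuf
syntax = "proto3";

package orders.v1;

// Strongly typed messages; numbered fields keep the wire format compact.
message OrderRequest {
  string customer_id = 1;
  repeated string item_ids = 2;
}

message OrderUpdate {
  string order_id = 1;
  string status = 2;
}

service Orders {
  // A plain unary call.
  rpc PlaceOrder(OrderRequest) returns (OrderUpdate);

  // HTTP/2 allows bidirectional streaming: both sides can send messages
  // over the same connection as they become available.
  rpc TrackOrders(stream OrderRequest) returns (stream OrderUpdate);
}
```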

gRPC gives us control over retries, flow control, and rate management: all things required for building a robust client.
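
Retries, for example, can be declared in gRPC-Go through a service config rather than hand-rolled loops. A minimal sketch, assuming the hypothetical `orders.v1.Orders` service above and a made-up target address:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// A retry policy expressed as a service config; gRPC-Go applies it to
// matching methods, so the calling code stays simple.
const serviceConfig = `{
  "methodConfig": [{
    "name": [{"service": "orders.v1.Orders"}],
    "retryPolicy": {
      "MaxAttempts": 3,
      "InitialBackoff": "0.1s",
      "MaxBackoff": "1s",
      "BackoffMultiplier": 2.0,
      "RetryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}`

func main() {
	// Dial the (made-up) backend address with the retry policy attached.
	conn, err := grpc.Dial(
		"orders.internal:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(serviceConfig),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// conn can now be handed to any generated client stub.
}
```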

The language of choice = Go. Go has a small runtime footprint, which matters when services are small and there are lots of them. Every business has an economic incentive to minimize compute costs. Being able to run a process with as little memory and system overhead as possible means we can pack more processes into a finite amount of compute.

What is next?

Next, I will explore the structure of a gRPC microservice.

Top comments (2)

Kishan B • Edited

One thing I have wondered is: how do big projects maintain all the protobuf files? You have a producer at v1 and, say, 50 consumers; the protobuf contract must be shared. And say the producer moves to v2: how do the 50 consumers get notified and update? Of course, backwards compatibility has to be taken care of when going to v2.

More than network optimization, these are the things I worry about.

Also, is it possible for the browser (JavaScript) to talk gRPC to a backend service?
If not, how is this handled? Is there a BFF (backend for frontend) layer that translates gRPC to REST so it can be consumed by frontend JS frameworks (React, Vue, etc.)?

Lochie • Edited

Perhaps an idea for maintaining protobufs might be storing them inside a "protorepo". Here is an article discussing this idea: gonuclei.com/resources/how-we-are-....

The design I'm currently exploring looks like:
(L1) FE <--> (L2) JSON/HTTP API <--> (L3) protobuf/gRPC.
L2 provides a bridge between the FE and the service-to-service gRPC APIs.
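
A rough Go sketch of what that L2 bridge could look like; the `pb` import stands in for whatever package protoc would generate from a real proto, so the types and import path are hypothetical:

```go
package main

import (
	"encoding/json"
	"net/http"

	// Hypothetical import path for the code protoc would generate
	// from an orders.v1 proto definition.
	pb "example.com/gen/orders/v1"
)

// placeOrderHandler is the L2 bridge: it accepts JSON over HTTP from the
// frontend (L1) and forwards the request to the gRPC backend (L3).
func placeOrderHandler(client pb.OrdersClient) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body struct {
			CustomerID string   `json:"customer_id"`
			ItemIDs    []string `json:"item_ids"`
		}
		if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}

		// Translate the JSON request into a protobuf message and make the gRPC call.
		resp, err := client.PlaceOrder(r.Context(), &pb.OrderRequest{
			CustomerId: body.CustomerID,
			ItemIds:    body.ItemIDs,
		})
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}

		// Translate the gRPC response back into JSON for the frontend.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(resp)
	}
}
```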

grpc.io/blog/state-of-grpc-web/