Originally posted on apiscene
API communication is one of the most discussed topics in recent Computer Science, especially since the massive adoption of the microservices style of software development. It attracts growing interest in both academic and applied research, aimed at reducing the time needed to transfer information between components. In modern systems those components are expected to maintain a massive and unpredictable volume of connections to one another, far higher than in the recent past and with high variance over time.
Over the last decade, REST principles have been the most widely used paradigm for deploying an API. Although no official, detailed standard exists, the OpenAPI Initiative's specification is treated as the de facto standard for building RESTful APIs.
Most organizations that distribute their software as a service organize their APIs around that specification, customizing the details of coding style and deployment logic. This allows API implementations to be versioned and to evolve easily over time.
Moreover, REST APIs map the CRUD operations onto the HTTP request methods (POST, GET, PUT, DELETE), reusing the protocol's own semantics. This makes it easy to understand how to interact with an entity and to form a fair expectation of what the API will return.
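A minimal sketch of that mapping, with an in-memory store and handler names that are purely illustrative (no particular framework is assumed):

```python
# Each handler corresponds to one CRUD operation / HTTP method pair
# for a hypothetical /users resource.
users = {}
next_id = 1

def create_user(payload):           # POST /users
    global next_id
    user = {"id": next_id, **payload}
    users[next_id] = user
    next_id += 1
    return user

def read_user(user_id):             # GET /users/{id}
    return users.get(user_id)

def update_user(user_id, payload):  # PUT /users/{id}
    users[user_id] = {"id": user_id, **payload}
    return users[user_id]

def delete_user(user_id):           # DELETE /users/{id}
    return users.pop(user_id, None)

created = create_user({"name": "Ada"})
```

Knowing the method is enough to predict the behavior: a GET never mutates the resource, a DELETE removes it, and so on.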
The best reason to expose services through a REST API is that it decouples them. Specific functions can be kept independent and easily distributed to external customers and partners, or integrated, for example, with different frontend pages.
The main issue when building a REST API is certainly performance. To mitigate it, REST APIs are usually paired with caching systems, but even then it is hard to achieve satisfying performance with large payloads, because of the limits of serialization and the load placed on the server side.
Before the broad adoption of the REST paradigm, an early form of API interaction was RPC (Remote Procedure Call). The idea was simply to execute a block of code on another machine while calling it as if it were a local function. A stub included in the compiled code acts as the procedure call: when the program runs and the call is issued, the stub receives the request and forwards it to the client runtime on the local machine.
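The stub idea can be illustrated with a toy, fully in-process sketch: the caller invokes what looks like a local function, while the stub marshals the call into a message and a dispatcher on the "server" side unmarshals and executes it. Real RPC frameworks add networking, IDL-generated stubs and error handling; every name here is made up for illustration.

```python
import json

def server_dispatch(message: bytes) -> bytes:
    """Pretend server runtime: decode the request, run the procedure."""
    procedures = {"add": lambda a, b: a + b}
    request = json.loads(message)
    result = procedures[request["method"]](*request["params"])
    return json.dumps({"result": result}).encode()

class Stub:
    """Client-side stub: turns attribute access into marshalled calls."""
    def __getattr__(self, method):
        def call(*params):
            message = json.dumps({"method": method, "params": list(params)}).encode()
            reply = server_dispatch(message)  # in real RPC, a network hop
            return json.loads(reply)["result"]
        return call

remote = Stub()
result = remote.add(2, 3)  # looks like a local call, runs "remotely"
```

The caller never sees the serialization, which is exactly what makes the RPC style ergonomic.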
The main RPC implementations of the past were XML-RPC/JSON-RPC and SOAP (Simple Object Access Protocol), but a fresh and powerful framework has emerged in recent years: gRPC.
gRPC is the result of Google's internal RPC experiments over the last 20 years. It was open-sourced in 2015 and later became part of the Cloud Native Computing Foundation landscape, with a great community supporting the project.
Enhancements in recent years have added support for browsers and HTTP, along with rich language-specific SDKs for Java, Python, C++, Go, Ruby, C#, Node.js, Android Java, Objective-C, PHP and more.
Other great features of gRPC are support for streaming (client, server and bidirectional) and the possibility of using it without the HTTP protocol.
The framework also performs very well, being optimized to serialize messages with protobuf (Protocol Buffers, a compact binary format defined by a schema language), with the additional advantage that the stub mechanism of the RPC model avoids much of the server-side load problem described earlier.
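Protobuf itself requires code generated from a `.proto` schema, so as a stand-in this sketch packs the same record with the standard-library `struct` module to show why a fixed-layout binary encoding beats a textual one; the field layout is hypothetical and far simpler than the real protobuf wire format.

```python
import json
import struct

record = {"id": 12345, "score": 0.87}

# Textual encoding: field names and punctuation travel with every message.
as_json = json.dumps(record).encode()

# Binary encoding: a 4-byte unsigned int plus an 8-byte double, 12 bytes
# total, with the field layout agreed in advance (as a schema would do).
as_binary = struct.pack("<Id", record["id"], record["score"])

# The binary form is several times smaller and cheaper to parse.
decoded_id, decoded_score = struct.unpack("<Id", as_binary)
```

The gap grows with repeated and nested fields, which is where protobuf's schema-driven encoding pays off most.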
However, gRPC also has its limits. Documentation and versioning are not always easy to achieve and can lead to an explosion of functions and poor discoverability. Another big limitation is the tight coupling between services that communicate through gRPC: every time you change the exchanged message, you must recompile, and possibly rewrite code in, both clients and servers.
In my experience, especially over the last year at BuildNN, I have implemented both paradigms, and our services expose both gRPC and REST interfaces. The logic is to use an endpoint in the REST gateway we developed internally when we need to exchange information between a backend service and external services, frontends or downstream APIs; when two BuildNN services share information with each other, we use gRPC so they can communicate with higher performance, using the protobuf serialization mentioned above.
In this way we get both: the ability to decouple the services we share with third parties or other teams, and very high performance together with the freedom to build services in different programming languages without serialization problems, which is very relevant for a data science startup.
Since they are different tools of the same “Swiss army knife”, we can use them together and take the best from each.