One thing I have wondered is: how do big projects maintain all the protobuf files? You have a producer at v1 and, say, 50 consumers; the protobuf contract must be shared, and when the producer moves to v2, how do the 50 consumers get notified and update? Of course, backwards compatibility has to be taken care of when going to v2.
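On the backwards-compatibility point, protobuf's own evolution rules do most of the work: consumers on the old schema ignore fields they don't know about, as long as field numbers are never reused. A hypothetical sketch of a v1 → v2 change (the `User` message and its fields are made up for illustration):

```proto
syntax = "proto3";
package example.user;

message User {
  string id = 1;
  string name = 2;
  // v2 addition: a new field gets a fresh number, so v1 consumers
  // simply skip it on the wire and keep working unchanged.
  string email = 3;
  // A field deleted in v2 has its number and name reserved, so a
  // later change can never reuse them with a different meaning.
  reserved 4;
  reserved "nickname";
}
```

With rules like these enforced (linters such as Buf can check them in CI), the producer can roll out v2 without coordinating a simultaneous upgrade of all 50 consumers.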
More than network optimization, these are the things I worry about.
Also, is it possible for the browser (JavaScript) to talk gRPC to a backend service?
If not, how is this handled? Is there a BFF (backend for frontend) layer which translates gRPC to REST so it is consumable by frontend JS frameworks (React, Vue, etc.)?
One idea for maintaining protobufs is storing them in a single shared "protorepo". Here is an article discussing this approach: gonuclei.com/resources/how-we-are-....
The design I'm currently exploring looks like:
(L1) FE <--> (L2) json/http api <--> (L3) protobuf/grpc.
L2 provides a bridge between the FE and the service-to-service gRPC APIs.
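A minimal sketch of what that L2 bridge might do per request, with the generated gRPC stub mocked out so it runs standalone (the `UserStub`/`GetUser` names are hypothetical, not from any real generated code):

```python
import json

# Stand-in for a protoc-generated gRPC stub; in a real L2 service this
# would be generated client code holding a channel to the L3 backend.
class FakeUserStub:
    def GetUser(self, request):
        # A real stub takes and returns protobuf messages; dicts stand in here.
        return {"id": request["id"], "name": "Ada"}

def handle_get_user(http_body: str, stub) -> str:
    """L2 bridge: decode the FE's JSON, call the gRPC stub, re-encode as JSON."""
    payload = json.loads(http_body)               # JSON in from the frontend
    reply = stub.GetUser({"id": payload["id"]})   # gRPC call (mocked above)
    return json.dumps(reply)                      # JSON back out to the frontend

print(handle_get_user('{"id": "42"}', FakeUserStub()))
```

In practice this translation layer doesn't have to be hand-written: tools like grpc-gateway generate the JSON/HTTP ↔ gRPC mapping from annotations in the proto files themselves.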
grpc.io/blog/state-of-grpc-web/