<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Let's Do Tech</title>
    <description>The latest articles on DEV Community by Let's Do Tech (@letsdotech).</description>
    <link>https://dev.to/letsdotech</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F335788%2F1bd0bcd9-6699-45ba-aa16-6afeb08a725b.png</url>
      <title>DEV Community: Let's Do Tech</title>
      <link>https://dev.to/letsdotech</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/letsdotech"/>
    <language>en</language>
    <item>
      <title>gRPC for microservices in Kubernetes</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Wed, 02 Apr 2025 20:43:24 +0000</pubDate>
      <link>https://dev.to/letsdotech/grpc-for-microservices-in-kubernetes-oel</link>
      <guid>https://dev.to/letsdotech/grpc-for-microservices-in-kubernetes-oel</guid>
      <description>&lt;p&gt;I have been experimenting with gRPC for some time now. I wrote some articles to cover the basics like &lt;a href="https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc"&gt;What is gRPC?&lt;/a&gt; &lt;a href="https://dev.to/letsdotech/implementing-ssltls-auth-in-grpc-2jmf"&gt;SSL/TLS Auth in gRPC&lt;/a&gt;, and &lt;a href="https://dev.to/letsdotech/grpc-communication-patterns-388l"&gt;communication patterns used in gRPC&lt;/a&gt;. In these topics I went through some of the advantages of gRPC over traditional REST API for inter-service communication – especially in a distributed architecture which led me to wonder about how gRPC works in Kubernetes environment. The crux is – gRPC offers great performance using Protobuf and natively supports uni and bi directional streaming.&lt;/p&gt;

&lt;p&gt;In all the previous blogs, I used the analogy of a calculator server and clients calling its arithmetic operations over gRPC. In this blog, I take the same example one step further – deploying these services on a K8s cluster to demonstrate how you can use gRPC in a Kubernetes context. Specifically, in this post I will:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Containerize the client and server applications using Docker&lt;/li&gt;
&lt;li&gt;Prepare Kubernetes deployment YAMLs for these services&lt;/li&gt;
&lt;li&gt;Prepare Kubernetes service YAML to expose the calculator server&lt;/li&gt;
&lt;li&gt;Make sure uni- and bidirectional communication works between the client and server&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Please note that gRPC is used in Kubernetes in multiple ways – in Ingress load balancers, service meshes, etc. In fact, K8s itself uses gRPC for efficient communication between the kubelet and the CRI (Container Runtime Interface). This post &lt;strong&gt;does not aim to cover these complex patterns&lt;/strong&gt;, and instead sticks to the basics of making your microservices running on a K8s cluster use gRPC.&lt;/p&gt;

&lt;p&gt;Note: &lt;a href="https://github.com/letsdotech/blog-examples/tree/main/04-grpc-k8s" rel="noopener noreferrer"&gt;You can access the code discussed in this blog post here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is gRPC a great choice for microservices on Kubernetes?
&lt;/h2&gt;

&lt;p&gt;There are several reasons to use gRPC in any distributed architecture. Kubernetes is a container orchestration platform capable of managing thousands of instances of hundreds of microservices across many nodes. These instances communicate with each other over K8s Services, which also offer load balancing and routing of traffic to the appropriate deployments. gRPC creates highly performant interfaces to the functionality offered by these microservices – calling a remote procedure with gRPC feels almost like making a local function call. Below are more details on why this is advantageous.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Performance with Protocol Buffers:&lt;/strong&gt; Protocol Buffers drastically reduce payload size through compact binary serialization. Additionally, gRPC runs over HTTP/2, whose multiplexing enables concurrent and bidirectional streams over a single connection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strongly-typed service definitions:&lt;/strong&gt; Microservices may be developed in multiple programming languages. gRPC generates a fixed, strongly-typed contract for their interfaces in each native language. This reduces errors and simplifies debugging.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These two aspects provide multiple opportunities to improve the overall agility of the system, while also helping optimize costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerizing the microservices
&lt;/h2&gt;

&lt;p&gt;I haven’t changed the &lt;a href="https://github.com/letsdotech/blog-examples/blob/main/04-grpc-k8s/server/main.go" rel="noopener noreferrer"&gt;calculator server code&lt;/a&gt; much since the last blog, since it simply exposes the arithmetic functionality. This &lt;a href="https://github.com/letsdotech/blog-examples/blob/main/04-grpc-k8s/server/Dockerfile" rel="noopener noreferrer"&gt;Dockerfile&lt;/a&gt; builds the calculator server image. When run, the server starts listening on port 50051. The SSL/TLS auth remains in place, as described in this &lt;a href="https://dev.to/letsdotech/implementing-ssltls-auth-in-grpc-2jmf"&gt;blog post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;For the client service, I want to simulate periodic random requests to the calculator server to consume this arithmetic functionality. The infinite for loop runs every 3 seconds and calls a randomly selected function exposed by the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for {
      // Randomly select a function
      randomIndex := rand.Intn(len(functions))
      selectedFunction := functions[randomIndex]

      // Execute the selected function
      log.Printf("Executing function: %T", selectedFunction)
      selectedFunction(client)

      // Sleep for 3 seconds
      time.Sleep(3 * time.Second)
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Currently, the calculator server exposes the four functions below, each using a different communication pattern (&lt;a href="https://dev.to/letsdotech/grpc-communication-patterns-388l"&gt;more details&lt;/a&gt;).&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;Add()&lt;/code&gt; – Unary communication&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;GenerateNumbers()&lt;/code&gt; – Server streaming&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ComputeAverage()&lt;/code&gt; – Client streaming&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ProcessNumbers()&lt;/code&gt; – Bidirectional streaming&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Next, we containerize the client application using the Dockerfile below, preparing it for deployment on Kubernetes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Build stage
FROM golang:1.22-alpine AS builder

WORKDIR /app

# Copy go mod files
COPY go.mod go.sum ./
RUN go mod download

# Copy source code
COPY client/ ./client/
COPY proto/ ./proto/

# Build the application
RUN CGO_ENABLED=0 GOOS=linux go build -o client ./client

# Final stage
FROM alpine:latest

WORKDIR /app

# Copy the binary from builder
COPY --from=builder /app/client .
COPY certs/ ./certs/

# Run the binary
CMD ["./client"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Kubernetes Deployment YAML files
&lt;/h2&gt;

&lt;p&gt;As seen from the diagram below, we need three main things to deploy the above application on Kubernetes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Server Deployment – to deploy the calculator server application&lt;/li&gt;
&lt;li&gt;Server Service – to expose server’s gRPC functionality as ClusterIP&lt;/li&gt;
&lt;li&gt;Client Deployment – to deploy instances of the client application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcbBjkA9DxUoAPRsjoym3XnxwZts0_PKbhq8fn_H-CRItPil5ZkAEnEk8AsSIv_TQWbD4DqAEdx2pylZJ21uQZHJBTdTCMb8Qp2QLc2Fz5vyRY0P1-t4kB5mi3D2DOajcy0vzH0%3Fkey%3D0Pk3pjWMVzDWaeze4xsGTr6A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcbBjkA9DxUoAPRsjoym3XnxwZts0_PKbhq8fn_H-CRItPil5ZkAEnEk8AsSIv_TQWbD4DqAEdx2pylZJ21uQZHJBTdTCMb8Qp2QLc2Fz5vyRY0P1-t4kB5mi3D2DOajcy0vzH0%3Fkey%3D0Pk3pjWMVzDWaeze4xsGTr6A" width="1600" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Server YAML
&lt;/h3&gt;

&lt;p&gt;To deploy the server pod, create a K8s manifest file as shown below. It uses the grpc-server image built in the last section to create containers, and exposes port 50051, where the gRPC service runs. Note the labels, which the Service manifest will later use as a selector to expose the calculator server on the internal network for other pods to consume.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
name: grpc-server
spec:
replicas: 1
selector:
  matchLabels:
    app: grpc-server
template:
  metadata:
    labels:
      app: grpc-server
  spec:
    containers:
    - name: grpc-server
      image: letsdotech/grpc-server:latest
      ports:
      - containerPort: 50051
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Client YAML
&lt;/h3&gt;

&lt;p&gt;Similarly, we use the manifest below to run the client application. We pass an environment variable named “SERVER_ADDRESS” so the containerized client application knows where to reach the gRPC-enabled calculator server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
name: grpc-client
spec:
replicas: 1
selector:
  matchLabels:
    app: grpc-client
template:
  metadata:
    labels:
      app: grpc-client
  spec:
    containers:
    - name: grpc-client
      image: letsdotech/grpc-client:latest
      env:
      - name: SERVER_ADDRESS
        value: "grpc-server-service:50051"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the environment variable value, you should already know what the name of the calculator server’s service would be. We will create this service in the next section.&lt;/p&gt;

&lt;h2&gt;
  
  
  K8s Service for Calculator server
&lt;/h2&gt;

&lt;p&gt;It is a simple ClusterIP Service, which also distributes traffic across multiple server instances running on the same K8s cluster. (One caveat: gRPC uses long-lived HTTP/2 connections, so a ClusterIP Service balances new connections rather than individual requests.) The metadata.name property specifies the service name, which is how the calculator server is identified on the K8s network. Note that the selector matches the label (app: grpc-server) we set in the server’s Deployment manifest – this is how the Service keeps tracking pods during scaling operations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
name: grpc-server-service
spec:
selector:
  app: grpc-server # Match this with your server deployment labels
ports:
- port: 50051
  targetPort: 50051
type: ClusterIP
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Running everything together
&lt;/h2&gt;

&lt;p&gt;In this section, we “apply” all the manifest files we have created and observe the deployment. Using the &lt;code&gt;kubectl apply&lt;/code&gt; command, we create all the resources on the K8s cluster. Run the following commands:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;kubectl apply -f server-deployment.yaml&lt;/code&gt; – to deploy the calculator server instance&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl apply -f server-service.yaml&lt;/code&gt; – to expose calculator server functionality for clients&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;kubectl apply -f client-deployment.yaml&lt;/code&gt; – to deploy client application&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Make sure everything is running fine using the &lt;code&gt;kubectl get all&lt;/code&gt; command, as seen in the output below. The two deployments, with their corresponding replica sets and pods, are created and running. The service exposing the calculator server is also present, and its name matches the environment variable we set in the client’s deployment YAML.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;kubectl get all
NAME                               READY   STATUS    RESTARTS   AGE
pod/grpc-client-c9b746db7-bczgh    1/1     Running   0          12s
pod/grpc-server-5665c65684-kkftg   1/1     Running   0          32s

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/grpc-server-service   ClusterIP   10.109.98.106   &amp;lt;none&amp;gt;        50051/TCP   2d23h
service/kubernetes            ClusterIP   10.96.0.1       &amp;lt;none&amp;gt;        443/TCP     5d23h

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grpc-client   1/1     1            1           12s
deployment.apps/grpc-server   1/1     1            1           32s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/grpc-client-c9b746db7    1         1         1       12s
replicaset.apps/grpc-server-5665c65684   1         1         1       32s
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This also means that the client application is already making random requests to consume the calculator server’s functions. The GIF below shows the output logs of both client and server.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcGojKo3CFgmTmiadolF8wpfWSqPGwDGFKsnVA5GEw83cVHKiPju2r5MfUNdDb9IB15NLvEa1vXt63eJNqSuEl_GVCcsR_FHa2J4nnq8IcuBC0PMA_NwitmaLIlp8QsmoxLrnyI%3Fkey%3D0Pk3pjWMVzDWaeze4xsGTr6A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcGojKo3CFgmTmiadolF8wpfWSqPGwDGFKsnVA5GEw83cVHKiPju2r5MfUNdDb9IB15NLvEa1vXt63eJNqSuEl_GVCcsR_FHa2J4nnq8IcuBC0PMA_NwitmaLIlp8QsmoxLrnyI%3Fkey%3D0Pk3pjWMVzDWaeze4xsGTr6A" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can scale the application up or down by changing the replicas in the deployment YAMLs. The final K8s deployment looks like the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXewrtfs6NNQXILkuPzCPIF5JLmHlb6EuLKcKrZ8s-KnQzgczCNdXGdR7AxASvW3mgA9DuGpZui0GE8OU5Qpl0gTOhgZLNqZ3QecW5JwWEQQ3tccGzPlTN1Tm-jpc7QfQQwQGg%3Fkey%3D0Pk3pjWMVzDWaeze4xsGTr6A" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXewrtfs6NNQXILkuPzCPIF5JLmHlb6EuLKcKrZ8s-KnQzgczCNdXGdR7AxASvW3mgA9DuGpZui0GE8OU5Qpl0gTOhgZLNqZ3QecW5JwWEQQ3tccGzPlTN1Tm-jpc7QfQQwQGg%3Fkey%3D0Pk3pjWMVzDWaeze4xsGTr6A" width="1600" height="900"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This article was originally published on &lt;a href="https://letsdote.ch" rel="noopener noreferrer"&gt;Let's Do Tech&lt;/a&gt; around a month ago. Subscribe to my &lt;a href="https://news.letsdote.ch" rel="noopener noreferrer"&gt;newsletter&lt;/a&gt; to stay updated weekly.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://letsdote.ch/post/grpc-in-kubernetes/" rel="noopener noreferrer"&gt;gRPC for microservices in Kubernetes&lt;/a&gt; appeared first on &lt;a href="https://letsdote.ch" rel="noopener noreferrer"&gt;Let's Do Tech&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>grpc</category>
    </item>
    <item>
      <title>gRPC Communication Patterns</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Tue, 25 Mar 2025 11:50:28 +0000</pubDate>
      <link>https://dev.to/letsdotech/grpc-communication-patterns-388l</link>
      <guid>https://dev.to/letsdotech/grpc-communication-patterns-388l</guid>
      <description>&lt;p&gt;One of the advantages gRPC offers over REST based services is the streaming bi-directional communication. Traditional implementations which depend on REST APIs often implement Web Sockets to enable real bi-directional streaming of data packets. I said real because it is still possible to simulate streaming behavior using REST, which of course is not performant, and makes little sense.&lt;/p&gt;

&lt;p&gt;gRPC offers four communication patterns – Unary, Server streaming, Client streaming, and Bi-directional streaming. Whatever communication scenario you are trying to code in your distributed system, these patterns cover it quite effectively.&lt;/p&gt;

&lt;p&gt;In this post, we will understand these patterns with the help of examples and diagrams. If you are not familiar with gRPC, check out this post – &lt;a href="https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc"&gt;Intro to gRPC and Protocol Buffers using Go&lt;/a&gt; – which provides a quick overview of gRPC with step-by-step instructions on how to set up gRPC in Golang-based microservices. The example discussed in that post involves a calculator server and a client that consumes the arithmetic functions exposed by the server. This post extends the analogy to demonstrate gRPC communication patterns.&lt;/p&gt;

&lt;p&gt;Access the complete code example &lt;a href="https://github.com/letsdotech/blog-examples/tree/main/03-grpc-communication-patterns" rel="noopener noreferrer"&gt;here&lt;/a&gt;. The code also implements &lt;a href="https://dev.to/letsdotech/implementing-ssltls-auth-in-grpc-2jmf"&gt;SSL based auth&lt;/a&gt; to secure the communication between client and server.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Protobuf implementation
&lt;/h2&gt;

&lt;p&gt;To use the various gRPC communication patterns, you first need to declare them in the proto file. All the patterns except Unary involve data streams in the request, the response, or both. Thankfully, implementing the gRPC plumbing for such streaming services is not complex, as the protoc compiler takes care of it. The file below defines a Calculator service that implements the 4 types of communication methods.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;syntax = "proto3";

package calc;

option go_package = "ldtgrpc03/proto";

service Calculator {

  // Unary RPC
  rpc Add(AddRequest) returns (AddResponse) {}

  // Server Streaming RPC
  rpc GenerateNumbers(GenerateRequest) returns (stream NumberResponse) {}

  // Client Streaming RPC
  rpc ComputeAverage(stream NumberRequest) returns (AverageResponse) {}

  // Bidirectional Streaming RPC
  rpc ProcessNumbers(stream NumberRequest) returns (stream NumberResponse) {}
}

message AddRequest {
  int64 num1 = 1;
  int64 num2 = 2;
}

message AddResponse {
  int64 result = 1;
}

message GenerateRequest {
  int64 limit = 1;
}

message NumberResponse {
  int64 number = 1;
}

message NumberRequest {
  int64 number = 1;
}

message AverageResponse {
  double result = 1;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The code above is largely self-explanatory; still, a few points are worth noting. The protoc compiler uses this file to generate native Go code, including all the underlying plumbing for both normal and streaming RPC communication. It leaves only the server implementation of these methods up to us – which makes sense.&lt;/p&gt;

&lt;p&gt;It is quite crucial to configure this proto file to correctly define the inbound and outbound streams for the various functions exposed by the server and consumed by the clients. When protoc generates the native Go code from this file, it automatically uses the appropriate grpc stream types, as seen in the snippet from the generated calc_grpc.pb.go file below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func (UnimplementedCalculatorServer) Add(context.Context, *AddRequest) (*AddResponse, error) {
   return nil, status.Errorf(codes.Unimplemented, "method Add not implemented")
}

func (UnimplementedCalculatorServer) GenerateNumbers(*GenerateRequest, grpc.ServerStreamingServer[NumberResponse]) error {
   return status.Errorf(codes.Unimplemented, "method GenerateNumbers not implemented")
}

func (UnimplementedCalculatorServer) ComputeAverage(grpc.ClientStreamingServer[NumberRequest, AverageResponse]) error {
   return status.Errorf(codes.Unimplemented, "method ComputeAverage not implemented")
}

func (UnimplementedCalculatorServer) ProcessNumbers(grpc.BidiStreamingServer[NumberRequest, NumberResponse]) error {
   return status.Errorf(codes.Unimplemented, "method ProcessNumbers not implemented")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From this point onward, it is assumed that the native Go code has been generated. For more details, refer to &lt;a href="https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc"&gt;this blog post&lt;/a&gt;, or to the &lt;a href="https://github.com/letsdotech/blog-examples/tree/main/03-grpc-communication-patterns" rel="noopener noreferrer"&gt;full code example here&lt;/a&gt;.&lt;/p&gt;
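&lt;p&gt;The reason we only need to implement the methods we care about is Go struct embedding: the generated UnimplementedCalculatorServer provides a default for every RPC, and our server type overrides them selectively. The same trick in miniature, with simplified non-stream signatures standing in for the generated ones:&lt;/p&gt;

```go
package main

import "fmt"

// Simplified stand-in for the protoc-generated base type: every method
// returns an "unimplemented" error until the server overrides it.
type UnimplementedCalculatorServer struct{}

func (UnimplementedCalculatorServer) Add(a, b int64) (int64, error) {
	return 0, fmt.Errorf("method Add not implemented")
}

func (UnimplementedCalculatorServer) ComputeAverage(nums []int64) (float64, error) {
	return 0, fmt.Errorf("method ComputeAverage not implemented")
}

// Our server embeds the base type and overrides only Add; calls to the
// other methods fall through to the embedded "unimplemented" stubs.
type server struct {
	UnimplementedCalculatorServer
}

func (s server) Add(a, b int64) (int64, error) { return a + b, nil }

func main() {
	var s server
	sum, _ := s.Add(10, 20)
	fmt.Println(sum) // 30
	_, err := s.ComputeAverage([]int64{1, 2, 3})
	fmt.Println(err) // method ComputeAverage not implemented
}
```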

&lt;h2&gt;
  
  
  Unary
&lt;/h2&gt;

&lt;p&gt;Unary communication is the simplest form of the gRPC communication patterns between the client and server. The example in the &lt;a href="https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc"&gt;introductory post&lt;/a&gt; actually implements the unary pattern between client and server to fetch the result of the Add() function from the server. Let us understand it here for the sake of completeness.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcbIwMsDLuHZXkIyw3R2smw7Au1chVJGOslnGMzWdKS2ZEJS86pqmdasKWdNDEDppcrYmNMe4TCbkbKNjHwfgcfeUGWzBXpGHSPMqn6GwbR1up0FwlPwLrCi0ISUdmpxPUv-zBS%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcbIwMsDLuHZXkIyw3R2smw7Au1chVJGOslnGMzWdKS2ZEJS86pqmdasKWdNDEDppcrYmNMe4TCbkbKNjHwfgcfeUGWzBXpGHSPMqn6GwbR1up0FwlPwLrCi0ISUdmpxPUv-zBS%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" alt="Unary gRPC Communication" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The diagram above shows how the gRPC client and server communicate using the unary pattern. It is similar to a REST API request, except gRPC is much more performant. The client calls the Add() function through the server interface exposed over gRPC – equivalent to sending a single request to the server. The server processes the request and responds with a single response. This is the Unary communication pattern in gRPC.&lt;/p&gt;

&lt;p&gt;The server implements the Add() function as shown below. It simply returns a response containing the sum of the two numbers sent as parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Unary RPC - Server
func (s *server) Add(ctx context.Context, req *pb.AddRequest) (*pb.AddResponse, error) {
   result := req.Num1 + req.Num2
   return &amp;amp;pb.AddResponse{Result: result}, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the corresponding client code that calls the Add() function, passing the numbers 10 and 20. This is as good as calling the Add() function locally. In reality, the calculator server implements the Add() function, and the code below simply calls it – i.e., sends a request to the server.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func unaryExample(client pb.CalculatorClient) {
   ctx := context.Background()
   resp, err := client.Add(ctx, &amp;amp;pb.AddRequest{Num1: 10, Num2: 20})
   if err != nil {
       log.Fatalf("could not add: %v", err)
   }
   log.Printf("Sum: %d", resp.Result)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the Unary communication pattern, perhaps the most basic implementation of gRPC. Below is the output of the Unary communication pattern from the above example.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdhVq5s3niQlJiPsb1RjYoOssOudNmW0ZuCTIAa1SG9wsvfiSr4WWXr5y_pHRaDVw1kZneC_fHEZVO4aQ6xmKzONcx5yarjddQzsa-KZXhB-P4GiiJTdW5F51DSLzRlMrM6RCA%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXdhVq5s3niQlJiPsb1RjYoOssOudNmW0ZuCTIAa1SG9wsvfiSr4WWXr5y_pHRaDVw1kZneC_fHEZVO4aQ6xmKzONcx5yarjddQzsa-KZXhB-P4GiiJTdW5F51DSLzRlMrM6RCA%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" width="800" height="69"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Server Streaming
&lt;/h2&gt;

&lt;p&gt;The server streaming pattern caters to scenarios where the server needs to stream data back to the clients. In the example below, the calculator server exposes a function to generate numbers – GenerateNumbers() – which generates a set of numbers when the client requests it. As a client developer, you just call the GenerateNumbers() function on the gRPC interface to invoke this procedure. The server then responds with a stream of numbers, as depicted in the diagram below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcYgXVMAhJSO_QknplhtlGxi3EFTd9t1shpem9AnZCcuCDETarOYOEblOH0zB00VAckFfV8MypOZfdiqZq4P18aOB-PstiT7veeaZYqkaxzmfopYQFqHi-ya35ivS4h_Yj_se9w%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcYgXVMAhJSO_QknplhtlGxi3EFTd9t1shpem9AnZCcuCDETarOYOEblOH0zB00VAckFfV8MypOZfdiqZq4P18aOB-PstiT7veeaZYqkaxzmfopYQFqHi-ya35ivS4h_Yj_se9w%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code below shows the implementation of the GenerateNumbers() function on the server. It accepts a “Limit” parameter from the calling client to cap how many numbers are generated. (Note that the “= 1” next to limit in the calc.proto file above is the Protobuf field number, not a default value – proto3 numeric fields default to 0.) The function sends each generated number back on the stream – this is the server streaming response.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Server Streaming RPC
func (s *server) GenerateNumbers(req *pb.GenerateRequest, stream pb.Calculator_GenerateNumbersServer) error {
   for i := int64(0); i &amp;lt; req.Limit; i++ {
       if err := stream.Send(&amp;amp;pb.NumberResponse{Number: i}); err != nil {
           return err
       }
       println("Sent number: ", i)
       time.Sleep(500 * time.Millisecond)
   }
   return nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the client side – code below – the client simply calls the GenerateNumbers() function using the gRPC client. As the server streams the response back, the client receives it in a for loop using the stream.Recv() function, printing each number to the console until the stream ends with io.EOF.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func serverStreamingExample(client pb.CalculatorClient) {
   ctx := context.Background()
   stream, err := client.GenerateNumbers(ctx, &amp;amp;pb.GenerateRequest{Limit: 5})
   if err != nil {
       log.Fatalf("error calling GenerateNumbers: %v", err)
   }

   for {
       resp, err := stream.Recv()
       if err == io.EOF {
           break
       }
       if err != nil {
           log.Fatalf("error receiving: %v", err)
       }
       log.Printf("Received number: %d", resp.Number)
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the output of the server streaming communication pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfglyTS1B5z6hMmDQH5B9I2KYWD7J9kdYOhXbXicHrj-Lg66b0anfgzWPjrOemMPour5xfFxi4jPfL9Glakz535UVcZIjRqb6rtzoVXRIOkKKqd1DwcZ20ZCKgB4V-JwbkbPBr7%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfglyTS1B5z6hMmDQH5B9I2KYWD7J9kdYOhXbXicHrj-Lg66b0anfgzWPjrOemMPour5xfFxi4jPfL9Glakz535UVcZIjRqb6rtzoVXRIOkKKqd1DwcZ20ZCKgB4V-JwbkbPBr7%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Client Streaming
&lt;/h2&gt;

&lt;p&gt;The client streaming communication pattern is used where clients need to stream data to the server for processing. This is still a uni-directional streaming communication as discussed in the previous section. In the example being discussed, the calculator server exposes a function to calculate the average value of all the numbers sent by the client in the form of a stream. The diagram below shows an overview of client streaming gRPC communication.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeGxYTCOLmzQjjMvXNIeRP-lut9XijELYi73v5GCtSoyqz4zbkZyF1sTTNcldnoGdHWxqmzVwHVw1CxuMbVUlZgvG2ZT7Sps8kAyhS6TmdDvdJjHbn3gEZXUjDpBGiw93JmLH1X%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeGxYTCOLmzQjjMvXNIeRP-lut9XijELYi73v5GCtSoyqz4zbkZyF1sTTNcldnoGdHWxqmzVwHVw1CxuMbVUlZgvG2ZT7Sps8kAyhS6TmdDvdJjHbn3gEZXUjDpBGiw93JmLH1X%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The code below shows the server implementation of ComputeAverage() function, that accepts a stream of input from the client, and responds with an average value of all the numbers received.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Client Streaming RPC
func (s *server) ComputeAverage(stream pb.Calculator_ComputeAverageServer) error {
   var sum int64
   var count int64

   for {
       req, err := stream.Recv()
       if err != nil {
           return stream.SendAndClose(&amp;amp;pb.AverageResponse{
               Result: float64(sum) / float64(count),
           })
       }
       println("Received number: ", req.Number)
       sum += req.Number
       count++
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The client function uses a stream object provided by the gRPC framework to send the numbers. Note that this stream object is not created by the server implementation of the ComputeAverage() function above. Instead, the gRPC framework generates it when the calc.proto file is compiled with the protoc compiler. Refer to the first section of this blog to learn more.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func clientStreamingExample(client pb.CalculatorClient) {
   ctx := context.Background()
   stream, err := client.ComputeAverage(ctx)
   if err != nil {
       log.Fatalf("error calling ComputeAverage: %v", err)
   }

   numbers := []int64{1, 2, 3, 4, 5}
   for _, num := range numbers {
       if err := stream.Send(&amp;amp;pb.NumberRequest{Number: num}); err != nil {
           log.Fatalf("error sending: %v", err)
       }
       time.Sleep(500 * time.Millisecond)
   }

   resp, err := stream.CloseAndRecv()
   if err != nil {
       log.Fatalf("error receiving response: %v", err)
   }
   log.Printf("Average: %.2f", resp.Result)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
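&lt;p&gt;The ComputeAverage stream types used above come from the protoc-generated code. A typical invocation looks like the following (the proto path and options here are assumptions; adjust them to your repo layout):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protoc --go_out=. --go_opt=paths=source_relative \
    --go-grpc_out=. --go-grpc_opt=paths=source_relative \
    proto/calc.proto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;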



&lt;p&gt;Below is the output of the client streaming communication pattern.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXccBDlqW8Min6L-Gnkgs4-hlcjxVset4BH5zQHaHX9u_2_A6JArredm4Y1ydsxA_NjYd_7fJNgGwO6VvxJ2k_BjrRH4z78WlBXxuQD10OiNVC1cpz6qbAlEa8VjnWhu4bKzVEX-%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXccBDlqW8Min6L-Gnkgs4-hlcjxVset4BH5zQHaHX9u_2_A6JArredm4Y1ydsxA_NjYd_7fJNgGwO6VvxJ2k_BjrRH4z78WlBXxuQD10OiNVC1cpz6qbAlEa8VjnWhu4bKzVEX-%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bidirectional Streaming
&lt;/h2&gt;

&lt;p&gt;By now, you should be familiar with how client and server code use stream objects to send and receive streaming data. Bidirectional streaming, as the name suggests, caters to scenarios where servers and clients need to stream data simultaneously, without waiting for completion on either side.&lt;/p&gt;

&lt;p&gt;In the example being discussed, the server implements a bidirectional streaming function that accepts a stream of numbers and responds with a stream of numbers, each twice the corresponding incoming number. The client sends a stream of numbers, and the server responds without waiting for the client's stream to complete. The diagram below represents the same.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcODFI-KYD3eGB16qEra5ifDd5Pl8Bj177XY7qvf38gEXhJVStoWltDALTFa61dqrOXeYRFhdTMlUGZqlW7vq9a5vj_ArTKmAAAZgu65MoPLdakkBgXWaoiZzY1YHkDnCSKuGgu%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcODFI-KYD3eGB16qEra5ifDd5Pl8Bj177XY7qvf38gEXhJVStoWltDALTFa61dqrOXeYRFhdTMlUGZqlW7vq9a5vj_ArTKmAAAZgu65MoPLdakkBgXWaoiZzY1YHkDnCSKuGgu%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
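&lt;p&gt;In the .proto file, a bidirectional streaming RPC is declared by marking both the request and response types as streams. A minimal sketch, assuming the message names used in the Go code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;service Calculator {
  // Both sides stream independently on the same call
  rpc ProcessNumbers(stream NumberRequest) returns (stream NumberResponse);
}

message NumberResponse {
  int64 number = 1;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;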

&lt;p&gt;gRPC takes care of generating most of the native Go code required, which greatly simplifies the implementation of streaming logic on both server and client. The ProcessNumbers() function below shows how the server simultaneously receives data and sends the processed results back to the client on the same stream.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Bidirectional Streaming RPC
func (s *server) ProcessNumbers(stream pb.Calculator_ProcessNumbersServer) error {
   for {
       req, err := stream.Recv()
       if err != nil {
           return nil
       }

       // Process the number (multiply by 2) and send it back
       result := req.Number * 2
       if err := stream.Send(&amp;amp;pb.NumberResponse{Number: result}); err != nil {
           return err
       }
       println("Received number: ", req.Number)
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;On the client side, the code below initiates the stream by sending numbers to the server for processing, while simultaneously listening on the same stream for the processed responses from the server. The client code uses some Go-specific constructs (goroutines and channels) – if you are not familiar with Go, you can skip over these details.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func bidirectionalStreamingExample(client pb.CalculatorClient) {
   ctx := context.Background()
   stream, err := client.ProcessNumbers(ctx)
   if err != nil {
       log.Fatalf("error calling ProcessNumbers: %v", err)
   }

   waitc := make(chan struct{})

   // Send numbers
   go func() {
       numbers := []int64{1, 2, 3, 4, 5}
       for _, num := range numbers {
           if err := stream.Send(&amp;amp;pb.NumberRequest{Number: num}); err != nil {
               log.Fatalf("error sending: %v", err)
           }
           time.Sleep(500 * time.Millisecond)
       }
       stream.CloseSend()
   }()

   // Receive processed numbers
   go func() {
       for {
           resp, err := stream.Recv()
           if err == io.EOF {
               close(waitc)
               return
           }
           if err != nil {
               log.Fatalf("error receiving: %v", err)
           }
           log.Printf("Received processed number: %d", resp.Number)
       }
   }()

   &amp;lt;-waitc
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Below is the output.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc5Q6KCJLrs9NDxS2OVtaMQKC2cQFZ56CcTPVv3HoUjW1nFUAlG9YxKOmXumXt-LZxpa5hMxDVf8Vi7j2sChkGj2crOVcYZLT1cSpJuhjAFy3GPy2r-8FJ6vWqBWTCJnRvGLgZq%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXc5Q6KCJLrs9NDxS2OVtaMQKC2cQFZ56CcTPVv3HoUjW1nFUAlG9YxKOmXumXt-LZxpa5hMxDVf8Vi7j2sChkGj2crOVcYZLT1cSpJuhjAFy3GPy2r-8FJ6vWqBWTCJnRvGLgZq%3Fkey%3D2wcd_xJeNtMXj9OgEcK4Q2gc" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s it for this blog post. If you enjoyed reading this, I would encourage you to subscribe to my newsletter. I write about CNCF tools, system design, architecture, product dev, and AI.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://letsdote.ch/post/grpc-communication-patterns/" rel="noopener noreferrer"&gt;gRPC Communication Patterns&lt;/a&gt; appeared first on &lt;a href="https://letsdote.ch" rel="noopener noreferrer"&gt;Let's Do Tech&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>development</category>
      <category>grpc</category>
      <category>distributed</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Implementing SSL/TLS Auth in gRPC</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Tue, 25 Feb 2025 16:13:40 +0000</pubDate>
      <link>https://dev.to/letsdotech/implementing-ssltls-auth-in-grpc-2jmf</link>
      <guid>https://dev.to/letsdotech/implementing-ssltls-auth-in-grpc-2jmf</guid>
      <description>&lt;p&gt;gRPC supports various authentication mechanisms like SSL/TLS, ALTS (Application Layer Transport Security), and token based authentication. In this post, we will cover SSL/TLS auth. We will begin by understanding the basics of SSL authentication, and also generate required key and certificate files to implement in our example.&lt;/p&gt;

&lt;p&gt;In the previous blog post, while covering the &lt;a href="https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc"&gt;basics of gRPC communication with Go&lt;/a&gt;, we introduced an example of a calculator server and client. The client-server gRPC communication in that example was not secure. If you check the &lt;a href="https://github.com/letsdotech/blog-examples/blob/bbd594eca3da499be885f74d2c84c3c2d24e1f9b/01-grpc-intro/client/main.go#L15" rel="noopener noreferrer"&gt;client code here&lt;/a&gt;, it attempts to connect to the server in an insecure manner.&lt;/p&gt;

&lt;p&gt;In real-world situations, this poses a very high risk on multiple fronts. Thus, to secure this communication, we implement certificate based authentication – also known as SSL/TLS based authentication. In this post, I cover the bare bones of what is required to enable SSL auth in gRPC based machine-to-machine communication.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a spoiler, this post is not really about gRPC. But it helps in understanding how to implement certificate based authentication in a distributed architecture.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The full code of the example discussed in this post can be found at the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/letsdotech/blog-examples/tree/main/02-grpc-auth" rel="noopener noreferrer"&gt;Complete Code&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What is SSL/TLS based authentication?
&lt;/h2&gt;

&lt;p&gt;The basic idea here is to use certificates to authenticate clients against the server. Since this authentication mechanism depends on certificate files rather than shared secrets, it is inherently difficult for an attacker to crack. Humans generally don’t use SSL/TLS based auth directly, as it would be quite cumbersome to maintain multiple certificates across multiple platforms. However, this makes it very suitable for securing machine-to-machine communication.&lt;/p&gt;

&lt;p&gt;SSL/TLS based authentication relies on a Certificate Authority (CA), without which the mechanism cannot authenticate any request, even when valid certificates are presented. The CA is responsible for signing and distributing certificates to clients and servers. These certificates are then used for validation when messages are exchanged.&lt;/p&gt;

&lt;h2&gt;
  
  
  SSL Certificates preparation for gRPC calculator server
&lt;/h2&gt;

&lt;p&gt;To enable SSL based authentication in gRPC client-server communication, we first have to create the required certificates. The steps below summarise the process.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Establish a CA

&lt;ul&gt;
&lt;li&gt;This step creates key and certificate for the CA&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create a server key&lt;/li&gt;
&lt;li&gt;Create a certificate signing request (csr) for server, using server key&lt;/li&gt;
&lt;li&gt;Sign the server certificate signing request (csr) using CA key, to generate server certificate&lt;/li&gt;
&lt;li&gt;Create client key&lt;/li&gt;
&lt;li&gt;Create a certificate signing request (csr) for client, using client key&lt;/li&gt;
&lt;li&gt;Sign the client certificate signing request (csr) using CA key, to generate client certificate&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To generate these certificates, you can use the openssl tool from the command line on any system. The commands below generate the CA key and CA certificate files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out ca-key.pem 2048

openssl req -new -x509 -days 365 -key ca-key.pem -out ca-cert.pem -subj "/CN=CA"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next set of commands generates the server key and certificate files. Note that the 2nd and 3rd commands make use of a &lt;code&gt;server.conf&lt;/code&gt; file, which contains &lt;a href="https://docs.openssl.org/master/man5/config/" rel="noopener noreferrer"&gt;SSL related configurations&lt;/a&gt;. These options would otherwise be provided as openssl CLI params.&lt;br&gt;
&lt;/p&gt;
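&lt;p&gt;The contents of server.conf are not shown here; a minimal sketch for a server reachable at localhost might look like the following (the CN and subjectAltName values are assumptions for a local setup; adjust them to your server's hostname):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[req]
prompt             = no
distinguished_name = req_distinguished_name
req_extensions     = v3_req

[req_distinguished_name]
CN = localhost

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = localhost
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;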

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out server-key.pem 2048

openssl req -new -key server-key.pem -out server.csr -config server.conf

openssl x509 -req -days 365 -in server.csr -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extensions v3_req -extfile server.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The steps to create client key and certificate files are similar.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl genrsa -out client-key.pem 2048

openssl req -new -key client-key.pem -out client.csr -config client.conf

openssl x509 -req -days 365 -in client.csr -CA ca-cert.pem -CAkey ca-key.pem -CAcreateserial -out client-cert.pem -extensions v3_req -extfile client.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
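&lt;p&gt;Before wiring the certificates into any code, it is worth sanity-checking that both certificates actually chain back to the CA. The openssl verify command does this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl verify -CAfile ca-cert.pem server-cert.pem client-cert.pem
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Both files should be reported as OK; if not, re-check the signing steps above before moving on.&lt;/p&gt;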



&lt;p&gt;The diagram below shows the files involved in the above process.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfen3-86wdayqO0qNr2atMICbFiQtjvYFzVNUXkTXic3T-_bEQkODtBzF8sHNfatJxGKeIZGyrAAz1pGCUbZzyz4r40a2odMNkTLDwGHFIXMMvRvJk6VkDcEoXtKr3n35VITkNi%3Fkey%3DbOL99hxjzEm8Pff53cORkpr4" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfen3-86wdayqO0qNr2atMICbFiQtjvYFzVNUXkTXic3T-_bEQkODtBzF8sHNfatJxGKeIZGyrAAz1pGCUbZzyz4r40a2odMNkTLDwGHFIXMMvRvJk6VkDcEoXtKr3n35VITkNi%3Fkey%3DbOL99hxjzEm8Pff53cORkpr4" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating server code to use SSL certificates
&lt;/h2&gt;

&lt;p&gt;Note that we have not touched the &lt;a href="https://github.com/letsdotech/blog-examples/blob/main/01-grpc-intro/proto/calc.proto" rel="noopener noreferrer"&gt;calc.proto file&lt;/a&gt; at all to implement the SSL authentication. If you are not aware of the example we are discussing here, refer to &lt;a href="https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc"&gt;this blog post&lt;/a&gt; where we establish the same.&lt;/p&gt;

&lt;p&gt;At this point, the &lt;a href="https://github.com/letsdotech/blog-examples/blob/bbd594eca3da499be885f74d2c84c3c2d24e1f9b/01-grpc-intro/server/main.go#L17" rel="noopener noreferrer"&gt;Go server code&lt;/a&gt; simply listens on a specific port for incoming client connections. We register a gRPC server on it, which essentially means we are exposing all the callable functions as part of the gRPC interface.&lt;/p&gt;

&lt;p&gt;To implement SSL authentication on this server, we need to do some work before the server starts listening, so that it listens “securely”. Observe the code below to understand how the certificates are loaded; an explanation follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func main() {
   // Load server certificate and private key
   cert, err := tls.LoadX509KeyPair("../certs/server-cert.pem", "../certs/server-key.pem")
   if err != nil {
       log.Fatalf("failed to load server certificates: %v", err)
   }

   // Create a certificate pool and add the client's CA certificate
   certPool := x509.NewCertPool()
   ca, err := ioutil.ReadFile("../certs/ca-cert.pem")
   if err != nil {
       log.Fatalf("failed to read ca certificate: %v", err)
   }

   if ok := certPool.AppendCertsFromPEM(ca); !ok {
       log.Fatal("failed to append client certs")
   }

   // Create the TLS credentials
   creds := credentials.NewTLS(&amp;amp;tls.Config{
       Certificates: []tls.Certificate{cert},
       ClientAuth:   tls.RequireAndVerifyClientCert,
       ClientCAs:    certPool,
       MinVersion:   tls.VersionTLS12,
   })

   lis, err := net.Listen("tcp", ":50051")
   if err != nil {
       log.Fatalf("failed to listen: %v", err)
   }

   s := grpc.NewServer(grpc.Creds(creds))
   pb.RegisterCalculatorServer(s, &amp;amp;server{})
   log.Printf("Server listening at %v", lis.Addr())
   if err := s.Serve(lis); err != nil {
       log.Fatalf("failed to serve: %v", err)
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;First we create a cert object using the Go &lt;code&gt;tls&lt;/code&gt; module. Here we read the server certificate and key files to create a cert object.&lt;/li&gt;
&lt;li&gt;Next, we need to establish trust between the server and the clients’ CA. In our case, both the server and client certificates are signed by the same CA, but we still need to make the server explicitly trust client connections. In cases where clients use a different CA to generate their certificates, this step builds a cert pool containing all those CA certs.&lt;/li&gt;
&lt;li&gt;Finally, we use the gRPC credentials module to generate credentials. Notice that we have specified the &lt;code&gt;certPool&lt;/code&gt; we created in the previous step to create this credential for authentication purposes.&lt;/li&gt;
&lt;li&gt;The rest of the code remains the same, except that when creating the gRPC server, we now pass the credentials (&lt;code&gt;grpc.Creds(creds)&lt;/code&gt;) to secure incoming connections.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Sending SSL authenticated requests from the gRPC client
&lt;/h2&gt;

&lt;p&gt;Similar to the old server implementation, the client code was insecure as well. In fact, it is &lt;a href="https://github.com/letsdotech/blog-examples/blob/bbd594eca3da499be885f74d2c84c3c2d24e1f9b/01-grpc-intro/client/main.go#L15" rel="noopener noreferrer"&gt;quite expressive&lt;/a&gt; about it. If you now try to access the &lt;code&gt;Add()&lt;/code&gt; function from the server, something interesting happens. Obviously, the operation is not successful, but the error message throws some light on the &lt;code&gt;insecure.NewCredentials()&lt;/code&gt; function used in the &lt;a href="https://github.com/letsdotech/blog-examples/blob/main/01-grpc-intro/client/main.go" rel="noopener noreferrer"&gt;insecure implementation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;2025/02/11 15:15:46 could not calculate: rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: EOF"
exit status 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, the &lt;code&gt;insecure.NewCredentials()&lt;/code&gt; function deliberately bypasses the security requirements. In general, this is a handy way to test functionality with less auth complexity. Thus, instead of failing with an auth related &lt;code&gt;UNAVAILABLE&lt;/code&gt; type of error, it reports &lt;code&gt;error reading server preface: EOF&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This means that the client is trying to dial the server over a plain TCP connection, while the server expects to begin with a &lt;a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/" rel="noopener noreferrer"&gt;TLS handshake&lt;/a&gt;. This is a good starting point for this section, as we can now begin securing connection requests from the gRPC client. The updated client code is shown below; an explanation follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func main() {
   // Load client certificate and private key
   cert, err := tls.LoadX509KeyPair("../certs/client-cert.pem", "../certs/client-key.pem")
   if err != nil {
       log.Fatalf("failed to load client certificates: %v", err)
   }

   // Create a certificate pool and add the server's CA certificate
   certPool := x509.NewCertPool()
   ca, err := ioutil.ReadFile("../certs/ca-cert.pem")
   if err != nil {
       log.Fatalf("failed to read ca certificate: %v", err)
   }

   if ok := certPool.AppendCertsFromPEM(ca); !ok {
       log.Fatal("failed to append ca certs")
   }

   // Create the TLS credentials
   creds := credentials.NewTLS(&amp;amp;tls.Config{
       Certificates: []tls.Certificate{cert},
       RootCAs:      certPool,
       ServerName:   "localhost",
   })

   conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(creds))
   if err != nil {
       log.Fatalf("did not connect: %v", err)
   }
   defer conn.Close()

   c := pb.NewCalculatorClient(conn)
   ctx, cancel := context.WithTimeout(context.Background(), time.Second)
   defer cancel()

   // Make the gRPC call
   r, err := c.Add(ctx, &amp;amp;pb.AddRequest{Num1: 5, Num2: 3})
   if err != nil {
       log.Fatalf("could not calculate: %v", err)
   }
   log.Printf("Result: %d", r.GetResult())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As we can see, similar to the calculator server code, we need to do some ground work to enable SSL auth for clients before they can connect to the server.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The main function first loads the client certificate and key files.&lt;/li&gt;
&lt;li&gt;Creates a cert pool with the server’s CA certificate (in this example, both client and server share the same CA). This lets the client verify which server it is talking to, and helps prevent man-in-the-middle attacks. This is also known as mTLS (mutual TLS), where validation happens on both sides before a secure communication channel is established.&lt;/li&gt;
&lt;li&gt;Similar to the server code, we prepare the credentials using the cert pool above, and supply them when dialing the server.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Run the server, and then run the client code that accesses the &lt;code&gt;Add()&lt;/code&gt; function on the server. The client should be able to access the calculator functionality as shown below – in a secure way!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfRBBmLJJToXqO0WlO0nyUEqYbkbGMRddh1o2f-UUpJTP7kgAqh8h-V-8cV2HUFcklF8s41kvkh6q3CwXKgzyBqtK7xX0cUImi_vdc4xwA81QQ9FPLZLHffLCqo2Js8k06BCkGK%3Fkey%3DbOL99hxjzEm8Pff53cORkpr4" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfRBBmLJJToXqO0WlO0nyUEqYbkbGMRddh1o2f-UUpJTP7kgAqh8h-V-8cV2HUFcklF8s41kvkh6q3CwXKgzyBqtK7xX0cUImi_vdc4xwA81QQ9FPLZLHffLCqo2Js8k06BCkGK%3Fkey%3DbOL99hxjzEm8Pff53cORkpr4" width="800" height="132"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Dig into the SSL/TLS details
&lt;/h2&gt;

&lt;p&gt;The output shown above demonstrates a secure, but generic, result. Someone looking into SSL/TLS auth cannot really tell much from it. Let us update the client code and print some certificate details. Note: printing certificate details is generally a bad security practice, but we are doing it here for demonstration purposes.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;credentials.NewTLS&lt;/code&gt; function accepts a &lt;code&gt;tls.Config&lt;/code&gt; object, which exposes a &lt;a href="https://pkg.go.dev/crypto/tls#example-Config-VerifyConnection" rel="noopener noreferrer"&gt;VerifyConnection&lt;/a&gt; callback; it is invoked with a &lt;code&gt;tls.ConnectionState&lt;/code&gt; object once the connection is established. Write a function to print the details of this object, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func verifyConnection(state tls.ConnectionState) {
   log.Printf("=== TLS Connection Details ===")
   log.Printf("Version: %x", state.Version)
   log.Printf("CipherSuite: %s", tls.CipherSuiteName(state.CipherSuite))
   log.Printf("HandshakeComplete: %t", state.HandshakeComplete)
   log.Printf("Server Name: %s", state.ServerName)
   for i, cert := range state.PeerCertificates {
       log.Printf("Peer Certificate [%d]:", i)
       log.Printf("  Subject: %s", cert.Subject)
       log.Printf("  Issuer: %s", cert.Issuer)
       log.Printf("  Valid from: %s", cert.NotBefore)
       log.Printf("  Valid until: %s", cert.NotAfter)
   }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;verifyConnection()&lt;/code&gt; function prints details like the TLS version, cipher suite, handshake state, server name, etc., along with details about the peer certificates. Modify the credential-creation code in the main function as shown below, so that it calls the &lt;code&gt;verifyConnection()&lt;/code&gt; function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Create TLS config with verification callbacks
   tlsConfig := &amp;amp;tls.Config{
       ServerName:   "localhost",
       Certificates: []tls.Certificate{cert},
       RootCAs:      certPool,
       MinVersion:   tls.VersionTLS12,
       VerifyConnection: func(cs tls.ConnectionState) error {
           verifyConnection(cs)
           return nil
       },
   }

   // Create the TLS credentials
   creds := credentials.NewTLS(tlsConfig)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We have also updated our logic to call the &lt;code&gt;Add()&lt;/code&gt; function multiple times. Find the &lt;a href="https://github.com/letsdotech/blog-examples/tree/main/02-grpc-intro" rel="noopener noreferrer"&gt;full code here&lt;/a&gt;. Re-run the server and client code and observe the client output below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcNSYUCY2JZFoCIvOWrkYsGckVxVd9MT6iD7neIgU6FwdZOoroZqbHQvTofDB1fpwRCvxOXzPAMJlflt4ReYSuVVAJQAbqllg21D4kDhT0GA9UU-fhUyeWOZEhsOms8SBPZcljG%3Fkey%3DbOL99hxjzEm8Pff53cORkpr4" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXcNSYUCY2JZFoCIvOWrkYsGckVxVd9MT6iD7neIgU6FwdZOoroZqbHQvTofDB1fpwRCvxOXzPAMJlflt4ReYSuVVAJQAbqllg21D4kDhT0GA9UU-fhUyeWOZEhsOms8SBPZcljG%3Fkey%3DbOL99hxjzEm8Pff53cORkpr4" width="645" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is just to show how you can further investigate the certificate implementation. Working with certificates can get tricky, especially for beginners, but it offers one of the most solid auth mechanisms available today.&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://letsdote.ch/post/grpc-ssl-tls-auth/" rel="noopener noreferrer"&gt;Implementing SSL/TLS Auth in gRPC&lt;/a&gt; appeared first on &lt;a href="https://letsdote.ch" rel="noopener noreferrer"&gt;Let's Do Tech&lt;/a&gt;. Subscribe to my &lt;a href="https://news.letsdote.ch" rel="noopener noreferrer"&gt;newsletter&lt;/a&gt; where I share notes on product dev, system design, architecture, and AI.&lt;/p&gt;

</description>
      <category>blog</category>
      <category>grpc</category>
    </item>
    <item>
      <title>Intro to gRPC and Protocol Buffers using Go</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Fri, 21 Feb 2025 20:12:27 +0000</pubDate>
      <link>https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc</link>
      <guid>https://dev.to/letsdotech/intro-to-grpc-and-protocol-buffers-using-go-4ckc</guid>
      <description>&lt;p&gt;Inter-service communication is perhaps one of the fundamental aspects of distributed computing. Almost everything relies on it. Distributed architectures consist of multiple microservices with multiple running instances each. The workloads run long running tasks on virtual or physical servers, containers, or Kubernetes clusters, while simpler tasks are run as serverless functions.&lt;/p&gt;

&lt;p&gt;gRPC caters to scenarios where distributed workloads need to be tightly coupled – services that rely on exchanging data, and where speed matters. The usual JSON based REST APIs don’t fall short, but teams seeking “even more” performance in their distributed, tightly coupled architectures should consider using gRPC instead.&lt;/p&gt;

&lt;p&gt;In this blog post, I will not go through the theoretical details of gRPC, but rather focus on a practical example to introduce it, especially if you are short on time. Otherwise, the gRPC documentation is a great resource for understanding this technology in detail. The topics covered in this post are listed below.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Introduce the server and client example&lt;/li&gt;
&lt;li&gt;Define interfaces in .proto file&lt;/li&gt;
&lt;li&gt;Generating gRPC code for Go&lt;/li&gt;
&lt;li&gt;Server implementation&lt;/li&gt;
&lt;li&gt;Making a gRPC call in client&lt;/li&gt;
&lt;li&gt;Testing and conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This was originally published on &lt;a href="https://letsdote.ch" rel="noopener noreferrer"&gt;Let's Do Tech&lt;/a&gt;. Subscribe to &lt;a href="https://news.letsdote.ch" rel="noopener noreferrer"&gt;Let's Do Tech News&lt;/a&gt; for timely notifications!&lt;/p&gt;

&lt;h2&gt;
  
  
  Server And Client Example
&lt;/h2&gt;

&lt;p&gt;Before we proceed to discuss gRPC, let us establish a baseline requirement using a client-server architecture. In a hypothetical scenario, assume there are two services – a calculator server and a client that consumes the calculator logic. The calculator server implements the logic to perform the addition operation. A client application hosted on a different host calls the addition function to fetch the processed result.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfurMtiEkFIO1_KOJEBHp0E08ZUV6HMG5h78vnK4WKUeRtZokOU4NwcsVlC4TSalkSBmp19wmyzdG6rHaBqTqJotHBbi2kugbfW5-esbEPfWVfWBnrio9EDgY0IFltHi1OleOX4%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfurMtiEkFIO1_KOJEBHp0E08ZUV6HMG5h78vnK4WKUeRtZokOU4NwcsVlC4TSalkSBmp19wmyzdG6rHaBqTqJotHBbi2kugbfW5-esbEPfWVfWBnrio9EDgY0IFltHi1OleOX4%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To keep things simple, let us implement the client and server logic in the same repo as shown in the file structure below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeiCp6SuK6NKg-kjSWsUbEg0nxdPx43FT-Vy1--RwakW17wWIu8XAzj5gA0cewdgmntY27aAE-L_nqIGpYPWWfcbCmWZUnP5NidZ8U-Hkl7jEIu_b2o6Fz4nd-qBHxwHHK4kiaw%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeiCp6SuK6NKg-kjSWsUbEg0nxdPx43FT-Vy1--RwakW17wWIu8XAzj5gA0cewdgmntY27aAE-L_nqIGpYPWWfcbCmWZUnP5NidZ8U-Hkl7jEIu_b2o6Fz4nd-qBHxwHHK4kiaw%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" width="198" height="145"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The server code below is a basic Go program that implements the addition logic with hardcoded values. Whether you are starting from scratch, or already have a server implementation and now want to introduce gRPC into existing code – this blog post serves both purposes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
  "log"
)

// Calculator server implementation
func main() {
  sum := Add(1, 2)
  log.Printf("Sum: %d", sum)
}

func Add(num1, num2 int) int {
  return num1 + num2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let us assume the client code implements the client logic that depends on the functionality exposed by the calculator server, as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

func main() {
  // Client application logic
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize the Go module by giving it a suitable name. This step will differ depending on how the client and server code is currently organized in your environment. For this example, we create all components in the same Go module. I have used “&lt;code&gt;ldtgrpc01&lt;/code&gt;” as the module name.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;go mod init ldtgrpc01  
go mod tidy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Define interfaces in .proto file
&lt;/h2&gt;

&lt;p&gt;gRPC uses Protocol Buffers (Protobuf) to serialize and deserialize data. Protobuf offers a language- and platform-neutral way to serialize structured data, producing smaller payloads and thus reducing latency. In Protobuf files, we define the services and messages that the server and client implement to communicate with each other.&lt;/p&gt;

&lt;p&gt;In this example, the calculator server implements the addition function, which is called by the client. To make it compatible with the gRPC protocol, first define these specifications in a &lt;code&gt;.proto&lt;/code&gt; file. Create a third directory to manage the Protobuf files, and create a &lt;code&gt;calc.proto&lt;/code&gt; file as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;syntax = "proto3";

package calc;
option go_package = "ldtgrpc01/proto";

// Define the service
service Calculator {
    rpc Add(AddRequest) returns (AddResponse) {}
}

// Define the messages
message AddRequest {
    int32 num1 = 1;
    int32 num2 = 2;
}

message AddResponse {
    int32 result = 1;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;An explanation of the code follows.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;After specifying the syntax version as “proto3” (&lt;a href="https://protobuf.dev/programming-guides/proto3/" rel="noopener noreferrer"&gt;details&lt;/a&gt;), we define the package name. When we compile this file into native Go code, the generated code is placed in the package specified here. Note that if you specify &lt;code&gt;go_package&lt;/code&gt;, then &lt;code&gt;package calc;&lt;/code&gt; is not strictly required, as the Go package will be named per &lt;code&gt;go_package&lt;/code&gt;. I have included it as a best practice. &lt;/li&gt;
&lt;li&gt;Then we define the Calculator service, and specify an &lt;code&gt;Add&lt;/code&gt; method, which takes &lt;code&gt;AddRequest&lt;/code&gt; message as parameter, and returns &lt;code&gt;AddResponse&lt;/code&gt; message. Note that this method is defined as &lt;code&gt;rpc&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Further, we define both &lt;code&gt;AddRequest&lt;/code&gt; and &lt;code&gt;AddResponse&lt;/code&gt; messages with appropriate parameters.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can think of this as a language-neutral interface specification: it only defines the interface and does not implement it. The implementation follows later in the server code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generating gRPC code in Go
&lt;/h2&gt;

&lt;p&gt;Using the &lt;code&gt;calc.proto&lt;/code&gt; file, you can generate native application code in multiple languages. Since we are dealing with Go, we will use the command below to generate Go code which we would use in client-server communication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative proto/calc.proto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Protoc is the Protobuf compiler used to compile .proto files into native code. Refer to &lt;a href="https://protobuf.dev/getting-started/gotutorial/#compiling-protocol-buffers" rel="noopener noreferrer"&gt;this document&lt;/a&gt; for more information on the parameters used in the above command. The compilation creates two Go source files within the proto directory, as represented below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfGi3uQr1f0DtUk2eiin5TsICuCFkQVx54cDcq5utSi6vibOJG1Jd2lgZdGxA2RXW_mAkb5q0Q20dvQRdO2eVZQIkvCn6m4To3SxhNOzkhFUrV--MolbSEEMUQHuw_Hs-URSmXM%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfGi3uQr1f0DtUk2eiin5TsICuCFkQVx54cDcq5utSi6vibOJG1Jd2lgZdGxA2RXW_mAkb5q0Q20dvQRdO2eVZQIkvCn6m4To3SxhNOzkhFUrV--MolbSEEMUQHuw_Hs-URSmXM%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After this operation, the resulting directory structure and files should look like the ones below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfUtoetlFc-rMJ304CJbFsUliGBUxRJ-tyy0V_ID__f1_jowLPeLTnf82v_aT2knMxDrf472ytpIVbeIE5XIDiRV3mOCSezXp6BESo-mHldA_S_7OB7oA4YnmpzsWxTFvBTx76z%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXfUtoetlFc-rMJ304CJbFsUliGBUxRJ-tyy0V_ID__f1_jowLPeLTnf82v_aT2knMxDrf472ytpIVbeIE5XIDiRV3mOCSezXp6BESo-mHldA_S_7OB7oA4YnmpzsWxTFvBTx76z%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" width="198" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;calc.pb.go&lt;/code&gt; and &lt;code&gt;calc_grpc.pb.go&lt;/code&gt; files are automatically generated files from your Protocol Buffer file (calc.proto). They serve different but complementary purposes as described below.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;calc.pb.go&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contains the Go struct definitions for your messages (AddRequest and AddResponse)&lt;/li&gt;
&lt;li&gt;Includes serialization/deserialization code for these messages&lt;/li&gt;
&lt;li&gt;Handles the basic Protocol Buffer encoding/decoding logic&lt;/li&gt;
&lt;li&gt;Generated from the message definitions in your proto file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;calc_grpc.pb.go&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Contains the service definitions and interfaces for your gRPC service&lt;/li&gt;
&lt;li&gt;Includes the client and server code for your Calculator service&lt;/li&gt;
&lt;li&gt;Provides the RPC communication layer implementation&lt;/li&gt;
&lt;li&gt;Generated from the service definitions in your proto file&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thus, we have used the protoc compiler to compile the Protobuf definitions into native Go code, which can be integrated into the client and server applications. Note that you should never modify these generated files. If you want to change the interface, modify and recompile the calc.proto file.&lt;/p&gt;

&lt;p&gt;Tip: Feel free to go through this code to understand more details about how Go implements gRPC protocol.&lt;/p&gt;

&lt;h2&gt;
  
  
  Server implementation using gRPC
&lt;/h2&gt;

&lt;p&gt;To update the calculator server code to implement the interfaces generated using gRPC, you need to import it as a package in the application code. The diagram below shows how the compiled proto package is used by server code to expose its functionality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXffLmhup1LlfyUiipqg7V-WVMs2udcNIBC3H0YpGzjg_avgBPAwBBg4s-P9VKpHp5x1harMCi3GnduE3ViW_-n0ZbZZFCQrGGFmVr-uSeBn2_noWot_aX2oGpCZsLlW6vmEoiw%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXffLmhup1LlfyUiipqg7V-WVMs2udcNIBC3H0YpGzjg_avgBPAwBBg4s-P9VKpHp5x1harMCi3GnduE3ViW_-n0ZbZZFCQrGGFmVr-uSeBn2_noWot_aX2oGpCZsLlW6vmEoiw%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that we have the native code in place which defines the gRPC interface, we need to implement the server logic. Below is the updated code for the calculator server which exposes the &lt;code&gt;Add()&lt;/code&gt; function using gRPC. The explanation follows.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
  "context"
  pb "ldtgrpc01/proto" // replace with your module name
  "log"
  "net"

  "google.golang.org/grpc"
)

type server struct {
  pb.UnimplementedCalculatorServer
}

func main() {
  lis, err := net.Listen("tcp", ":50051")
  if err != nil {
      log.Fatalf("failed to listen: %v", err)
  }

  s := grpc.NewServer()
  pb.RegisterCalculatorServer(s, &amp;amp;server{})
  log.Printf("Server listening at %v", lis.Addr())

  if err := s.Serve(lis); err != nil {
      log.Fatalf("failed to serve: %v", err)
  }
}

// Add method implementation
func (s *server) Add(ctx context.Context, req *pb.AddRequest) (*pb.AddResponse, error) {
  result := req.Num1 + req.Num2
  return &amp;amp;pb.AddResponse{Result: result}, nil
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;To make use of the protobuf interface code we built in the previous step, first, import it in the server code.&lt;/li&gt;
&lt;li&gt;Define a server struct type that embeds the &lt;code&gt;UnimplementedCalculatorServer&lt;/code&gt; type. I will not cover the meaning of this protoc-generated name in this blog post. For now, just know that it is this easy to implement the interface for a gRPC server in Go.&lt;/li&gt;
&lt;li&gt;Update and add the &lt;code&gt;Add()&lt;/code&gt; method to the server. Note that we are using &lt;code&gt;pb.AddRequest&lt;/code&gt; as input params, which is implemented by the protobuf Go code. Accordingly, we are using the &lt;code&gt;Num1&lt;/code&gt; and &lt;code&gt;Num2&lt;/code&gt; to calculate the sum, as per our definition in the &lt;code&gt;calc.proto&lt;/code&gt; file.&lt;/li&gt;
&lt;li&gt;Finally, in the main function, we create a new instance of gRPC server using Google’s &lt;code&gt;grpc&lt;/code&gt; package, and register the interface using &lt;code&gt;RegisterCalculatorServer&lt;/code&gt; function (from proto package) to the same. This exposes the calculator functions like &lt;code&gt;Add()&lt;/code&gt; to be used by clients.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Making a gRPC call in Client
&lt;/h2&gt;

&lt;p&gt;Similar to how we used the gRPC package to implement the server side, we import the proto package in the client code and use it as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeFVooqBCe6MM7V5coDK9YbtE1q60YAZOyKg45Mxa1HbSKmzy7nTBjWZXDIha1j0oZwVCaOUSsE4BWVr6Zd4Fa602rddpGfrgYPC5Dd-fIJeuJFl0il53G-qQs-dNxfsngQ1KY%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Flh7-rt.googleusercontent.com%2Fdocsz%2FAD_4nXeFVooqBCe6MM7V5coDK9YbtE1q60YAZOyKg45Mxa1HbSKmzy7nTBjWZXDIha1j0oZwVCaOUSsE4BWVr6Zd4Fa602rddpGfrgYPC5Dd-fIJeuJFl0il53G-qQs-dNxfsngQ1KY%3Fkey%3Df99ZU8io6nT_nbqvq-LyBiAQ" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assuming the client is able to access the calculator server – the current example runs on localhost – we use the same grpc library from Google and &lt;code&gt;Dial&lt;/code&gt; the server to establish a connection. Using this connection, we create a client instance that represents all the methods the server exposes to the client, as seen in the code below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;package main

import (
  "context"
  "log"
  "time"

  pb "ldtgrpc01/proto" // replace with your module name

  "google.golang.org/grpc"
  "google.golang.org/grpc/credentials/insecure"
)

func main() {
  conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
  if err != nil {
      log.Fatalf("did not connect: %v", err)
  }
  defer conn.Close()

  c := pb.NewCalculatorClient(conn)

  ctx, cancel := context.WithTimeout(context.Background(), time.Second)
  defer cancel()

  // Make the gRPC call
  r, err := c.Add(ctx, &amp;amp;pb.AddRequest{Num1: 5, Num2: 3})
  if err != nil {
      log.Fatalf("could not calculate: %v", err)
  }
  log.Printf("Result: %d", r.GetResult())
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the client (&lt;code&gt;c&lt;/code&gt; above) is created using &lt;code&gt;NewCalculatorClient&lt;/code&gt;, it can be thought of as a local object instance of the calculator server, and its methods (functions) are called as we normally would. In the code above, observe the line where the &lt;code&gt;Add()&lt;/code&gt; function is called. In this gRPC version, passing the number parameters is a bit different: we use the &lt;code&gt;AddRequest&lt;/code&gt; Protobuf message (see the calc.proto file) to pass them.&lt;/p&gt;

&lt;p&gt;When Add() is called in the client code, the following happens:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The client serializes the Protocol Buffer message&lt;/li&gt;
&lt;li&gt;The message is sent over the network to the server&lt;/li&gt;
&lt;li&gt;The RPC (Remote Procedure Call) includes metadata and the context&lt;/li&gt;
&lt;li&gt;Server executes the “addition” logic and generates the return value/response&lt;/li&gt;
&lt;li&gt;Server creates the response message&lt;/li&gt;
&lt;li&gt;Response is serialized&lt;/li&gt;
&lt;li&gt;Sent back over the network to the client&lt;/li&gt;
&lt;li&gt;Client deserializes the response and continues its processing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Testing and Conclusion
&lt;/h2&gt;

&lt;p&gt;To test this code, first run the calculator server so that it listens on port 50051, and then run the client. The client simply calls the Add() function with the hardcoded values 5 and 3. The calculator server processes this gRPC request and responds with the sum, as seen in the image below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3fn1emcvm7alwa0g124.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3fn1emcvm7alwa0g124.png" alt="Image description" width="706" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://letsdote.ch/post/intro-to-grpc-and-protocol-buffers-using-go/" rel="noopener noreferrer"&gt;Intro to gRPC and Protocol Buffers using Go&lt;/a&gt; appeared first on &lt;a href="https://letsdote.ch" rel="noopener noreferrer"&gt;Let's Do Tech&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>systemdesign</category>
      <category>architecture</category>
      <category>communication</category>
      <category>distributed</category>
    </item>
    <item>
      <title>Why cry when Devin is here?</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Fri, 22 Mar 2024 15:12:59 +0000</pubDate>
      <link>https://dev.to/letsdotech/why-cry-when-devin-is-here-9fo</link>
      <guid>https://dev.to/letsdotech/why-cry-when-devin-is-here-9fo</guid>
      <description>&lt;p&gt;You might have seen &lt;a href="https://www.cognition-labs.com/introducing-devin"&gt;the video&lt;/a&gt; of the so-called “first AI software engineer” named Devin by Cognition labs floating around. As the next step, you searched online forums for more information about the truth in this. If you are weak, chances are that you have succumbed to the rhetoric and narratives there – thinking about the possibilities of how developers are going to be replaced forever.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The truth is, AI is the new plastic, except, it’s not that bad, and it is here to stay.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I blame the short attention span and lack of depth and clarity in the general thinking process of the masses today. Now, if that is too much to ask for, I suggest you read the rest of the blog to bust these myths.&lt;/p&gt;

&lt;h3&gt;
  
  
  Devin’s claims and promises
&lt;/h3&gt;

&lt;p&gt;To begin, the claim from Cognition Labs is a well-crafted pitch for investors. I don’t blame them. If you or I had come up with some invention, making it a successful business would have been our choice too, wouldn’t it? I do give them credit because they have managed to stir up buzz around Devin successfully.&lt;/p&gt;

&lt;p&gt;As per the claims, Devin was &lt;a href="https://www.cognition-labs.com/post/swe-bench-technical-report"&gt;able to resolve 13.86%&lt;/a&gt; of the GitHub issues and PRs in a benchmarking test conducted using SWE-bench. This may not sound like much, but it is almost 8x better than GPT-4 and 26x better than ChatGPT-3.5. ChatGPT was first made public in November 2022, and the &lt;a href="https://www.swebench.com/"&gt;last time both were evaluated&lt;/a&gt; was in October 2023. Without being biased, I think this is a great achievement, and it had to happen – if not now, then at some point in the near future. LLMs have opened Pandora’s box, but in a good way. Most of the time, though, we fail to see the good side of it.&lt;/p&gt;

&lt;p&gt;Cognition Labs have a few YouTube videos demonstrating use cases with Devin. They have been able to replicate how a developer would develop software when assigned a task. Looking at those, if I had to make a hiring decision based on Devin’s current capabilities, my decision would be negative. As a product manager, that work does not meet my expectations, and moreover it takes significant chat time and effort to make it deliver a working result. Cognition Labs may have claimed it to be the “first AI software engineer”, but I am sure even they know there is a long way to go.&lt;/p&gt;

&lt;p&gt;While all of this is happening, it is important to note that as a company they need to pitch the highest promise to position themselves. That’s not a bad attempt – look how developers around the globe are just waiting for Devin to come out in the public “and take their jobs”.&lt;/p&gt;

&lt;h3&gt;
  
  
  When Devin scores 100%?
&lt;/h3&gt;

&lt;p&gt;For a moment, let’s assume Devin is able to resolve all the GitHub issues on SWE-bench successfully. To add fuel to your fire, let’s also assume that there are other models available for cheap or free at 99% accuracy. We cannot deny this possibility, can we? That leaves us with the question – will software engineers become obsolete? This is a good theme for sci-fi. But how can we completely ignore the amount of work that goes into developing a product or service?&lt;/p&gt;

&lt;p&gt;Think about it – teams are hired for the UI/UX designs, for developing frontends and backends, CI/CD pipelines, etc. Then there are architectural aspects like tightening the security, following best practices, making sure of HA and DR, database and storage routines, etc. All these efforts have to align with the vision and roadmap which are unique to each organization and customers.&lt;/p&gt;

&lt;p&gt;One thing I know about AI – and this is my firm belief – is that it will never develop independent consciousness. As an analogy, medical science has been successful in synthesizing biological body parts, but nobody has been able to figure out how to create a living entity by imparting a life pulse. &lt;/p&gt;

&lt;p&gt;The other thing I know about AI is that it works on the data that exists today. Tech businesses, and especially startups, thrive because they want to disrupt industries with innovation, or at least bring something new to the table. That requires consciousness, which produces visions and dreams, which in turn drive the progress of humanity in general. Without it, we would still be in the stone age, or perhaps not even that. Devin is an outcome of the same.&lt;/p&gt;

&lt;p&gt;Forget about the ultra broad/god level/jargony language in the last paragraph. When farm tractors were invented, it didn’t put farmers out of work. It simply scaled their produce. When we, the developers, were busy automating linear tasks and putting people “out of job” (well, they got reskilled/upskilled) without any guilt, why cry now? Innovation and evolution have always been opposed by the world, out of sheer laziness.&lt;/p&gt;

&lt;p&gt;If Devin and similar AI products get to 100%, then I think we should break the whining pattern and welcome these changes, and adapt. This is a tall assumption anyway for 2024, we still have enough time to adapt! We are good at it, and it is definitely not the case where we are born with a predefined goal of writing x lines of code in this lifetime.&lt;/p&gt;

&lt;h3&gt;
  
  
  Let’s get real
&lt;/h3&gt;

&lt;p&gt;Software engineering is one of the most complex jobs that exists today. It is an intricate blend of art and science, which requires us to think extremely logically at various levels with enough room for human touch. Think of it in this way – it is probably easier for Gen AI models to perform C-level work, than that of a software engineer’s.&lt;/p&gt;

&lt;p&gt;What happened to ChatGPT? Has it replaced non-tech text based professionals? Or is it the case that these professionals are still working under the grace of their employers? In today’s world of mass layoffs, this is hard to believe.&lt;/p&gt;

&lt;p&gt;Instead we have learnt to use Gen AI products to boost our productivity. And I mean boost, not cheat. Sure, you can blindly cheat but that will take you nowhere – we all know that. We have grown to identify the bs being generated by AI models. I am not even talking about measures taken by search engines or AI detector tools. Even we as individuals are able to gauge whether a piece of text is generated by AI or not.&lt;/p&gt;

&lt;p&gt;Honestly, I think this is what would happen practically with tools like Devin as well (if it gets better). Perhaps, we can use Devins to resolve low-medium impact bugs and create a PR to begin with, while we let the real developers focus their efforts on more important issues.&lt;/p&gt;

&lt;p&gt;To take this a step further, like any other ChatGPT offshoot “AI App”, if Devin is just a wrapper around Gen AI model, then there is no big deal associated with it. We could easily do the wrapper’s job in a much better way. Coding assistants like Github Co-pilot and AWS Code Whisperer still make more sense that way.&lt;/p&gt;

&lt;p&gt;And on top of all this, the regulatory concerns, plus access to private data and code bases by organizations, to an AI software engineer with 13.86% accuracy… hmm.&lt;/p&gt;

&lt;p&gt;PS: I am having way more fun reading about this topic on Reddit. &lt;a href="https://www.reddit.com/r/cscareerquestions/comments/1bd12gc/relevant_news_cognition_labs_today_were_excited/?utm_source=share&amp;amp;utm_medium=web3x&amp;amp;utm_name=web3xcss&amp;amp;utm_term=1&amp;amp;utm_content=share_button"&gt;Example&lt;/a&gt;. Just want to settle the uncertainty amongst new developers coming into tech. I understand it sucks to hear such news, but I personally think there is no need to worry in the near future.&lt;/p&gt;

</description>
      <category>datascience</category>
      <category>artificialintelligen</category>
      <category>devin</category>
    </item>
    <item>
      <title>Microservices Architecture 🗄️ with Go: Designing Scalable and Resilient Systems</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Mon, 09 Oct 2023 03:25:05 +0000</pubDate>
      <link>https://dev.to/letsdotech/microservices-architecture-with-go-designing-scalable-and-resilient-systems-54jn</link>
      <guid>https://dev.to/letsdotech/microservices-architecture-with-go-designing-scalable-and-resilient-systems-54jn</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--F9BZXegt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1639322537228-f710d846310a%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxibG9ja2NoYWlufGVufDB8fHx8MTY5MzQ3MDUwM3ww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--F9BZXegt--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1639322537228-f710d846310a%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxibG9ja2NoYWlufGVufDB8fHx8MTY5MzQ3MDUwM3ww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" alt="a group of cubes that are on a black surface" title="a group of cubes that are on a black surface" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by Shubham Dhage on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the 6th post as part of the &lt;a href="https://blog.letsdote.ch/t/gotheme"&gt;Golang Theme&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Microservices architecture has gained immense popularity in recent years as organizations have chosen to move away from monolithic software design. Microservices is a design approach where a complex application is broken down into smaller, loosely coupled services, each responsible for a specific piece of functionality.&lt;/p&gt;

&lt;p&gt;These services are developed, deployed, and scaled independently, fostering agility and adaptability in the face of evolving business needs. By decoupling components, microservices allow teams to work on individual services concurrently, accelerating development cycles and enabling rapid innovation.&lt;/p&gt;

&lt;p&gt;In this post, we dive into the world of microservices architecture and explore how Go, with its speed, simplicity, and concurrency support, can be the perfect choice for developing and deploying microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Go for Microservices
&lt;/h2&gt;

&lt;p&gt;Using the Go programming language for building microservices offers a range of distinct advantages that align perfectly with the architectural principles of microservices.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Go's exceptional performance and efficiency stand out. With its compiled nature and lightweight concurrency model, Go enables microservices to handle high traffic loads and concurrent requests with remarkable speed. This is crucial in a microservices ecosystem where responsiveness and low latency are paramount, allowing applications to efficiently manage numerous simultaneous interactions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go's simplicity and clean syntax contribute to rapid development and ease of maintenance. Microservices projects often involve multiple services, each with its own codebase. Go's straightforward syntax reduces the cognitive load on us, making it easier to write and understand code. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go's built-in testing and profiling tools add another layer of convenience, enabling us to ensure the reliability and performance of microservices throughout their lifecycle.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go's native support for concurrency through goroutines and channels is a game-changer for microservices architecture. Its concurrency primitives allow us to elegantly manage these tasks without the complexities of traditional threading. This results in applications that are both efficient and scalable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, the compact and self-contained nature of Go binaries simplifies the deployment of microservices. Go programs compile to standalone executables that include all their dependencies, eliminating the need to manage complex runtime environments. As a result, deploying, scaling, and managing individual microservices becomes smoother, reducing potential conflicts and streamlining the overall system architecture. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Overall, Go's performance, simplicity, concurrency, and deployment characteristics make it a great choice for architects and developers seeking to develop robust and responsive microservices systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Communication Between Microservices in Go
&lt;/h2&gt;

&lt;p&gt;When it comes to communication between microservices in the Go programming language, several strategies and tools are employed to ensure seamless interaction and data flow.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;HTTP:&lt;/strong&gt; Go's native support for HTTP is a natural fit for microservices communication. Leveraging the standard library's &lt;code&gt;net/http&lt;/code&gt; package, we can effortlessly create HTTP-based APIs that facilitate the exchange of data between services. With frameworks like Gorilla Mux, we can easily build sophisticated HTTP routing and middleware, streamlining the development of RESTful APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;gRPC:&lt;/strong&gt; A powerful communication mechanism in the Go ecosystem is the use of gRPC. Built on top of HTTP/2, gRPC offers efficient and low-latency communication by employing protocol buffers for serialization and deserialization. This approach is particularly advantageous in scenarios where high-performance, real-time communication is required.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Event-driven:&lt;/strong&gt; When aiming for event-driven communication or asynchronous messaging between microservices, Go's channels and goroutines are most useful. While not a dedicated messaging framework, Go's concurrency primitives provide a lightweight and intuitive way to establish communication patterns like publish-subscribe or request-reply queues. Messaging systems such as NATS or RabbitMQ are commonly used alongside Go to extend its capabilities in event-driven communication scenarios. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Go provides a robust foundation for creating effective communication pathways between microservices. Whether through traditional HTTP, gRPC, event-driven channels, or the assistance of orchestration tools, Go empowers us to build microservices systems that communicate seamlessly and reliably, fostering the growth and adaptability of modern software architectures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Securing Microservices using Go
&lt;/h2&gt;

&lt;p&gt;Securing microservices is of paramount importance in today's interconnected and distributed software landscape. The Go programming language offers a range of features and libraries that can be leveraged to fortify the security of microservices-based systems.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Authentication and authorization:&lt;/strong&gt; Go's ecosystem provides mature packages for implementing authentication mechanisms such as JWT (JSON Web Tokens) and OAuth2. These protocols enable services to validate the identity of users and grant them access based on predefined roles and permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplicity and readability:&lt;/strong&gt; The clear and concise syntax of Go makes it easier to write secure code by minimising the potential for common programming errors that could lead to vulnerabilities like injection attacks. Additionally, Go's type safety and memory management features help prevent buffer overflows and other memory-related vulnerabilities that can be exploited by attackers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Secure communication:&lt;/strong&gt; Go's support for HTTPS through the &lt;code&gt;net/http&lt;/code&gt; package enables services to establish encrypted connections using SSL/TLS protocols, safeguarding data in transit from eavesdropping and tampering. The &lt;code&gt;crypto&lt;/code&gt; package tree in the Go standard library provides a comprehensive set of cryptographic functions that can be used to implement hashing, encryption, and other security measures.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The fusion of microservices principles and the Go programming language has unveiled a powerful synergy, offering architects and developers the tools to create systems that adapt, scale, and withstand the challenges of today's complex digital world. From dissecting the intricacies of communication to ensuring security, monitoring, and debugging, we've delved into the core aspects that underpin the success of microservices architecture using Go.&lt;/p&gt;

&lt;p&gt;The realm of microservices is a space of both boundless opportunities and formidable challenges. With a proper understanding of service decomposition, communication patterns, fault tolerance, and the array of tools Go brings to the table, I hope you now feel more at ease exploring the avenues of building microservices using Go.&lt;/p&gt;

&lt;p&gt;Sumeet N.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>📦 Go Modules Demystified: Managing Dependencies the Right Way</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Mon, 02 Oct 2023 03:34:14 +0000</pubDate>
      <link>https://dev.to/letsdotech/go-modules-demystified-managing-dependencies-the-right-way-3680</link>
      <guid>https://dev.to/letsdotech/go-modules-demystified-managing-dependencies-the-right-way-3680</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tCZG6hDZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1573166364266-356ef04ae798%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHw5fHxzb2Z0d2FyZSUyMGRlc2lnbnxlbnwwfHx8fDE2OTM0MzMzMDl8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tCZG6hDZ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1573166364266-356ef04ae798%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHw5fHxzb2Z0d2FyZSUyMGRlc2lnbnxlbnwwfHx8fDE2OTM0MzMzMDl8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" alt="person writing on dry-erase board" title="person writing on dry-erase board" width="800" height="534"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by Christina @ wocintechchat.com on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the 5th post as part of the &lt;a href="https://blog.letsdote.ch/t/gotheme"&gt;Golang Theme&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The foundation of any robust software project depends not just on how optimally the code is written, but also on how seamlessly it integrates with external libraries, frameworks, and tools. This is where dependency management steps into the spotlight, which ensures our project's stability, scalability, and maintainability. In this post we explore dependency management in the context of the Go programming language.&lt;/p&gt;

&lt;p&gt;Go modules, introduced as the official method for managing dependencies, define the way we handle external packages. With Go modules, the process of acquiring, updating, and organizing dependencies is streamlined, making it easier than ever to maintain a clear and predictable project structure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Modules in Go
&lt;/h2&gt;

&lt;p&gt;Dependency management in Go is all about orchestrating the external libraries, frameworks, and tools that your project relies on. Effective dependency management ensures our project's stability, security, and longevity. As the Go ecosystem evolves and grows, managing dependencies becomes increasingly complex, necessitating a reliable solution that can navigate these intricacies seamlessly.&lt;/p&gt;

&lt;p&gt;Before the introduction of Go modules, managing dependencies in Go projects was far from straightforward. The Go community initially relied on the GOPATH environment variable to establish a unified directory structure for all Go code. However, this approach had its limitations. It posed challenges when different projects required different versions of the same library, leading to version conflicts and often leaving developers in dependency hell.&lt;/p&gt;

&lt;p&gt;Go 1.11, released in August 2018, introduced preliminary support for Go modules as the solution to the once-difficult task of dependency management; module mode later became the default in Go 1.16. Go modules are designed to streamline the way developers handle external packages, providing a systematic approach to versioning, compatibility, and collaboration.&lt;/p&gt;

&lt;p&gt;A Go module is a collection of related Go packages. It encapsulates not only the source code but also crucial metadata that defines the module's dependencies, version constraints, and other essential information. The main file of the Go module system is the &lt;code&gt;go.mod&lt;/code&gt; file, a declarative file that outlines the module's structure and its dependencies' specifications. This simple yet powerful file transforms the landscape of Go development by offering easy control over dependency management.&lt;/p&gt;
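&lt;p&gt;A minimal &lt;code&gt;go.mod&lt;/code&gt; looks like the sketch below; the module path and the single dependency are illustrative placeholders, not requirements of the format.&lt;/p&gt;

```
module example.com/myservice

go 1.21

require github.com/gorilla/mux v1.8.1
```

&lt;p&gt;The &lt;code&gt;module&lt;/code&gt; line names the import path of the module itself, the &lt;code&gt;go&lt;/code&gt; line records the language version, and each &lt;code&gt;require&lt;/code&gt; entry pins a dependency to a semantic version.&lt;/p&gt;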

&lt;h2&gt;
  
  
  Managing dependencies in Go
&lt;/h2&gt;

&lt;p&gt;The dependencies in Go modules are managed by the &lt;code&gt;go get&lt;/code&gt; command. This command not only fetches the desired package from a module proxy (or directly from its version control repository) but also updates the project's &lt;code&gt;go.mod&lt;/code&gt; file with the necessary information about the dependency. It eliminates the need for manual updates to dependency lists and version information.&lt;/p&gt;
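&lt;p&gt;In practice the day-to-day workflow is a handful of commands; the module path below is illustrative.&lt;/p&gt;

```shell
# Add a dependency at a specific version
go get github.com/gorilla/mux@v1.8.1

# Update it to the latest minor/patch release
go get -u github.com/gorilla/mux

# Prune unused requirements and refresh go.sum
go mod tidy
```

&lt;p&gt;Every fetch also records a checksum in &lt;code&gt;go.sum&lt;/code&gt;, so later builds can verify that the dependency's contents have not changed.&lt;/p&gt;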

&lt;p&gt;With Go modules, we pin each dependency to a specific semantic version in &lt;code&gt;go.mod&lt;/code&gt;, and Go's minimal version selection algorithm guarantees that builds are reproducible and use only the versions we have declared. This helps in mitigating the risk of unexpected breaking changes due to dependency updates. Further, Go modules adhere to semantic versioning principles, ensuring that version changes are communicated effectively and consistently.&lt;/p&gt;

&lt;p&gt;As the development landscape continues to evolve, Go modules provide a robust framework for not only adding and managing dependencies but also for adapting to changing requirements and maintaining the integrity of projects over time. This dynamic toolset empowers us to navigate the complex web of dependencies with confidence, fostering a collaborative and efficient environment for building remarkable Go applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Updating and Upgrading Go Modules
&lt;/h2&gt;

&lt;p&gt;When it comes to managing updates in Go modules, the process is designed to be both intuitive and efficient. The &lt;code&gt;go get&lt;/code&gt; command is our gateway to newer versions of our dependencies. By invoking this command with specific version constraints, we can ensure that only compatible updates are pulled, preserving the stability of our project.&lt;/p&gt;

&lt;p&gt;Once an update is fetched, Go modules automatically update our &lt;code&gt;go.mod&lt;/code&gt; file, reflecting the change and keeping track of version information. This smart integration simplifies the process and allows us to remain focused on building, without getting tangled in dependency management complexities.&lt;/p&gt;

&lt;p&gt;Navigating the realm of upgrades demands an understanding of semantic versioning. By adhering to its rules, we can confidently decide when to perform a major version upgrade; in Go, a new major version even lives at a new import path (for example, a &lt;code&gt;/v2&lt;/code&gt; suffix). Go modules' version constraints facilitate this process, allowing us to incrementally upgrade dependencies without triggering breaking changes in our project.&lt;/p&gt;
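&lt;p&gt;Because of semantic import versioning, a major upgrade is an explicit, opt-in step; the module path below is a made-up example.&lt;/p&gt;

```shell
# Minor and patch upgrades stay on the same import path
go get -u github.com/example/lib

# A major upgrade targets a new import path ending in /v2
go get github.com/example/lib/v2@v2.0.0
```

&lt;p&gt;Both major versions can even coexist in one build, which makes incremental migration across a large codebase possible.&lt;/p&gt;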

&lt;p&gt;As Go modules manage our updates, they also bring attention to indirect dependencies, an often-overlooked aspect of the dependency ecosystem. These are the dependencies that our direct dependencies rely on. Go modules automatically track and manage these indirect dependencies, ensuring that they're not just compatible with our project but also aligned with each other.&lt;/p&gt;

&lt;p&gt;By adhering to version constraints, semantic versioning, and automated dependency tracking, we can confidently navigate the process of keeping their projects current, secure, and ready to embrace new functionalities while maintaining the stability that is important to any successful software process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Vendor Directory and Dependency Isolation
&lt;/h2&gt;

&lt;p&gt;The vendor directory, a feature of Go modules, is a designated repository that holds all the dependencies required by a project. It acts as a shield against external changes, creating a self-sufficient environment where the project's dependencies are insulated from changes in the global package space.&lt;/p&gt;

&lt;p&gt;This isolation is a key factor in mitigating compatibility issues that can arise when different projects rely on different versions of the same dependency. By containing dependencies within the vendor directory, Go modules prevent unintended interactions and version collisions.&lt;/p&gt;

&lt;p&gt;The beauty of the vendor directory lies in its simplicity. When we run the &lt;code&gt;go mod vendor&lt;/code&gt; command, Go copies the exact versions of all dependencies recorded in &lt;code&gt;go.mod&lt;/code&gt; into the vendor directory. This means that the project can be built and run independently, regardless of the state of the user's module cache or network access.&lt;/p&gt;

&lt;p&gt;Additionally, when a vendor directory is present, Go prioritizes its contents over the module cache during compilation (the default behaviour since Go 1.14), ensuring that our project remains isolated from external changes and that its dependencies are consistently utilized.&lt;/p&gt;
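&lt;p&gt;Vendoring is two commands in practice:&lt;/p&gt;

```shell
# Copy every dependency recorded in go.mod into ./vendor
go mod vendor

# Build explicitly from the vendored copies
# (automatic when vendor/ exists, since Go 1.14)
go build -mod=vendor ./...
```

&lt;p&gt;Committing the &lt;code&gt;vendor&lt;/code&gt; directory is a deliberate trade-off: repositories grow larger, but builds no longer depend on external module proxies being reachable.&lt;/p&gt;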

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The importance of effective dependency management cannot be overstated, as it directly impacts the stability, maintainability, and collaboration potential of projects. Go modules are necessary since they address the limitations of the traditional GOPATH approach and align seamlessly with the needs of modern development.&lt;/p&gt;

&lt;p&gt;Go modules not only simplify the process of handling dependencies but also enhance version compatibility, reproducibility, and security in our projects. From creating new projects to updating dependencies, supporting team collaborations, and migrating legacy projects, Go modules offer a comprehensive toolkit that empowers us to overcome challenges and build reliable software.&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;For further exploration and learning, consider these additional resources:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://blog.golang.org/using-go-modules"&gt;The Go Blog: Using Go Modules&lt;/a&gt;&lt;/strong&gt; - A comprehensive guide from the official Go blog on using Go modules, including information on the vendor directory and dependency management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/golang/go/wiki/Modules"&gt;Go Modules Wiki&lt;/a&gt;&lt;/strong&gt; - The official Go wiki page on Go modules, providing detailed information about the vendor directory, dependency isolation, and best practices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/go-modules-by-example/index"&gt;Go Modules by Example&lt;/a&gt;&lt;/strong&gt; - A collection of practical examples and use cases for Go modules, including insights into how the vendor directory contributes to dependency isolation.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Sumeet N.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>❌ Error Handling in Go: Strategies for Writing Robust and Maintainable Code</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Mon, 25 Sep 2023 03:37:08 +0000</pubDate>
      <link>https://dev.to/letsdotech/error-handling-in-go-strategies-for-writing-robust-and-maintainable-code-4ef0</link>
      <guid>https://dev.to/letsdotech/error-handling-in-go-strategies-for-writing-robust-and-maintainable-code-4ef0</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--sVrJg3ru--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1532003885409-ed84d334f6cc%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHw2Nnx8ZXJyb3J8ZW58MHx8fHwxNjkzMDk3NzY5fDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--sVrJg3ru--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1532003885409-ed84d334f6cc%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHw2Nnx8ZXJyb3J8ZW58MHx8fHwxNjkzMDk3NzY5fDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" alt="closed white steel gate" title="closed white steel gate" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by Nathan Dumlao on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the 4th post as part of the &lt;a href="https://blog.letsdote.ch/t/gotheme"&gt;Golang Theme&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Error handling is a critical aspect of software development in Go, playing an important role in creating robust, reliable, and maintainable programs. In Go, errors are considered a first-class citizen rather than an afterthought, emphasizing the importance of gracefully handling unexpected situations. This approach encourages us to confront potential issues head-on, leading to more resilient codebases.&lt;/p&gt;

&lt;p&gt;One of the main reasons error handling is crucial in Go is that it promotes program stability. By explicitly addressing errors, we can prevent unhandled failures and panics that might lead to program crashes or unpredictable behavior.&lt;/p&gt;

&lt;p&gt;Go's emphasis on checking and handling errors at the point of occurrence encourages programmers to anticipate failure scenarios and handle them gracefully, ensuring that a single faulty operation doesn't jeopardize the entire application.&lt;/p&gt;

&lt;p&gt;Furthermore, effective error handling contributes to code readability and maintainability. Clear and concise error messages facilitate troubleshooting and debugging. When errors are properly handled and reported, it becomes easier to diagnose issues during development and in production environments, reducing the time spent on identifying and rectifying problems.&lt;/p&gt;

&lt;p&gt;Additionally, comprehensive error handling allows us to make informed decisions about how to proceed when things go wrong, whether that involves retrying an operation, falling back to an alternative approach, or alerting administrators about critical failures.&lt;/p&gt;

&lt;p&gt;By dealing with errors proactively, we ensure that our applications are more predictable, reliable, and user-friendly, ultimately leading to higher quality software products.&lt;/p&gt;

&lt;h2&gt;
  
  
  Approaches To Handle Go Errors
&lt;/h2&gt;

&lt;p&gt;As mentioned earlier, error handling is a first-class citizen, and the language provides several approaches to handle errors effectively and gracefully. Here are some common approaches to handling errors in Go:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Return Error Values:&lt;/strong&gt; This is the most straightforward approach, where functions return both their usual result and an error value. If the function executes successfully, the error is typically &lt;code&gt;nil&lt;/code&gt;; otherwise, an error value containing relevant information is returned. The calling code can then check the error value and take appropriate action.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Panic and Recover:&lt;/strong&gt; While not recommended for routine error handling, we can use &lt;code&gt;panic&lt;/code&gt; to stop normal execution of a function and initiate a panic, and &lt;code&gt;recover&lt;/code&gt; to capture and handle this panic, allowing the program to continue running. This approach is more suitable for catastrophic errors. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Error Types:&lt;/strong&gt; Go allows us to define custom error types by implementing the &lt;code&gt;error&lt;/code&gt; interface. This enables us to create more informative error messages or group related errors together. This is particularly useful when we need to distinguish between different types of errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Error Wrapping and Propagation:&lt;/strong&gt; Sometimes, we might need to wrap errors to provide additional context about where the error occurred. In Go, an error is wrapped using &lt;code&gt;fmt.Errorf&lt;/code&gt; with the &lt;code&gt;%w&lt;/code&gt; verb, and the &lt;code&gt;errors&lt;/code&gt; package provides the &lt;code&gt;Unwrap&lt;/code&gt;, &lt;code&gt;Is&lt;/code&gt;, and &lt;code&gt;As&lt;/code&gt; functions to inspect the resulting chain. This helps to preserve the error chain while enriching it with more information.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These approaches provide various levels of granularity and control over error handling in Go. Choosing the appropriate approach depends on the nature of the error and the context in which it's being handled.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Types And Assertions
&lt;/h2&gt;

&lt;p&gt;Error types and assertions are mechanisms used for managing and processing errors in a more structured and informative manner.&lt;/p&gt;

&lt;h3&gt;
  
  
  Error Types
&lt;/h3&gt;

&lt;p&gt;In Go, an error is not just a simple string but a value of an interface type called &lt;code&gt;error&lt;/code&gt;. The &lt;code&gt;error&lt;/code&gt; interface has a single method:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type error interface {
    Error() string
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This means that any type that implements a method named &lt;code&gt;Error()&lt;/code&gt; that returns a string can be used as an error. This provides the flexibility to create custom error types that carry additional information beyond a simple error message. By defining custom error types, we can include more context about the error, making it easier to identify the source and nature of the problem.&lt;/p&gt;

&lt;p&gt;Here's an example of defining and using a custom error type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;type MyError struct {
    Message string
}

func (e MyError) Error() string {
    return e.Message
}

func someFunction() error {
    return MyError{"This is a custom error."}
}

func main() {
    err := someFunction()
    if err != nil {
        fmt.Println("Error:", err)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Assertions
&lt;/h3&gt;

&lt;p&gt;Assertions, also known as type assertions, are used to extract the underlying value of an interface and check its concrete type. This is especially useful when we work with interface values like error types and need to access the methods or properties of the concrete type. Assertions allow us to safely convert an interface value to its concrete type.&lt;/p&gt;

&lt;p&gt;Assertions are performed using the syntax &lt;code&gt;(value).(Type)&lt;/code&gt;. In the single-value form, a failed assertion causes a runtime panic; the two-value form &lt;code&gt;v, ok := value.(Type)&lt;/code&gt; instead sets &lt;code&gt;ok&lt;/code&gt; to false, letting us handle the mismatch gracefully.&lt;/p&gt;

&lt;p&gt;Here's an example of using assertions with error types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func main() {
    err := someFunction()

    // Check if the error is of type MyError
    if myErr, ok := err.(MyError); ok {
        fmt.Println("Custom error:", myErr.Message)
    } else {
        fmt.Println("Generic error:", err)
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the code attempts to assert the error returned by &lt;code&gt;someFunction()&lt;/code&gt; into a &lt;code&gt;MyError&lt;/code&gt; type. If the assertion is successful, it prints the custom error message; otherwise, it treats the error as a generic error.&lt;/p&gt;

&lt;p&gt;Both custom error types and assertions contribute to clearer error handling by allowing us to encapsulate error-specific information and safely work with interface values. This leads to more informative error messages and better debugging capabilities in our Go programs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Error Management Strategies
&lt;/h2&gt;

&lt;p&gt;Error management strategies are employed to handle unexpected situations, errors, and exceptions that can occur during the execution of a program. These strategies are essential for creating reliable, robust, and maintainable software. Some of the error management strategies used while programming in Go are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Return Errors Explicitly:&lt;/strong&gt; Go encourages functions to return both the primary result and an error. Errors are returned as a separate return value, often the last one, allowing callers to check for errors directly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Named Return Values:&lt;/strong&gt; Named return values in Go are declared in the function signature and initialised to their zero values. They simplify error handling by removing the need to declare separate variables for results and errors, and they let deferred functions modify the returned error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check Errors Immediately:&lt;/strong&gt; Errors should be checked and handled as close to their origin as possible. This prevents errors from propagating through multiple layers of code and makes error handling more explicit.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wrap Errors:&lt;/strong&gt; The &lt;code&gt;fmt.Errorf&lt;/code&gt; function, combined with the &lt;code&gt;%w&lt;/code&gt; verb, allows us to wrap errors with additional context. This helps to provide more meaningful error messages without losing the original error information.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Custom Error Types:&lt;/strong&gt; Go allows us to define our own error types by implementing the &lt;code&gt;error&lt;/code&gt; interface. This is useful when we want to categorize or differentiate between different types of errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Defer and Clean-Up:&lt;/strong&gt; The &lt;code&gt;defer&lt;/code&gt; statement is used to ensure that certain clean-up actions, such as closing files or releasing resources, are performed even if an error occurs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Logging and Debugging:&lt;/strong&gt; Go's standard library provides a robust logging package (&lt;code&gt;log&lt;/code&gt;) that is used to log errors and other relevant information. Debuggers and profilers are also utilised for diagnosing errors during development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Graceful Degradation:&lt;/strong&gt; Designing systems to handle errors gracefully and continue functioning with degraded features is an important strategy in Go, especially for distributed systems.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Go's error handling philosophy centers around explicit error checking, simplicity, and transparency. By following these strategies, we can create reliable, maintainable, and robust code that handles errors effectively while maintaining a clear and readable code structure.&lt;/p&gt;

&lt;p&gt;Sumeet N.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Exploring the Power of Interfaces in Go: Polymorphism Simplified 🏷️</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Mon, 18 Sep 2023 03:37:17 +0000</pubDate>
      <link>https://dev.to/letsdotech/exploring-the-power-of-interfaces-in-go-polymorphism-simplified-cl1</link>
      <guid>https://dev.to/letsdotech/exploring-the-power-of-interfaces-in-go-polymorphism-simplified-cl1</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--YnKzz6Tk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1535054820380-92c41678b087%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwzM3x8dHlwZXN8ZW58MHx8fHwxNjkzMDkzNDA2fDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--YnKzz6Tk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1535054820380-92c41678b087%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwzM3x8dHlwZXN8ZW58MHx8fHwxNjkzMDkzNDA2fDA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" alt="flat-lay photography of stamp lot" title="flat-lay photography of stamp lot" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by Kristian Strand on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the 3rd post as part of the &lt;a href="https://blog.letsdote.ch/t/gotheme"&gt;Golang Theme&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;An interface is a type that specifies a set of method signatures that any type implementing the interface must provide. It is a core concept in Go's type system and is a building block used to achieve polymorphism and abstraction. The following points help explain interfaces better.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;An interface defines a contract by specifying a list of method signatures (function prototypes) without any implementation details. Any type that implements all the methods listed in the interface is said to satisfy or implement that interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unlike some other programming languages, Go's interfaces are implemented implicitly. If a type has methods with the exact method signatures defined in an interface, it is automatically considered to implement that interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Go's approach to interfaces is sometimes called "structural typing" or "duck typing." This means that a type is considered to implement an interface based on the methods it has, rather than being explicitly declared to implement the interface.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Consider an example of a “Shape” interface in Go that declares an “Area()” method. Further, two concrete types - “Circle” and “Rectangle” - each implement a method with that same signature. The Circle and Rectangle types are then said to implicitly satisfy the Shape interface.&lt;/p&gt;

&lt;p&gt;Interfaces play a significant role in Go's philosophy of simplicity and composition, enabling code to be more modular, testable, and adaptable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Do We Need Interfaces?
&lt;/h2&gt;

&lt;p&gt;Interfaces in Go have several important uses that contribute to the language's design principles and capabilities.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Polymorphism and Abstraction:&lt;/strong&gt; Interfaces enable polymorphism, which means we can write functions and methods that can work with different types that implement the same interface. This promotes code reuse and allows us to write more generic and flexible code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Decoupling:&lt;/strong&gt; Interfaces help decouple different parts of our codebase. When we write code that depends on interfaces rather than concrete types, we create a separation between the implementation details and the parts of the code that use those implementations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Composition:&lt;/strong&gt; Interfaces encourage composition over inheritance. Instead of building deep inheritance hierarchies, we can compose types by combining smaller interfaces. This approach is often more flexible and easier to manage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Testability:&lt;/strong&gt; Interfaces make it easier to write unit tests and mock implementations. We can create mock implementations of interfaces to isolate and test specific components of our code without relying on real implementations.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexibility and Future-Proofing:&lt;/strong&gt; Interfaces allow us to write code that's more adaptable to changes. If we later need to introduce a new type that satisfies an existing interface, we can seamlessly integrate it without modifying existing code.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Third-Party Libraries:&lt;/strong&gt; Interfaces facilitate the integration of third-party libraries. If a library defines interfaces, we can implement those interfaces to customize or extend the library's functionality.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic Behavior:&lt;/strong&gt; Interfaces provide a way to achieve dynamic behavior in Go. This is particularly useful when working with unknown types at runtime, as in scenarios involving reflection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Code Contracts:&lt;/strong&gt; Interfaces serve as contracts that define what behavior a type must provide. This makes it clear to developers what methods a type should implement to satisfy a particular interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Design:&lt;/strong&gt; Interfaces play a crucial role in designing clean and usable APIs. They allow us to define the core behaviours that our types should provide, promoting consistency and ease of use.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Avoiding Tight Coupling:&lt;/strong&gt; By programming to interfaces rather than concrete types, we avoid tightly coupling different parts of our application. This makes our codebase more modular and maintainable.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Interfaces in Go contribute to the language's simplicity, flexibility, and focus on clean code design. They encourage practices that lead to better software engineering principles, such as separation of concerns, testability, and maintainability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interfaces vs. Concrete Types
&lt;/h2&gt;

&lt;p&gt;Interfaces and concrete types represent different aspects of data and behavior abstraction, and understanding their distinctions is important for designing effective and maintainable software.&lt;/p&gt;

&lt;p&gt;Concrete types are the building blocks of data in a program. They define the structure and attributes of a specific object or entity. A concrete type provides the blueprint for creating instances with specific data and methods. These types are used to represent &lt;strong&gt;tangible entities and encapsulate their properties and behaviors&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Interfaces define a &lt;strong&gt;contract for behavior&lt;/strong&gt;. An interface specifies a set of method signatures that any type can choose to implement, and it serves as a guarantee that certain methods will be available on a type, irrespective of its concrete nature. Interfaces are used to establish common ground so that various unrelated types can interact with the rest of the program in a consistent manner - aka polymorphism.&lt;/p&gt;

&lt;p&gt;Concrete types provide the specific implementation details and encapsulate data, while interfaces establish a shared language for communication between different parts of the application. This separation promotes loose coupling, allowing different components to work together without needing to know the intricate details of each other.&lt;/p&gt;

&lt;p&gt;Interfaces are implemented implicitly, meaning there's no need to declare that a type explicitly implements an interface. As long as a type defines all the methods in an interface, it automatically satisfies that interface. This design encourages a cleaner codebase, making it easy to add new implementations without modifying existing code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Empty Interfaces (interface{})
&lt;/h2&gt;

&lt;p&gt;Empty interfaces, often denoted as &lt;code&gt;interface{}&lt;/code&gt;, are a unique and powerful feature in Go. Unlike regular interfaces that define a set of required methods, an empty interface doesn't have any method requirements. This means that any value in Go can be assigned to an empty interface.&lt;/p&gt;

&lt;p&gt;The flexibility of empty interfaces allows us to work with values of unknown types, making them a powerful tool for handling dynamic and heterogeneous data. They are commonly used in scenarios where we need to create generic functions that can work with a wide range of data types.&lt;/p&gt;

&lt;p&gt;Another important use case for empty interfaces is in reflection, a mechanism that enables programs to inspect their own structure and data at runtime. When combined with reflection, empty interfaces allow us to dynamically examine the type and structure of an unknown value. This is useful when writing code that needs to handle arbitrary data coming from various sources.&lt;/p&gt;

&lt;p&gt;However, it's important to use empty interfaces judiciously. While they offer flexibility, they can also lead to less type safety, as type information is lost when a value is assigned to an empty interface. This leads to runtime errors if the actual types don't match the expected behavior.&lt;/p&gt;
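&lt;p&gt;A small sketch of that type-check discipline (the &lt;code&gt;describe&lt;/code&gt; function is illustrative): a type switch recovers the concrete type safely, with a default branch instead of a runtime panic.&lt;/p&gt;

```go
package main

import "fmt"

// describe accepts any value through the empty interface and uses a
// type switch to recover the concrete type without risking a panic.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %q", x)
	default:
		return fmt.Sprintf("unhandled type: %T", x)
	}
}

func main() {
	fmt.Println(describe(42))
	fmt.Println(describe("hello"))
	fmt.Println(describe(3.14))
}
```

&lt;p&gt;A bare type assertion like &lt;code&gt;v.(int)&lt;/code&gt; panics on a mismatch; the switch (or the two-value form &lt;code&gt;x, ok := v.(int)&lt;/code&gt;) is the safe way to cross back from the empty interface to a concrete type.&lt;/p&gt;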

&lt;p&gt;Empty interfaces provide a dynamic and versatile way to work with values of varying types, making them valuable in scenarios that require handling heterogeneous data or using reflection. However, care should be taken to ensure proper type checks and assertions to avoid runtime errors.&lt;/p&gt;

&lt;p&gt;Sumeet N.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building ⚡ Lightning-Fast APIs ⚡ with Go: A Comprehensive Guide</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Mon, 11 Sep 2023 03:36:08 +0000</pubDate>
      <link>https://dev.to/letsdotech/building-lightning-fast-apis-with-go-a-comprehensive-guide-152l</link>
      <guid>https://dev.to/letsdotech/building-lightning-fast-apis-with-go-a-comprehensive-guide-152l</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--i1CunKpT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1558980664-769d59546b3d%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHw5fHxmYXN0JTIwYXBpfGVufDB8fHx8MTY5MzA5MDc2M3ww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--i1CunKpT--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1558980664-769d59546b3d%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHw5fHxmYXN0JTIwYXBpfGVufDB8fHx8MTY5MzA5MDc2M3ww%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" alt="person riding cruiser motorcycle during daytime" title="person riding cruiser motorcycle during daytime" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by Harley-Davidson on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the 2nd post as part of the &lt;a href="https://blog.letsdote.ch/t/gotheme"&gt;Golang Theme&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The lightning-fast speeds and reduced latency of 5G have ushered in a new era of real-time data exchange, prompting APIs to evolve accordingly. APIs now need to support the seamless and instantaneous transfer of larger data volumes, catering to the increased demand for high-quality multimedia content and dynamic interactions.&lt;/p&gt;

&lt;p&gt;This demands a reevaluation of API design, necessitating the creation of endpoints that can handle the surge in data traffic without sacrificing performance. Modern APIs must prioritize efficiency, scalability, and low latency, ensuring that applications can leverage the technology's capabilities to their fullest extent.&lt;/p&gt;

&lt;p&gt;In this post, we will explore how to build lightning fast APIs using Go programming language.&lt;/p&gt;

&lt;h2&gt;
  
  
  Factors Contributing to API Performance
&lt;/h2&gt;

&lt;p&gt;API performance is a multidimensional concept that encompasses factors like throughput, request/response times, and latency benchmarks. Developers must consider these factors while designing and optimizing APIs to ensure they can handle the demands of modern applications, deliver a seamless user experience, and leverage the capabilities of technologies like 5G to their fullest extent.&lt;/p&gt;

&lt;p&gt;API performance directly impacts user experience, application responsiveness, and overall system efficiency. Several factors come into play when considering API performance, and understanding these factors is essential for creating responsive and reliable applications.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Throughput:&lt;/strong&gt; This refers to the number of requests an API can handle within a given time frame. High throughput indicates that the API can efficiently process numerous requests concurrently. It is especially crucial when the API handles a substantial number of simultaneous connections, for example during peak usage periods.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request-Response Time:&lt;/strong&gt; Request time is the duration it takes for a client to send a request to the API, while response time is the duration it takes for the API to process the request and send a response back to the client. Low request and response times are essential for delivering a seamless user experience, especially in interactive applications where users expect quick results.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Latency:&lt;/strong&gt; This refers to the time it takes for data to travel from the client to the server and back. With the advent of technologies like 5G, where low latency is a hallmark, APIs must strive to minimize latency to provide real-time and interactive experiences.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Several factors can influence API performance, including the hardware and infrastructure on which the API runs, the &lt;strong&gt;efficiency of the code&lt;/strong&gt;, the complexity of the database queries, and the network conditions. Scalability, caching mechanisms, data compression, and optimized algorithms can all contribute to improved performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Designing Efficient APIs in Go
&lt;/h2&gt;

&lt;p&gt;Designing efficient APIs in Go requires a deep understanding of the language's features and its concurrency model. By leveraging features like strong typing, composition, concurrency, and efficient memory management, we can create APIs that leverage Go's strengths for optimal performance. Here are some key principles and practices to keep in mind:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Strong Typing and Structs:&lt;/strong&gt; Go's &lt;a href="https://dev.to/letsdotech/chatgpt-to-process-rest-responses-in-golang-54mf"&gt;strong typing and struct&lt;/a&gt; support allow you to define well-structured data models. Design your API endpoints to work with well-defined structs, making data handling more efficient and reducing the risk of type-related errors.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Favor Composition over Inheritance:&lt;/strong&gt; Go does not support traditional class-based inheritance. Instead, it promotes composition through embedding structs. This approach encourages clean and modular code, which can lead to more efficient APIs by minimizing unnecessary overhead.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Concurrency with Goroutines:&lt;/strong&gt; Go's concurrency model is centered around goroutines and channels. Utilize goroutines to handle concurrent tasks efficiently. For example, processing incoming requests concurrently, enabling better utilization of resources and improved response times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimize for the Heap:&lt;/strong&gt; Go's garbage collector can impact performance. Minimize unnecessary memory allocation by reusing objects and using object pooling when appropriate. This reduces the load on the garbage collector and improves overall throughput.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keep Dependencies Minimal:&lt;/strong&gt; Go's philosophy encourages minimal dependencies. Only import packages that are essential for the API's functionality. Excessive dependencies can bloat the codebase and increase startup times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Benchmarking:&lt;/strong&gt; The &lt;code&gt;testing&lt;/code&gt; package in Go includes benchmarking tools. Regularly run benchmarks to identify performance bottlenecks and track improvements. This helps us make data-driven decisions to optimize our API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Profiling:&lt;/strong&gt; Go's built-in profiling tools (like the pprof package) allows us to analyze our code's performance. Profiling helps pinpoint hotspots and bottlenecks, guiding our optimization efforts effectively.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Architectural Considerations For Improving API Performance
&lt;/h2&gt;

&lt;p&gt;Any discussion about system performance is incomplete without discussing the architectural details. Some considerations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minimalist Design:&lt;/strong&gt; Keep your API design simple and focused. Avoid unnecessary endpoints and minimize data transferred in each request. A minimalist design reduces processing time and response payload size.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Caching Strategies:&lt;/strong&gt; Utilize caching for frequently requested data. Employ edge caching, in-memory caching (Redis), or content delivery networks (CDNs) to serve cached content quickly and reduce the load on the server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Content Compression:&lt;/strong&gt; Compress response payloads using techniques like Gzip or Brotli. This significantly reduces data transfer time and enhances API response speed, especially for clients with limited bandwidth.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Efficient Data Transfer Formats:&lt;/strong&gt; Use lightweight data interchange formats like JSON or Protocol Buffers. Minimize unnecessary fields and nested structures to decrease serialization and deserialization times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimized Database Queries:&lt;/strong&gt; Optimize database queries by using appropriate indexes, avoiding N+1 query issues, and employing database caching. Well-structured and efficient queries enhance response times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Asynchronous Processing:&lt;/strong&gt; Offload non-critical tasks to asynchronous processing to free up the API to handle incoming requests promptly. Utilize message queues or event-driven architectures for efficient background processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Microservices with Service Segmentation:&lt;/strong&gt; Employ microservices architecture to segment functionality into discrete services. This allows each microservice to be optimized individually, leading to better performance and scalability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Connection Pooling:&lt;/strong&gt; Use connection pooling for databases and external services. Reusing established connections reduces the overhead of creating new connections for each request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Load Balancing:&lt;/strong&gt; Distribute traffic across multiple server instances using load balancers. This prevents overloading a single server and ensures even resource utilization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Gateway for Aggregation:&lt;/strong&gt; Implement an API gateway for aggregating requests to multiple microservices. This reduces the number of client-server round trips and minimizes latency.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Announcement:
&lt;/h4&gt;

&lt;blockquote&gt;
&lt;h5&gt;
  
  
  I have enabled the paywall on this publication. To access posts older than 2 months, the paywall can be lifted temporarily for free by referring this newsletter to more friends. Check out the options below.
&lt;/h5&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://blog.letsdote.ch/leaderboard?&amp;amp;utm_source=post"&gt;Refer a friend&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sumeet N.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Mastering Concurrency ⚔️: A Dive into Go's Goroutines and Channels</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Mon, 04 Sep 2023 03:36:07 +0000</pubDate>
      <link>https://dev.to/letsdotech/mastering-concurrency-a-dive-into-gos-goroutines-and-channels-2gme</link>
      <guid>https://dev.to/letsdotech/mastering-concurrency-a-dive-into-gos-goroutines-and-channels-2gme</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--TILP84Jx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1582213782179-e0d53f98f2ca%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx0b2dldGhlcnxlbnwwfHx8fDE2OTMyNTgxNjR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--TILP84Jx--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1582213782179-e0d53f98f2ca%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwxfHx0b2dldGhlcnxlbnwwfHx8fDE2OTMyNTgxNjR8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" alt="person in red sweater holding babys hand" title="person in red sweater holding babys hand" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by Hannah Busing on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the first one in the series of Golang &lt;a href="https://dev.to/letsdotech/some-updates-from-ldt-34ap-temp-slug-8274605"&gt;Themed articles&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;One of the most &lt;strong&gt;in-demand skills in the cloud-native&lt;/strong&gt; community is the ability to write concurrent programs that leverage all the multi-core processing power today’s hardware has to offer. Golang was developed with concurrency in mind.&lt;/p&gt;

&lt;p&gt;Digital transformation does not just mean moving to cloud platforms. It is a continuous process undertaken by organizations to constantly optimize their IT spend - and not just organizations: solo businesses and startups care just as much. Who does not want to save money?&lt;/p&gt;

&lt;p&gt;In this post, we will understand the concepts related to Golang’s concurrency, the benefits in the context of cloud architecture, and also explore a few patterns to use concurrency in Golang.&lt;/p&gt;

&lt;h2&gt;
  
  
  Goroutines vs. Threads
&lt;/h2&gt;

&lt;p&gt;Before we move ahead, it is important to understand the difference between threads and goroutines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Threads are managed by the OS, while goroutines are managed by the Go runtime.&lt;/strong&gt; Threads are single units of execution within a process - multiple threads belong to a process, and they all share its resources. Goroutines are independently scheduled functions that the Go runtime multiplexes onto a small pool of OS threads. They still run in a shared address space, but idiomatic Go passes data between goroutines over channels instead of sharing memory directly.&lt;/p&gt;

&lt;p&gt;Because of this, channel-based code is less prone to the deadlocks and race conditions that plague shared-memory threading. Since the Go runtime manages goroutines, they offer a higher level of abstraction - and are thus easier for developers to work with - whereas threads expose a lower level of abstraction.&lt;/p&gt;

&lt;p&gt;This managed nature gives goroutines a great advantage over threads: the runtime schedules them onto OS threads automatically. For developers, it &lt;strong&gt;reduces the cognitive load&lt;/strong&gt; of managing threads and shared resources by hand, and it keeps goroutines lightweight, so instantiation and context switching are fast.&lt;/p&gt;

&lt;h2&gt;
  
  
  What should we know to understand Golang concurrency?
&lt;/h2&gt;

&lt;p&gt;Goroutines and channels are fundamental concurrency constructs in the Go programming language, designed to &lt;strong&gt;simplify and enhance the development of concurrent and parallel applications&lt;/strong&gt;. They allow us to achieve concurrency by enabling multiple tasks to be executed concurrently without the need for creating separate threads or managing complex synchronization mechanisms.&lt;/p&gt;

&lt;p&gt;Channels provide a safe and structured way for goroutines to communicate and synchronize their actions. They act as &lt;strong&gt;pipes for the exchange of data&lt;/strong&gt; between goroutines, avoiding race conditions by enforcing a model where data is sent on a channel by one goroutine and received by another in a coordinated manner, resulting in clean and clear communication between concurrent tasks. Channels are used both for data sharing and for signaling, allowing goroutines to coordinate their actions and operate in a synchronized way.&lt;/p&gt;

&lt;p&gt;The combination of goroutines and channels enables us to write concurrent programs that are both efficient and comprehensible, fostering easier maintenance and debugging of complex parallel applications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top 5 Benefits of Goroutines in cloud architecture
&lt;/h2&gt;

&lt;p&gt;Backend applications built using Golang see a real impact on reliability and resource optimization - in a good way!&lt;/p&gt;

&lt;h4&gt;
  
  
  Efficient Concurrency
&lt;/h4&gt;

&lt;p&gt;Goroutines are designed to be lightweight and have a lower memory footprint compared to traditional threads. This efficiency makes it easier to handle a large number of concurrent tasks in cloud applications without consuming excessive resources.&lt;/p&gt;

&lt;h4&gt;
  
  
  Scalability
&lt;/h4&gt;

&lt;p&gt;Cloud architectures often require the ability to scale resources up or down dynamically based on demand. Goroutines can help distribute workloads efficiently, enabling cloud applications to handle increased traffic and workload while reducing the need to provision or manage additional VM instances.&lt;/p&gt;

&lt;h4&gt;
  
  
  Parallelism
&lt;/h4&gt;

&lt;p&gt;Goroutines allow for easy parallelism by executing tasks concurrently. This leads to improved performance for tasks that are divided into smaller subtasks, such as data processing, image manipulation, or network requests. In a cloud environment, this leads to faster response times and optimized resource utilization.&lt;/p&gt;

&lt;h4&gt;
  
  
  Cost Optimization
&lt;/h4&gt;

&lt;p&gt;Cloud services are billed based on resource usage. By utilizing goroutines, applications can make better use of available resources, optimizing the utilization of CPU cores and memory. This efficiency can result in cost savings in cloud deployments.&lt;/p&gt;

&lt;h4&gt;
  
  
  Resource Pooling
&lt;/h4&gt;

&lt;p&gt;In cloud architectures, resources like database connections or network sockets need to be managed efficiently. Goroutines are used to manage resource pooling, allowing multiple tasks to share limited resources effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  Top 3 Frequently used concurrency patterns in Golang
&lt;/h2&gt;

&lt;p&gt;As a very basic example, if you prefix a function call with the &lt;code&gt;go&lt;/code&gt; keyword, it will be executed concurrently with the calling function. Let us take a look at some of the advanced patterns below.&lt;/p&gt;
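&lt;p&gt;That basic form looks like this (&lt;code&gt;greetAll&lt;/code&gt; is an illustrative helper; the channel lets the caller collect results so the program does not exit before the goroutines finish):&lt;/p&gt;

```go
package main

import "fmt"

// greetAll launches one goroutine per name and gathers the greetings.
// The buffered channel lets every goroutine send without blocking.
func greetAll(names []string) []string {
	results := make(chan string, len(names))
	for _, n := range names {
		go func(name string) { // `go` runs this call concurrently
			results <- "hello, " + name
		}(n)
	}
	out := make([]string, 0, len(names))
	for range names { // receive exactly one result per goroutine
		out = append(out, <-results)
	}
	return out
}

func main() {
	fmt.Println(greetAll([]string{"ops", "dev", "sre"}))
}
```

&lt;p&gt;Note that the order of results is not deterministic - the goroutines finish in whatever order the scheduler runs them.&lt;/p&gt;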

&lt;h4&gt;
  
  
  Producer-Consumer
&lt;/h4&gt;

&lt;p&gt;As the name suggests, the producer function generates input data and passes it via a channel to the consumer function. The consumer function concurrently processes whatever data arrives on the channel. There can be multiple producers, and a single consumer can take care of all the input data.&lt;/p&gt;
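&lt;p&gt;A sketch of the pattern with one producer and one consumer (&lt;code&gt;produce&lt;/code&gt; and &lt;code&gt;consume&lt;/code&gt; are illustrative names; closing the channel is what tells the consumer there is no more input):&lt;/p&gt;

```go
package main

import "fmt"

// produce generates input values and closes the channel when done,
// so the consumer's range loop knows when to stop.
func produce(nums []int, out chan<- int) {
	for _, n := range nums {
		out <- n
	}
	close(out)
}

// consume processes every value that arrives on the channel.
func consume(in <-chan int) int {
	sum := 0
	for n := range in {
		sum += n
	}
	return sum
}

func main() {
	ch := make(chan int)
	go produce([]int{1, 2, 3, 4}, ch) // producer runs concurrently
	fmt.Println(consume(ch))          // consumer drains the channel
}
```

&lt;p&gt;With multiple producers, the close would move to the coordinating function (for example after a &lt;code&gt;sync.WaitGroup&lt;/code&gt; confirms all producers have finished), since only one goroutine may close a channel.&lt;/p&gt;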

&lt;h4&gt;
  
  
  Worker Pools
&lt;/h4&gt;

&lt;p&gt;In this pattern, the logic to process the input data is wrapped in a separate “worker” function. The calling function&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;divides the input data into multiple batches&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;creates input channel&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;calls the worker function with each batch of input data&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
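&lt;p&gt;The steps above can be sketched as follows (&lt;code&gt;worker&lt;/code&gt; and &lt;code&gt;runPool&lt;/code&gt; are illustrative names; squaring stands in for real processing):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// worker pulls jobs from the shared channel until it is closed.
func worker(jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		results <- j * j // the "processing" step
	}
}

// runPool fans the job stream across a fixed number of workers.
func runPool(nums []int, workers int) []int {
	jobs := make(chan int)
	results := make(chan int, len(nums))
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go worker(jobs, results, &wg)
	}
	for _, n := range nums {
		jobs <- n
	}
	close(jobs) // signals workers to exit their range loops
	wg.Wait()
	close(results)
	out := make([]int, 0, len(nums))
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(runPool([]int{1, 2, 3, 4}, 2))
}
```

&lt;p&gt;The pool size caps concurrency: however many jobs arrive, at most &lt;code&gt;workers&lt;/code&gt; goroutines process them at once.&lt;/p&gt;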

&lt;h4&gt;
  
  
  Fan-Out, Fan-In
&lt;/h4&gt;

&lt;p&gt;As in the Worker Pools pattern, a worker function is present here as well. The difference is that the calling function waits to aggregate the results from all the workers (fan-in). The calling function&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;divides the input data into multiple batches&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;creates input channel and results channel&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;passes each batch of data to the worker function using go keyword to induce concurrent execution&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;waits for all the workers to return results on results channel&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
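&lt;p&gt;The four steps above can be sketched as follows (&lt;code&gt;fanOutIn&lt;/code&gt; is an illustrative name; each batch is summed concurrently and the partial sums are aggregated on the results channel):&lt;/p&gt;

```go
package main

import "fmt"

// fanOutIn processes each pre-divided batch in its own goroutine
// (fan-out), then aggregates the partial results (fan-in).
func fanOutIn(batches [][]int) int {
	results := make(chan int, len(batches))
	for _, batch := range batches {
		go func(b []int) { // fan-out: one goroutine per batch
			sum := 0
			for _, n := range b {
				sum += n
			}
			results <- sum
		}(batch)
	}
	total := 0
	for range batches { // fan-in: wait for every partial result
		total += <-results
	}
	return total
}

func main() {
	fmt.Println(fanOutIn([][]int{{1, 2}, {3, 4}, {5}}))
}
```

&lt;p&gt;The receive loop doubles as synchronization: the function cannot return until every worker has delivered its result, so no separate wait group is needed here.&lt;/p&gt;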

&lt;p&gt;Refer to &lt;a href="https://www.youtube.com/watch?v=f6kdp27TYZs"&gt;this video for more information on Golang Concurrency&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Sumeet N.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>My Take On Hashicorp's adoption of BSL 📑</title>
      <dc:creator>Let's Do Tech</dc:creator>
      <pubDate>Mon, 21 Aug 2023 03:35:03 +0000</pubDate>
      <link>https://dev.to/letsdotech/my-take-on-hashicorps-adoption-of-bsl-312i</link>
      <guid>https://dev.to/letsdotech/my-take-on-hashicorps-adoption-of-bsl-312i</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ajt_0jIM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1588689653688-9b312cd6bc2b%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwxM3x8bG9ja3xlbnwwfHx8fDE2OTIxOTEwNDV8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ajt_0jIM--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://images.unsplash.com/photo-1588689653688-9b312cd6bc2b%3Fcrop%3Dentropy%26cs%3Dtinysrgb%26fit%3Dmax%26fm%3Djpg%26ixid%3DM3wzMDAzMzh8MHwxfHNlYXJjaHwxM3x8bG9ja3xlbnwwfHx8fDE2OTIxOTEwNDV8MA%26ixlib%3Drb-4.0.3%26q%3D80%26w%3D1080" alt="brown padlock on blue wooden door" title="brown padlock on blue wooden door" width="800" height="533"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Photo by Jornada Produtora on Unsplash&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://opentf.org/"&gt;I pledged my support to opentf foundation&lt;/a&gt;, and here are the details about why I did. I have tried to be as clear as possible from the information I have read.&lt;/p&gt;

&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;Hashicorp’s Terraform has been a leading open source IaC tool for almost a decade now. Terraform has helped startups and organizations to streamline their cloud infrastructure operations - saving them a fortune in the digital transformation journeys.&lt;/p&gt;

&lt;p&gt;Given that open source commitment, and the success described above, an entire ecosystem of partners, vendors, and community members grew up around it, providing additional features with freemium pricing models. As far as the pricing model comparison between Hashicorp and most of the other vendors is concerned, they are similar.&lt;/p&gt;

&lt;h2&gt;
  
  
  What happened?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license"&gt;Hashicorp recently adopted Business Source License v1.1 (BSL or BUSL)&lt;/a&gt; which was previously based on Mozilla Public License v2.0 (MPL v2.0). In summary, now they do support open source and community efforts in terms of “&lt;em&gt;copy, modify, and redistribute the code for all non-commercial and commercial use, except where providing a competitive offering to HashiCorp&lt;/em&gt;“.&lt;/p&gt;

&lt;p&gt;This restriction is specifically targeted towards vendors who are using Hashicorp’s products to build their commercial offerings which provide additional features and benefits in varied flavours.&lt;/p&gt;

&lt;p&gt;The main concern expressed by Hashicorp is that the offerings provided by these vendors are either fundamentally or substantially based on Hashicorp’s core products, while not contributing enough to their open source commitment.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does this affect?
&lt;/h2&gt;

&lt;p&gt;As individual developers, this does not affect you and me. It also does not affect your organisation if its commercial offerings are completely different from Hashicorp’s - i.e., you can still use Hashicorp’s products in production (if I understand the wording correctly).&lt;/p&gt;

&lt;p&gt;But if you are a vendor (or work for one) that builds a platform offering features that compete with Hashicorp’s platform, then that is no longer allowed for Terraform versions after v1.5.5.&lt;/p&gt;

&lt;p&gt;Overall, I think this is a bit vague. Besides, the cause I am rooting for is greater than this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Opinion
&lt;/h2&gt;

&lt;p&gt;I truly appreciate and admire what Hashicorp has achieved and contributed in the space of cloud operations. They are indeed an authority in the space of IaC. I am sure this feeling of gratitude is shared by that specific set of vendors too.&lt;/p&gt;

&lt;p&gt;It is not that I never had questions about the obvious concern they have expressed. I have always admired Hashicorp for playing a fatherly role in this way. This also helps me understand where they are coming from on this issue - for all the right reasons.&lt;/p&gt;

&lt;p&gt;When someone has authority, there is nothing right or wrong as far as the actions performed by them within their area of influence are concerned.&lt;/p&gt;

&lt;p&gt;I understand Hashicorp’s decision, and the fact that open source projects do undergo this kind of license evolution. But such a change always leaves an impact in the days ahead - especially at this stage. Beyond the specific set of vendors and partners, &lt;strong&gt;it also sends ripples of uncertainty across the community and customers&lt;/strong&gt; - even if each of them fully understands what the BSL means.&lt;/p&gt;

&lt;p&gt;It is not about what the current or previous license means - it is more about the conscious action taken in light of this situation, which impacts the confidence of the community. Which is to say that customers - not just the targeted vendors and partners - &lt;strong&gt;may now consider other options with less licensing complexity - which is yet another headache&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To look at it retrospectively from an absurd angle - and I apologise beforehand for this - I ask myself: was this all part of Hashicorp’s plan? Why did they wait 10 years for this to happen? In hindsight, Hashicorp could have leveraged this &lt;strong&gt;authority to contribute to healthy competition - with an upper hand&lt;/strong&gt; - alongside their vendors. It’s not as if all the infrastructure pain points have been addressed.&lt;/p&gt;

&lt;p&gt;I do understand that pioneers like Hashicorp have a better view of this situation - who else would know their business better than they do? Hashicorp deserves full credit for what they have built, and they are very well aware of the impact this decision will cause. But &lt;strong&gt;perhaps they had tough choices to make from a business-survival perspective&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The contradiction I am unable to resolve for Hashicorp is - did they really understand the meaning of OSS back then? It is easy to talk about charity, and hard to practice it. &lt;strong&gt;If business was the cornerstone of their operations, then they should not have touted the “free and open source” slogan in the first place&lt;/strong&gt;. At least an intimation or a hint would have helped. As I said before, I understand this decision of theirs, but if that is the case then it all sounds like a misuse of the OSS model and community - almost like gaining popularity and hard-won-but-free contributions from thousands in the community, and then licensing it all.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I want to - but am not in a position to - imagine what would happen if every OSS project decided to adopt the BSL or close its source henceforth.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It may be that I have not understood the license fully, but I am skeptical for two reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;I am not a legal expert, I am just a developer who loves Terraform. I am really not sure about when to use and when not to use it for production.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;I am not sure if and when they will change their minds again.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I signed the opentf pledge to highlight the impact of this decision on their own ecosystem and, more importantly, because I believe they can do better.&lt;/p&gt;

&lt;p&gt;Rest is described in the &lt;a href="https://opentf.org"&gt;OpenTF Manifesto&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Sumeet N.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
