
Unlock the Power of Real-Time UI: A Beginner's Guide to Streaming Data with React.js, gRPC, Envoy, and Golang

Written by Naveen M

Background

As part of our Kubernetes platform team, we face the constant challenge of providing real-time visibility into user workloads. From monitoring resource usage to tracking Kubernetes cluster activity and application status, there are numerous open-source solutions available for each specific category. However, these tools are often scattered across different platforms, resulting in a fragmented user experience. To address this issue, we have embraced the power of server-side streaming, enabling us to deliver live resource usage, Kubernetes events, and application status as soon as users access our platform portal.

Introduction

By implementing server-side streaming, we can seamlessly stream data to the user interface, providing up-to-date information without manual refreshes or constant API calls. This approach transforms the user experience, allowing users to instantly visualize the health and performance of their workloads in a unified and simplified manner. Whether it's monitoring resource utilization, staying informed about Kubernetes events, or keeping tabs on application status, our server-side streaming solution brings all the critical information together in a single, real-time dashboard. And while our motivation is Kubernetes workloads, the same approach applies to anyone who wants to deliver live streaming data to a user interface.
Gone are the days of navigating through multiple tools and platforms to gather essential insights. With our streamlined approach, users can access a comprehensive overview of their Kubernetes environment the moment they land on our platform portal. By harnessing the power of server-side streaming, we have transformed the way users interact with and monitor their workloads, making their experience more efficient, intuitive, and productive.
Through our blog series, we aim to guide you through the intricacies of setting up server-side streaming with technologies such as React.js, Envoy, gRPC, and Golang.

There are three main components involved in this project:
1. The backend, which is developed using Golang and utilizes gRPC server-side streaming to transmit data.
2. The Envoy proxy, which is responsible for making the backend service accessible to the outside world.
3. The frontend, which is built using React.js and employs grpc-web to establish communication with the backend.
The series is divided into multiple parts to accommodate the diverse language preferences of developers. If you're specifically interested in the role of Envoy in streaming, or want to learn how to deploy an Envoy proxy in Kubernetes, you can jump straight to the second part (Envoy as a frontend proxy in Kubernetes). If you only care about the frontend, you can skip ahead to the frontend part of the series.
In this initial part, we'll focus on the easiest segment of the series: "How to Set Up gRPC Server-Side Streaming with Go." We'll walk through a sample application that uses server-side streaming. Fortunately, there is a wealth of content available on the internet for this topic, tailored to your preferred programming language.

PART 1: How to Set Up gRPC Server-Side Streaming with Go

It's time to put our plan into action! Assuming you have a basic understanding of the following concepts, let's dive right into the implementation:

  1. gRPC: A high-performance RPC framework (built on HTTP/2 and Protocol Buffers) that allows the client and server to exchange data efficiently.
  2. Server-side streaming: This feature is particularly useful when the server needs to send a large amount of data to the client. By using server-side streaming, the server can split the data into smaller portions and send them one by one. The client can then choose to stop receiving data if it has received enough or if it has been waiting for too long.

Now, let's start with code implementation.

Step 1: Create the Proto File
To begin, we need to define a protobuf file that will be used by both the client and server sides. Here's a simple example:

syntax = "proto3";

package protobuf;

service StreamService {
  rpc FetchResponse (Request) returns (stream Response) {}
}

message Request {
  int32 id = 1;
}

message Response {
  string result = 1;
}

In this proto file, StreamService defines a single RPC method called FetchResponse, which takes a Request and returns a stream of Response messages.

Step 2: Generate the Protocol Buffer File

Before we proceed, we need to generate the corresponding pb file that will be used in our Go program. Each programming language has its own way of generating protocol buffer code. For Go, we will use the protoc compiler together with its Go gRPC plugin.
If you haven't installed it yet, you can find the installation guide provided by Google.
To generate the protocol buffer file, run the following command:

protoc --go_out=plugins=grpc:. *.proto
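
A quick note: the command above relies on the legacy protoc-gen-go plugin, which accepted the plugins=grpc option. If you are on the current Go plugins (protoc-gen-go from google.golang.org/protobuf together with protoc-gen-go-grpc), that option is no longer supported, and the roughly equivalent invocation looks like this (assuming your .proto file declares a go_package):

protoc --go_out=. --go_opt=paths=source_relative \
    --go-grpc_out=. --go-grpc_opt=paths=source_relative \
    *.proto

Either way, the result is generated Go code containing the message types and the gRPC service stubs (with the newer plugins, the service code lands in a separate *_grpc.pb.go file).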

Now, we have the data.pb.go file ready to be used in our implementation.
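
If you're curious what that generated code gives us, the legacy plugin produces (among other things) server-side interfaces roughly like the ones below. The exact output depends on your plugin version, so treat this as an abbreviated sketch rather than the literal file contents:

// Abbreviated sketch of the server-side pieces generated by the legacy plugin.
type StreamServiceServer interface {
        FetchResponse(*Request, StreamService_FetchResponseServer) error
}

type StreamService_FetchResponseServer interface {
        Send(*Response) error
        grpc.ServerStream
}

These are exactly the pieces we implement in the next step: our server type provides FetchResponse, and the stream's Send method is how each Response is pushed to the client.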

Step 3: Server-Side Implementation
To create the server file, follow the code snippet below:

package main

import (
        "fmt"
        "log"
        "net"
        "sync"
        "time"

        pb "github.com/mnkg561/go-grpc-server-streaming-example/src/proto"
        "google.golang.org/grpc"
)

type server struct{}

func (s server) FetchResponse(in *pb.Request, srv pb.StreamService_FetchResponseServer) error {

        log.Printf("Fetching response for ID: %d", in.Id)

        // A gRPC stream does not allow concurrent Send calls, so the goroutines
        // below serialize their sends with a mutex.
        var mu sync.Mutex
        var wg sync.WaitGroup
        for i := 0; i < 5; i++ {
                wg.Add(1)
                go func(count int) {
                        defer wg.Done()

                        // Simulate work that takes a different amount of time per response.
                        time.Sleep(time.Duration(count) * time.Second)
                        resp := pb.Response{Result: fmt.Sprintf("Request #%d for ID: %d", count, in.Id)}
                        mu.Lock()
                        err := srv.Send(&resp)
                        mu.Unlock()
                        if err != nil {
                                log.Printf("Error sending response: %v", err)
                        }
                        log.Printf("Finished processing request number: %d", count)
                }(i)
        }

        wg.Wait()
        return nil
}

func main() {
        lis, err := net.Listen("tcp", ":50005")
        if err != nil {
                log.Fatalf("Failed to listen: %v", err)
        }

        s := grpc.NewServer()
        pb.RegisterStreamServiceServer(s, server{})

        log.Println("Server started")
        if err := s.Serve(lis); err != nil {
                log.Fatalf("Failed to serve: %v", err)
        }
}

In this server file, I have implemented the FetchResponse function, which receives a request from the client and sends a stream of responses back. The server simulates concurrent processing using goroutines; because a gRPC stream does not allow concurrent Send calls, the goroutines serialize their sends with a mutex. For each request, it streams five responses back to the client, each delayed by a different duration to simulate varying processing times.
The server listens on port 50005 and registers the StreamServiceServer with the created server. Finally, it starts serving requests and logs a message indicating that the server has started.
Now you have the server file ready to handle streaming requests from clients.
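
The original example focuses on the server, but it helps to see the other end of the stream too. Below is a minimal client sketch (my addition, not part of the repository; it assumes the same generated proto package and a server listening locally on port 50005). It calls FetchResponse once and keeps reading from the stream until the server closes it with io.EOF:

package main

import (
        "context"
        "io"
        "log"

        pb "github.com/mnkg561/go-grpc-server-streaming-example/src/proto"
        "google.golang.org/grpc"
)

func main() {
        // Connect to the streaming server started above (plaintext, local only).
        conn, err := grpc.Dial("localhost:50005", grpc.WithInsecure())
        if err != nil {
                log.Fatalf("Failed to connect: %v", err)
        }
        defer conn.Close()

        client := pb.NewStreamServiceClient(conn)

        // Open the server-side stream for a single request.
        stream, err := client.FetchResponse(context.Background(), &pb.Request{Id: 1})
        if err != nil {
                log.Fatalf("Failed to open stream: %v", err)
        }

        // Keep reading until the server closes the stream.
        for {
                resp, err := stream.Recv()
                if err == io.EOF {
                        break
                }
                if err != nil {
                        log.Fatalf("Error receiving response: %v", err)
                }
                log.Printf("Received: %s", resp.Result)
        }
}

Running it against the server above should print the five responses one by one, each arriving after a slightly longer delay.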

Part 2

Stay tuned for Part 2 where we will continue to dive into the exciting world of streaming data and how it can revolutionize your user interface.

