Profiling a Go gRPC service with Google Cloud Profiler

About

This article shows how to integrate Cloud Profiler with a Go application, using a gRPC server as the example.

The complete example can be found on GitHub.

Profiler

If you're not familiar with profiling or profilers, please check this wiki. In short, profiling measures the time, space, or instructions a piece of software uses; the most common visualization in recent years is the flame graph. The concept of flame graphs is explained on Brendan Gregg's website and in the Google Cloud docs.

Cloud Profiler

Google's Cloud Profiler is a service that lets you collect and visualize application profiles easily; you can instrument your application with minimal code.

Below is some basic information about Cloud Profiler:

  • Supported languages: Python, Go, NodeJS, and Java
  • Pricing: Free
  • Retention: 30 days
  • Performance impacts: see this
  • You can also use it outside of GCP; in this article I'll run the service locally and the profiles will still be sent to Cloud Profiler and visualized there.

gRPC server

Since the application is built with gRPC, let me walk through some of the application setup before diving into the integration. If you're already familiar with gRPC services, feel free to skip this section.

protobuf - Ping service

There's a ping.proto in the protobuf folder that defines the Ping service:

syntax = "proto3";

package ping;
option go_package = ".;ping";

service Ping {
    // Get returns a response with the same message ID and body, plus a timestamp.
    rpc Get (PingRequest) returns (PingResponse);
    // GetAfter is the same as Get but returns the response after the given number of seconds.
    rpc GetAfter (PingRequestWithSleep) returns (PingResponse);
    // GetRandom generates and returns a random string, and also produces lots of useless garbage to show the effect on the heap.
    rpc GetRandom (PingRequest) returns (PingResponse);
    rpc GetRandom (PingRequest) returns (PingResponse);
}


message PingRequest {
    string message_ID = 1;
    string message_body = 2;
}

message PingRequestWithSleep {
    string message_ID = 1;
    string message_body = 2;
    int32 sleep = 3;
}

message PingResponse {
    string message_ID = 1;
    string message_body = 2;
    uint64 timestamp = 3;
}

If you're unfamiliar with protobuf, you can refer to the protobuf3 docs.

In this proto we define a service called Ping; it has three RPC (remote procedure call) methods:

  1. Get: returns the message ID and body you sent.
  2. GetAfter: same as Get, but returns the response after sleeping for the given number of seconds.
  3. GetRandom: returns a random string as the body after generating a lot of unnecessary random strings (just to demonstrate the CPU time and memory taken by this function).

Generate gRPC go code

  1. Install protoc if you haven't already - see here
  2. Run
make proto-go

to generate a file called ping.pb.go

Note: in the repo this file is already generated. In my case I'm using protoc-gen-go v1.25.0 and protoc v3.6.1; you might get different generated code with different versions.
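The interesting part of the generated file, roughly speaking (the exact shape depends on the protoc-gen-go version), is the PingServer interface your implementation has to satisfy, plus a RegisterPingServer(*grpc.Server, PingServer) helper:

// Excerpt of the generated ping.pb.go (approximate; details vary by version).
type PingServer interface {
    Get(context.Context, *PingRequest) (*PingResponse, error)
    GetAfter(context.Context, *PingRequestWithSleep) (*PingResponse, error)
    GetRandom(context.Context, *PingRequest) (*PingResponse, error)
}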

The Ping server implementation

After the Go gRPC code is generated, we can implement the service. In my example it looks like this (a file named ping.go inside the ping folder):

package ping

import (
    "context"
    "math/rand"
    "time"

    pb "github.com/billcchung/example-service/protobuf"
)

var letterRunes = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")

// Server implements the Ping gRPC service.
type Server struct{}

// Get echoes the request's message ID and body and adds a millisecond timestamp.
func (s Server) Get(ctx context.Context, req *pb.PingRequest) (res *pb.PingResponse, err error) {
    res = &pb.PingResponse{
        Message_ID:  req.Message_ID,
        MessageBody: req.MessageBody,
        Timestamp:   uint64(time.Now().UnixNano() / int64(time.Millisecond)),
    }
    return
}

// GetAfter sleeps for req.Sleep seconds, then behaves like Get.
func (s Server) GetAfter(ctx context.Context, req *pb.PingRequestWithSleep) (res *pb.PingResponse, err error) {
    time.Sleep(time.Duration(req.Sleep) * time.Second)
    return s.Get(ctx, &pb.PingRequest{Message_ID: req.Message_ID, MessageBody: req.MessageBody})
}

// GetRandom deliberately burns CPU and memory by building a throwaway slice of
// random one-character strings, then returns a random character as the body.
func (s Server) GetRandom(ctx context.Context, req *pb.PingRequest) (res *pb.PingResponse, err error) {
    var garbage []string
    for i := 0; i <= 1000000; i++ {
        garbage = append(garbage, string(letterRunes[rand.Intn(len(letterRunes))]))
    }
    return s.Get(ctx, &pb.PingRequest{Message_ID: req.Message_ID, MessageBody: string(letterRunes[rand.Intn(len(letterRunes))])})
}

Setup Cloud Profiler

It's really easy to set up Cloud Profiler: you just need to import "cloud.google.com/go/profiler" and start the profiler as soon as your process starts. It runs as a goroutine that collects and uploads profiles.

In our example, it looks like this in main.go:

profiler.Start(profiler.Config{
    Service:        service,
    ServiceVersion: serviceVersion,
    ProjectID:      projectID,
})
  • Service is the service name
  • ServiceVersion is the service version
  • ProjectID is your GCP project ID
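For context, a minimal main.go that ties the profiler and the gRPC server together could look roughly like the sketch below. The service name, listen address, and import aliases are assumptions for illustration; check main.go in the repo for the actual wiring.

package main

import (
    "flag"
    "log"
    "net"

    "cloud.google.com/go/profiler"
    "google.golang.org/grpc"

    pingserver "github.com/billcchung/example-service/ping"
    pb "github.com/billcchung/example-service/protobuf"
)

func main() {
    projectID := flag.String("p", "", "GCP project ID")
    flag.Parse()

    // Start the profiler first so it samples for the whole lifetime of the process.
    if err := profiler.Start(profiler.Config{
        Service:        "ping-service", // assumed name for illustration
        ServiceVersion: "1.0.0",
        ProjectID:      *projectID,
    }); err != nil {
        log.Fatalf("failed to start profiler: %v", err)
    }

    // Serve the Ping service over gRPC (listen address assumed).
    lis, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    pb.RegisterPingServer(s, pingserver.Server{})
    if err := s.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}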

Run the service

Before running the example service, you need to enable Cloud Profiler; you can enable it from here.

And since this example runs outside of GCP, you'll also need the right permissions and credentials; for details please see this. (You can skip this if you already have the permissions and are logged in to GCP with the CLI; the profiler then uses the logged-in user automatically, the same flow as other GCP services.)

Then you can run the service with (replace $PROJECT_ID with your GCP project ID):

go run main.go -p $PROJECT_ID

Run the client

Once the server is running, you can use the client in the tools folder to make gRPC calls. connect.go makes one call to each RPC (Get, GetAfter, and GetRandom), and for the profiler to take enough samples you'll need to run it many times, so:

for i in $(seq 1 1000); do go run tools/connect.go ; done
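For reference, such a client boils down to something like the sketch below; the server address, message contents, and timeout are assumptions, so the actual tools/connect.go may differ.

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"

    pb "github.com/billcchung/example-service/protobuf"
)

func main() {
    // Connect to the locally running server (address assumed).
    conn, err := grpc.Dial("localhost:8080", grpc.WithInsecure())
    if err != nil {
        log.Fatalf("failed to dial: %v", err)
    }
    defer conn.Close()

    client := pb.NewPingClient(conn)
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    // One call per RPC, so the profiler has all three code paths to sample.
    if _, err := client.Get(ctx, &pb.PingRequest{Message_ID: "1", MessageBody: "hello"}); err != nil {
        log.Printf("Get failed: %v", err)
    }
    if _, err := client.GetAfter(ctx, &pb.PingRequestWithSleep{Message_ID: "2", MessageBody: "hello", Sleep: 1}); err != nil {
        log.Printf("GetAfter failed: %v", err)
    }
    if _, err := client.GetRandom(ctx, &pb.PingRequest{Message_ID: "3"}); err != nil {
        log.Printf("GetRandom failed: %v", err)
    }
}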

Check the graph and interpret

Wait a while after starting the client loop, then go to the Cloud Profiler console to see the visualization of your application's profiles, e.g.:

[Screenshot: flame graph of the collected CPU time profiles]

You might see a slightly different graph. A few things you might notice:

  • It has collected 16 profiles, with CPU time ranging from 520ms to 780ms.
  • Server.GetRandom took quite a bit of time, and you know it's our application code.
  • There's no Server.Get in the graph because that call returns very quickly, and there's no GetAfter either because time.Sleep doesn't consume CPU time. You might see one or two samples of them, but they're relatively rare, and that's fine: when we profile we care about what's actually taking the time or resources.

You can click on Server.GetRandom to drill down to see what's going on within the function:
[Screenshot: drill-down into Server.GetRandom]
You can see the function took 164.38ms on average, most of it spent in growslice and Intn:
[Screenshots: time spent in runtime.growslice and rand.Intn]
Of course those are built into Go, and the code was written that way just for the demo, but in a real application this gives you an idea of which function is taking the time, so you can look for the root cause and improve it.
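As an illustration of that workflow (this is not in the example repo): the growslice samples point at the repeated append on a slice with no preallocated capacity, so a hypothetical fix could extract the hot loop and size the slice up front, e.g.:

// Hypothetical rewrite of GetRandom's hot loop, assumed to live in the same
// package as the server (so it can reuse letterRunes and math/rand):
// preallocating the slice capacity removes the repeated reallocations that
// showed up as runtime.growslice in the flame graph.
func generateGarbage(n int) []string {
    garbage := make([]string, 0, n)
    for i := 0; i < n; i++ {
        garbage = append(garbage, string(letterRunes[rand.Intn(len(letterRunes))]))
    }
    return garbage
}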

You can also choose another profile type, such as Heap, to see how much memory each function takes:
[Screenshot: heap profile showing memory allocated per function]
Here Server.GetRandom took 13.24MiB.

One other thing to note: in the profile type selector you can choose Threads, but for a Go application these are actually goroutines.

For other details you can check here

Summary

This article mainly demonstrates how to:

  • use and interpret Cloud Profiler
  • build a gRPC server

Hope it's useful. I chose a gRPC server as the example because I plan to write a series of articles on building a microservices-based backend with Kubernetes, so I'll keep adding features to this gRPC service along with the rest of the Kubernetes platform.
