Roman Kudryashov

Originally published at romankudryashov.com

Introduction to gRPC in Rust

Overview

In this article, you will see how to create a gRPC server and client in Rust. For demonstration purposes, the client will also be a Telegram bot. In the end, the following architecture will be obtained:

[Architecture diagram]

This article is not a complete tutorial on gRPC in Rust, but rather a practical guide demonstrating the basics and how to create a gRPC-based application.

The domain model includes data about planets in the Solar System and their satellites.

Implementation

There are several gRPC implementations in Rust. For this project, tonic is used.

The project contains the following modules: the gRPC server, the gRPC client (which also serves as the Telegram bot), and solar-system-info-rpc. The latter module contains the gRPC service definition and is responsible for generating the gRPC code needed by both the server and the client.

Service definition and code generation

The service definition is written in the proto3 version of the Protocol Buffers language and is located in the .proto file:

solar-system-info.proto

syntax = "proto3";

package solar_system_info;

import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";

service SolarSystemInfo {
  rpc GetPlanetsList (google.protobuf.Empty) returns (PlanetsListResponse);
  rpc GetPlanet (PlanetRequest) returns (PlanetResponse);
  rpc GetPlanets (google.protobuf.Empty) returns (stream PlanetResponse);
}

message PlanetsListResponse {
  repeated string list = 1;
}

message PlanetRequest {
  string name = 1;
}

message PlanetResponse {
  Planet planet = 1;
}

message Planet {
  uint64 id = 1;
  string name = 2;
  Type type = 3;
  float meanRadius = 4;
  float mass = 5;
  repeated Satellite satellites = 6;
  bytes image = 7;
}

enum Type {
  TERRESTRIAL_PLANET = 0;
  GAS_GIANT = 1;
  ICE_GIANT = 2;
  DWARF_PLANET = 3;
}

message Satellite {
  uint64 id = 1;
  string name = 2;
  google.protobuf.Timestamp first_spacecraft_landing_date = 3;
}

Here, simple RPCs (GetPlanetsList and GetPlanet), a server-side streaming RPC (GetPlanets), and the structures for passing the required data are defined. The structures contain fields of common types (uint64, string, etc.), as well as fields of the following kinds:

  • enum (Planet.type)

  • list (Planet.satellites)

  • binary data (Planet.image)

  • date/timestamp type (Satellite.first_spacecraft_landing_date)

To set up the generation of server and client gRPC code, first let’s add the required dependencies:

Cargo.toml

[package]
name = "solar-system-info-rpc"
version = "0.1.0"
edition = "2018"

[dependencies]
tonic = "0.4.2" # Rust gRPC implementation
prost = "0.7.0" # Rust Protocol Buffers implementation
prost-types = "0.7.0" # Contains definitions of Protocol Buffers well-known types

[build-dependencies]
tonic-build = "0.4.2"

The prost-types crate allows us to use some of the well-known Protobuf types, such as Empty and Timestamp.

The following build script should be located at the root of the module:

build.rs

fn main() -> Result<(), Box<dyn std::error::Error>> {
    tonic_build::compile_protos("proto/solar-system-info/solar-system-info.proto")?;
    Ok(())
}
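If you need more control over what is generated, tonic_build also provides a configurable builder. Below is a minimal sketch of an alternative build.rs; the include path is an assumption based on the file layout above:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Equivalent to compile_protos, but with explicit switches:
    // you can generate only the server or only the client code if needed.
    tonic_build::configure()
        .build_server(true)
        .build_client(true)
        .compile(
            &["proto/solar-system-info/solar-system-info.proto"],
            &["proto/solar-system-info"],
        )?;
    Ok(())
}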

Now let’s create a module that will contain the generated code and will be used by both server and client applications:

lib.rs

pub mod solar_system_info {
    tonic::include_proto!("solar_system_info");
}
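Both the server and the bot then depend on this crate and import the generated items from it. Assuming the crate name from the Cargo.toml above (hyphens become underscores in Rust paths), the imports look roughly like this:

// Messages shared by the server and the client
use solar_system_info_rpc::solar_system_info::{Planet, PlanetRequest, PlanetResponse};
// Server-side trait and service wrapper
use solar_system_info_rpc::solar_system_info::solar_system_info_server::{
    SolarSystemInfo, SolarSystemInfoServer,
};
// Client stub
use solar_system_info_rpc::solar_system_info::solar_system_info_client::SolarSystemInfoClient;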

After you build the server or the client, you will find all the generated code in the target/debug/build/solar-system-info-rpc-&lt;hash&gt;/out/solar_system_info.rs file. For example, to implement the server, you will need to implement the generated SolarSystemInfo trait:

Generated SolarSystemInfo trait

#[doc = r" Generated server implementations."]
pub mod solar_system_info_server {
    #![allow(unused_variables, dead_code, missing_docs)]
    use tonic::codegen::*;
    #[doc = "Generated trait containing gRPC methods that should be implemented for use with SolarSystemInfoServer."]
    #[async_trait]
    pub trait SolarSystemInfo: Send + Sync + 'static {
        async fn get_planets_list(
            &self,
            request: tonic::Request<()>,
        ) -> Result<tonic::Response<super::PlanetsListResponse>, tonic::Status>;
        async fn get_planet(
            &self,
            request: tonic::Request<super::PlanetRequest>,
        ) -> Result<tonic::Response<super::PlanetResponse>, tonic::Status>;
        #[doc = "Server streaming response type for the GetPlanets method."]
        type GetPlanetsStream: futures_core::Stream<Item = Result<super::PlanetResponse, tonic::Status>>
            + Send
            + Sync
            + 'static;
        async fn get_planets(
            &self,
            request: tonic::Request<()>,
        ) -> Result<tonic::Response<Self::GetPlanetsStream>, tonic::Status>;
    }
    #[derive(Debug)]
    pub struct SolarSystemInfoServer<T: SolarSystemInfo> {
        inner: _Inner<T>,
    }
}

The generated structures used by the get_planet function look like this:

Generated structures for get_planet function

#[derive(Clone, PartialEq, ::prost::Message)]
pub struct PlanetRequest {
    #[prost(string, tag = "1")]
    pub name: ::prost::alloc::string::String,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct PlanetResponse {
    #[prost(message, optional, tag = "1")]
    pub planet: ::core::option::Option<Planet>,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Planet {
    #[prost(uint64, tag = "1")]
    pub id: u64,
    #[prost(string, tag = "2")]
    pub name: ::prost::alloc::string::String,
    #[prost(enumeration = "Type", tag = "3")]
    pub r#type: i32,
    #[prost(float, tag = "4")]
    pub mean_radius: f32,
    #[prost(float, tag = "5")]
    pub mass: f32,
    #[prost(message, repeated, tag = "6")]
    pub satellites: ::prost::alloc::vec::Vec<Satellite>,
    #[prost(bytes = "vec", tag = "7")]
    pub image: ::prost::alloc::vec::Vec<u8>,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct Satellite {
    #[prost(uint64, tag = "1")]
    pub id: u64,
    #[prost(string, tag = "2")]
    pub name: ::prost::alloc::string::String,
    #[prost(message, optional, tag = "3")]
    pub first_spacecraft_landing_date: ::core::option::Option<::prost_types::Timestamp>,
}
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash, PartialOrd, Ord, ::prost::Enumeration)]
#[repr(i32)]
pub enum Type {
    TerrestrialPlanet = 0,
    GasGiant = 1,
    IceGiant = 2,
    DwarfPlanet = 3,
}
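As you can see, prost maps the proto types in a predictable way: the enum field becomes an i32, bytes become a Vec&lt;u8&gt;, a repeated field becomes a Vec, and the well-known Timestamp becomes an Option&lt;prost_types::Timestamp&gt;. Here is a minimal sketch of constructing such a message by hand, assuming the crate name from the Cargo.toml above; all field values are only illustrative:

use prost_types::Timestamp;
use solar_system_info_rpc::solar_system_info::{Planet, Satellite, Type};

fn example_planet() -> Planet {
    let moon = Satellite {
        id: 1,
        name: "Moon".to_string(),
        // Timestamp stores seconds since the Unix epoch plus nanoseconds;
        // dates before 1970 are negative (example value, roughly 1959-09-13).
        first_spacecraft_landing_date: Some(Timestamp {
            seconds: -325_036_800,
            nanos: 0,
        }),
    };

    Planet {
        id: 3,
        name: "Earth".to_string(),
        // The enum is stored as i32 on the wire, hence the cast.
        r#type: Type::TerrestrialPlanet as i32,
        mean_radius: 6371.0,
        mass: 5.97e24,
        satellites: vec![moon],
        image: Vec::new(), // raw image bytes would go here
    }
}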

gRPC server

The main function of the server looks like this:

main function

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenv().ok();
    env_logger::init();

    info!("Starting Solar System info server");

    let addr = std::env::var("GRPC_SERVER_ADDRESS")?.parse()?;

    let pool = create_connection_pool();
    run_migrations(&pool);

    let solar_system_info = SolarSystemInfoService { pool };
    let svc = SolarSystemInfoServer::new(solar_system_info);

    Server::builder().add_service(svc).serve(addr).await?;

    Ok(())
}

An implementation of the SolarSystemInfo trait (shown in the previous section) looks like this:

gRPC server implementation

struct SolarSystemInfoService {
    pool: PgPool,
}

#[tonic::async_trait]
impl SolarSystemInfo for SolarSystemInfoService {
    type GetPlanetsStream =
        Pin<Box<dyn Stream<Item = Result<PlanetResponse, Status>> + Send + Sync + 'static>>;

    async fn get_planets_list(
        &self,
        request: Request<()>,
    ) -> Result<Response<PlanetsListResponse>, Status> {
        debug!("Got a request: {:?}", request);

        let names_of_planets = persistence::repository::get_names(&get_connection(&self.pool))
            .expect("Can't get names of the planets");

        let reply = PlanetsListResponse {
            list: names_of_planets,
        };

        Ok(Response::new(reply))
    }

    async fn get_planets(
        &self,
        request: Request<()>,
    ) -> Result<Response<Self::GetPlanetsStream>, Status> {
        debug!("Got a request: {:?}", request);

        let (tx, rx) = mpsc::channel(4);

        let planets: Vec<Planet> = persistence::repository::get_all(&get_connection(&self.pool))
            .expect("Can't load planets")
            .into_iter()
            .map(|p| {
                PlanetWrapper {
                    planet: p.0,
                    satellites: p.1,
                }
                .into()
            })
            .collect();

        tokio::spawn(async move {
            let mut stream = tokio_stream::iter(&planets);

            while let Some(planet) = stream.next().await {
                tx.send(Ok(PlanetResponse {
                    planet: Some(planet.clone()),
                }))
                .await
                .unwrap();
            }
        });

        Ok(Response::new(Box::pin(
            tokio_stream::wrappers::ReceiverStream::new(rx),
        )))
    }

    async fn get_planet(
        &self,
        request: Request<PlanetRequest>,
    ) -> Result<Response<PlanetResponse>, Status> {
        debug!("Got a request: {:?}", request);

        let planet_name = request.into_inner().name;

        let planet =
            persistence::repository::get_by_name(&planet_name, &get_connection(&self.pool));

        match planet {
            Ok(planet) => {
                let planet = PlanetWrapper {
                    planet: planet.0,
                    satellites: planet.1,
                }
                .into();

                let reply = PlanetResponse {
                    planet: Some(planet),
                };

                Ok(Response::new(reply))
            }
            Err(e) => {
                error!(
                    "There was an error while getting a planet {}: {}",
                    &planet_name, e
                );
                match e {
                    Error::NotFound => Err(Status::not_found(format!(
                        "Planet with name {} not found",
                        &planet_name
                    ))),
                    _ => Err(Status::unknown(format!(
                        "There was an error while getting a planet {}: {}",
                        &planet_name, e
                    ))),
                }
            }
        }
    }
}

Here, the custom SolarSystemInfoService struct is defined; it accesses the database using the Diesel ORM.
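The PgPool, create_connection_pool, and get_connection items used above are not shown in the listings; they are small helpers around Diesel's r2d2 connection pooling. A sketch of what they might look like (the DATABASE_URL variable name and error messages are assumptions):

use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool, PooledConnection};

pub type PgPool = Pool<ConnectionManager<PgConnection>>;
pub type PgPooledConnection = PooledConnection<ConnectionManager<PgConnection>>;

pub fn create_connection_pool() -> PgPool {
    // The connection string is read from the environment (assumed variable name).
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    Pool::builder()
        .build(manager)
        .expect("Can't create a connection pool")
}

pub fn get_connection(pool: &PgPool) -> PgPooledConnection {
    pool.get().expect("Can't get a DB connection from the pool")
}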

Recall that get_planets_list and get_planet are examples of unary RPCs, while get_planets is an example of a server-side streaming RPC.

Images of planets are embedded in the application binary at compile time using the rust_embed crate (during development, they are loaded from the file system).
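A rough sketch of how such embedding can look with a recent version of rust_embed (the folder and file names here are assumptions):

use rust_embed::RustEmbed;

// In release builds the contents of the folder are compiled into the binary;
// in debug builds rust_embed reads the files from disk instead.
#[derive(RustEmbed)]
#[folder = "images/"]
struct Asset;

fn load_image(file_name: &str) -> Vec<u8> {
    let file = Asset::get(file_name).expect("Image not found");
    // In recent rust_embed versions `get` returns a struct whose `data` field
    // is a Cow<[u8]>; older versions return the bytes directly.
    file.data.into_owned()
}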

gRPC client

The gRPC client in the bot module is created as follows:

gRPC client creation

async fn create_grpc_client() -> SolarSystemInfoClient<tonic::transport::Channel> {
    let channel = tonic::transport::Channel::from_static(&GRPC_SERVER_ADDRESS)
        .connect()
        .await
        .expect("Can't create a channel");

    SolarSystemInfoClient::new(channel)
}

It can then be used like this:

Using the gRPC client

let response = get_planets_list(grpc_client).await?;
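The get_planets_list helper above wraps a call to the generated stub; the streaming get_planets RPC is consumed message by message on the client side. A sketch of both helpers, under the same crate-name assumption as before:

use solar_system_info_rpc::solar_system_info::solar_system_info_client::SolarSystemInfoClient;
use solar_system_info_rpc::solar_system_info::{Planet, PlanetsListResponse};
use tonic::transport::Channel;
use tonic::Request;

// Unary call: one request, one response.
async fn get_planets_list(
    mut client: SolarSystemInfoClient<Channel>,
) -> Result<PlanetsListResponse, tonic::Status> {
    let response = client.get_planets_list(Request::new(())).await?;
    Ok(response.into_inner())
}

// Server-side streaming call: read PlanetResponse messages until the stream ends.
async fn get_planets(
    mut client: SolarSystemInfoClient<Channel>,
) -> Result<Vec<Planet>, tonic::Status> {
    let mut stream = client.get_planets(Request::new(())).await?.into_inner();
    let mut planets = Vec::new();
    while let Some(response) = stream.message().await? {
        if let Some(planet) = response.planet {
            planets.push(planet);
        }
    }
    Ok(planets)
}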

Telegram bot

As mentioned earlier, for demonstration purposes, the gRPC client is also a Telegram bot. The bot is implemented using the teloxide library.

Let's go straight to main.rs:

main.rs

#[tokio::main]
async fn main() {
    dotenv().ok();
    teloxide::enable_logging!();
    log::info!("Starting Solar System info bot");

    let api_url = std::env::var("TELEGRAM_API_URL").expect("Can't get Telegram API URL");
    let api_url = Url::parse(&api_url).expect("Can't parse Telegram API URL");

    let bot = Bot::from_env()
        .set_api_url(api_url)
        .parse_mode(Html)
        .auto_send();

    let bot = Arc::new(bot);

    let grpc_client = create_grpc_client().await;

    teloxide::commands_repl(bot, "solar-system-info-bot", move |cx, command| {
        answer(cx, command, grpc_client.clone())
    })
    .await;
}
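The commands_repl call dispatches incoming messages to the answer function based on a Command enum derived with teloxide's BotCommand macro. A rough sketch of what that enum might look like for the bot's commands (the derive attributes vary between teloxide versions, so treat the details as assumptions):

use teloxide::utils::command::BotCommand;

#[derive(BotCommand, Clone)]
#[command(rename = "lowercase", description = "These commands are supported:")]
enum Command {
    #[command(description = "show the names of all planets")]
    List,
    #[command(description = "show information about all planets")]
    Planets,
    #[command(description = "show information about a planet by its name")]
    Planet(String),
}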

To simplify the SSL/TLS setup, I included an nginx module in the project. It acts as a forward proxy that receives HTTP requests from the bot and forwards them to the Telegram API servers.

Launch and testing

To launch the project locally, you have two options:

  • using Docker Compose (docker-compose.yml):

    docker-compose up

  • without Docker:

    Start both the gRPC server and the client using cargo run

To make requests to the server, you can use a standalone gRPC client (for example, BloomRPC):

[BloomRPC screenshot]

or do it indirectly by using the running Telegram bot:

[Telegram bot demo]

Commands of the bot are mapped to RPCs as follows:

  • /list → GetPlanetsList

  • /planets → GetPlanets

  • /planet → GetPlanet

To test the application using the bot, you need a Telegram account and your own bot (here is an introduction on this topic). Depending on the chosen launch option, the bot's token should be specified here or here.

CI/CD

CI/CD is configured using GitHub Actions (workflow), which builds Docker images of the gRPC server and the client (that is, the Telegram bot) and deploys them to Google Cloud Platform.

The bot can be accessed here.

Conclusion

In this article, I showed how to create a gRPC server and client in Rust, and how to use the client as a data source for a Telegram bot. Feel free to contact me if you find any mistakes in the article or the source code. Thanks for reading!

