Josh Mo

Introducing solar-powered serverless!

No, this isn't clickbait.

Most cloud providers claim to be green, but when you press them about their green credentials, the answers tend to be vague. In a world where climate change is becoming more and more of a problem, it's increasingly important to develop software that isn't just fast - it should also be light on resources.

Enter GreenCloudComputing (GCC), a company that uses solar energy to power its servers! You can host your serverless functions with them and, if you're solar-savvy, even sell them your spare solar energy. They support multiple languages, and you can chain your serverless functions together for an event-driven workflow.

As a short summary of how GCC works: their infrastructure revolves around a matching engine written in Go that matches queued requests to users' machines. On top of that, they buy solar energy back from users to power those requests - a win-win situation! They also support Rust, which is often ranked the second most environmentally friendly programming language (after C/C++, of course!).

This guide will primarily focus on using GCC with Rust because Rust is my most used language (both professionally and for hobby projects), but they support a variety of languages:

  • Golang
  • Python
  • Node.js
  • C#
  • Ruby
  • And of course, Rust!

No Rust experience is required to follow this short tutorial, although some experience with Rust and/or other programming languages will make it much more pleasant.

Getting Started

Getting started is pretty easy: make an account on their website, download the CLI tool from the dashboard, and mark the file as executable. You can place the binary anywhere on your computer, but it's strongly suggested to alias it as gccli (which is how we'll refer to the executable throughout this article).

You will also need an API key, which you can create by logging in, opening the Account tab, going to API Key and generating a new one. When you log in via the CLI (gccli login), you'll be prompted to enter that API key.

If you're using AMD64 Linux, here's a wget command so you can save some time once you've signed in and got your API key:

wget https://dl.greencloudcomputing.io/gccli/main/gccli-main-linux-amd64 \
 -O ~/.local/bin/gccli && chmod +x ~/.local/bin/gccli

This little Linux command does the following:

  • Downloads the file straight from the source
  • Puts it in ~/.local/bin
  • Allows it to be executed

This assumes you have ~/.local/bin in your PATH.

Using Rust on GreenCloud

You can then get started with gccli fx init, which will ask for your API key from the website if you haven't already logged in using gccli login. For my own functions, I prefer to add the -l rs flag to fx init, which automatically sets the language to Rust.

Once done, you'll have a new project with the hyper crate pre-installed (v0.14 at the time of writing) and a single function that takes a hyper::Request and returns a hyper::Response. Currently this signature is fixed - so if you want to use your favourite framework, you may be out of luck.

We'll also install additional dependencies for serializing and deserializing JSON. You can copy the shell snippet below:

cargo add serde_json serde -F serde/derive

This adds the serde_json and serde libraries (with the derive feature enabled for serde). Both libraries are used very often in web services, as you'll frequently find yourself deserializing request bodies from a known format (JSON, MessagePack, etc.) and serializing response bodies.
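As a quick illustration of what the derive feature buys us, here's a small serde_json round trip. This snippet isn't part of the project and the Greeting struct is purely hypothetical:

use serde::{Deserialize, Serialize};

// Hypothetical type, purely to demonstrate the derive macros
#[derive(Serialize, Deserialize, Debug)]
struct Greeting {
    message: String,
}

fn main() -> Result<(), serde_json::Error> {
    // Serialize a struct into a JSON string...
    let json = serde_json::to_string(&Greeting { message: "hi".into() })?;
    // ...and deserialize it straight back.
    let parsed: Greeting = serde_json::from_str(&json)?;
    println!("{json} -> {parsed:?}");
    Ok(())
}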

Your initial lib.rs should look like this:

use hyper::{Body, Request, Response};
use hyper::header::{HeaderValue, CONTENT_TYPE};

const PHRASE: &str = "Hello from RUST by GreenCloud!";

pub async fn handle(_req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    let mut response = Response::new(Body::from(PHRASE));

    let content_type_header = HeaderValue::from_static("text/plain");
    response.headers_mut().insert(CONTENT_TYPE, content_type_header);

    Ok(response)
}

Next we can try this out by using gccli fx start, which will start a local container we can interact with at http://localhost:8080. Using cURL should return the following:

Hello from RUST by GreenCloud!

Error Handling & Custom Response Types

Now that we've got our basic endpoint set up, let's add some error handling. Because we're using a very low-level library (and error types are not directly propagated back to the user), we'll want to represent user-facing errors as responses returned through the Ok branch.

By default, returning an error in this particular context will produce a 500 Internal Server Error with a blank page. We can remedy this by creating an enum with several variants that can be automatically turned into a Response<Body>, which is then sent as a "successful" response.

See the example below:

#[derive(Debug)]
enum Resp {
    OkThing(Thing),
    NotPostMethod,
    SerdeJsonError(serde_json::Error),
}


Note that we've created a response that can represent the following:

  • A successful response (in which it just echoes the JSON request body and sends it back)
  • A "method not allowed" response (returned for any request that isn't a POST)
  • A (de)serialization error.

To make turning our enum into a Response<Body> as easy as possible, we can implement the From trait: implementing From<Resp> for Response<Body> lets us write Response::from(resp). As a bonus, the standard library automatically gives us the matching Into implementation, so we can simply call .into() on our enum.

use hyper::StatusCode;

impl From<Resp> for Response<Body> {
    fn from(resp: Resp) -> Response<Body> {
        let (response_text, content_type, status_code) = match resp {
            Resp::OkThing(thing) => (
                serde_json::to_string_pretty(&thing).unwrap().into_bytes(),
                HeaderValue::from_static("application/json"),
                StatusCode::OK,
            ),
            Resp::NotPostMethod => (
                b"This endpoint only accepts POST methods!".to_vec(),
                HeaderValue::from_static("text/plain"),
                StatusCode::METHOD_NOT_ALLOWED,
            ),

            Resp::SerdeJsonError(err) => (
                format!("serde_json error: {err}").into_bytes(),
                HeaderValue::from_static("text/plain"),
                StatusCode::BAD_REQUEST,
            ),
        };

        let mut response = Response::new(Body::from(response_text));

        response.headers_mut().insert(CONTENT_TYPE, content_type);

        *response.status_mut() = status_code;

        response
    }
}
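With that conversion in place, any Resp variant can be turned into a response with a single .into() call. A small sketch of what that looks like in practice:

// Into<Response<Body>> comes for free from the From impl above
let response: Response<Body> = Resp::NotPostMethod.into();
assert_eq!(response.status(), StatusCode::METHOD_NOT_ALLOWED);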

We can also implement From for serde_json::Error, making it easy to convert serde errors into our Resp type:

impl From<serde_json::Error> for Resp {
    fn from(e: serde_json::Error) -> Self {
        Self::SerdeJsonError(e)
    }
}

POST requests and JSON

Next, let's handle POST requests. With hyper, we can split a given HTTP request into its body and a Parts struct (which holds everything you might want to know about an HTTP request besides the body).

let (parts, body) = request.into_parts();

match parts.method {
    Method::POST => {}
    _ => return Ok(Resp::NotPostMethod.into()),
}

let body = hyper::body::to_bytes(body).await?;

This does the following:

  • Splits the request into two parts: the body itself and the Parts (everything else that isn't the body - i.e. the headers, method, URI, etc.)
  • If the HTTP method isn't POST, returns a Method Not Allowed response through type conversion
  • Collects the body into a Bytes buffer that we can use later on. We use the question mark operator here to automatically propagate the error, as to_bytes() can fail with a hyper::Error.

It should be noted that the original body gets consumed by to_bytes()! This matters because Body does not implement Clone or Copy, so you can't read it twice.
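If you do need the body contents more than once (say, to log them as well as parse them), one option is to hold on to the Bytes and rebuild a Body from them when required. A minimal sketch, assuming the same hyper 0.14 types as above:

// to_bytes() consumes the Body, but the resulting Bytes are cheap to clone,
// so we can keep a copy around and rebuild a Body from it whenever needed.
let bytes = hyper::body::to_bytes(body).await?;
let body_again = Body::from(bytes.clone());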

Next, we'll define a type that can be deserialized from (and serialized to!) a request body. We already added the serde and serde_json crates, which makes this easy to incorporate into our endpoint. Remember that we enabled the derive feature for the serde crate: this gives us derive macros, which make implementing (de)serialization for structs and enums in Rust much easier.

use serde::{Deserialize, Serialize};

// Debug is needed because Resp (and later MyResponse) derive Debug
#[derive(Deserialize, Serialize, Debug)]
struct Thing {
    message: String,
}

Now we can write out the whole handler function, which should look like this (note that Response<Body> is still required as the return type due to GCC internal type constraints):

pub async fn handle(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    let (parts, body) = req.into_parts();

    match parts.method {
        Method::POST => {}
        _ => return Ok(Resp::NotPostMethod.into()),
    }

    let body = hyper::body::to_bytes(body).await?;

    let thing: Thing = match serde_json::from_slice(&body) {
        Ok(res) => res,
        Err(e) => return Ok(Resp::from(e).into()),
    };

    Ok(Resp::OkThing(thing).into())
}

As you can see, this design keeps the handler itself minimal, with the Rust trait implementations doing the heavy lifting of converting results and errors into responses.

URL Query Parameters with Hyper

We can also get URL query parameters with hyper in three short lines. Before we do this, we'll want to add the url crate, which lets us parse the Uri type (from the uri field in the Parts struct) into a Url:

cargo add url

Next, we can write some code to do the following:

  • Grabs the URI and turns it into a string
  • Parses the resulting string into a Url
  • Gets the query pairs, iterates over them, and collects the pairs into a JSON object (a serde_json::Value)

let uri_string = parts.uri.to_string();
// Note: this unwrap assumes the incoming URI is absolute; a bare path such as
// "/foo?bar=baz" would need to be parsed against a base URL instead.
let request_url = Url::parse(&uri_string).unwrap();
let params: Value = request_url.query_pairs().into_owned().collect();

On the user end when returned, this would simply appear as a nested JSON object.

To return these parameters to the caller, let's define a new response struct that includes them alongside the deserialized body:

#[derive(Serialize, Debug)]
struct MyResponse {
    thing: Thing,
    params: Value,
}
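Serialized, a MyResponse ends up as a nested JSON object. As a purely hypothetical illustration of the shape (using serde_json's json! macro, with made-up values):

use serde_json::json;

// Hypothetical response shape for a request like POST /?foo=bar
let example = json!({
    "thing": { "message": "hello" },
    "params": { "foo": "bar" }
});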

Next, you'll want to add MyResponse as a variant of the Resp enum so that it can be returned as a response:

#[derive(Debug)]
enum Resp {
    OkThing(Thing),
    Ok(MyResponse),
    NotPostMethod,
    SerdeJsonError(serde_json::Error),
}

After this, we can simply add a match arm for it to the impl From<Resp> for Response<Body> block:

// .. other stuff here
Resp::Ok(response) => (
    serde_json::to_string_pretty(&response)
        .unwrap()
        .into_bytes(),
    HeaderValue::from_static("application/json"),
    StatusCode::OK,
),
// .. other stuff here
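Putting the pieces together, the full handler might now look something like the sketch below. It just combines the snippets above; as noted earlier, the unwrap on Url::parse assumes the incoming URI is absolute:

use hyper::{Body, Method, Request, Response};
use serde_json::Value;
use url::Url;

pub async fn handle(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    let (parts, body) = req.into_parts();

    // Only accept POST requests
    match parts.method {
        Method::POST => {}
        _ => return Ok(Resp::NotPostMethod.into()),
    }

    // Collect the query parameters into a JSON object
    let request_url = Url::parse(&parts.uri.to_string()).unwrap();
    let params: Value = request_url.query_pairs().into_owned().collect();

    // Read and deserialize the request body
    let body = hyper::body::to_bytes(body).await?;
    let thing: Thing = match serde_json::from_slice(&body) {
        Ok(res) => res,
        Err(e) => return Ok(Resp::from(e).into()),
    };

    Ok(Resp::Ok(MyResponse { thing, params }).into())
}

Calling the endpoint with a JSON body and some query parameters should then return the echoed body together with the parameters as a nested JSON object.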

Deploying

To deploy, all you need to write is gccli fx deploy and watch the magic happen! GreenCloud takes care of all of the deployment steps.

Functions can be made public (as HTTP endpoints) by using gccli fx public, which will generate a public endpoint for you that you can then use in other applications. Alternatively, you can also call them from the terminal using cURL or your favourite API tester like Postman. If you need to delete your endpoint, you can use gccli fx public --delete.

If you need to deploy but can't use the GreenCloudComputing CLI, you can also upload your ZIP file via the Build button (in the Functions menu of the dashboard).

You can also schedule your function to fire at a given time - similar to a cronjob or scheduled task. It's a pretty cool feature to have: you can queue up a load of functions to be run at different times, each with different request bodies or URL parameters.

Conclusion

Thanks for reading! As with any new idea, the company has some exciting new features on the horizon, like WASM support, carbon visualisation/reporting and GreenCloud Storage. If there's anything else you're interested in regarding GreenCloud, let me know and I can do a deeper dive into some of the platform's other features.
