
Paul Swail

Originally published at winterwindsoftware.com

Should you be concerned about vendor lock-in when writing FaaS functions?

Vendor lock-in is a topic that has been discussed at length since the early days of cloud computing, and it comes up particularly often today in the context of serverless applications.

"Lambda and serverless is one of the worst forms of proprietary lock-in that we've ever seen"
– CEO of CoreOS (source: The Register)

But is it a real issue that you need to be concerned about when picking a Functions-as-a-Service (FaaS) provider?

Or is it just fear-mongering put out by competing vendors and "traditional" software architects averse to learning a new paradigm?

In this article, we will walk through the FaaS offerings of the 3 major cloud providers (AWS Lambda, Google Cloud Functions and Microsoft Azure Functions) and examine what would be required to switch providers down the line.

Why might you want to switch?

Firstly, let's look at some potential reasons why you may want to switch FaaS provider:

  • You want to integrate your functions with a new back-end service which is only offered by another provider
  • You are hitting runtime issues or limitations with your current provider
  • Function execution is becoming too costly and would be much cheaper elsewhere
  • Your current provider drops support for your language runtime
  • Organisational strategy or support reasons

What would be involved in switching?

In terms of gauging the effort to switch FaaS providers, we will look at:

  • changes required to the function codebase.
  • changes to execution configuration, deployment and triggers.

Code changes

Each cloud provider specifies a function signature which your code must adhere to in order to be executable in their cloud, and each provider's signature is slightly different. The following Node.js code examples show how, for each provider, you can read the input from the triggering event and send a response back. Each example assumes it has been triggered by an HTTP request.

AWS Lambda (docs)

exports.myHandler = async (event, context) => {
    // API Gateway delivers the request body as a string, so parse it first
    const input = JSON.parse(event.body);
    return {
        statusCode: 200,
        // The response body must also be a string when using the API Gateway proxy integration
        body: JSON.stringify({
            message: `Hello ${input.myName}`
        })
    };
};

Azure Functions (docs)

module.exports = (context, req) => {
    // Azure Functions parses JSON request bodies, so req.body is already an object
    const input = req.body;
    context.res = {
        status: 200,
        body: {
            message: `Hello ${input.myName}`
        }
    };
    context.done();
};

Google Cloud Functions (docs)

exports.myHandler = (req, res) => {
    // Cloud Functions uses Express-style request/response objects,
    // and JSON request bodies arrive already parsed
    res.status(200).send({
        message: `Hello ${req.body.myName}`
    });
};

As you can see from the three examples above, the differences between providers are small, so refactoring a handler from one signature to another should not be a big job. Initiatives such as CloudEvents (aimed at standardising the schema for the events which trigger functions) should help reduce these differences even further in the future.
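
One practical way to keep a future switch cheap is to confine the provider-specific signature to a thin adapter and keep your real logic in a plain function of your own. Here is a minimal sketch of that idea, with a hypothetical greet function holding the logic; only the adapter would need rewriting if you changed provider:

// One file for brevity: the plain greet function holds the logic,
// and each provider only needs a thin handler wrapped around it.
const greet = (input) => ({ message: `Hello ${input.myName}` });

// AWS Lambda adapter (API Gateway proxy event)
exports.myHandler = async (event) => {
    const input = JSON.parse(event.body);
    return { statusCode: 200, body: JSON.stringify(greet(input)) };
};

// A Google Cloud Functions adapter around the same logic would be equally thin:
// exports.myHandler = (req, res) => res.status(200).send(greet(req.body));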

Another point to consider is which versions of your chosen language runtime each cloud provider supports, particularly if you are using newer language features. For example, at the time of writing AWS Lambda supports Node.js v8.10, which includes the new async/await JavaScript syntax, whereas Google Cloud Functions only supports up to v6.14, which doesn't.
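
If you do need to target an older runtime, the async AWS example above can be rewritten without async/await by using the callback parameter instead of returning a promise. A sketch of the equivalent handler, assuming the same HTTP trigger:

// Same behaviour as the async AWS example above, but compatible with
// runtimes that pre-date async/await (e.g. Node.js v6)
exports.myHandler = (event, context, callback) => {
    const input = JSON.parse(event.body);
    callback(null, {
        statusCode: 200,
        body: JSON.stringify({ message: `Hello ${input.myName}` })
    });
};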

Changes to deployment, trigger and execution configuration

Each provider gives you different settings for configuring how your functions execute, in particular how they're triggered and what compute resources (memory, CPU) should be allocated to them. For deployment, each provider has its own CLI for packaging and publishing functions, so your deployment scripts would need updating if you use that CLI.

Regarding triggers, the main difference to be aware of at the time of writing is that to invoke AWS Lambda functions over HTTPS, you need to use the API Gateway service. This involves setting up its own configuration, and it's also billed separately. Azure Functions and Google Cloud Functions, on the other hand, support HTTP triggers natively at no extra charge.

How can you mitigate the risks of needing to switch in the future?

The following recommendations should help you design your FaaS solution in such a way as to minimise any future effort required to migrate your functions to a different FaaS provider:

  • Check that your organisation's preferred language runtime is supported by multiple providers.
  • Don't pass provider-specific parameters (e.g. context or res) through multiple helper functions. Instead, parse the fields you need out of the parameter object in the main handler function and pass those through.
  • Extract any calls to provider-specific services (e.g. AWS S3) into their own helper function/module so you can easily swap these out in the future.
  • Maintain locally runnable unit tests for each handler function to act as a safety net for your code refactoring (see the sketch after this list).
  • Check that the triggers your app requires (e.g. HTTP request, queue message, CRON schedule) have equivalents with the other cloud providers.
  • Check that the limits imposed by your provider (e.g. around memory allocation and maximum request timeout) are within the requirements of your app and compare these to the other cloud providers. For example, at the time of writing, AWS Lambda supports up to 3GB memory allocation and max timeout of 5 minutes, whereas Azure supports a maximum of 1.5GB of memory but a max timeout of 10 minutes.
  • Use the Serverless Framework to package and deploy your app. It supports all the main cloud providers, lets you manage your function and trigger configuration in a single file (serverless.yml) and gives you the same CLI command for deploying to each provider. This CLI can be invoked from a developer machine or from within a CI pipeline.
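
To illustrate the unit-testing recommendation above, here is a minimal sketch that exercises the AWS handler from earlier using only Node's built-in assert module. The file paths are hypothetical and it assumes the handler lives in handler.js; the point is that the handler is just a function, so no emulator or deployed environment is needed:

// test/handler.test.js (hypothetical path) - run locally with `node test/handler.test.js`
const assert = require('assert');
const { myHandler } = require('../handler');

(async () => {
    // Simulate the shape of an API Gateway proxy event
    const event = { body: JSON.stringify({ myName: 'Paul' }) };
    const response = await myHandler(event, {});

    assert.strictEqual(response.statusCode, 200);
    assert.deepStrictEqual(JSON.parse(response.body), { message: 'Hello Paul' });
    console.log('handler test passed');
})();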

Vendor lock-in beyond FaaS

In this article, we've dealt with the FaaS component of a serverless architecture. In that respect, I don't believe vendor lock-in should be a major concern for the vast majority of organisations: the changes required to your function code and configuration, and to your build and deployment process, are modest, particularly if the above recommendations are heeded.

However, a bigger lock-in concern in a wider serverless architecture is the set of services which your functions integrate with. Your app will inevitably need to store data, whether in relational or non-relational databases, file storage, caches or queues. The big 3 vendors provide a myriad of services in these areas, all with differing levels of lock-in. There are many trade-offs to consider in making these architectural decisions, which I hope to dive into in more detail in a future article.
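
In the meantime, here is a minimal sketch of the "extract provider-specific service calls into their own module" recommendation from earlier, assuming the aws-sdk v2 client and a hypothetical DOCS_BUCKET environment variable. If you later moved to another provider's object storage, only this module would need rewriting:

// storage.js (hypothetical): the only file in the codebase that knows about S3
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const bucket = process.env.DOCS_BUCKET;

exports.saveDocument = (key, data) =>
    s3.putObject({ Bucket: bucket, Key: key, Body: JSON.stringify(data) }).promise();

exports.loadDocument = (key) =>
    s3.getObject({ Bucket: bucket, Key: key }).promise()
        .then((result) => JSON.parse(result.Body.toString()));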

💌 If you enjoyed this article, you can sign up to my weekly newsletter on building serverless apps in AWS.
