What is Serverless Computing?
Serverless computing is a newer form of cloud computing, similar to VMs and containers running on a cloud provider. While it doesn't mean there are no servers, the management of servers, scaling, and capacity planning is taken care of by the underlying cloud provider. Application developers only need to focus on functionality and business logic.
More developers are adopting serverless technology for the next generation of applications and APIs. Serverless is already being adopted 10x faster than Docker and containers were. This article provides an overview of the serverless ecosystem and the differences between the major providers and frameworks.
Microservices Primer
Before serverless computing, many enterprises adopted microservices, a form of Service Oriented Architecture (SOA). Microservices enable applications to be organized as a collection of loosely coupled services connected through APIs. Each service is an entirely separate mini-application in its own process/container/VM. The main benefit is what we learned in computer science 101: modularity and separation of concerns. Microservices can make complex applications easier to develop and scale in large companies. Applications can be easily divided across functional teams, with each team responsible for an individual microservice or set of microservices. The codebase can be kept small and builds short for fast iterative development. Individual pieces of the application can be scaled independently of other microservices.
However, with the advent of microservices, infrastructure and operations work is greatly increased. There are suddenly many more continuous integration/continuous delivery pipelines that need to be tracked. The interoperability of each version or variant of a service needs to be considered, along with which other services it depends on. There is complex orchestration to manage many more moving pieces. Logging context is now scattered across many individual processes. More burden is placed on integration testing. In fact, companies like Basecamp argue that a monolithic architecture makes sense for certain small companies like startups.
Towards Serverless
Serverless computing takes microservices to the extreme. The infrastructure, orchestration layers, and deployment are taken away. There are still servers and VMs, but they are fully managed by the cloud provider. As an application developer, you only have to write business logic and functionality and leave the rest to AWS or Azure.
Serverless computing can also reduce your compute costs. While most cloud providers charge an hourly rate for reserving a VM, serverless computing can use a consumption-based pricing model. There is no charge if the app isn't actively using compute or memory resources.
Serverless Computing Providers
Cloud computing vendors are driving a lot of the innovation in this space. The comparison below focuses on three aspects: build (languages supported), deployment (ease of setup and management), and triggers (what can trigger a function).
AWS Lambda
AWS Lambda was one of the first serverless compute offerings, introduced in 2014.
AWS Lambda natively supports a variety of languages, including Node.js, Python, Java, and C# (.NET Core). Additional languages can be supported by spawning a child process from one of the supported languages, which is allowed in the AWS Lambda sandbox. Because AWS's sandbox isolation does not rely on any language constructs, this flexibility is possible.
Development can be done online using AWS's embedded editor. However, once you need to add dependencies for your code to function, you need to develop on your local machine and upload a bundle consisting of your code along with any dependencies. AWS Lambda will not run `npm install` if you're developing a Node function; deployment requires creating and uploading a deployment package that bundles the code and dependencies together.
AWS is positioning AWS Lambda front and center within AWS, rather than as just another offering: a very large number of AWS services can serve as one-click triggers, providing a lot of flexibility in using Lambda. The very popular AWS API Gateway trigger can be used if you're developing an API. API Gateway enables you to map HTTP requests for a RESTful API endpoint such as `GET /items/{id}` to an AWS Lambda function like `getItem(event, context, callback) { }`. Outside of REST APIs, AWS provides triggers for everything from Amazon Kinesis (similar to Kafka) for event streams to updates in DynamoDB or S3, and a function can even be triggered from an Alexa skills app. In fact, AWS is pushing AWS Lambda as the primary way to develop new Alexa Skills.
Azure Functions
Azure entered the serverless space a little after AWS, with Azure Functions introduced in mid-2016. Azure supports a wider variety of languages than AWS. In addition to Node.js, C#, and Python, Azure also supports F#, PHP, Bash, and PowerShell. On October 4, 2017, during the JavaOne conference in San Francisco, Azure announced that it will also support Java. Furthermore, Azure's Logic Apps and Flow enable non-developers to set up their own logic for business processes, much like Zapier connects multiple business tools.
Azure provides an online editor similar to AWS's; however, Azure's editor is built on Visual Studio Online. Unlike AWS and Google, Azure provides more infrastructure around deployment. You can set up continuous build and deployment from sources in Visual Studio Team Services, Bitbucket, and GitHub.
Architecturally, Azure Functions is quite different from AWS Lambda, as much of the infrastructure for Azure Functions came out of Azure App Service and App Service Plans. Azure Functions are logically grouped into an application container or environment called an _App Service_. All the Azure Functions within an App Service share the same resources, such as compute and memory. This also enables the deployment of an application rather than individual functions. You can think of Azure Functions as a blend between AWS Lambda and more traditional environments like Azure Web Apps or AWS Elastic Beanstalk.
Unlike AWS, more of the HTTP trigger functionality is built natively into Azure Functions without requiring the setup of a separate API gateway. Much of this has to do with Azure's logical grouping of functions under an App Service container. Azure also supports a variety of other triggers, including Azure Blob Storage, Azure Event Hubs, and Queues.
Google Cloud Functions
Relative to Azure and AWS, Google Cloud Functions is the most limited in terms of languages, as it only supports Node.js. Unfortunately, Google Cloud Functions also appears the least developed relative to AWS and Azure. Part of this is not due to Cloud Functions itself, but to the fact that Google has the fewest offerings in Google Cloud that could serve as triggers. AWS has Kinesis and Azure has Event Hubs, but Google has no equivalent offering to serve as a trigger. Unlike AWS, Google is not pushing Cloud Functions front and center; internal development at Google relies more on tools like Kubernetes (Borg) than on Cloud Functions.
On the positive side, Google allows up to 9 minutes of execution before the process is killed. Google also has a separate product, Cloud Functions for Firebase, which may be useful if you're a mobile app startup already reliant on Firebase. It enables you to spawn a new Cloud Function from updates in your Firebase database. Unlike AWS, Google integrated HTTP functionality directly into Cloud Functions without requiring the setup of a separate API gateway.
For deployment, you can upload a zip file or deploy from Google repositories.
Independent Serverless Frameworks
Because serverless computing is a heavily managed service, there is a very high degree of vendor lock-in. The trigger implementations are specific to each provider, so you won't find the same AWS Kinesis trigger on Google or Azure. In addition to different triggers, the incoming context and top-level function signatures differ between AWS Lambda, Google Cloud Functions, and Azure Functions.
While there is a high chance of lock-in, you can mitigate this by leveraging vendor-neutral shims that translate across the various services. Such shims allow you to use AWS Kinesis in a similar way to Azure Event Hubs. In addition, HTTP APIs, a very common use of serverless, can be entirely open and transparent.
There are independent open source frameworks that do just this. In addition, they standardize deployment and normalize the vendor-specific context objects for you.
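The normalization idea can be sketched as a pair of tiny adapters that map each provider's HTTP event into one common request shape. The field names of the common shape are illustrative, not any particular framework's API:

```javascript
// Sketch of a vendor-neutral shim: each adapter converts a provider-specific
// HTTP event into the same normalized request object, so the business logic
// underneath never touches a vendor format.
function fromAwsApiGateway(event) {
  return {
    method: event.httpMethod,
    path: event.path,
    query: event.queryStringParameters || {},
    body: event.body ? JSON.parse(event.body) : null,
  };
}

function fromGoogleCloudFunctions(req) {
  return {
    method: req.method,
    path: req.path,
    query: req.query || {},
    body: req.body || null,
  };
}
```

With adapters like these, the same handler can be deployed behind either provider's trigger and see identical input.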
There are quite a few such frameworks to choose from, depending on your needs.
Monitoring Challenges
One of the growing challenges with serverless computing is monitoring and debugging all those functions. Logging context is now scattered across even more components than in a microservice architecture. Many of the logs are very vendor specific and are not like standard NGINX or HAProxy logs.
In addition, it's very hard to mirror and run functions locally for debugging in the same environment. It's crucial to include as much context as possible so you can debug from failure scenarios rather than only reproducing issues locally; there will be times when you can't reproduce an issue outside the cloud vendor's environment.
At Moesif, we are hard at work providing visibility into the invocations of such serverless functions, and we already have SDKs for AWS Lambda with different triggers, such as Moesif's Lambda Middleware for API Gateway and Moesif's Lambda Middleware for Alexa Skills Kit.
Top comments (3)
Thanks for sharing! I believe it has become very common to treat function as a service (FaaS) = serverless. But as you describe so well, it is really about not managing servers.
Services like Heroku have been around for about 10 years and are serverless. Now there are more and more managed services for databases, message queues, etc. To be precise, even a managed container runtime is serverless. So AWS ECS Fargate is serverless and AWS Lambda is containerless :D
While FaaS is certainly a pretty big deal, going serverless with other infrastructure components and not only your application servers allows you to focus on what is important: Your business.
Agreed. Ultimately these buzzwords are just that: buzzwords. Running the business and creating useful things is what's important.
Having worked extensively with AWS, Azure, and Google Cloud over the past few years, I've found that each platform has its strengths and considerations.
AWS is often praised for its comprehensive service offerings, global reach, and mature ecosystem. It's a go-to choice for scalability, and its vast community and extensive documentation make it relatively easier for developers to find solutions.
Azure, on the other hand, integrates seamlessly with Microsoft's products, making it a preferred choice for organizations already invested in the Microsoft ecosystem. Its hybrid cloud capabilities and enterprise-focused solutions are notable advantages.
Google Cloud stands out for its data analytics and machine learning capabilities. TensorFlow and BigQuery are powerful tools that make Google Cloud appealing for projects heavily reliant on data analysis and AI.
In terms of pricing, it's crucial to analyze specific use cases, as pricing models vary between the providers. AWS often has a pay-as-you-go model, Azure offers flexibility with its hybrid pricing, and Google Cloud frequently focuses on sustained use discounts.
Ultimately, the choice between AWS, Azure, and Google Cloud depends on your project requirements, team expertise, and the specific features each cloud provider excels at. It's worth exploring each platform's free tier and documentation to get a hands-on feel and make an informed decision.
What has been your experience with these platforms?