What is serverless computing?
Serverless computing is a cloud computing model where the cloud provider dynamically manages the allocation and scaling of computing resources for running application code, allowing developers to focus solely on writing and deploying code without worrying about the underlying infrastructure. In simpler terms, it's a way of running code without having to worry about the servers on which it runs.
What is server-based computing?
In a traditional server-based architecture, the developer would need to provision and manage virtual machines, operating systems, web servers, and other infrastructure components required to host and run their application. This approach requires a significant investment in time and resources to set up and maintain, and can be expensive to scale and secure.
| Server | Serverless |
|---|---|
| A server does not scale up or down. Its capacity cannot be exceeded, and its resources stay allocated even if they're not being used (effectively wasted). | Serverless systems automatically scale instances up and down to handle the load. You do nothing to achieve this behavior. |
| A server requires maintenance. If you run a server, you might have to monitor it, install software, apply patches, tune it, and perform other operations. You have to figure out how to deploy your code to it. | Serverless systems require no maintenance. The cloud provider handles all the details of managing the underlying hardware. You just write and deploy code using tools provided by the cloud vendor. |
| A server has an ongoing cost associated with it. Typically costs are paid on an hourly, daily, or monthly basis just to keep the server up and running, even if it's not being used. | Serverless systems are billed per function invocation. When you deploy code to a serverless backend, you are charged for the resources used (invocations, memory, bandwidth). If you use nothing, you are charged nothing. |
| A server allows you to deploy services that run on an ongoing basis. You can typically log in and run whatever programs you want, whenever you want, for as long as you want. | Serverless systems are event-driven by nature. You deploy code that runs in response to events in the system, such as database triggers that respond to changes or HTTP requests that serve an API. Code does not run outside the context of handling some event, and it is often constrained by time limits. |
| Monolith architecture | Microservice architecture |
| Example: AWS EC2 | Example: AWS Lambda |
Okay! We will be developing two Node.js APIs to better understand the process.
Project requirements:
Steps:
1) Initialize Node.js project
npm init -y
2) Install serverless and serverless-offline libraries
npm install -g serverless
npm install serverless-offline --save-dev
3) Run the command below from your project directory terminal
serverless create --template aws-nodejs
After following the above steps, your project directory will look something like this:
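For reference, the `aws-nodejs` template typically generates a layout roughly like this (exact files may vary slightly between framework versions):

```
.
├── .gitignore
├── handler.js
└── serverless.yml
```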
4) Delete the `handler.js` file and create a folder named `api`.
5) Inside the `api` folder, create two files named `calculator.js` and `sayHello.js`.
6) Copy and paste the code below into the respective files.
`sayHello.js`

```javascript
// Lambda handler: returns a fixed 200 response via the callback.
module.exports.sayHello = (event, context, callback) => {
  const response = { statusCode: 200, body: 'Go Serverless!' };
  callback(null, response);
};
```
`calculator.js`

```javascript
// Lambda handler: reads the action and operands from the query string
// and returns the result.
module.exports.calculator = (event, context, callback) => {
  const payload = event.queryStringParameters;
  console.log(payload.action);
  if (payload.action === 'add') {
    const add = Number(payload.num1) + Number(payload.num2);
    const response = {
      statusCode: 200,
      body: JSON.stringify({ data: add }),
    };
    // The callback's first argument is the error, so pass null on success.
    callback(null, response);
  }
};
```
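To see how API Gateway hands query parameters to the handler, here is a small self-contained sketch that reproduces the calculator logic and invokes it with a mock event. The event shape mirrors what the Lambda proxy integration sends; the mock values are purely illustrative.

```javascript
// Inline copy of the calculator handler for a self-contained demo.
const calculator = (event, context, callback) => {
  const payload = event.queryStringParameters;
  if (payload.action === 'add') {
    const add = Number(payload.num1) + Number(payload.num2);
    callback(null, { statusCode: 200, body: JSON.stringify({ data: add }) });
  }
};

// API Gateway (Lambda proxy) delivers query parameters as strings on
// event.queryStringParameters, which is why the handler uses Number().
const mockEvent = {
  queryStringParameters: { action: 'add', num1: '2', num2: '3' },
};

calculator(mockEvent, {}, (err, res) => {
  console.log(res.statusCode, res.body); // 200 {"data":5}
});
```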
Now, you might be wondering: what are event, context, and callback? Don't worry, I've got you covered.
Event: In AWS Lambda, an event is a trigger that initiates the execution of a Lambda function. An event can come from various sources, such as an API Gateway HTTP request, a file upload to an S3 bucket, or a message in an Amazon Simple Queue Service (SQS) queue.
Context: The context object is an input parameter passed to a Lambda function at runtime, providing information about the execution environment and resources available to the function. The context object includes details such as the AWS request ID, function name, version, and memory size, and can be used by the function to interact with other AWS services and resources.
Callback: A callback is a function that is passed as an argument to the Lambda function and is called by the function when it completes its execution. The callback function allows the Lambda function to return data to the calling code or to report errors or exceptions that occurred during execution. The callback function typically has two arguments: an error object and a data object, which the function can use to communicate the results of its execution back to the calling code.
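To make the three parameters concrete, here is a small sketch that invokes a handler with a mock event and context. The handler name `describeInvocation` and the mock values are made up for illustration; the context field names mirror common Lambda context properties.

```javascript
// A handler that echoes details from all three parameters.
const describeInvocation = (event, context, callback) => {
  const summary = {
    source: event.source,               // taken from the triggering event
    functionName: context.functionName, // taken from the runtime context
  };
  callback(null, summary); // (error, result): a null error signals success
};

const mockEvent = { source: 'aws.s3' };
const mockContext = { functionName: 'hello', awsRequestId: 'test-request-id' };

describeInvocation(mockEvent, mockContext, (err, result) => {
  console.log(result); // { source: 'aws.s3', functionName: 'hello' }
});
```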
Now that our two APIs are ready, it's time to configure our functions in the `serverless.yml` file so that they can be deployed to AWS without any hassle.
What is the serverless.yml file?
The serverless.yml file is a configuration file used by the Serverless Framework to define and deploy serverless applications on various cloud platforms, such as AWS, Azure, and Google Cloud Platform. The serverless.yml file contains the application's infrastructure as code, including the services, functions, resources, and plugins required to run the application.
The serverless.yml file typically includes the following sections:
service: Defines the name and description of the serverless application.
provider: Defines the cloud provider and its specific configuration options.
functions: Defines the serverless functions, their code, and their configuration options.
resources: Defines the additional resources required by the application, such as databases, queues, and buckets.
plugins: Defines the plugins required by the application to extend the Serverless Framework's functionality.
By defining the application's infrastructure in a serverless.yml file, developers can easily deploy, manage, and version their serverless applications using the Serverless Framework's CLI. The serverless.yml file also provides a reusable and portable way of defining serverless applications, allowing developers to deploy their applications on multiple cloud providers with minimal changes to the configuration file.
Now that you know the purpose of the serverless.yml file, copy and paste the configuration below into your serverless.yml file.
```yaml
service: serverless-blog
frameworkVersion: "3"

provider:
  name: aws
  runtime: nodejs18.x
  region: ap-southeast-1

plugins:
  - serverless-offline

functions:
  hello:
    handler: api/sayHello.sayHello
    events:
      - http:
          path: /hello
          method: post
  calculator:
    handler: api/calculator.calculator
    events:
      - http:
          path: /calculator
          method: get
          request:
            parameters:
              querystrings:
                action: true
                num1: true
                num2: true
```
Let's dig into everything mentioned in the configuration file:
1) service: The name of your service.
2) frameworkVersion: The version of the Serverless Framework to use.
3) provider: In our case, we are using AWS, and our functions will be deployed in the ap-southeast-1 region.
4) plugins: If you are using dev dependencies or external tools, mention them in this section. In our case, we are using the serverless-offline library as a dev dependency for local testing.
5) functions: In this section, we list all the functions that we want to deploy on AWS.
i) `hello` and `calculator` are the names of the functions that will be visible on AWS under the Lambda service.
ii) handler: The path where a particular function is defined. For example, in the `api` folder we have the `sayHello.js` file, and in sayHello.js we export the `sayHello` function; hence the handler is `api/sayHello.sayHello`.
iii) http: Denotes that we are using the Lambda proxy integration; refer to the AWS and Serverless docs if you want other integration types.
iv) path: The route on which we will invoke the function. Example: https://dev.to/calculator.
v) method: The HTTP method, such as GET, PUT, or POST.
vi) request: If you are using a GET request, define the query parameters in the style shown above.
So, yes! That's it. It's time to test our APIs.
7) Run the command below in your project directory terminal
serverless offline --httpPort 4000
After a successful build, you will get the following URLs:
http://localhost:4000/dev/hello
http://localhost:4000/dev/calculator
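Since the calculator endpoint declares three required query strings, the test URL has to carry all of them. As a quick sketch, here is a hypothetical helper (`buildCalculatorUrl` is not part of the project, just an illustration) that builds such a URL for the local dev stage:

```javascript
// Hypothetical helper: builds the local test URL for the calculator
// endpoint, matching the querystrings declared in serverless.yml.
const buildCalculatorUrl = (base, action, num1, num2) => {
  const params = new URLSearchParams({
    action,
    num1: String(num1),
    num2: String(num2),
  });
  return `${base}/dev/calculator?${params.toString()}`;
};

console.log(buildCalculatorUrl('http://localhost:4000', 'add', 2, 3));
// http://localhost:4000/dev/calculator?action=add&num1=2&num2=3
```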
Let's test and see the output:
Yay! We did it!
Time for the final deployment!
8) Run the command below from your project directory terminal
serverless deploy
If everything goes well, you will see the API Gateway URLs in your terminal.
Once you run serverless deploy, AWS creates a CloudFormation stack with resources such as API Gateway, AWS Lambda, and an S3 bucket, based on your YAML configuration. CloudFormation gives developers the added advantage of focusing on code without worrying about resource creation.
To learn more about serverless: https://www.serverless.com/
If you've read this far, I bet you learned something new today!
Keep learning, keep sharing!