A step-by-step guide to deploying an AWS Lambda function built with the Serverless Framework using Bitbucket Pipelines.
Prerequisites:
- A Lambda function project built with the Serverless Framework (see the minimal example below)
- An AWS access key & AWS secret access key with sufficient permissions to deploy Lambda functions
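If you don't already have a project handy, a minimal serverless.yml looks something like this (the service name, handler and function names here are placeholders for illustration, not from the original post):

service: my-service               # placeholder service name

provider:
  name: aws
  runtime: nodejs14.x
  region: eu-west-1

functions:
  hello:
    handler: handler.hello        # placeholder: hello() exported from handler.js
    events:
      - http:                     # optional: exposes the function via API Gateway
          path: hello
          method: get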
Step 1: Enable Pipelines in Bitbucket
Head over to your Bitbucket repository and enable Pipelines under Repository settings -> Pipelines settings.
Step 2: Add your AWS credentials as repository variables under Repository settings -> Repository variables.
You can name the variables whatever you like, but make sure they are distinct for each environment you want to deploy to.
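For example, the pipeline later in this guide expects these four secured repository variables, two for the production account and two for the non-production (dev/qa) account:

AWS_ACCESS_KEY_ID                # production access key
AWS_SECRET_ACCESS_KEY            # production secret key
AWS_ACCESS_KEY_ID_NON_PROD       # dev/qa access key
AWS_SECRET_ACCESS_KEY_NON_PROD   # dev/qa secret key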
Step 3: Add a file named bitbucket-pipelines.yml in the root of your Serverless project
Since this is a Serverless Framework project, the pipeline needs a container image with the serverless CLI installed globally. For this, I already have a Docker image with the following tools installed:
- nodejs (v14)
- git
- bash
- curl
- openssh
- py3-pip
- wget
- serverless
You can get this image from here - https://hub.docker.com/repository/docker/metacollective/serverless_ci
or, if you would rather build and push your own image, you can use this Dockerfile:
FROM node:14-alpine

# Install system packages
RUN apk update && apk add --update --no-cache \
    git \
    bash \
    curl \
    openssh \
    python3 \
    py3-pip \
    py-cryptography \
    wget

# Build tools needed to compile native node modules
RUN apk --no-cache add --virtual builds-deps build-base python3

# Update npm
RUN npm config set unsafe-perm true
RUN npm update -g

# Install the AWS CLI
RUN pip install --upgrade pip && \
    pip install --upgrade awscli

# Install the Serverless Framework
RUN npm install -g serverless
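If you go down the build-your-own route, you can build and publish the image with something like this (replace <your-dockerhub-user> with your own Docker Hub account; the repository name here is just an example):

docker build -t <your-dockerhub-user>/serverless_ci .
docker push <your-dockerhub-user>/serverless_ci

Then point the image: key of your bitbucket-pipelines.yml at your image instead.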
With the container image sorted, all we have to do is add the following lines to our bitbucket-pipelines.yml
image: metacollective/serverless_ci

pipelines:
  default:
    - step:
        script:
          - npm install
  branches:
    dev:
      - step:
          caches:
            - node
            - pip
          script:
            - npm install
            - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_NON_PROD
            - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_NON_PROD
            - sls deploy --stage dev --region eu-west-1
    qa:
      - step:
          caches:
            - node
            - pip
          script:
            - npm install
            - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_NON_PROD
            - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_NON_PROD
            - sls deploy --stage qa --region eu-west-1
    master:
      - step:
          caches:
            - node
            - pip
          script:
            - npm install
            - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
            - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
            - sls deploy --stage prod --region eu-west-1
As you can see, we can now manage the pipeline for each branch from this one file.
Let's go through some of the important parts -
- default: the pipeline that runs for any branch that doesn't have its own section below.
- branches: works like a switch statement; Pipelines matches the branch you pushed to and executes whatever is defined under it.
- npm install: installs the node packages your function depends on.
- export: sets environment variables. You can pick a different AWS account for each branch if you like and add its credentials to the repository variables under the pipeline settings.
- sls deploy: the Serverless Framework command that packages and deploys your Lambda function to AWS. If your functions define HTTP events, it will also wire up API Gateway for them.
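Once a deploy has finished, you can double-check what was published with the Serverless CLI (a quick sketch; run it with the same credentials, stage and region you deployed with):

sls info --stage dev --region eu-west-1

This prints the deployed functions and, if you defined HTTP events, their API Gateway endpoints.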
After pushing your changes to Bitbucket, you can monitor the pipeline run from the Pipelines tab of your repository.
Now you have a working pipeline 🎉🎉🎉
P.S.: If there is an error that says something along the lines of "access denied", check your AWS credentials and make sure they have the permissions needed to deploy, including setting up the required Lambda execution role (https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html).
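One quick way to debug this is to check which identity the pipeline is actually deploying as. Since the image above ships with the AWS CLI, you can temporarily add a line like this to the step's script (a diagnostic suggestion, not part of the original pipeline):

aws sts get-caller-identity

It prints the account ID and the IAM user or role behind the credentials, which makes it easy to spot a mixed-up key pair.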