Currently I am working on a project where a React SPA connects to an entirely serverless backend: an AWS API Gateway, a bunch of Lambda functions and an Aurora Serverless database. The app is only for internal use in the company, so we added an AWS Cognito User Pool on top of our API (and allow login on the frontend using Amplify). I will write a specific post on how to set up the Cognito User Pool Authorizer with the Serverless framework, but today I just want to show how you can quickly restrict access to your API using API Keys.
Recently we got the request to extend our API and provide access to ETLs run by our DWH. ETL is short for Extract, Transform, Load: simply put, an ETL is a script that reads data from a source, applies some transformation to it and then loads it / saves it somewhere else. These scripts can be run manually by data scientists or, more likely, at regular intervals via cron jobs or scheduled Lambda functions.
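If you have never scheduled a Lambda with the Serverless framework, it is just a schedule event on the function. Here is a minimal sketch, where the function name, handler and cron expression are made up:

nightly-etl:
  handler: etl.run
  events:
    - schedule: cron(0 3 * * ? *)  # run every night at 03:00 UTC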
In our case the ETLs were also run internally, but by another department that had not yet joined the serverless adventure, therefore we could not just trigger our Lambda from their Lambda and manage everything via IAM role policies. Since the ETL wasn't triggered by a user logging into our single page app, we could not use the User Pool either.
The quickest and simplest solution was to make the specific endpoint used by the ETL private and provide an API Key to the ETL owners.
This proved very simple.
First we checked how to do it in the AWS Console, then we implemented everything through the Serverless framework.
As usual, all the button-clicks-and-dialogs madness of the UI process boiled down to a very simple code configuration:
my-lambda-function:
  handler: index.handler
  events:
    - http:
        path: api/my_endpoint
        method: get
        private: true
Unless you need to limit and charge the usage of your API (in which case you need both an API Key and a Usage Plan), the setup really consists in just adding private: true to the http event of your Lambda.
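For reference, in case you ever do need to throttle or meter the key, a Usage Plan can be declared next to your API keys in the provider section of serverless.yml. The limits below are placeholders and the exact syntax may vary between framework versions:

provider:
  name: aws
  usagePlan:
    quota:
      limit: 5000    # max number of requests per period
      period: MONTH
    throttle:
      burstLimit: 20 # maximum burst of concurrent requests
      rateLimit: 10  # steady-state requests per second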
The API Key will be generated during deploy (you can note it down from the deploy output or by running sls info -v afterward).
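If you prefer a key you can easily recognize in the sls info output or in the API Gateway console, you can also declare it yourself under the provider section. The name below is made up, and the key value is still generated by AWS:

provider:
  name: aws
  apiKeys:
    - etl-dwh-key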
When testing locally, keep in mind that serverless-offline generates a new token every time you run sls offline start. This token is printed to the screen, but it's not very handy having to copy-paste it all the time while testing or debugging.
You can, though, use a custom token so that you don't have to grab it from the console each time.
What I am doing is making up a token (e.g. q12w3e4r5t_offline_token) that I then pass as an environment variable to my integration tests or save to the Test Collections in Postman.
Just run
sls offline start --apiKey YOUR_CUSTOM_TOKEN
and in the console you will see that your token is being used:
Serverless: Key with token: YOUR_CUSTOM_TOKEN
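Depending on your serverless-offline version, you should also be able to set the same value once in serverless.yml instead of passing the flag every time, something along these lines:

custom:
  serverless-offline:
    apiKey: q12w3e4r5t_offline_token  # same custom token as above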
Once you have your token, it is just a matter of invoking the API and passing it in the x-api-key header of the request.
curl -X GET 'http://myapi/myendpoint?fromDate=2019-03-01&toDate=2019-03-27' -H 'x-api-key: YOUR_CUSTOM_TOKEN'
In Postman you can add it under Headers as x-api-key.
(As you can see, I am also using environment variables in Postman so that I can store endpoints and keys for localhost/dev/production and the requests in my collection are updated automatically.)
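Just to make that concrete, the request saved in the collection ends up looking roughly like this, with the actual values coming from the selected Postman environment (the variable names are whatever you choose):

GET {{base_url}}/api/my_endpoint?fromDate=2019-03-01&toDate=2019-03-27
x-api-key: {{api_key}}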