The final installment of this series details wrapping our AWS resources in some CDK (Cloud Development Kit) magic.
*Disclaimer: The scope of the CDK work does not include the Athena workgroup or the Athena DynamoDB connector; these will be addressed at a later point.*
I'll be honest: before this blog series I had only used CloudFormation with YAML files, and while I knew about CDK, I didn't know just how good it would be.
Developing in CloudFormation can be cumbersome, and it can impede the creative process that is so necessary when sandboxing. That's why I had scheduled it for the end: after all the fun stuff is done, we go through and define our resources, making sure that everything deploys correctly.
During this segment I realised something: I should have been using CDK to develop from the start. It actually reduces development time and complexity by implementing best-practice architecture by default and filling in the gaps for you. Let's have a look at the differences between declarative CloudFormation and CDK.
- CDK is object-based; CloudFormation is declarative
- CloudFormation needs 100% of the information; CDK will fill in the gaps with best-practice defaults (see the sketch after this list)
- CloudFormation requires a deployment mechanism built around it for automated workloads; CDK has its own CLI and tooling
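To illustrate the gap-filling point, here's a minimal sketch (the stack and table IDs are placeholders, not project code) of how little CDK needs before it can synthesize a complete CloudFormation template:
import aws_cdk as cdk
from aws_cdk import aws_dynamodb as _dynamo
from constructs import Construct

class GapFillDemoStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Only an ID and a partition key are supplied; the generated template
        # also gets a unique table name, key schema and attribute definitions,
        # a billing mode and a removal policy by default.
        _dynamo.Table(
            self, 'demo_table',
            partition_key=_dynamo.Attribute(
                name='TransactionID',
                type=_dynamo.AttributeType.STRING
            )
        )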
Process
To turn our deployment into a CDK workload I did the following:
- Created a CDK project following this resource: https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html (the app entry point it generates looks like the sketch after this list)
- Created an object for each required resource
- Assigned privileges to the resources
- Deployed!
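For context, here's a minimal sketch of that entry point; the module path is an assumption based on the default project layout, so adjust it to match your own package name:
# app.py - the entry point that `cdk synth` / `cdk deploy` run
import aws_cdk as cdk

# Assumed module path; the default `cdk init` layout nests the stack in a package
from cdk.cdk_stack import CdkStack

app = cdk.App()
CdkStack(app, 'CdkStack')   # instantiate the stack defined in the next section
app.synth()                 # emit the CloudFormation template into cdk.out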
It was that easy. CDK handles unique bucket names, IAM roles and policies, and a whole bunch of other things that are normally tedious and boring to develop.
Here's the code block as it stands:
import aws_cdk as cdk
from aws_cdk import (
    aws_apigatewayv2,
    aws_dynamodb as _dynamo,
    aws_lambda as _lambda,
)
# Note: in older CDK v2 releases the HTTP API constructs live in the
# aws_apigatewayv2_alpha / aws_apigatewayv2_integrations_alpha modules.
from aws_cdk.aws_apigatewayv2_integrations import HttpLambdaIntegration
from constructs import Construct

class CdkStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # DynamoDB table keyed on the transaction ID
        dynamodb = _dynamo.Table(
            self, 'cdk_test',
            partition_key=_dynamo.Attribute(
                name='TransactionID',
                type=_dynamo.AttributeType.STRING
            )
        )

        # Lambda layer bundling the requests library, shared by both functions
        requests_layer = _lambda.LayerVersion(
            self, 'requests_layer',
            code=_lambda.Code.from_asset('layers/_requests'),
            compatible_runtimes=[_lambda.Runtime.PYTHON_3_9],
            compatible_architectures=[_lambda.Architecture.X86_64],
            description='A layer to send API requests'
        )

        # Lambda that processes incoming webhook events
        process_webhook = _lambda.Function(
            self, 'process_webhook',
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler='lambda_function.handler',
            code=_lambda.Code.from_asset('lambdas/process_webhook'),
            layers=[requests_layer]
        )

        # Lambda that provisions new users
        provision_user = _lambda.Function(
            self, 'provision_user',
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler='lambda_function.handler',
            code=_lambda.Code.from_asset('lambdas/provision_user'),
            layers=[requests_layer]
        )

        # CDK generates the IAM policies that let both functions use the table
        dynamodb.grant_read_write_data(process_webhook)
        dynamodb.grant_read_write_data(provision_user)

        # HTTP API with a POST /webhook route that invokes process_webhook
        process_webhook_integration = HttpLambdaIntegration(
            'Process Webhook Integration', process_webhook
        )
        api = aws_apigatewayv2.HttpApi(self, 'HttpApi')
        api.add_routes(
            path='/webhook',
            methods=[aws_apigatewayv2.HttpMethod.POST],
            integration=process_webhook_integration
        )
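For completeness, here's a hypothetical, stripped-back version of what the lambda_function.handler behind the webhook route might look like. This isn't the project's actual handler, and it assumes the table name is passed in via an environment variable, which the stack above doesn't wire up yet:
# lambdas/process_webhook/lambda_function.py - hypothetical minimal handler
import json
import os

import boto3

# Assumes the stack sets environment={'TABLE_NAME': dynamodb.table_name}
# on the Function so the handler knows which table to write to.
table = boto3.resource('dynamodb').Table(os.environ['TABLE_NAME'])

def handler(event, context):
    # HTTP APIs deliver the POST body as a JSON string
    body = json.loads(event.get('body') or '{}')

    # The grant_read_write_data() call in the stack is what permits this write
    table.put_item(Item={
        'TransactionID': str(body.get('id', 'unknown')),
        'raw_body': event.get('body') or '{}'
    })

    return {'statusCode': 200, 'body': json.dumps({'received': True})}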
Results
Here's a screenshot of the CDK stack deploying successfully:
And here's the current state of what we've developed so far:
That's a wrap for this blog series. I hope you learnt a few things about AWS and how you can apply it to the problems you encounter in your own life; I certainly learnt a lot along the way!
I'll be continuing development of this project in DIY Dashboard, an open source repo designed to help you quickly stand up serverless infrastructure to create your own automated dashboards. There are many planned features and a few refactors on the cards as well; I need to sort out that pesky provision_new_user lambda!