John Hatton

Creating an Advanced Serverless Feature Flag Service using CDK and C#

This is part 2 of my previous blog post.

This came about while studying for the AWS DevOps Professional exam, where I came across exam examples and resources that were very useful, but no actual examples I could look at and run myself to better understand them. So, as a fun challenge and so I could present to my colleagues, I decided to implement these services on top of my existing service and define them in CDK.

While putting together a presentation to help colleagues pass the AWS DevOps Professional exam, I started with a basic example and then thought about what other AWS services I could showcase in this simple application.

There were a couple of aims with this solution: the first was to demonstrate that it could all be written in CDK, and the second was to show how AWS serverless services can interact with each other.

Continuing from the original application, written using CDK and C#, I introduced Amazon CloudWatch (more specifically, CloudWatch alarms to alarm on errors), Amazon SNS for sending out emails off the back of an alarm, SSM Parameter Store for storing a configuration flag, Amazon Kinesis Data Firehose for streaming changes from DynamoDB into Amazon S3, and finally AWS Config.

While this is a little over the top and uses more services than you might think are needed, it was created as an example of how different services can be used together.

The main goal of the application has not changed: it is still a feature flag service, but it now has a few more features in the way of observability and would allow for analytics.

Below is an overview of the new architecture that I will be walking through.

Overview of architecture

The first main change is that there is now an audit log: when a flag is inserted, retrieved, deleted or updated, a DynamoDB stream now streams the data into Amazon Kinesis Data Firehose. Although this isn't strictly necessary, it demonstrates that any change to data in the DynamoDB table can be streamed elsewhere. In this case the data passes through a Lambda that inserts a line break into the JSON; this was just to make the JSON more human-readable and to demonstrate how a Lambda can be used as part of Firehose to make additional transformations.
Firehose then streams the data into S3, which essentially acts as an audit log.

The main reason for this change is to demonstrate having an audit log: services such as Amazon QuickSight could be used to create dashboards from it, or Athena could be used to query specific data. It also demonstrates streaming data from one AWS service to another using Amazon Kinesis Data Firehose, which comes up on the exam, and this shows how you can use it.

Below is the code needed to define the S3 bucket that is the target of the Firehose, the Lambda used for transforming records, and the Firehose itself.

var bucketProps = new BucketProps
{
    AutoDeleteObjects = true,
    RemovalPolicy = RemovalPolicy.DESTROY,
    Versioned = false,
    BucketName = "feature-flag-firehose"
};

var firehoseBucket = new Bucket(this, "firehose-s3", bucketProps);

var lambda = CreateLambda("lambda-processor", "Processor");

var lambdaProcessor = new LambdaFunctionProcessor(lambda, new DataProcessorProps
{
    Retries = 5,
});

var dynamoDBChangesPrefix = "ddb-changes";

var s3BucketProps = new S3BucketProps
{
    BufferingInterval = Duration.Seconds(60),
    Processor = lambdaProcessor,
    DataOutputPrefix = $"{dynamoDBChangesPrefix}/"
};

var s3Destination = new S3Bucket(firehoseBucket, s3BucketProps);

if (props.KinesisFirehoseEnabled)
{
    KinesisStream = new Stream(this, "Stream");
    var deliveryStreamProps = new DeliveryStreamProps
    {
        SourceStream = KinesisStream,
        Destinations = new[] { s3Destination }
    };

    new DeliveryStream(this, "Delivery Stream", deliveryStreamProps);
}

From the block above you can see that, although there is a little bit of code, it doesn't take a lot to create an S3 bucket and a Kinesis Firehose. You can also see I have a flag, as the Firehose costs money to run for this demo, and in the repo it is disabled by default.
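The transformation Lambda created by `CreateLambda("lambda-processor", "Processor")` isn't shown in this post. As a hedged sketch (not the repo's exact code), a Firehose processor that appends the line break could look something like this, assuming the `Amazon.Lambda.KinesisFirehoseEvents` package:

```csharp
using System;
using System.Collections.Generic;
using System.Text;
using Amazon.Lambda.Core;
using Amazon.Lambda.KinesisFirehoseEvents;

public class Processor
{
    // Firehose invokes this with a batch of records; each record's data is
    // the base64-encoded DynamoDB stream event.
    public KinesisFirehoseResponse FunctionHandler(KinesisFirehoseEvent evnt, ILambdaContext context)
    {
        var response = new KinesisFirehoseResponse
        {
            Records = new List<KinesisFirehoseResponse.FirehoseRecord>()
        };

        foreach (var record in evnt.Records)
        {
            // Decode, append a newline so each JSON document lands on its own
            // line in S3, then re-encode for Firehose.
            var json = Encoding.UTF8.GetString(Convert.FromBase64String(record.Base64EncodedData));
            var transformed = json + "\n";

            response.Records.Add(new KinesisFirehoseResponse.FirehoseRecord
            {
                RecordId = record.RecordId,
                Result = KinesisFirehoseResponse.TRANSFORMED_STATE_OK,
                Base64EncodedData = Convert.ToBase64String(Encoding.UTF8.GetBytes(transformed))
            });
        }

        return response;
    }
}
```

Each record must be returned with its original RecordId and a result of Ok, otherwise Firehose treats it as a processing failure and retries.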

In the next block below you can see the DynamoDB table being created; the important part is that it takes just one line to enable the Kinesis stream.

var tableProps = new TableProps
{
    PartitionKey = new Attribute { Name = "name", Type = AttributeType.STRING },
    BillingMode = BillingMode.PAY_PER_REQUEST,
    RemovalPolicy = Amazon.CDK.RemovalPolicy.DESTROY,
};

if (props.KinesisFirehoseEnabled)
{
    tableProps.KinesisStream = props.KinesisStream;
}

FlagTable = new Table(this, "flags", tableProps);

One thing to note is that, to define these particular Kinesis Firehose features, I am currently using the alpha (experimental) CDK namespaces.

To showcase another example of a service, I wanted to create an AWS Config rule with auto-remediation for when an S3 bucket's public access policy was changed from private to public. Unfortunately, at the time of writing this is an experimental feature, and a bug prevented it from being created correctly.

The reason for adding this was, again, to demonstrate another service being defined in CDK, as well as to show how Config can be used to maintain compliance and automatically remediate resources when they become non-compliant. I used the S3 bucket created for the logs for this example, as it was easier to show during a live demo.
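Although the auto-remediation side was blocked by the bug, the detection side can be sketched in CDK. This is my own hedged sketch rather than the repo's code, using one of the AWS-managed rule identifiers:

```csharp
using Amazon.CDK.AWS.Config;

// AWS Config managed rule that flags buckets allowing public read access.
// Auto-remediation (e.g. an SSM document re-applying the private policy)
// would be attached separately once the experimental support stabilises.
var publicBucketRule = new ManagedRule(this, "s3-public-read-prohibited", new ManagedRuleProps
{
    Identifier = ManagedRuleIdentifiers.S3_BUCKET_PUBLIC_READ_PROHIBITED
});
```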

Moving on to observing errors, I added a log message in the create feature flag handler, so if a flag already existed it would log an error. In a real-world scenario I would have added more errors, but I just wanted to demonstrate how CloudWatch could be used to alert someone of a particular error.
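As an illustrative fragment (the lookup and response helpers here are hypothetical, not the repo's exact code), the check in the create handler might look like this; the important part is that the log line contains "ERROR" so the metric filter further down can match it:

```csharp
using Amazon.Lambda.Core;

// Illustrative fragment from a create-flag handler: if the flag already
// exists, emit a log line containing "ERROR" so the CloudWatch metric
// filter picks it up.
if (await FlagExistsAsync(flagName))           // hypothetical DynamoDB lookup
{
    context.Logger.LogLine($"ERROR: flag '{flagName}' already exists");
    return ExistingFlagResponse();             // hypothetical response helper
}
```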

For this I created a CloudWatch metric and metric filter, followed by an alarm. In this example the period is set to 5 minutes with a threshold of 1 error. When the threshold is met, an SNS message is published and an email notification is sent out, alerting the subscribers that there has been an increase in errors.

var pattern = FilterPattern.AnyTerm(new string[] { "ERROR", "Error", "error" });
// Note: Metric.With returns a new Metric rather than mutating in place,
// so the period is set in the props up front.
var metric = new Metric(new MetricProps
{
    MetricName = "Get Function Errors",
    Namespace = "lambdaErrors",
    Statistic = "sum",
    Period = Amazon.CDK.Duration.Minutes(5)
});

var metricFilter = new MetricFilter(this, "get-function-errors-metric",
    new MetricFilterProps
    {
        MetricNamespace = "lambdaErrors",
        FilterPattern = pattern,
        MetricName = "Get Function Errors",
        LogGroup = lambda.LogGroup
    });

metricFilter.ApplyRemovalPolicy(Amazon.CDK.RemovalPolicy.DESTROY);

var alarm = new Alarm(this, "get-function-errors-alarm", new AlarmProps
{
    ActionsEnabled = true,
    Metric = metric,
    AlarmName = "Get Function Errors",
    ComparisonOperator = ComparisonOperator.GREATER_THAN_THRESHOLD,
    Threshold = 0,
    EvaluationPeriods = 1
});

alarm.AddAlarmAction(new SnsAction(lambdaAlarmSNS));

From the code block above you can see how the metric filter and alarm are created and linked together.
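The `lambdaAlarmSNS` topic the alarm publishes to is not shown above. A hedged sketch of how such a topic with an email subscription might be defined (the construct id and address are placeholders):

```csharp
using Amazon.CDK.AWS.SNS;
using Amazon.CDK.AWS.SNS.Subscriptions;

// Topic the alarm publishes to, with an email subscription so a human is
// notified when the error alarm fires. The address must confirm the
// subscription before messages are delivered.
var lambdaAlarmSNS = new Topic(this, "lambda-alarm-topic");
lambdaAlarmSNS.AddSubscription(new EmailSubscription("alerts@example.com"));
```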

The reason for adding this to the service was to showcase observability with more serverless services, to further demonstrate AWS best practices, and to show how you can alert the right people when errors start to appear in your applications, which is another theme in the AWS DevOps Professional learning path and exam questions.

The final new service I added was SSM Parameter Store. While studying for the exam I found example questions and documents from AWS about the best ways to store data that could change, so that a change would not require a code change and therefore a redeploy.

In this scenario I used it to toggle whether a customer could delete a flag, the idea being that if this was used by multiple customers, you could enable and disable the feature per AWS account. If the flag in the parameter store was set to false, then the feature flag would not be deleted. This parameter was not created in CDK; it was created during deployment using the following command:

aws ssm put-parameter --name "allow-flag-deletion" --type "String" --value false --profile=personal --overwrite
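On the read side, the delete handler can fetch the parameter at runtime with the AWS SDK. A hedged sketch of the check (assuming the `AWSSDK.SimpleSystemsManagement` package, inside an async handler):

```csharp
using Amazon.SimpleSystemsManagement;
using Amazon.SimpleSystemsManagement.Model;

// Look up the toggle before deleting; if it is not "true", refuse the delete.
var ssm = new AmazonSimpleSystemsManagementClient();
var response = await ssm.GetParameterAsync(new GetParameterRequest
{
    Name = "allow-flag-deletion"
});

var deletionAllowed = bool.TryParse(response.Parameter.Value, out var allowed) && allowed;
if (!deletionAllowed)
{
    // Skip the DynamoDB delete; flag deletion is disabled for this account.
    return;
}
```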

This was a fun side project to build on top of what I had previously created, taking some of the main points and questions from the AWS DevOps Professional exam and showcasing how they can be implemented.

Please take a look at the GitHub Repo for more information and to try it out.
