Brian H. Hough for AWS Community Builders


Learn how to Dynamically Add App Features with AWS AppConfig Feature Flags (+ Linux 101)

How's it going everyone? 👋 Welcome back to the Tech Stack Playbook, your guide to apps, software, and tech (but in a fun way I promise).

This past week, I attended a really eye-opening AWS workshop put on for AWS Community Builders on a very interesting service I had not used before, called AppConfig. As described in the documentation, AWS AppConfig is a service that lets developers create, manage, and quickly deploy application changes, modifications, features, or patches to applications running on Amazon EC2 (Elastic Compute Cloud), AWS Lambda, containers, mobile apps, or IoT devices.

In this blog post, I'll go through the workshop, the code I used to run everything, screenshots of the deployment process, and I will also go through how to use Linux and an Amazon EC2 instance to deploy all of the code in a VM (virtual machine).

What we will build is a simple Airbnb-for-Cribs home rental application and switch in an image viewer carousel for each listed home via AWS AppConfig. Here's what it will look like (unfortunately, IRL mansions are not included):

Image description

I have the starter repo that AWS created linked on my GitHub here under the branch initial. I also have the final branch code for reference.

GitHub: BrianHHough / AWSAppConfigWorkshop

Exploration of the AWS AppConfig service for dynamic feature changes to a web application. The workshop uses AWS AppSync, AWS Amplify, AWS EC2, DynamoDB. The app's tech stack is: HTML, CSS, JavaScript.


There are 2 branches:

  • final = includes the script calling the AWS Lambda function connected to AWS AppConfig
  • initial = is the starter repo (if you are following along, start on this branch)

Follow along to the following blog post for steps on how to proceed:

DEV.to: Learn how to Dynamically Add App Features with AWS AppConfig Feature Flags (+ Linux 101)




If you're excited, read on!

☁️ Building for Scale (in the Cloud)

Let's take a step back to talk about scale. "Building for scale" is something a lot of people talk about, but we seldom plan for scale to happen in the ways we want. Becoming an "overnight success" could take months, years, or even decades in some cases. We rarely know when "hitting scale" will take place, so we have to future-proof our tech so that it scales with our needs when we need it to.

Imagine releasing a feature to a million-plus member base. What if the feature includes a breaking change with a bug that decreases usage by 60% per user per week? That could be disastrous. We want to get ahead of these issues, which is why AWS AppConfig can help us analyze a feature pushed to a select (but growing) number of users over time so that we can adapt to the change, see what it does to our user base, and adjust accordingly.

When you think about "building for scale," of course one of the top companies that comes to my mind is Facebook (...I mean, Meta...still not used to saying that). In Meta's Q1 2022 Earnings Report, the company reported a whopping 2.94 billion Monthly Active Users as of March 31, 2022 (Meta Q1 2022 Earnings Report).

One of the interviews that stands out in my mind in particular is Reid Hoffman's Masters of Scale interview with Meta CEO Mark Zuckerberg. Back in 2017, Mark Zuckerberg shared a bit more of the context behind his "move fast and break things" mantra that has allowed him to build the largest social media platform on the planet.

Mark shared that:

"At any given point in time, there isn't just one version of Facebook running, there are probably 10,000. Any engineer at the company can basically decide that they want to test something. There are some rules on sensitive things, but in general, an engineer can test something, and they can launch a version of Facebook not to the whole community, but maybe to 10,000 people or 50,000 people—whatever is necessary to get a good test of an experience. (Entrepreneur)"

Check out the interview here:

4. Imperfect is perfect, w/Facebook's Mark Zuckerberg

If you’re Steve Jobs, you can wait for your product to be perfect. For the rest of us, If you’re not embarrassed by your first product release, you’ve released it too late. Imperfect is perfect. Why? Because your assumptions about what people want are never exactly right. Most entrepreneurs create great products through a tight feedback loop with real customers using a real product. So don’t fear imperfections; they won’t make or break your company. What will make or break you is speed. And no one knows this better than Facebook’s Mark Zuckerberg. He shares the origin story of his mantra “move fast and break things” and how this ethos applied as Facebook evolved from student project to tech giant. Read a transcript of this episode: https://mastersofscale.com Subscribe to the Masters of Scale weekly newsletter: https://mastersofscale.com/subscribe


You might think: why would you want something like this? All of your users having different experiences? Isn't that counter-intuitive to UI/UX testing, agile development, and scaling?

Not necessarily... Let's say you are testing a new feature or experience. Sure, you can test it in controlled environments, but those are still just that: controlled. It is far more valuable to test with real people using your app in a real way, without the feeling of being under a microscope, so you can truly understand whether the feature "hits" or resonates with the expected user base at the expected time.

To this day, I have always wondered how dynamic feature adds or dynamic user testing actually works in production. Thanks to the AWS Community Builders Program, I was able to see something like this in action, which is what I will be sharing with you today.

☁️ Intro to AWS AppConfig

With AWS AppConfig, we are going to add in a feature called "Photo Slider" which will allow users to switch between photos in the app for each crib. However, it will not be visible to everyone all at once as soon as the feature is pushed.

AWS AppConfig allows us to set the launch deployment details, where we can release to 1% of users, then 5%, then 10%, and so on. It can be risky to release a new feature to all users at once, so we need to account for that. AWS AppConfig's Feature Flags also allow us to pull the switch on a feature if something unexpected or disastrous happens in the app and revert back near-instantly.
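To make that concrete, here is a minimal sketch (plain JavaScript, and not the workshop's code) of the pattern a feature flag enables: gate the new experience behind the flag and keep a safe fallback. The showcarousel flag name comes from this workshop; everything else in the snippet is a hypothetical stand-in for the real AppConfig fetch we will wire up later.

// Minimal sketch of feature gating (not the workshop code): the new experience only
// renders when the flag is enabled, and the old experience stays as an instant fallback.
// The hard-coded `flags` object and the two render helpers are hypothetical stand-ins.
const flags = {
  showcarousel: { enabled: false }, // becomes true once the rollout reaches this user
};

function renderCarousel() {
  return '<div class="slideshow-container">...</div>'; // new feature
}

function renderSingleImage() {
  return '<img class="card-img-top" src="placeholder.jpg">'; // existing behavior
}

const listingHtml = flags.showcarousel && flags.showcarousel.enabled
  ? renderCarousel()
  : renderSingleImage();

console.log(listingHtml);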

☁️ Where does AWS AppConfig fit?

The architecture diagram below, provided by the AWS team, outlines where everything "lives":

  • Deployment: via AWS Amplify
  • Database: via DynamoDB
  • Front-end: via HTML, CSS, and JS
  • AppConfig: connected to the front-end via an AWS Lambda function

Image description
Source: AWS

☁️ Set up a Feature Flag

In the AWS Console, you will navigate over to the AWS AppConfig service.

  • Click Get Started
  • In "Create Application" page, name your app: AWSomeCribRentals
  • Add a description: This is for feature flags related to the AWSomeCribRentals app
  • Click Create application

This will be our container for all flags related to our application. Think of it like a wrapper.

We then must create a configuration profile within the AppConfig application. This lets us define the Feature Flag type and set-up. You can think of it as an element within the wrapper (i.e. you could have multiple Configuration Profiles, perhaps even multiple feature flags within the same profile, all of which sit within the AppConfig Application).
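As a side note, if you would rather script these steps than click through the console, here is a hedged sketch using the AWS SDK for JavaScript (v2, the same aws-sdk the Lambda function uses later). It mirrors the application we created above and the configuration profile we are about to name just below; treat it as an illustration, not part of the workshop instructions.

// Hedged sketch: create the AppConfig application ("wrapper") and a feature-flag
// configuration profile programmatically. Assumes aws-sdk v2 with credentials and
// region already configured.
const AWS = require('aws-sdk');
const appconfig = new AWS.AppConfig({ region: 'us-east-1' });

async function createFlagContainer() {
  const app = await appconfig.createApplication({
    Name: 'AWSomeCribRentals',
    Description: 'This is for feature flags related to the AWSomeCribRentals app',
  }).promise();

  const profile = await appconfig.createConfigurationProfile({
    ApplicationId: app.Id,
    Name: 'CardFeatureFlag',
    LocationUri: 'hosted',               // flags live in AppConfig's hosted store
    Type: 'AWS.AppConfig.FeatureFlags',  // a feature-flag (not freeform) profile
  }).promise();

  console.log('Application:', app.Id, 'Configuration profile:', profile.Id);
}

createFlagContainer().catch(console.error);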

  • Let's name the configuration profile for our card feature: CardFeatureFlag
  • Add a description: related to card

We will then add a new flag by clicking Add new flag. This will be a short-term flag, as we will deprecate it in the future provided everything is a success:

  • Name: showcarousel
  • Description: this will let users swipe through images rather than only showing one per Crib
  • Select: short-term-flag
  • Click Create flag to create our first Feature Flag with AWS AppConfig
  • Click Save new version to proceed.

☁️ Update the Photo Pagination for the Feature Flag

Navigate to the CardFeatureFlag that we just created and click Add new flag

  • Name: pagination
  • Description: change how many homes returned on the page
  • Attributes: number
  • Type: number
  • Value: 8
  • Required Value: [✅]
  • Constraint: 5 minimum and 12 maximum
  • Click Create flag

Turn the pagination flag on with the switch and then press Save new version.
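At this point the profile holds both flags. For reference, when the Lambda function later fetches this configuration, the parsed payload will look roughly like the sketch below; the exact JSON is an approximation, but the property names match what the Lambda code reads (showcarousel.enabled and pagination.number).

// Approximate shape of the feature-flag data returned by AppConfig for this profile.
// Values reflect what we just saved: pagination ON with a value of 8, showcarousel OFF.
const parsedConfigData = {
  showcarousel: { enabled: false },
  pagination: { enabled: true, number: 8 },
};

// The same checks the Lambda function performs later:
console.log(parsedConfigData.showcarousel.enabled); // false -> no carousel yet
console.log(parsedConfigData.pagination.number);    // 8 -> render up to 8 cribs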

☁️ Deploy the Feature Flag

You should see something like this, which tells us we have two Feature Flags (pagination which is switched on, and showcarousel which is turned off) added to the Configuration Profile CardFeatureFlag

Image description

We want to deploy this, so here is what we will do:

  • Click Start deployment
  • Click Create Environment
  • Name the environment: Beta

Next we will want to create a Deployment Strategy. It's important to consider factors like the Bake time: the amount of time AppConfig monitors CloudWatch alarms before considering the deployment complete; if an alarm fires during that window, AppConfig rolls the change back.

There are 3 pre-defined options available:

  • AppConfig.AllAtOnce — instant deployment to all targets
  • AppConfig.Linear50PercentEvery30Seconds — deploys to half of the targets every 30 seconds, for a total deploy time of 1 minute (useful for testing or demos)
  • AppConfig.Canary10Percent20Minutes (AWS Recommended) — deploys slowly over time (useful for production workloads)

We will, however, create our own strategy. Click Create deployment strategy and enter:

  • Name: FFWorkshop_Deployment
  • Type: Linear
  • Step percentage: 10
  • Deployment time: 1 minute
  • Bake time: 1 minute

Select Create deployment strategy and then Start deployment, which will take a couple of minutes to fully release. It will ultimately look like this when it's been deployed fully:

Image description
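For reference, the same custom strategy could also be defined programmatically. Here is a hedged sketch using the AWS SDK for JavaScript (v2) that maps the console fields above onto the CreateDeploymentStrategy parameters; it is an illustration rather than part of the workshop steps.

// Hedged sketch: define the FFWorkshop_Deployment strategy via the SDK instead of
// the console. Linear growth of 10% per step, 1 minute total, 1 minute bake time.
const AWS = require('aws-sdk');
const appconfig = new AWS.AppConfig({ region: 'us-east-1' });

appconfig.createDeploymentStrategy({
  Name: 'FFWorkshop_Deployment',
  GrowthType: 'LINEAR',            // Type: Linear
  GrowthFactor: 10,                // Step percentage: 10
  DeploymentDurationInMinutes: 1,  // Deployment time: 1 minute
  FinalBakeTimeInMinutes: 1,       // Bake time: 1 minute
  ReplicateTo: 'NONE',             // keep the strategy in AppConfig only
}).promise()
  .then((strategy) => console.log('Created strategy:', strategy.Id))
  .catch(console.error);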

☁️ Configure AWS Lambda

AWS Lambda functions run your code in the cloud without you having to run or provision a server. You simply invoke the function (from a script, an HTTP endpoint, or another AWS service) and the compute runs on demand, with no back-end of your own to manage.
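For orientation before the full workshop function further down, this is the minimal shape of a Node.js Lambda handler (just a sketch): an exported async handler that receives an event and returns a response.

// Minimal Node.js Lambda handler shape (sketch only): receive an event, return a response.
exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: 'Hello from Lambda',
  };
};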

For some background context on AWS Lambda, check out Parts 1 and 2 of my "Serverless Workflows with Step Functions and Lambda" here:


To set up Lambda for our project, we will:

  • Navigate to AWS Lambda
  • Click Create Function
  • Name: FF_Lambda
  • Runtime: Node.js 14.x
  • Click Create function

In the code editor, add this code (generously provided to us by AWS), which reads the Feature Flags from AppConfig and renders the listings accordingly:

const http = require('http');
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {

    const res = await new Promise((resolve, reject) => {
        http.get(
            "http://localhost:2772/applications/AWSomeCribRentals/environments/Beta/configurations/CardFeatureFlag",
            resolve
        );
    });

    let configData = await new Promise((resolve, reject) => {
        let data = '';
        res.on('data', chunk => data += chunk);
        res.on('error', err => reject(err));
        res.on('end', () => resolve(data));
    });

    const parsedConfigData = JSON.parse(configData);

    const DynamoParams = {
        TableName: 'AWSCribsRentalMansions'
    };

//Fetching the listings from DynamoDB

    async function listItems() {
        try {
            const data = await docClient.scan(DynamoParams).promise();
            return data;
        } catch (err) {
            return err;
        }
    }

//Checking for the Carousel Feature Flag
    if (parsedConfigData.showcarousel.enabled == true) {
        let returnhtml = ``;

        try {

            const data = await listItems();

            for (let i = 0; i < parsedConfigData.pagination.number; i++) {
                returnhtml += `<div class="col-md-4 mt-4">
                <div class="card profile-card-5">
                    <div class="card-img-block">

                        <div class="slideshow-container">`;

                for (let j = 0; j < data.Items[i].Image.length; j++) {
                    returnhtml += `<div class="mySlides` + (i + 1) + `">
                    <img class="card-img-top" style="height: 300px;" src="` + data.Items[i].Image[j].name + `" style="width:100%">
                  </div>`;
                }

                returnhtml += `</div>`;

                if(data.Items[i].Image.length > 1) {
                    returnhtml += `<a class="prev" onclick="plusSlides(-1, ` + i + `)">&#10094;</a>
                            <a class="next" onclick="plusSlides(1, ` + i + `)">&#10095;</a>`;
                }

                    returnhtml += `</div>
                    <div class="card-body pt-0">
                    <h5 class="card-title">` + data.Items[i].Name + ` <span style="font-size: 0.7em;color:rgb(255, 64, 64)">(` + data.Items[i].Location + `)</span></h5>
                    <p class="card-text">` + data.Items[i].Description + `</p>
                    <a class="btn btn-primary"style="display: inline" href="#">Check Availability</a>
                    <span style="float: right;cursor: pointer;" onclick="favoriteStar(this)"><span class="fa fa-star"></span></span>
                  </div>
                </div>
            </div>`;
            }

            return {
                statusCode: 200,
                body: returnhtml,
            };

        } catch (err) {
            return {
                error: err
            }
        }

    } else {

        let returnhtml = ``;

        try {

            const data = await listItems();
            console.log("dynamo db data: ", data)

//Checking for Pagination Numbers
            for (let i = 0; i < parsedConfigData.pagination.number; i++) {
                returnhtml += `<div class="col-md-4 mt-4">
                <div class="card profile-card-5">
                    <div class="card-img-block">
                    <img class="card-img-top" style="height: 300px;" src="` + data.Items[i].Image[0].name + `" style="width:100%"
                        alt="Card image cap" style="height: 300px;">
                    </div>
                    <div class="card-body pt-0">
                    <h5 class="card-title">` + data.Items[i].Name + ` <span style="font-size: 0.7em;color:rgb(255, 64, 64)">(` + data.Items[i].Location + `)</span></h5>
                    <p class="card-text">` + data.Items[i].Description + `</p>
                    <a class="btn btn-primary"style="display: inline" href="#">Check Availability</a>
                    <span style="float: right;cursor: pointer;" onclick="favoriteStar(this)"><span class="fa fa-star"></span></span>
                  </div>
                </div>
            </div>`;
            }

            return {
                statusCode: 200,
                body: returnhtml,
            };

        } catch (err) {
            return {
                error: err
            };
        }
    }
};


Now we will want to add a layer to our Lambda Function (scroll to the bottom of the page and find Add a layer)

  • Select AWS-AppConfig-Extension
  • Click Add

If all goes well, it should look like this:
Image description

⚠️ Warning

If you do not see AWS-AppConfig-Extension in the drop-down of AWS-provided layers...make sure you are actually on the Node.js 14.x runtime (not 16.x like below). At the time of writing this blog post, the AppConfig extension had not yet been made available for the Node.js 16.x runtime.
Image description

☁️ Set up the AWS Lambda Function URL

One of the amazing benefits of AWS Lambda is the ability to add HTTPS endpoints to any serverless function we launch in Lambda, as well as configure CORS headers if we wish.

Under the function overview:

  • Click Configuration
  • Click Function URL and then Create Function URL
  • Auth Type: NONE
  • Configure cross-origin resource sharing (CORS) [✅]
  • We will include the wildcard * for this demo for Allow origin, Expose headers, Allow headers, and Allow methods...but you would want to configure more restrictive origin and header settings in production.
  • Click Save and save the Function URL for later.

Next, we need to update the Execution Role to ensure our Lambda function has the right IAM permissions to access AppConfig, following the AWS best practice and principle of "least privilege": don't grant more permissions than are required for the resource to do its job.

  • Go to IAM (Identity and Access Management)
  • Select Policies
  • Go to Create Policy, then Choose a service and select AppConfig.
  • For Actions, select All AppConfig actions
  • For Resources, select All Resources
  • Click Next to proceed through the steps, and at the end, under Name, write AppConfigAllAccess to name this IAM policy.

Image description

Image description
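A quick aside on least privilege: the steps above allow all AppConfig actions on all resources, which keeps the workshop simple but is broader than this function strictly needs. Below is a hedged sketch of a narrower policy, created via the SDK purely for illustration; the appconfig actions listed are the ones the AppConfig Lambda extension relies on, dynamodb:Scan covers the table read, and the wildcard resources are placeholders you would scope to real ARNs.

// Hedged sketch: a narrower IAM policy for the Lambda function, for illustration only.
const AWS = require('aws-sdk');
const iam = new AWS.IAM();

const policyDocument = {
  Version: '2012-10-17',
  Statement: [
    {
      Sid: 'ReadAppConfigFlags',
      Effect: 'Allow',
      Action: ['appconfig:StartConfigurationSession', 'appconfig:GetLatestConfiguration'],
      Resource: '*', // placeholder; scope to your AppConfig application/environment ARNs
    },
    {
      Sid: 'ScanCribsTable',
      Effect: 'Allow',
      Action: ['dynamodb:Scan'],
      Resource: '*', // placeholder; scope to the AWSCribsRentalMansions table ARN
    },
  ],
};

iam.createPolicy({
  PolicyName: 'AppConfigLeastPrivilegeSketch',
  PolicyDocument: JSON.stringify(policyDocument),
}).promise()
  .then((res) => console.log('Created policy:', res.Policy.Arn))
  .catch(console.error);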

Next, we will attach these policies to our Lambda Function:

  • AmplifyFullAccess
  • AppConfigFullAccess
  • AppConfigAllAccess - this is the one we just made
  • AmazonDynamoDBFullAccess

The screen will look like this when you update the function and attach the policies successfully:
Image description

☁️ Create and Populate a DynamoDB Table using an EC2 Instance (VM)

This was a very eye-opening part for me because the workshop had a lot of elements where a single user would be completing the upload/scripts from their computer. But what if you wanted to do it from a Virtual Machine because you have a different AWS Profile on your computer you don't want to overwrite/mess with? Or what if you wanted multiple people to have access to the same server files/items?

You can configure an EC2 server and run all the commands as you would on your normal computer, but instead on a rented server/computer, in the cloud!

This is where the real fun starts...

First, we'll set up an EC2 instance so that we have a live server running that we can SSH into.

  • Navigate to the EC2 service in the AWS list of tools
  • Launch an instance
  • Choose: t2.medium (this will give us a little bit more power than the t2.micro BUT be cognizant that the t2.medium server will incur costs. If you do not want to incur costs, choose a t2.micro server instead).
  • Turn on all public access to the server (this is just for demo/testing purposes, so if you are going into production, you would want to lock down the server).
  • Once you confirm all the settings, save your keypair to your Computer/drive (this is important that you do not lose this), and click Launch

The server will look like this once it is running:
Image description

☁️ Create the DynamoDB Table from within an EC2 Server Using Linux

EC2 requires you to chmod 400 the keypair so that only your user can read the file (SSH will refuse to use a key with looser permissions); anyone else on the machine would need sudo. Make sure the path you use in your terminal (Mac) or PowerShell (Windows) actually points to the keypair file. TIP: You can easily drag and drop a file from the Finder or Windows Explorer into your command line to automatically generate the file path.

chmod 400 /User/FF_Workshop_Keypair.cer

Now we will SSH into the server with the keypair and Public DNS.

  • We will use ec2-user in the first part because this is an EC2 Instance
  • We will then pull the Public IPv4 DNS of the EC2 instance which you will see in the above screenshot on the right for the Instance Summary: ec2-3-82-148-71.compute-1.amazonaws.com
ssh -i /User/FF_Workshop_Keypair.cer ec2-user@ec2-3-82-148-71.compute-1.amazonaws.com

It will look like this if you are successfully signed into the server:
Image description

Now we need to configure the AWS CLI inside the server.

The attributes we will need to configure are: Default Region, Access Key ID, Secret Access Key, and AWS Session Token. These were generated for us via the AWS Event Bridge for the workshop; without that being set up for us, we would need to configure the AWS profile ourselves (there is a link to more about this in the AWS docs).

We will run aws configure and press Enter, then copy and paste each of these values into the command line, pressing Enter when prompted for each one. Then, under output format, put json.

AWS_DEFAULT_REGION=us-east-1
AWS_ACCESS_KEY_ID=MSIA...MQ5U
AWS_SECRET_ACCESS_KEY=YI2d...JuZU
AWS_SESSION_TOKEN=IQoJ...wef4


It will look like this:
Image description

☁️ Next, you have to make sure the right AWS account is active for the session in the command line

Right now, you have configured the AWS CLI above, but the session isn't actually authenticated yet; in particular, the temporary session token isn't being picked up. We know this because if we try aws sts get-caller-identity, we get the following error:
Image description

An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.

To fix this, we need to add the credentials as environment variables and export them into the shell session for AWS on the Linux instance (same values as before):

export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=MSIA...MQ5U
export AWS_SECRET_ACCESS_KEY=YI2d...JuZU
export AWS_SESSION_TOKEN=IQoJ...wef4

We can verify that the environment variables are saved to our Linux session with printenv AWS_DEFAULT_REGION or printenv AWS_ACCESS_KEY_ID.

If the output prints what we entered in previously, then it worked!

To validate that our AWS account is authenticated/synced, type aws sts get-caller-identity. If this worked correctly, you should see a print out of something like the below:
Image description

{
    "Account": "354789352487", 
    "UserId": "AROAVFGYX4QT3O2UKAPLI:MasterKey", 
    "Arn": "arn:aws:sts::354789352487:assumed-role/TeamRole/MasterKey"
}

In our AWS account, in the top right, you'll see the right Account # (the same as in "Account" in the terminal).
Image description

Alright we did it!

To further validate this, if we use aws ec2 describe-instances then we should get a print out of the instances we are running or have terminated:
Image description

☁️ Now we populate the DynamoDB Table to hold image links and home information

While our SSH session into the server is still open and running, let's open up another terminal window. This is because we cannot upload a local file from within the server (remember, that window is inside the virtual machine...a completely different computer that doesn't recognize the local directories on our own computer).

We need to upload the file to the server and then reference it from there to load the data into DynamoDB. We will use the scp (Secure Copy Protocol) command in Linux, which copies files securely between machines.

For us, the command we will run is below: we reference the FF_Workshop_Keypair.cer key pair file on our computer, then the AWSCribsRentalMansions.json file on our computer, then our EC2 instance's Public IPv4 DNS ec2-3-82-148-71.compute-1.amazonaws.com, and declare that we want to drop the file into the server's home directory (~).

The Linux command is:

  • 1st part is the path to the secret key file
  • 2nd part is the path to the file we want to upload
  • 3rd part is ec2-user "at" the Public IPv4 DNS URL ec2-3-82-148-71.compute-1.amazonaws.com
  • 4th part is a ~ since we want to drop the file into the ec2-user home directory on the server.
scp -i /Users/FF_Workshop_Keypair.cer /Users/AWSCribsRentalMansions.json  ec2-user@ec2-3-82-148-71.compute-1.amazonaws.com:~

If all goes well, you should see a 100% printed below the script you entered.

Now, let's test whether the file was uploaded to our server. Switch back to the terminal window where we SSH'ed into the server. If we run ls from the home directory, we should now see the JSON file!

Now we will populate the DynamoDB table with our file uploaded to the EC2 instance:

aws dynamodb batch-write-item --request-items file://AWSCribsRentalMansions.json

It will look like this (don't worry that it says "Unprocessed Items" - this is normal ☺️):
Image description

Let's check whether DynamoDB really did receive all of the items like our command line told us. If it worked successfully, it will look like this:
Image description

Image description

Nice!! The items are all there!
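For reference, after the batch write each item in the AWSCribsRentalMansions table comes back from the Lambda's DocumentClient scan with a shape like the sketch below. The attribute names (Name, Location, Description, Image[].name) are the ones the Lambda code reads; the values here are made up.

// Hypothetical example of a single item as returned by docClient.scan() in the Lambda.
const exampleItem = {
  Name: 'AWSome Beach Mansion',
  Location: 'Malibu, CA',
  Description: 'A breezy 8-bedroom crib right on the sand.',
  Image: [
    { name: 'https://example.com/beach-mansion-1.jpg' },
    { name: 'https://example.com/beach-mansion-2.jpg' },
  ],
};

// The carousel branch loops over exampleItem.Image and renders one slide per photo;
// the non-carousel branch only uses exampleItem.Image[0].name.
console.log(exampleItem.Image.length, 'photos for', exampleItem.Name);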

☁️ Set up website on AWS with AWS Amplify and AWS CodeCommit

We will need 2 specific tools to hold our code and then to push our code to the internet with a link users can access.

  • AWS Amplify: for delivering the front-end
  • AWS CodeCommit: hold the code to deploy to Amplify

In our index.html file in the repo (check out my starter repo here from the code provided by AWS), we will need to add the Lambda function call to AppConfig into the <script> tag of the index.html file.

We will turn this:

<!-- api call to populate cribs -->
<script>
     fetch('lamba-url-here')
     .then(response => response.text())
     .then((data) => {
            document.getElementById("populateCribsHere").innerHTML=data;
            setUI();
  });
</script>

Into this:

<!-- api call to populate cribs -->
<script>
        fetch('https://2v35t2ppjx4vh7ygjzip7nauji0yzmxl.lambda-url.us-east-1.on.aws/')
     .then(response => response.text())
     .then((data) => {
            document.getElementById("populateCribsHere").innerHTML=data;
            setUI();
  });
</script>

Remember that Lambda Function URL we got earlier? We will put that in place of lamba-url-here. We can grab it here:
Image description

☁️ Use AWS CodeCommit to Host Private Git Repo

AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories.

  • Go to CodeCommit
  • Click Create a Repository
  • Name: AWSomeCribRepo
  • Description: This holds the code of my AWSomeCribRepo project
  • Click Create

Image description

After the repo is created, we need to add our code to this CodeCommit repo. We will see a couple different methods for how to Connect. We will choose the HTTPS (GRC) method.

HTTPS (GRC) is the protocol to use with git-remote-codecommit (GRC). This utility provides a simple method for pushing and pulling code from CodeCommit repositories by extending Git. It is the recommended method for supporting connections made with federated access, identity providers, and temporary credentials.

What we really want to do is pip install git-remote-codecommit to install the git-remote-codecommit utility, BUT we don't have pip installed yet. We need Python to run pip, so let's check if it's installed:

  • Run python --version — for me, the output is Python 2.7.18, and it should already be installed since this is an EC2 instance.
  • To install pip, run sudo yum install python-pip

It should look like this if it's successfully installed:
Image description

  • Now that pip is installed, let’s run the command we wanted to initially: pip install git-remote-codecommit.

It should look like this if all goes well:
Image description

Now we want to clone the repository to our EC2 instance with this command: git clone codecommit::us-east-1://AWSomeCribRepo. However, we don’t have git yet in our server, so we’ll install git this way:

  • Run this command: sudo yum install git

If all goes well, it will look like this:
Image description
Image description

  • We will verify the installation went through with: git --version — if it worked, you will see a print out like git version 2.32.0
    Image description

  • Now we will finally run: git clone codecommit::us-east-1://AWSomeCribRepo — this will clone the empty repo into our server. When we do ls we see the blue folder in there:
    Image description

✨ Amazing job! We now have a link between our AWS CodeCommit repo (in our AWS environment) and our EC2 server. Especially cool because none of this ran on our own computer; it all happened on the virtual machine.

☁️ How do we upload a folder’s contents locally (our code repo) into a code repo in our EC2 instance using Linux?

Linux can be a bit confusing here, but what we will essentially be doing is using a 2nd terminal window (on our local machine) to copy the files up to the EC2 instance, while the SSH'ed terminal window (inside the instance) is what we later use to push to AWS. Don't worry... we will get there 😊

We will use scp to upload the file contents WITHIN our local folder and copy that content INTO the folder on the server that is connected to AWS CodeCommit.

  • We will use scp (secure copy protocol)
  • We use the -r flag (to recursively go through all the files)
  • We will input the path of our Keypair
  • We will then input the path of our folder and because it's the contents WITHIN it, we add the wildcard /* after the folder name
  • We will then input the EC2 instance with ec2-user and @ and then our EC2's Public IPv4 DNS value.
scp -r -i  /Users/FF_Workshop_Keypair.cer /Users/AWSAppConfigWorkshop/*  ec2-user@ec2-3-82-148-71.compute-1.amazonaws.com:~/AWSomeCribRepo

You can see how the commands work between the 1st and 2nd terminal windows. On the right (connected to our local computer), we serve the files to the server. Then on the left (connected to the EC2 instance), we check to see if the files were in fact uploaded.

⚠️ NOTE: in the screenshot, it says ...amazonaws.com:~/test but this was taken while I was testing Linux 😊. You will want that part to read amazonaws.com:~/AWSomeCribRepo like what is referenced in the above code snippet:

Image description

☁️ Now we will push the code to AWS CodeCommit

Back in our SSH'ed terminal window, we will want to make sure we are in our AWSomeCribRepo and run the commands to send the repo into AWS CodeCommit:

  • Run: cd AWSomeCribRepo
  • Add all the files in the directory to git with git add -A
  • Create a commit message: git commit -m "commit from EC2"
  • Push the code to AWS CodeCommit: git push

When you do the git push, you will see output like this:

Enumerating objects: 1847, done.
Counting objects: 100% (1847/1847), done.
Delta compression using up to 2 threads
Compressing objects: 100% (1845/1845), done.
Writing objects: 100% (1847/1847), 3.99 MiB | 4.68 MiB/s, done.
Total 1847 (delta 213), reused 0 (delta 0), pack-reused 0
To codecommit::us-east-1://AWSomeCribRepo
 * [new branch]      master -> master

Image description

⚠️ IMPORTANT: If you run into this type of error after git push:

fatal: unable to access 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/AWSomeCribRepo/': The requested URL returned error: 403

Or looks like this in the command line:
Image description

It's probably because your configuration environment variables aren't set correctly in the EC2 server.

These are provided to us thanks to the AWS EventBridge setup for the workshop, but you can retrieve them from your own AWS account. You need to make sure you are "logged in" (authenticated) in the EC2 instance so the push command knows which AWS account's CodeCommit to push to. Imagine pushing your code to a stranger's CodeCommit... (that would be so bad...). This is to make sure we are truly authenticated in the right place so our code goes to the right destination.

AWS_DEFAULT_REGION=us-east-1
AWS_ACCESS_KEY_ID=MSIA...MQ5U
AWS_SECRET_ACCESS_KEY=YI2d...JuZU
AWS_SESSION_TOKEN=IQoJ...wef4

Confirm that aws sts get-caller-identity returns a value like this:

{
    "Account": "354789352487", 
    "UserId": "AROAVFGYX4QT3O2UKAPLI:MasterKey", 
    "Arn": "arn:aws:sts::354789352487:assumed-role/TeamRole/MasterKey"
}

Now when you do the git push, you will see output like this:

Enumerating objects: 1847, done.
Counting objects: 100% (1847/1847), done.
Delta compression using up to 2 threads
Compressing objects: 100% (1845/1845), done.
Writing objects: 100% (1847/1847), 3.99 MiB | 4.68 MiB/s, done.
Total 1847 (delta 213), reused 0 (delta 0), pack-reused 0
To codecommit::us-east-1://AWSomeCribRepo
 * [new branch]      master -> master

Image description

Back in the AWS Console, CodeCommit will now go from this:
Image description

To this:
Image description

✨ You have now created a repository for your website with files uploaded from your computer, to your EC2 instance, and now to AWS. Next, you will connect this repository to AWS Amplify to host your website with a real URL.

🚀 Host your Website on AWS Amplify

One of the best benefits of AWS Amplify when running deployments is the ability to see the various stages of the build process. This becomes very helpful when there are errors or you need to verify a build: the logs include a great amount of detail, and of course you can roll in a wide range of tools and AWS features thanks to Amplify and its framework, which abstracts complexity away from deployments.

To accomplish this we will:

  • Go to the AWS Amplify service in the AWS Console
  • Go to All apps
  • Select New App and in the drop-down, select Host web app
  • Select AWS CodeCommit and then Continue to advance
  • Select AWSomeCribRepo in the CodeCommit dropdown and then Next
  • In the Configure build settings, make sure to check ✅ Allow AWS Amplify to automatically deploy all files hosted in your project root directory and then press Next
  • Now click Save and deploy

AWS Amplify will then go through its 4 stages of deployment automatically for us:

  • Provision
  • Build
  • Deploy
  • Verify

It will go from this:
Image description

✨ To this:
Image description

When our site is building, it will go from this:
Image description

✨ To this:
Image description

☁️ Using Feature Flags

If you notice in the deployed version, it doesn't show that carousel feature we created AppConfig for in the first place... where did it go? Wasn't it supposed to appear there?

We haven't turned it on yet 😊

So what we are going to now do is turn on and deploy the carousel Feature Flag to launch this feature. We will then put on an operational toggle to ensure that during spikes in usage, it will limit the number of homes displayed on the homepage. This will help to optimize our performance with fewer queries on our DynamoDB table at peak times.

  • Go to AppConfig
  • Select our app AWSomeCribRentals
  • Click on the Feature Flag configuration profile we created: CardFeatureFlag (notice that the showcarousel flag is OFF, as shown below)

Image description

  • Toggle the showcarousel flag ON and then Save new version
  • Click Start Deployment and configure the elements
  • Environment: Beta - we set this up earlier
  • Hosted configuration version: 2
  • Deployment strategy: FFWorkshop_Deployment
  • Description: Switching showcarousel feature on gradually for users
  • Click Start deployment

Image description

The deployment (this is technically Deployment #2 because we already deployed this Feature Flag earlier) will look like this as it rolls out, giving you data about the deployment until its state reaches 100%:

Image description

Image description

✨ Voilà! Check out the updated changes to our app:

Notice the carousel? This is now a feature that users will begin to see in our application:

Image description

Here is a video of the deployed AppConfig Feature Flag in action live on the URL:

☁️ Operational Feature Flags

The above is an example of a feature add that we will definitely keep on our site. However, let's say we wanted to throttle the number of results coming back to users when they search for something or want a certain type of data.

Remember our pagination Feature Flag in CardFeatureFlag that we configured earlier? Well, if we want to adjust how many results are returned in the pagination results (i.e. to improve performance when there is a lot of traffic), we can do this by:

  • Navigate to AppConfig
  • Click on the CardFeatureFlag that we set up
  • Select the pagination flag
  • Click Edit
  • Change the number in the value to: 6
  • Click Confirm
  • Ensure this flag's toggle is turned ON
  • Click Save new version
  • Click Start deployment
  • Keep the same settings as you did in Deployment #2 above, and especially make sure the Deployment Strategy says: FFWorkshop_Deployment
  • Click Start deployment

✨ When you navigate to your URL, you should now see 6 crib results! This means we can adjust this feature manually from AWS AppConfig without needing to delete or comment out the code and push a new version.
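To tie this back to the code: the Lambda function simply loops up to pagination.number when it builds the listing HTML, so changing the flag's value changes how many cribs get rendered. Here is a tiny sketch of that idea (the Math.min guard is an extra safety check I'm adding for illustration, not something from the workshop code):

// Sketch: an operational flag value bounding how much work we do per request.
const pagination = { enabled: true, number: 6 }; // the value we just deployed
const items = new Array(20).fill({});            // pretend DynamoDB scan results

const toRender = Math.min(pagination.number, items.length); // avoid overrunning the data
for (let i = 0; i < toRender; i++) {
  // render card i ...
}
console.log(`Rendering ${toRender} of ${items.length} cribs`);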

🧹 Clean up Release Flags

If you need to clean up a release flag — for example, let's say the feature add was a big success and there's no reason for you to revert or roll-back the changes — it's important to know what to do.

If you do not need to have a Feature Flag for one of your released features, you can simply delete that short-term flag since it's no longer needed. We already marked showcarousel as a short-term flag.

To delete a flag, we will:

  • Go to AppConfig
  • Select our app: AWSomeCribRentals
  • Click on the Feature Flag: CardFeatureFlag
  • On the showcarousel Feature Flag, click Delete and re-confirm Delete on the pop-up.
  • Click Save new version to reflect this change

In a real-life situation, we would also want to clean up the code to reflect this feature add, but since this was a workshop setting, we did not do that here. For example, in the Lambda function, where it checks parsedConfigData.showcarousel.enabled (around line 39), you would remove that conditional so the Crib carousel feature no longer depends on AppConfig.

🧹 Clean Up Our AWS Resources:

Congrats on making it through! Since you are probably running these services on your own AWS account, it is important to make sure you delete instances of the services so that you do not incur costs:

  • EC2: terminate/delete the running instance
  • AWS Lambda: delete the function
  • DynamoDB: delete the AWSCribsRentalMansions table
  • Amplify: delete the app we created
  • CodeCommit: delete the repository we created
  • AppConfig: delete each Feature Flag, the configuration profile, and in environments, delete Beta, as well as the AWSomeCribRentals application

WOW! That was a lot! Hopefully you enjoyed going through this AWS workshop with me and the steps I took. Let me know what you think below, and if you have any questions on any of the AWS tools I used, feel free to ask 😊

🧑‍💻 Subscribe to the Tech Stack Playbook for more:

Let me know if you found this post helpful! And if you haven't yet, make sure to check out these free resources below:

Let's digitize the world together! 🚀

-- Brian
