<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: John Doyle</title>
    <description>The latest articles on DEV Community by John Doyle (@art_wolf).</description>
    <link>https://dev.to/art_wolf</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F445395%2F0c05fff4-acf4-49bf-8c97-8d07261604d5.png</url>
      <title>DEV Community: John Doyle</title>
      <link>https://dev.to/art_wolf</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/art_wolf"/>
    <language>en</language>
    <item>
      <title>GitMosaic Hackathon</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Tue, 23 May 2023 23:05:55 +0000</pubDate>
      <link>https://dev.to/art_wolf/gitmosaic-hackathon-2j9d</link>
      <guid>https://dev.to/art_wolf/gitmosaic-hackathon-2j9d</guid>
      <description>&lt;h2&gt;
  
  
  What I built
&lt;/h2&gt;

&lt;p&gt;A mosaic showing off your GitHub avatar, built up based on your commit frequency.&lt;/p&gt;

&lt;h3&gt;
  
  
  Category Submission:
&lt;/h3&gt;

&lt;p&gt;Wacky Wildcards&lt;/p&gt;

&lt;h3&gt;
  
  
  App Link
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://gitmosaic.com"&gt;https://gitmosaic.com&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Screenshots
&lt;/h3&gt;

&lt;h3&gt;
  
  
  Description
&lt;/h3&gt;

&lt;p&gt;Compete with others to claim your section of the mosaic with your avatar. Let others commit to your repo to allow your team to contribute together!&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Source Code
&lt;/h3&gt;

&lt;p&gt;Action: &lt;a href="https://github.com/johncolmdoyle/gitmosaic-action"&gt;https://github.com/johncolmdoyle/gitmosaic-action&lt;/a&gt;&lt;br&gt;
Application: &lt;a href="https://github.com/johncolmdoyle/gitmosaic"&gt;https://github.com/johncolmdoyle/gitmosaic&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Permissive License
&lt;/h3&gt;

&lt;p&gt;Apache License&lt;/p&gt;

&lt;h2&gt;
  
  
  Background (What made you decide to build this particular app? What inspired you?)
&lt;/h2&gt;

&lt;p&gt;This was inspired by the now-archived subreddit &lt;a href="https://www.reddit.com/r/place/"&gt;/r/place&lt;/a&gt;, where the community would come together to generate a giant piece of art.&lt;/p&gt;

&lt;h3&gt;
  
  
  How I built it
&lt;/h3&gt;

&lt;p&gt;I came into the hackathon very late, so I kept this very simple. I created a GitHub Action that sent the context of the git commit to an API. The owner of the repo would be identified, and their avatar would be downloaded and broken into 100 pieces.&lt;/p&gt;

&lt;p&gt;Each avatar was assigned a starting x, y coordinate.&lt;/p&gt;

&lt;p&gt;Each of the 100 pieces had an offset from that starting coordinate.&lt;/p&gt;

&lt;p&gt;Each commit ID identified a random index in this array, and the corresponding x, y coordinate on the mosaic was set to point at that piece.&lt;/p&gt;

&lt;p&gt;The latest version of each coordinate was returned to the user to display the current mosaic.&lt;/p&gt;
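&lt;p&gt;A minimal sketch of that tile-picking idea, assuming a 10x10 grid and using the commit SHA itself as the randomness source (the names, grid size, and hashing choice here are illustrative assumptions, not taken from the gitmosaic-action code):&lt;/p&gt;

```javascript
// Hypothetical sketch: map a commit SHA to one of the 100 avatar tiles.
// TILE_GRID, the function name, and the hashing choice are assumptions.
const TILE_GRID = 10; // 10 x 10 = 100 pieces per avatar

function tileForCommit(commitSha, startX, startY) {
  // The first 8 hex chars of the SHA act as a stable pseudo-random number.
  const index = parseInt(commitSha.slice(0, 8), 16) % (TILE_GRID * TILE_GRID);
  return {
    index,
    x: startX + (index % TILE_GRID),            // column offset in the grid
    y: startY + Math.floor(index / TILE_GRID),  // row offset in the grid
  };
}

console.log(tileForCommit('a3f91c2e5d7b0a14', 40, 20));
```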

&lt;h3&gt;
  
  
  Additional Resources/Info
&lt;/h3&gt;

</description>
      <category>githubhack23</category>
    </item>
    <item>
      <title>EuroSquares: An AWS Amplify-Powered Game for Eurovision 2023</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Sun, 14 May 2023 06:58:20 +0000</pubDate>
      <link>https://dev.to/aws-builders/eurosquares-an-aws-amplify-powered-game-for-eurovision-2023-294h</link>
      <guid>https://dev.to/aws-builders/eurosquares-an-aws-amplify-powered-game-for-eurovision-2023-294h</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://eurovision.tv/event/liverpool-2023"&gt;Eurovision Song Contest&lt;/a&gt; is one of the most watched events on television, with millions of viewers tuning in from around the world to watch the performances of various countries' musical acts. &lt;/p&gt;

&lt;p&gt;For Eurovision 2023, I decided to create &lt;a href="https://eurosquares.com"&gt;EuroSquares&lt;/a&gt;, a game that assigns a group of friends random countries to see who will have the most points at the end of the contest. In this post, I'll explain how &lt;a href="https://aws.amazon.com/amplify/"&gt;AWS Amplify&lt;/a&gt; was used to power the EuroSquares application; the code is available on &lt;a href="https://github.com/johncolmdoyle/EuroSquares/"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5ZtULCXL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ra2d0vzin744545am7tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5ZtULCXL--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ra2d0vzin744545am7tf.png" alt="Game Results" width="800" height="567"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;p&gt;AWS Amplify is a platform that allows developers to quickly and easily build full-stack applications using popular web frameworks and integrate them with AWS services.&lt;/p&gt;

&lt;p&gt;EuroSquares was built using &lt;a href="https://react.dev/"&gt;React&lt;/a&gt; and &lt;a href="https://nodejs.org/en"&gt;Node.js&lt;/a&gt;. The application consists of a front-end React application, a REST API, AWS Lambdas, and DynamoDB tables for storing data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--xD9-W4ss--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2z5jo0klz3bvg5q37ca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--xD9-W4ss--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m2z5jo0klz3bvg5q37ca.png" alt="Architecture" width="441" height="599"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Front End
&lt;/h2&gt;

&lt;p&gt;The front-end of the application was built using React and was designed to be simple, especially with a time crunch on the project. The user interface was designed using the &lt;a href="https://getbootstrap.com/"&gt;Bootstrap&lt;/a&gt; library, which provides a set of pre-built UI components that can be easily customized to fit the needs of the application.&lt;/p&gt;

&lt;p&gt;React gave me control over the content of the calls between the front end, mid tier, and back end, the actions that would trigger those calls, and so on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mid Tier
&lt;/h2&gt;

&lt;p&gt;The React application communicates with the back-end of the application through a REST API. &lt;/p&gt;

&lt;p&gt;With any REST API, it is always good to build out the &lt;a href="https://www.postman.com/punchlistusa/workspace/euro-squares"&gt;Postman Workspace&lt;/a&gt; to allow easy testing. &lt;/p&gt;

&lt;p&gt;There are only 3 resources defined for the project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Squares&lt;/li&gt;
&lt;li&gt;Bands&lt;/li&gt;
&lt;li&gt;Score&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amplify gives an option of using a template that links the API endpoint with the Lambda and a connected DynamoDB table. I selected the &lt;a href="https://github.com/vendia/serverless-express"&gt;Serverless Express&lt;/a&gt; template, which provided a lot of boilerplate, though the majority of it felt overly bloated for what was effectively CRUD functionality.&lt;/p&gt;

&lt;p&gt;Amplify has a strict way of building all the components, and this is where I ran into some issues: I wanted to go back and grant a function access to multiple DynamoDB tables. I struggled to find the proper way to implement this, and eventually succumbed to granting an over-broad &lt;a href="https://github.com/johncolmdoyle/EuroSquares/blob/main/amplify/backend/function/scoreLambda/custom-policies.json"&gt;set of permissions&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[
  {
    "Action": [
      "dynamodb:*"
    ],
    "Resource": [
      "*"
    ]
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
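&lt;p&gt;For reference, a tighter version of that policy would scope both the actions and the resources down; something like the following, where the action list and table ARNs are placeholders rather than the project's real ones:&lt;/p&gt;

```json
[
  {
    "Action": [
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:UpdateItem",
      "dynamodb:Query"
    ],
    "Resource": [
      "arn:aws:dynamodb:us-east-1:111122223333:table/squares-*",
      "arn:aws:dynamodb:us-east-1:111122223333:table/bands-*"
    ]
  }
]
```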



&lt;h2&gt;
  
  
  Back End
&lt;/h2&gt;

&lt;p&gt;The DynamoDB database was used to store the state of the game, including the current state of the grid, the selected squares, and the points each player had accumulated. The benefit of the database was how highly scalable and resilient it was, with automatic scaling and failover capabilities built in.&lt;/p&gt;

&lt;p&gt;There was nothing very complex about this setup, as the project was not data intensive.&lt;/p&gt;

&lt;p&gt;Future work, for the next contest, would be to hook this into a data source rather than manually entering the data. Furthermore, rather than manually calling the scores endpoint, it would be better to have a trigger that automatically recalculates every player's score as a country's points are announced.&lt;/p&gt;
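&lt;p&gt;That trigger could be a Lambda subscribed to a DynamoDB Stream on the points table. A rough sketch of the recalculation it would run, where the data shapes and names are assumptions rather than the actual EuroSquares schema:&lt;/p&gt;

```javascript
// Hypothetical score recalculation that a DynamoDB Streams-triggered
// Lambda could run whenever a country's points are announced.
// The data shapes below are illustrative assumptions.
function recalculateScores(assignments, countryPoints) {
  // assignments:   { playerName: [countryCode, ...] }
  // countryPoints: { countryCode: announcedPoints }
  const totals = {};
  for (const [player, countries] of Object.entries(assignments)) {
    totals[player] = countries.reduce(
      (sum, code) => sum + (countryPoints[code] || 0),
      0
    );
  }
  return totals;
}

// A real handler would parse event.Records, update countryPoints,
// and write the new totals back to the score table.
console.log(recalculateScores({ anna: ['SE', 'FI'], ben: ['IE'] }, { SE: 12 }));
```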

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, EuroSquares was built very quickly on the AWS Amplify platform, allowing a serverless project to be released in a matter of hours. Having Amplify manage most of the components and the hosting took care of the basics.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amplify</category>
      <category>react</category>
    </item>
    <item>
      <title>AWS SSO &amp; GitHub OpenID Connect Setup</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Fri, 07 Apr 2023 20:19:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-sso-github-setup-35ld</link>
      <guid>https://dev.to/aws-builders/aws-sso-github-setup-35ld</guid>
      <description>&lt;p&gt;After setting up our AWS accounts, we need to ensure that only our user and our CI/CD pipeline have permission to perform updates. AWS has published a lot of &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html" rel="noopener noreferrer"&gt;resources&lt;/a&gt; around best practices.&lt;/p&gt;

&lt;h1&gt;
  
  
  Identity Center
&lt;/h1&gt;

&lt;p&gt;Since we plan to have multiple AWS accounts, we need to manage access to each of them. The &lt;a href="https://aws.amazon.com/iam/identity-center/" rel="noopener noreferrer"&gt;AWS Identity Center&lt;/a&gt; enables you to create and manage AWS users, groups, and permissions to grant or deny access to AWS resources across AWS accounts in your organizations.&lt;/p&gt;

&lt;p&gt;We do not want to access any of our accounts with the root user. Instead, we will create a user in our primary account's AWS Identity Center that will have access to each of the AWS accounts we need.&lt;/p&gt;

&lt;p&gt;Similar to how IAM is generally configured, you have a User, who can be assigned to a Group, and that Group can be granted permissions.&lt;/p&gt;

&lt;p&gt;As this spans AWS accounts, the mapping of permissions is configured account by account. So your development group might have full access to the development account, but perhaps only read access to CloudWatch Logs in the production account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F581wad18dzstkjkgxv6e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F581wad18dzstkjkgxv6e.png" alt="AWS Identity Center Permission Mapping"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Single Sign On
&lt;/h2&gt;

&lt;p&gt;Another great feature of AWS Identity Center is that it gives us a personalized URL to sign in with our IAM user and access all the accounts that the user has access to! The portal allows us to either log in to the account or download temporary access credentials for use with the AWS CLI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeyjvmjr5paqbj7zelit.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpeyjvmjr5paqbj7zelit.png" alt="Single Sign On URL"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we log in with our user and can get access to whichever account we need:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaz7ouoegu8ch19ijifd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkaz7ouoegu8ch19ijifd.png" alt="SSO Access"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  CI/CD
&lt;/h1&gt;

&lt;p&gt;With our user having access, we start to shift our focus over to our pipeline and CODE! The aim here is to use &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; to deploy our changes.&lt;/p&gt;

&lt;p&gt;It is bad practice to create an IAM user and use their credentials for our pipeline's permission set. We end up with access keys, stored in an external service, that never get rotated. If an access key is compromised, it can lead to unauthorized access to your resources, resulting in data breaches, data loss, or other security incidents.&lt;/p&gt;

&lt;p&gt;We prevent this by allowing GitHub to assume an IAM role, which grants the service temporary credentials to our AWS account. &lt;/p&gt;

&lt;h2&gt;
  
  
  Identity Provider
&lt;/h2&gt;

&lt;p&gt;The first step in this configuration is to authorize GitHub as a trusted service. We will perform all of these steps in each AWS account we want GitHub to have access to, so log in to the Dev account now.&lt;/p&gt;

&lt;p&gt;Once logged in, we can &lt;a href="https://us-east-1.console.aws.amazon.com/iamv2/home?region=us-east-1#/identity_providers/create" rel="noopener noreferrer"&gt;create a new Identity Provider&lt;/a&gt;. GitHub has provided &lt;a href="https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-amazon-web-services" rel="noopener noreferrer"&gt;documentation&lt;/a&gt; on setting this up.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focpg7afp6jxapm3pad5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Focpg7afp6jxapm3pad5g.png" alt="Adding Identity Provider"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Role
&lt;/h2&gt;

&lt;p&gt;Now that we have a trusted identity provider, select it from the list and you should have the option to Assign Role to it:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe37s8od7n0oqja8g88db.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe37s8od7n0oqja8g88db.png" alt="Assign Role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let's create a new role; the form should pre-populate with some options. You will need to select the audience from the drop-down, but there should only be one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsc6dg80v56fzjejrm6z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsc6dg80v56fzjejrm6z.png" alt="Creating Role"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Assign the role the permissions we will want the CI/CD pipeline to have. Don't worry too much about this; we will come back to it often to refine the permissions as we go. Finally, set a name like &lt;code&gt;GitHub&lt;/code&gt; and create the role.&lt;/p&gt;

&lt;h2&gt;
  
  
  Restricting the Role
&lt;/h2&gt;

&lt;p&gt;Having a role is great, but we don't want anyone using GitHub to be able to assume it. &lt;/p&gt;

&lt;p&gt;We will lock this down to our GitHub Repo and to a specific branch.&lt;/p&gt;

&lt;p&gt;Open the &lt;a href="https://us-east-1.console.aws.amazon.com/iamv2/home#/roles/details/GitHub?section=trust_relationships" rel="noopener noreferrer"&gt;Trust Relationships&lt;/a&gt; for the role.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrr8okvtu9y0pjr47f5p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frrr8okvtu9y0pjr47f5p.png" alt="Initial Trust Relationships"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will update the Condition statement to only grant access from the dev branch of the GitHub repo:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"Condition": {
   "ForAllValues:StringEquals": {
      "token.actions.githubusercontent.com:ref": "refs/heads/dev",
      "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
   },
   "StringLike": {
      "token.actions.githubusercontent.com:sub": "repo:johncolmdoyle/holycitypaddle-code:*"
   }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Production
&lt;/h2&gt;

&lt;p&gt;Now repeat these same steps for the production account and update the condition statement to reference our main branch:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

"Condition": {
   "ForAllValues:StringEquals": {
      "token.actions.githubusercontent.com:ref": "refs/heads/main",
      "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
   },
   "StringLike": {
      "token.actions.githubusercontent.com:sub": "repo:johncolmdoyle/holycitypaddle-code:*"
   }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;We are now ready to utilize &lt;a href="https://github.com/aws-actions/configure-aws-credentials" rel="noopener noreferrer"&gt;configure-aws-credentials&lt;/a&gt; within our GitHub Actions as we move onto deploying our code!&lt;/p&gt;
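&lt;p&gt;As a preview, the workflow step looks roughly like this; the account ID and role name are placeholders to be swapped for the role created above:&lt;/p&gt;

```yaml
# Hypothetical GitHub Actions snippet assuming the OIDC role above.
name: deploy-dev
on:
  push:
    branches: [dev]

permissions:
  id-token: write   # required for the OIDC token exchange
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: aws-actions/configure-aws-credentials@v2
        with:
          role-to-assume: arn:aws:iam::111122223333:role/GitHub
          aws-region: us-east-1
      # deployment steps go here
```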

</description>
      <category>aws</category>
      <category>security</category>
      <category>github</category>
      <category>iam</category>
    </item>
    <item>
      <title>High Level Architecture</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Fri, 31 Mar 2023 14:55:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/high-level-architecture-1d4l</link>
      <guid>https://dev.to/aws-builders/high-level-architecture-1d4l</guid>
      <description>&lt;p&gt;We have our accounts, and we have our requirements - the next step is to sketch out at a high level what services we will use, where they will reside, and a quick point on CI/CD.&lt;/p&gt;

&lt;h1&gt;
  
  
  Services
&lt;/h1&gt;

&lt;p&gt;There are several AWS services that we will be utilizing in this project:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/amplify/" rel="noopener noreferrer"&gt;AWS Amplify&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/location/" rel="noopener noreferrer"&gt;AWS Location Services&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;API Gateway&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;Lambda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/dynamodb/" rel="noopener noreferrer"&gt;DynamoDB&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;S3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudfront/" rel="noopener noreferrer"&gt;CloudFront&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Amplify
&lt;/h2&gt;

&lt;p&gt;This is a neat little service that allows us to build out web and mobile applications. It integrates our front end, built with Flutter or React Native, and hooks it straight up to the back end! &lt;/p&gt;

&lt;h2&gt;
  
  
  Location Services
&lt;/h2&gt;

&lt;p&gt;Launched back in June 2021, this service specializes in maps, geocoding, geofencing, tracking, and routing - something that will be pretty important when navigating the rivers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Serverless
&lt;/h2&gt;

&lt;p&gt;API Gateway, Lambda, and DynamoDB make up the primary serverless triad. Serverless, in this context, means that we do not manage the underlying infrastructure. These three services allow our application to scale in and out dynamically while minimizing operational overhead. &lt;/p&gt;

&lt;h2&gt;
  
  
  File Storage
&lt;/h2&gt;

&lt;p&gt;S3 and CloudFront will be used to manage any media that is uploaded - from logos and images for the application, to images other users upload. Files will be stored in an S3 bucket and then served by CloudFront. CloudFront is a content delivery network (CDN) that caches content in edge locations closest to the user, reducing the time it takes to fetch and deliver the content. This improves the user experience by reducing latency and improving performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq95qq3kw0ypdhaehuxps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq95qq3kw0ypdhaehuxps.png" alt="Service Setup"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  CI/CD
&lt;/h1&gt;

&lt;p&gt;We will be building our application using &lt;a href="https://github.com" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and &lt;a href="https://github.com/features/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt; to deploy the code. &lt;/p&gt;

&lt;p&gt;Before we get into deployment, we want to set up our Git repositories and align them with our AWS accounts.&lt;/p&gt;

&lt;p&gt;Due to the simplicity of this project, I will only set up two repos - one for the application that will be deployed to Development and Production, and a second that manages the Shared Services account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs7tlcfsfn3wgb6h8mly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbs7tlcfsfn3wgb6h8mly.png" alt="Git Repo To AWS Accounts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next we will set up AWS SSO to allow us to log in to our AWS accounts and also grant our GitHub repos permission to deploy to them.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
    </item>
    <item>
      <title>Accounts &amp; Organizations</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Fri, 31 Mar 2023 13:44:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/accounts-organizations-3oah</link>
      <guid>https://dev.to/aws-builders/accounts-organizations-3oah</guid>
      <description>&lt;p&gt;When setting up your AWS account, it's easy to jump right in and start deploying resources, following tutorials, and becoming familiar with your account. &lt;/p&gt;

&lt;p&gt;Pause though! &lt;/p&gt;

&lt;p&gt;It's important to remember to remove specific resources when you're finished with them to avoid surprise bills later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Billing Alerts
&lt;/h2&gt;

&lt;p&gt;To avoid this unpleasant surprise, it's a good idea to set up a billing alert for a specific amount each month. Navigate to &lt;a href="https://console.aws.amazon.com/billing/home?#/budgets/overview"&gt;AWS Billing Budgets&lt;/a&gt;, where you can easily create a budget using AWS's preconfigured templates. Choose an amount that you're comfortable with, such as $100 per month.&lt;/p&gt;
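&lt;p&gt;If you prefer to codify this, the same budget can be sketched in Terraform; the email address and amount below are placeholders:&lt;/p&gt;

```terraform
# Hedged sketch: a $100/month cost budget that emails at 80% of the limit.
resource "aws_budgets_budget" "monthly" {
  name         = "monthly-cost-budget"
  budget_type  = "COST"
  limit_amount = "100"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["me@example.com"]
  }
}
```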

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm95rxtag1noiiru2ddy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhm95rxtag1noiiru2ddy.png" alt="Create a Budget Alert"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you won't be caught off guard by unexpected charges, let's configure your organization.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Organizations
&lt;/h2&gt;

&lt;p&gt;While you have your primary account, it's recommended to create a separate AWS account for each project. This keeps everything related to the project grouped together, both from a budget and resource perspective. When you're finished with a project, you can simply delete the resources in the account and shut it down.&lt;/p&gt;

&lt;p&gt;To manage multiple AWS accounts more easily, you can use &lt;a href="https://console.aws.amazon.com/organizations/v2/home/accounts" rel="noopener noreferrer"&gt;AWS Organizations&lt;/a&gt;, which allows you to consolidate multiple accounts and centrally manage and control them. There is no additional cost to set up AWS Organizations, so it's highly recommended.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project OUs
&lt;/h3&gt;

&lt;p&gt;Within AWS Organizations you can organize your accounts into Organizational Units, i.e., OUs. These are like folders that keep all of your related accounts together. &lt;/p&gt;

&lt;p&gt;When kicking off a project, I look at having the following three main AWS accounts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Production&lt;/li&gt;
&lt;li&gt;Development&lt;/li&gt;
&lt;li&gt;Shared Services&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Production and Development accounts are straightforward: code is deployed to the Development account, tested, and then promoted to Production. The Shared Services account is more complex, as it hosts resources used by both the Production and Development accounts. &lt;/p&gt;

&lt;p&gt;For example, if you're purchasing a domain name, you would register it within the Shared Services account, which would then point the domain to the appropriate accounts so that they can use the primary domain. This keeps the Production account from pointing subdomains back down to Development.&lt;/p&gt;

&lt;p&gt;When creating an account, there are a few rules of thumb:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add tags - I usually add a project and environment tag.&lt;/li&gt;
&lt;li&gt;Account Email - I use email subaddressing, also known as plus addressing (+), to create the accounts, but I don't log in and set up a password for them. We will utilize the AWS SSO service for that.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6l6doh92v6fv7qfsrbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg6l6doh92v6fv7qfsrbr.png" alt="Adding an AWS Account"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, after setting up my accounts, I have the following organizational unit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42bumy5sfn6bf2zi1r7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F42bumy5sfn6bf2zi1r7o.png" alt="Organization Unit"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Infrastructure As Code
&lt;/h2&gt;

&lt;p&gt;While I tend to set up accounts manually, you can also do this programmatically using Terraform. You could build out scripts that give you a project template to run each time you have a new project idea.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;provider "aws" {
  region = "us-east-1"
}

# The management account already exists, so adopt the organization
# itself and hang the OU off its root container.
resource "aws_organizations_organization" "org" {
  feature_set = "ALL"
}

resource "aws_organizations_organizational_unit" "ou" {
  name      = "holy city paddle"
  parent_id = aws_organizations_organization.org.roots[0].id
}

resource "aws_organizations_account" "prod" {
  name = "hcp-prod"
  email = "my-root-account+hcp-prod@fake.com"
  parent_id = aws_organizations_organizational_unit.ou.id
}

resource "aws_organizations_account" "dev" {
  name = "hcp-dev"
  email = "my-root-account+hcp-dev@fake.com"
  parent_id = aws_organizations_organizational_unit.ou.id
}

resource "aws_organizations_account" "shared" {
  name = "hcp-shared"
  email = "my-root-account+hcp-shared@fake.com"
  parent_id = aws_organizations_organizational_unit.ou.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>terraform</category>
      <category>devops</category>
    </item>
    <item>
      <title>Project Requirements</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Fri, 31 Mar 2023 04:29:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/quick-side-projects-3789</link>
      <guid>https://dev.to/aws-builders/quick-side-projects-3789</guid>
      <description>&lt;p&gt;One of the things I love about coding is the ability to start a random side project and try out new things. Over the years, I've played with side projects and participated in hackathons, and I've discovered some best practices for quickly building a solid project.&lt;/p&gt;

&lt;p&gt;Recently, I bought a kayak and realized I needed to figure out where to go, when to go, and what the best things to do are. Although there are multiple websites that could help me, I decided to build something that met my exact needs. This side project is called "Holy City Paddlers."&lt;/p&gt;

&lt;h2&gt;
  
  
  Requirements
&lt;/h2&gt;

&lt;p&gt;Before diving into the code, it's essential to write down some basic goals and key features to minimize the cost of refactoring later. &lt;/p&gt;

&lt;p&gt;Requirements typically fall into two primary groups: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Functional&lt;/li&gt;
&lt;li&gt;Non-Functional&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;An easy way to think about the difference: functional requirements cover the actions or operations that need to take place, while non-functional requirements cover everything else, such as how the system performs and what usability aspects it needs to account for.&lt;/p&gt;

&lt;h3&gt;
  
  
  Functional Requirements
&lt;/h3&gt;

&lt;p&gt;For this app, there are a few things I want to get from it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Maps: I want geolocation-based maps showing where I currently am and letting me plan routes between rivers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Weather Forecasts: Some folks may be all-weather kayakers, but I only want to hit the water when it's nice and sunny.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;River/Tide Information: This is a big one, and it depends on my route. Am I looking to go up-river or down-river? Matching the tides with when I want to go out and where I want to go is critical.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Reviews: While this is personal, it would be great for other people to be able to recommend routes, or even points on the river to launch from.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wildlife and Landmarks: It would be cool to see historic landmarks along the route.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Non-Functional Requirements
&lt;/h3&gt;

&lt;p&gt;Many things fall under non-functional, but I'm going to kick most of those into the next post on architecture, especially when we think about performance, reliability, and so on.&lt;/p&gt;

&lt;p&gt;For me and this project, the number one non-functional requirement is usability.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Usability: I need an easy planning mode coupled with an easy-to-understand on-water mode.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With these requirements in mind, I'll dive into the architecture in my next post.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>programming</category>
      <category>architecture</category>
      <category>design</category>
    </item>
    <item>
      <title>Building a Multi-Region app with AWS CDK - Part 1</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Sun, 07 Feb 2021 20:35:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-multi-region-app-with-aws-cdk-part-1-34g8</link>
      <guid>https://dev.to/aws-builders/building-a-multi-region-app-with-aws-cdk-part-1-34g8</guid>
      <description>&lt;p&gt;There are lots of tutorials of deploying basic CRUD applications with &lt;a href="https://aws.amazon.com/cdk/"&gt;AWS CDK&lt;/a&gt; using API Gateway, and Lambda connecting to a DynamoDB table. The defacto &lt;a href="https://github.com/aws-samples/aws-cdk-examples/tree/master/typescript/api-cors-lambda-crud-dynamodb/"&gt;starter set&lt;/a&gt; for a serverless application. You build and deploy, modify the hello-world application to match your needs, you're happy! The API is running and acting perfectly correctly in your default region.. us-east-1...&lt;/p&gt;

&lt;h2&gt;
  
  
  The Issues Begin
&lt;/h2&gt;

&lt;p&gt;Then &lt;a href="https://aws.amazon.com/message/11201/"&gt;November 25th, 2020&lt;/a&gt; strikes. The whole region is down for hours, your API is inaccessible, people are angry at YOU for this inconvenience. You read more about us-east-1 and realize that it tends to have a history of outages... and plan to make the jump to another region.&lt;/p&gt;

&lt;p&gt;December rolls around and you've successfully migrated your application to us-west-2. It wasn't too difficult, your CDK app was able to tear down everything in us-east-1, and then deploy everything again in the new region. You let out the breath you'd been holding. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://downdetector.com/status/aws-amazon-web-services/news/359283-problems-at-amazon-web-services/"&gt;January 7th 2021&lt;/a&gt; started off really well, and as you started looking forward to the end of the day... the alerts start ringing. Your new region, us-west-2, has begun to act up! After two hours of sweating, you are back up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Region Solution
&lt;/h2&gt;

&lt;p&gt;After all this pain and stress, you're determined not to be caught flat-footed again. While the goal of a multi-region setup seemed daunting, you roll up your sleeves and dive into it.&lt;/p&gt;

&lt;p&gt;For our CRUD application, there are two components we need to deal with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ensuring our data is consistent between the regions.&lt;/li&gt;
&lt;li&gt;Ensuring DNS always resolves to a region that is up.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Thankfully there are solutions to both of these issues in our existing tech stack.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tech&lt;/th&gt;
&lt;th&gt;Single Region Solution&lt;/th&gt;
&lt;th&gt;Multi Region Solution&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Data&lt;/td&gt;
&lt;td&gt;DynamoDB Table&lt;/td&gt;
&lt;td&gt;DynamoDB Global Table&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DNS&lt;/td&gt;
&lt;td&gt;Route53 Simple Routing Policy&lt;/td&gt;
&lt;td&gt;Route53 Latency Routing Policy&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;In part 1, I'll dive into the complexities of implementing a multi-region data configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  DynamoDB Global Tables
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://aws.amazon.com/dynamodb/global-tables/"&gt;DynamoDB Global Tables&lt;/a&gt; are a managed solutions from AWS where they keep replicate data changes from one DynamoDB table to all related tables in other regions.&lt;/p&gt;

&lt;p&gt;Despite their simplicity of use, there are a number of "gotchas" from an Infrastructure as Code perspective.&lt;/p&gt;

&lt;h3&gt;
  
  
  CDK Primary Region
&lt;/h3&gt;

&lt;p&gt;While DynamoDB Global Tables are multi-master, multi-region, from an AWS CDK point of view, we deploy them in only a single region. The resource is configured to replicate to the other regions that you're interested in.&lt;/p&gt;

&lt;p&gt;So I would instantiate my stack in us-west-2, and configure it to replicate to us-east-1, us-east-2, and us-west-1.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt;  &lt;span class="nx"&gt;dynamodb&lt;/span&gt;  &lt;span class="k"&gt;from&lt;/span&gt;  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws-cdk/aws-dynamodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="c1"&gt;// Stack was called in us-west-2&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;gloablTable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;globalTable&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;partitionKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;id&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;AttributeType&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;STRING&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;billingMode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;BillingMode&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;PROVISIONED&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;replicationRegions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-west-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;removalPolicy&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;RemovalPolicy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;DESTROY&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we want to reference our global table's name later on as environment variables for our Lambdas, we will want to build this out into two stacks.&lt;/p&gt;

&lt;p&gt;This will make our app resemble the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;appRegions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-east-2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-west-1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-west-2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt;  &lt;span class="nx"&gt;app&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt;  &lt;span class="k"&gt;new&lt;/span&gt;  &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;App&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;globalstack&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;GlobalStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;DynamoDBGlobalStack&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;us-west-2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;}});&lt;/span&gt;

&lt;span class="nx"&gt;appRegions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;forEach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kd"&gt;function&lt;/span&gt;  &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="nx"&gt;index&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;AppStack&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;AppStack-&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;concat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="na"&gt;account&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;CDK_DEFAULT_ACCOUNT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;item&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="na"&gt;globalTableName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;globalstack&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;globalTable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;tableName&lt;/span&gt;
  &lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Region References - Mocks
&lt;/h3&gt;

&lt;p&gt;You might have noticed that I am passing only the table name to the app stacks, rather than the actual table object, which would otherwise be best practice. This is because the table object specifically references our primary table in us-west-2. In our App stack we will need to either mock out the table or use the AWS SDK to retrieve the full details.&lt;/p&gt;

&lt;p&gt;For most use cases, simply mocking out the table should be all that you need.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt;  &lt;span class="nx"&gt;cdk&lt;/span&gt;  &lt;span class="k"&gt;from&lt;/span&gt;  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws-cdk/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt;  &lt;span class="nx"&gt;dynamodb&lt;/span&gt;  &lt;span class="k"&gt;from&lt;/span&gt;  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws-cdk/aws-dynamodb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kr"&gt;interface&lt;/span&gt;  &lt;span class="nx"&gt;CustomStackProps&lt;/span&gt;  &lt;span class="kd"&gt;extends&lt;/span&gt;  &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StackProps&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;readonly&lt;/span&gt;  &lt;span class="nx"&gt;globalTableName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;readonly&lt;/span&gt;  &lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nx"&gt;AppStack&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CustomStackProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;globalTable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fromTableName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;globalTable&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;globalTableName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can reference the globalTable to grant IAM permissions etc.&lt;/p&gt;
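
&lt;p&gt;For example, here is a sketch of wiring the table reference to a Lambda (where &lt;code&gt;apiLambda&lt;/code&gt; is a hypothetical function construct defined elsewhere in the stack):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;// globalTable is the reference returned by dynamodb.Table.fromTableName above
globalTable.grantReadWriteData(apiLambda);

// Pass the table name through as an environment variable
apiLambda.addEnvironment('TABLE_NAME', props.globalTableName);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;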

&lt;h3&gt;
  
  
  Triggers
&lt;/h3&gt;

&lt;p&gt;I did find ONE instance where the AWS SDK was needed: implementing triggers off the DynamoDB table. Global Tables are kept in sync using DynamoDB Streams, and AWS automatically names these streams in the following format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;arn:aws:dynamodb:REGION:AWS-ACCOUNT:table/TABLE-NAME/stream/2021-01-16T19:47:47.531
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The stream's timestamp is specific to each region, so the only way to retrieve it is to describe the table in that region.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;...&lt;/span&gt;
  &lt;span class="kd"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;CustomStackProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="kd"&gt;const&lt;/span&gt;  &lt;span class="nx"&gt;client&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt;  &lt;span class="k"&gt;new&lt;/span&gt;  &lt;span class="nx"&gt;DynamoDB&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;  &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;region&lt;/span&gt;  &lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="c1"&gt;// Query the regions table&lt;/span&gt;
    &lt;span class="kd"&gt;let&lt;/span&gt;  &lt;span class="nx"&gt;globalTableInfoRequest&lt;/span&gt;  &lt;span class="o"&gt;=&lt;/span&gt;  &lt;span class="k"&gt;async&lt;/span&gt;  &lt;span class="p"&gt;()&lt;/span&gt;  &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;  &lt;span class="k"&gt;await&lt;/span&gt;  &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;describeTable&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;  &lt;span class="na"&gt;TableName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;globalTableName&lt;/span&gt;&lt;span class="p"&gt;});&lt;/span&gt;

    &lt;span class="nx"&gt;globalTableInfoRequest&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nx"&gt;then&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;  &lt;span class="nx"&gt;globalTableInfoResult&lt;/span&gt;  &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt;  &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c1"&gt;// Mock the table with the specific ARN and Stream ARN&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;globalTable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;dynamodb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;fromTableAttributes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;globalTable&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;tableArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;globalTableInfoResult&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;TableArn&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;tableStreamArn&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;globalTableInfoResult&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;Table&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;LatestStreamArn&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="c1"&gt;// Lambda&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;triggerLambda&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nb"&gt;Function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;triggerLambda&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="p"&gt;...&lt;/span&gt;
        &lt;span class="na"&gt;environment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;TABLE_NAME&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;globalTableName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="p"&gt;...&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;});&lt;/span&gt;

      &lt;span class="c1"&gt;// Grant read access&lt;/span&gt;
      &lt;span class="nx"&gt;globalTable&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;grantStreamRead&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;triggerLambda&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="c1"&gt;// Deadletter queue&lt;/span&gt;
      &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;triggerDLQueue&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;sqs&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Queue&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;triggerDLQueue&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

      &lt;span class="c1"&gt;// Trigger Event&lt;/span&gt;
      &lt;span class="nx"&gt;triggerLambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;addEventSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;DynamoEventSource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;globalTable&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;startingPosition&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StartingPosition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TRIM_HORIZON&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;batchSize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;bisectBatchOnError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;onFailure&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;SqsDlq&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;triggerDLQueue&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="na"&gt;retryAttempts&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
      &lt;span class="p"&gt;}));&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployment
&lt;/h3&gt;

&lt;p&gt;One specific issue we run into with this design is that the app stacks deployed across multiple regions depend on the single global stack deployed in one region. CloudFormation does not allow cross-region dependencies between stacks, so we need to deploy the global stack first, and then we can deploy the rest.&lt;/p&gt;

&lt;p&gt;We want to synthesize the CDK to produce the Cloudformation template for us:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk synth GlobalStack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then we can deploy just the global stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk deploy GlobalStack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once that is complete, we can generate all the CloudFormation templates. The AWS SDK code is executed during synthesis, so the global table must be deployed beforehand:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk synth
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally we can deploy all stacks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk deploy &lt;span class="nt"&gt;--all&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we are performing this ordered deploy, remember that tearing down must happen in reverse order due to the dependencies that have been created: tear down all the app stacks first, then tear down the global table stack.&lt;/p&gt;
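
&lt;p&gt;Assuming the stack names used earlier, the teardown might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Destroy the regional app stacks first (the CDK CLI accepts wildcards)
cdk destroy "AppStack-*"

# Then destroy the stack containing the global table
cdk destroy DynamoDBGlobalStack
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;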

&lt;h3&gt;
  
  
  Regions Available
&lt;/h3&gt;

&lt;p&gt;A final piece to consider is that DynamoDB Global Tables are available in most, but NOT all, regions. Specifically, the following regions do not support them at the time of writing:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Region Name&lt;/th&gt;
&lt;th&gt;Region Code&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Africa (Cape Town)&lt;/td&gt;
&lt;td&gt;af-south-1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Asia Pacific (Hong Kong)&lt;/td&gt;
&lt;td&gt;ap-east-1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Europe (Milan)&lt;/td&gt;
&lt;td&gt;eu-south-1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Middle East (Bahrain)&lt;/td&gt;
&lt;td&gt;me-south-1&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These four regions are disabled by default, and you will need to &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_enable-regions.html?icmpid=docs_iam_console#id_credentials_region-endpoints"&gt;enable them&lt;/a&gt; if you want to use them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;DynamoDB Global Tables are a great way to quickly implement a multi-region, multi-master data solution that is managed by AWS. In 2019, the &lt;a href="https://aws.amazon.com/about-aws/whats-new/2019/04/amazon-dynamodb-drops-the-price-of-global-tables-by-eliminating-associated-charges-for-dynamodb-streams/"&gt;pricing model for Global Tables was updated&lt;/a&gt; to remove the charge for replicating data between regions, which made this an even more compelling solution.&lt;/p&gt;

&lt;p&gt;My next post will examine the DNS routing policy that we will want to implement to ensure users are not impacted by any region downtime in the future.&lt;/p&gt;

&lt;p&gt;All code for this post can be accessed on &lt;a href="https://github.com/johncolmdoyle/aws-cdk-multi-region-dynamodb-global-tables"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dynamodb</category>
      <category>multiregion</category>
      <category>cdk</category>
    </item>
    <item>
      <title>287 Hours</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Tue, 02 Feb 2021 21:52:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/287-hours-538h</link>
      <guid>https://dev.to/aws-builders/287-hours-538h</guid>
      <description>&lt;p&gt;The race was already on, and I was late. Over a thousand people were ahead of me, with almost 50 teams having submitted completed projects. Yet this is a hackathon, and no one knows what might stand out till the end!&lt;/p&gt;

&lt;p&gt;I stumbled onto the &lt;a href="https://postman-hack.devpost.com/"&gt;Postman API Hackathon&lt;/a&gt; a week after it had opened. Back in November 2020, Postman had released a new service called Public Workspaces, aimed at helping people collaborate on APIs. With &lt;a href="https://www.postman.com/postman-galaxy/"&gt;Postman Galaxy&lt;/a&gt;, their annual conference, coming up on February 2nd-4th, they announced a hackathon whose results would be revealed there.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--iRFZZAE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/hackathon-signup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--iRFZZAE0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/hackathon-signup.png" alt="" width="880" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The gauntlet was thrown down in the format of a challenge to, “not just create a Postman public workspace with a Collection of APIs, but build something that is creative, has compelling value to developers, addresses a problem, or has community interest.” Alright, I thought, this is doable - I’ve spent late nights banging my head against the table trying to understand why something isn’t acting right. &lt;/p&gt;

&lt;p&gt;Inspiration struck with a memory of Boss-Man Paul calling late at night, asking if a service was down. Nope, turned out he just had horrific internet. Yet this did bring me back to wondering if there was a better way to tell. Sure, the API is up for me and maybe for &lt;a href="https://www.isitdownrightnow.com/"&gt;isitdownrightnow.com&lt;/a&gt; - but maybe the Dublin data center is experiencing issues that I’m not seeing!&lt;/p&gt;

&lt;p&gt;Flexing my knuckles, imagining the crack, I dive into some code - create a GitHub repo, initialize a fresh AWS CDK project, and the hardest part… buy a snazzy domain name. And so, on January 15th api-network.info was acquired!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Q_OcuueP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/api-network-logo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Q_OcuueP--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/api-network-logo.png" alt="" width="790" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Instantly I started running into issues as my domain was not resolving no matter how much I refreshed my browser - so the first network tool was decided upon: time for nslookup to give me insights! This was familiar ground thankfully - hustle some NodeJS code into a Lambda and deploy it.&lt;/p&gt;

&lt;p&gt;Yet having a single region tell me its status wasn’t very enlightening - so I needed to think broader, multi-region broad. I went down several paths trying to figure out a good way to coordinate my requests, both within a region and across them. The final solution I landed on was DynamoDB Global Tables - automatically replicating data across regions and allowing me to set up triggers within each region to fire off on any change to that data!&lt;/p&gt;

&lt;p&gt;This sounded perfect - I could add multiple additional regions as I needed to! Back to the real work then: adding more network context than I would know what to do with. Utilizing the newly released Lambda Docker containers I was even able to run some OS-level commands, like traceroute and dig. I assembled my core group of 5 commands and was ready to go global!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--zcNEdkdI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/postman-api-architecture.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--zcNEdkdI--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/postman-api-architecture.jpeg" alt="" width="880" height="545"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At this point, I ran straight into a wall. The perfect solution utilizing Global Tables and Triggers came back to bite me with a vengeance. It turns out that Global Tables are built in a single region, and that region then builds the replicas in the other regions. My original code, which simply duplicated my whole stack in each region, was thrown a curveball - now I needed to build my global tables first, and then loop over all the regions to add my app stack.&lt;/p&gt;

&lt;p&gt;Not too big of a change… until I realized that the DynamoDB streams the Lambdas use as triggers are automatically named. With a timestamp. The primary region that built out the Global Tables had no way of telling me what the Stream ARN would be in Hong Kong; the only way to find it would be to describe the table IN Hong Kong. Dr. Frankenstein would be disgusted (at least the CDK team would be) at my abomination as I started calling the AWS SDK within the CDK construct.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is a hackathon though - please leave your good practices at the door.&lt;/em&gt;&lt;/p&gt;
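&lt;p&gt;The lookup itself can be sketched as a tiny helper: describe the table in the replica region, then pull the stream ARN out of the response. This is a minimal illustration rather than the project's actual code - the response object below is a hand-written example shaped like DynamoDB's DescribeTable result, and the table name is invented.&lt;/p&gt;

```javascript
// A replica table's stream ARN is only discoverable by describing the
// table in that region, which is why the deploy step ended up calling
// the SDK directly. This helper picks the ARN out of a DescribeTable
// style response object.
function latestStreamArn(describeTableResponse) {
  var table = describeTableResponse.Table;
  if (!table || !table.LatestStreamArn) {
    throw new Error('table has no stream enabled');
  }
  return table.LatestStreamArn;
}

// Hypothetical DescribeTable response for a replica in ap-east-1 (Hong Kong).
var hongKongResponse = {
  Table: {
    TableName: 'api-network-results',
    LatestStreamArn: 'arn:aws:dynamodb:ap-east-1:123456789012:table/api-network-results/stream/2021-01-15T00:00:00.000'
  }
};

console.log(latestStreamArn(hongKongResponse));
```

&lt;p&gt;In the real deploy this response would come from an SDK describe-table call made against the replica region's endpoint.&lt;/p&gt;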

&lt;p&gt;Now that I had a multi-region API, I could finally start tackling the main hackathon task - building out a Public Workspace. Postman had provided links to several example collections to inspire folks, which led me to a big facepalm. Despite having used the application for years, I had never known you could visualize responses. From basic HTML to full-on Bootstrap, to D3JS charts. The quantity of data I was returning was too much to parse manually; this was the perfect solution - building out charts that highlighted different aspects!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--baI8EjxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/latency-bar-chart.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--baI8EjxS--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/latency-bar-chart.png" alt="" width="880" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Over the next few days, I added data, massaged it, and worked it into several charts that started to give me hope that the project made some sense. From simple bar charts, I stepped up to plotting the traceroutes on a map and designing a region reconciliation report on DNS or Requests.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t77goLp2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/traceroute-map.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t77goLp2--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/traceroute-map.png" alt="" width="880" height="507"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--lHs6Wlwv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/version-check.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--lHs6Wlwv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://gizmo.codes/images/Postman-API-Hackathon/version-check.png" alt="" width="880" height="646"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The final hours were drawing to a close; it had been multiple days of late nights, tearing the infrastructure down and rebuilding it. Making decisions that I would go on to completely regret, but there was no time to go back and change them. It was the last day.&lt;/p&gt;

&lt;p&gt;Five hours to go, no worries. Code is basically locked in now. Just one last part to do - a video demo. How hard can this be? Open up Photo Booth to record me, and start recording my screen.&lt;/p&gt;

&lt;p&gt;Three hours later I just don’t care anymore - stumble over the script, sipping a can of Guinness that is slowly warming. Don’t know when I opened it.  Edit the videos together and stare at the upload bar on YouTube. The hackathon site was about to start the 60-minute countdown. If there was an issue at this stage, it might require something stronger than Guinness to power me through to the end.&lt;/p&gt;

&lt;p&gt;Yet it succeeded. &lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.youtube.com/watch?v=SaCpu0_S-IE&amp;amp;feature=emb_title&amp;amp;ab_channel=JohnDoyle"&gt;video was uploaded&lt;/a&gt;, linked, and submitted with the project. &lt;/p&gt;

&lt;p&gt;Time for some much-needed sleep, and fruitlessly thinking that for the next hackathon I’ll be better prepared. Time will tell.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.postman.com/galactic-station-814028/workspace/distributed-api-network-information/overview"&gt;Postman Public Workspace&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://api-network.info/"&gt;API-Network.info&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://app.api-network.info/"&gt;API Key Management&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/johncolmdoyle/postman-hackathon"&gt;Multi Region Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/johncolmdoyle/postman-hackathon-demo"&gt;Demo API to test against Code&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/johncolmdoyle/postman-hackathon-app"&gt;API Key Management Code&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Follow-up
&lt;/h2&gt;

&lt;p&gt;I'll post some technical blogs later diving into the problems I encountered, from deploying custom Lambda Docker containers to managing multi-region API keys.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>postman</category>
      <category>hackathon</category>
      <category>api</category>
    </item>
    <item>
      <title>Java Lambda Containers</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Sun, 13 Dec 2020 07:11:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/java-lambda-containers-3hgo</link>
      <guid>https://dev.to/aws-builders/java-lambda-containers-3hgo</guid>
      <description>&lt;p&gt;&lt;a href="https://reinvent.awsevents.com/"&gt;AWS Re:Invent 2020&lt;/a&gt; held a number of great announcements for AWS Lambda! One of these included the support for &lt;a href="https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/"&gt;Docker Containers&lt;/a&gt; - previously the only supported packaging was Zip that was stored in S3 behind the scenes.&lt;/p&gt;

&lt;p&gt;At the same time, Lambda also saw the memory configuration expand - from the initial 3 GB limit all the way up to 10 GB! This is especially useful as Lambda also supports Docker images up to 10 GB in size - though this size increase is only for Docker containers; ZIP packages remain at a maximum of 250 MB.&lt;/p&gt;

&lt;p&gt;AWS provides two ways to support Docker-based Lambdas:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use an AWS base Docker image; the supported runtimes are:&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;amazon/nodejs12.x-base&lt;/li&gt;
&lt;li&gt;amazon/nodejs10.x-base&lt;/li&gt;
&lt;li&gt;amazon/python3.8-base&lt;/li&gt;
&lt;li&gt;amazon/python3.7-base&lt;/li&gt;
&lt;li&gt;amazon/python3.6-base&lt;/li&gt;
&lt;li&gt;amazon/python2.7-base&lt;/li&gt;
&lt;li&gt;amazon/ruby2.7-base&lt;/li&gt;
&lt;li&gt;amazon/ruby2.5-base&lt;/li&gt;
&lt;li&gt;amazon/go1.x-base&lt;/li&gt;
&lt;li&gt;amazon/java11-base&lt;/li&gt;
&lt;li&gt;amazon/java8.al2-base&lt;/li&gt;
&lt;li&gt;amazon/java8-base&lt;/li&gt;
&lt;li&gt;amazon/dotnetcore3.1-base&lt;/li&gt;
&lt;li&gt;amazon/dotnetcore2.1-base&lt;/li&gt;
&lt;/ul&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Use your own base Docker image and import the AWS runtime interface client; the Lambda service talks to the interface client, which in turn invokes your code.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Apparently, Lambda is not actually running the container; rather, it builds a function from the container image. When running the Java 11 base Lambda image, I have noticed a pretty significant cold-start penalty - up to 20 seconds.&lt;/p&gt;

&lt;p&gt;To test out the docker containers, I'll utilize &lt;a href="https://aws.amazon.com/serverless/sam/"&gt;AWS Serverless Application Model&lt;/a&gt; to generate, build, and deploy the lambda! &lt;/p&gt;

&lt;h2&gt;
  
  
  Build &amp;amp; Deploy Docker Lambda
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Initialization
&lt;/h3&gt;

&lt;p&gt;To simplify some of these commands, I'll create some environment variables to hold the default region and AWS Account ID:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export AWS_ACCOUNT_ID=$(aws sts get-caller-identity | jq -r '.Account')
export AWS_REGION=$(aws configure get default.region)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we will be building out an image for the lambda, we want to store it in an &lt;a href="https://aws.amazon.com/ecr/"&gt;Elastic Container Registry&lt;/a&gt;. We can create one with the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr create-repository --repository-name gizmo-lambda-container | jq '.repository.repositoryUri'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should output the container registry URL, similar to: &lt;code&gt;196295636944.dkr.ecr.us-east-1.amazonaws.com/gizmo-lambda-container&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We can then authenticate with the AWS ECR repo:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ecr get-login-password | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGIO
N}.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now we will use the &lt;a href="https://aws.amazon.com/serverless/sam/"&gt;AWS Serverless Application Model&lt;/a&gt; to build and deploy our application. In an empty directory we will create our project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be prompted to select several options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Which template source would you like to use?
    1 - AWS Quick Start Templates
    2 - Custom Template Location
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We want to use AWS Quick Start Templates.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;What package type would you like to use?
    1 - Zip (artifact is a zip uploaded to S3)
    2 - Image (artifact is an image uploaded to an ECR image repository)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And here we want to use Image! For this example, I went with the &lt;code&gt;amazon/java11-base&lt;/code&gt; base image. Since it is Java, we are also prompted for the dependency manager, for which I recommend &lt;code&gt;gradle&lt;/code&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build
&lt;/h3&gt;

&lt;p&gt;The skeleton project that is generated has our Dockerfile and application code already set up. The AWS SAM command will build the Docker image for us!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we can see the Docker build steps in the output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Building codeuri: . runtime: None metadata: {'DockerTag': 'java11-gradle-v1', 'DockerContext': './HelloWorldFunction', 'Dockerfile': 'Dockerfile'} functions: ['HelloWorldFunction']
Building image for HelloWorldFunction function
Setting DockerBuildArgs: {} for HelloWorldFunction function
Step 1/16 : FROM public.ecr.aws/lambda/java:11 as build-image
 ---&amp;gt; 9d9f8f6bbeea
Step 2/16 : ARG SCRATCH_DIR=/var/task/build
 ---&amp;gt; Running in 3f077f57a99f
 ---&amp;gt; 1790443364e1
Step 3/16 : COPY src/ src/
 ---&amp;gt; 6357ad3066ed
Step 4/16 : COPY gradle/ gradle/
 ---&amp;gt; 86b36578535b
Step 5/16 : COPY build.gradle gradlew ./
 ---&amp;gt; 7a0e366bbda5
Step 6/16 : RUN mkdir build
 ---&amp;gt; Running in 57ceac09d965
 ---&amp;gt; 6d547fcbb584
Step 7/16 : COPY gradle/lambda-build-init.gradle ./build
 ---&amp;gt; ee1fb2ece820
Step 8/16 : RUN ./gradlew --project-cache-dir $SCRATCH_DIR/gradle-cache -Dsoftware.amazon.aws.lambdabuilders.scratch-dir=$SCRATCH_DIR --init-script $SCRATCH_DIR/lambda-build-init.gradle build
 ---&amp;gt; Running in 06116e2de319
Downloading https://services.gradle.org/distributions/gradle-5.1.1-bin.zip
.................................................................................

Welcome to Gradle 5.1.1!

Here are the highlights of this release:
 - Control which dependencies can be retrieved from which repositories
 - Production-ready configuration avoidance APIs

For more details see https://docs.gradle.org/5.1.1/release-notes.html

Starting a Gradle Daemon (subsequent builds will be faster)
&amp;gt; Task :compileJava
&amp;gt; Task :processResources NO-SOURCE
&amp;gt; Task :classes
&amp;gt; Task :jar
&amp;gt; Task :assemble
&amp;gt; Task :compileTestJava
&amp;gt; Task :processTestResources NO-SOURCE
&amp;gt; Task :testClasses
&amp;gt; Task :test
&amp;gt; Task :check
&amp;gt; Task :build

BUILD SUCCESSFUL in 17s
4 actionable tasks: 4 executed
 ---&amp;gt; f394216fbb70
Step 9/16 : RUN rm -r $SCRATCH_DIR/gradle-cache
 ---&amp;gt; Running in 0b8e94e2670c
 ---&amp;gt; 7f9033989b1a
Step 10/16 : RUN rm -r $SCRATCH_DIR/lambda-build-init.gradle
 ---&amp;gt; Running in 4965d01c3455
 ---&amp;gt; fd5e12377577
Step 11/16 : RUN cp -r $SCRATCH_DIR/*/build/distributions/lambda-build/* .
 ---&amp;gt; Running in 9c29934509fe
 ---&amp;gt; 434e1178004e
Step 12/16 : FROM public.ecr.aws/lambda/java:11
 ---&amp;gt; 9d9f8f6bbeea
Step 13/16 : COPY --from=build-image /var/task/META-INF ./
 ---&amp;gt; af6f91bce6e5
Step 14/16 : COPY --from=build-image /var/task/helloworld ./helloworld
 ---&amp;gt; 9a7cea4f8cbd
Step 15/16 : COPY --from=build-image /var/task/lib/ ./lib
 ---&amp;gt; 09a9974d2dec
Step 16/16 : CMD ["helloworld.App::handleRequest"]
 ---&amp;gt; Running in 11e9683c59dc
 ---&amp;gt; c9fbfd205cd4
Successfully built c9fbfd205cd4
Successfully tagged helloworldfunction:java11-gradle-v1

Build Succeeded

Built Artifacts  : .aws-sam/build
Built Template   : .aws-sam/build/template.yaml

Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Deploy: sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A very nice aspect of AWS SAM is the ability to run the container locally!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam local invoke
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;With the output matching what we would normally see in the CloudWatch logs:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Invoking Container created from helloworldfunction:java11-gradle-v1
Image was not found.
Building image..........
Skip pulling image and use local one: helloworldfunction:rapid-1.13.2.

START RequestId: 22cd82ae-e042-4863-9447-4831faf9490a Version: $LATEST
END RequestId: 22cd82ae-e042-4863-9447-4831faf9490a
REPORT RequestId: 22cd82ae-e042-4863-9447-4831faf9490a  Init Duration: 1.02 ms  Duration: 1262.58 ms    Billed Duration: 1300 ms    Memory Size: 128 MB Max Memory Used: 128 MB
{"statusCode":200,"headers":{"X-Custom-Header":"application/json","Content-Type":"application/json"},"body":"{ \"message\": \"hello world\", \"location\": \"71.174.101.210\" }"}%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployment
&lt;/h3&gt;

&lt;p&gt;Finally, we can have AWS SAM create our Lambda and connect it to a REST API for us!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The image is pushed to the ECR repository, and the CloudFormation stack deploys the REST API and Lambda.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Configuring SAM deploy
======================

    Looking for config file [samconfig.toml] :  Not found

    Setting default arguments for 'sam deploy'
    =========================================
    Stack Name [sam-app]: gizmo-example
    AWS Region [us-east-1]:
    Image Repository []: 196295636944.dkr.ecr.us-east-1.amazonaws.com/gizmo-lambda-container
    Images that will be pushed:
      helloworldfunction:java11-gradle-v1 to 196295636944.dkr.ecr.us-east-1.amazonaws.com/gizmo-lambda-container:helloworldfunction-c9fbfd205cd4-java11-gradle-v1

    #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
    Confirm changes before deploy [y/N]: y
    #SAM needs permission to be able to create roles to connect to the resources in your template
    Allow SAM CLI IAM role creation [Y/n]: Y
    HelloWorldFunction may not have authorization defined, Is this okay? [y/N]: y
    Save arguments to configuration file [Y/n]: Y
    SAM configuration file [samconfig.toml]:
    SAM configuration environment [default]:

    Looking for resources needed for deployment: Not found.
    Creating the required resources...
        Successfully created!

        Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-1owsb6dv8zl8j
        A different default S3 bucket can be set in samconfig.toml

    Saved arguments to config file
    Running 'sam deploy' for future deployments will use the parameters saved above.
    The above parameters can be changed by modifying samconfig.toml
    Learn more about samconfig.toml syntax at
    https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
The push refers to repository [196295636944.dkr.ecr.us-east-1.amazonaws.com/gizmo-lambda-container]
0bd0a3c72bc7: Pushed
11bc4c037913: Pushed
e3d7108f2c0b: Pushed
d1d758bc3380: Pushed
577ed33a1f65: Pushed
d6fa53d6caa6: Pushed
016e6d3f9722: Pushed
898f760e15c4: Pushed
af6d16f2417e: Pushed
helloworldfunction-c9fbfd205cd4-java11-gradle-v1: digest: sha256:507124a3f49f85a91a97508c50acc1b66be3731d1cfeb187a245a44cefb5979e size: 2205


    Deploying with following values
    ===============================
    Stack name                   : gizmo-example
    Region                       : us-east-1
    Confirm changeset            : True
    Deployment image repository  : 196295636944.dkr.ecr.us-east-1.amazonaws.com/gizmo-lambda-container
    Deployment s3 bucket         : aws-sam-cli-managed-default-samclisourcebucket-1owsb6dv8zl8j
    Capabilities                 : ["CAPABILITY_IAM"]
    Parameter overrides          : {}
    Signing Profiles           : {}

Initiating deployment
=====================
HelloWorldFunction may not have authorization defined.
Uploading to gizmo-example/40514ed797d174c979e4a5a29672a62a.template  1199 / 1199.0  (100.00%)

Waiting for changeset to be created..

CloudFormation stack changeset
-------------------------------------------------------------------------------------------------------------------------------------
Operation                         LogicalResourceId                 ResourceType                      Replacement
-------------------------------------------------------------------------------------------------------------------------------------
+ Add                             HelloWorldFunctionHelloWorldPer   AWS::Lambda::Permission           N/A
                                  missionProd
+ Add                             HelloWorldFunctionRole            AWS::IAM::Role                    N/A
+ Add                             HelloWorldFunction                AWS::Lambda::Function             N/A
+ Add                             ServerlessRestApiDeployment47fc   AWS::ApiGateway::Deployment       N/A
                                  2d5f9d
+ Add                             ServerlessRestApiProdStage        AWS::ApiGateway::Stage            N/A
+ Add                             ServerlessRestApi                 AWS::ApiGateway::RestApi          N/A
-------------------------------------------------------------------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:us-east-1:196295636944:changeSet/samcli-deploy1607463634/9467a1b3-b809-4a57-aecb-8638ec49f4ca


Previewing CloudFormation changeset before deployment
======================================================
Deploy this changeset? [y/N]: y

2020-12-08 16:41:12 - Waiting for stack create/update to complete

CloudFormation events from changeset
-------------------------------------------------------------------------------------------------------------------------------------
ResourceStatus                    ResourceType                      LogicalResourceId                 ResourceStatusReason
-------------------------------------------------------------------------------------------------------------------------------------
CREATE_IN_PROGRESS                AWS::IAM::Role                    HelloWorldFunctionRole            -
CREATE_IN_PROGRESS                AWS::IAM::Role                    HelloWorldFunctionRole            Resource creation Initiated
CREATE_COMPLETE                   AWS::IAM::Role                    HelloWorldFunctionRole            -
CREATE_IN_PROGRESS                AWS::Lambda::Function             HelloWorldFunction                -
CREATE_IN_PROGRESS                AWS::Lambda::Function             HelloWorldFunction                Resource creation Initiated
CREATE_COMPLETE                   AWS::Lambda::Function             HelloWorldFunction                -
CREATE_IN_PROGRESS                AWS::ApiGateway::RestApi          ServerlessRestApi                 Resource creation Initiated
CREATE_IN_PROGRESS                AWS::ApiGateway::RestApi          ServerlessRestApi                 -
CREATE_COMPLETE                   AWS::ApiGateway::RestApi          ServerlessRestApi                 -
CREATE_IN_PROGRESS                AWS::Lambda::Permission           HelloWorldFunctionHelloWorldPer   -
                                                                    missionProd
CREATE_IN_PROGRESS                AWS::ApiGateway::Deployment       ServerlessRestApiDeployment47fc   -
                                                                    2d5f9d
CREATE_IN_PROGRESS                AWS::ApiGateway::Deployment       ServerlessRestApiDeployment47fc   Resource creation Initiated
                                                                    2d5f9d
CREATE_IN_PROGRESS                AWS::Lambda::Permission           HelloWorldFunctionHelloWorldPer   Resource creation Initiated
                                                                    missionProd
CREATE_COMPLETE                   AWS::ApiGateway::Deployment       ServerlessRestApiDeployment47fc   -
                                                                    2d5f9d
CREATE_IN_PROGRESS                AWS::ApiGateway::Stage            ServerlessRestApiProdStage        -
CREATE_IN_PROGRESS                AWS::ApiGateway::Stage            ServerlessRestApiProdStage        Resource creation Initiated
CREATE_COMPLETE                   AWS::ApiGateway::Stage            ServerlessRestApiProdStage        -
CREATE_COMPLETE                   AWS::Lambda::Permission           HelloWorldFunctionHelloWorldPer   -
                                                                    missionProd
CREATE_COMPLETE                   AWS::CloudFormation::Stack        gizmo-example                     -
-------------------------------------------------------------------------------------------------------------------------------------

CloudFormation outputs from deployed stack
----------------------------------------------------------------------------------------------------------------------------------------
Outputs
----------------------------------------------------------------------------------------------------------------------------------------
Key                 HelloWorldFunctionIamRole
Description         Implicit IAM Role created for Hello World function
Value               arn:aws:iam::196295636944:role/gizmo-example-HelloWorldFunctionRole-13AV2P47KN805

Key                 HelloWorldApi
Description         API Gateway endpoint URL for Prod stage for Hello World function
Value               https://uwuy5qpvmg.execute-api.us-east-1.amazonaws.com/Prod/hello/

Key                 HelloWorldFunction
Description         Hello World Lambda Function ARN
Value               arn:aws:lambda:us-east-1:196295636944:function:gizmo-example-HelloWorldFunction-LX0IVO85A1CM
----------------------------------------------------------------------------------------------------------------------------------------

Successfully created/updated stack - gizmo-example in us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We ended up with a REST API endpoint: &lt;a href="https://uwuy5qpvmg.execute-api.us-east-1.amazonaws.com/Prod/hello/"&gt;https://uwuy5qpvmg.execute-api.us-east-1.amazonaws.com/Prod/hello/&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; curl https://uwuy5qpvmg.execute-api.us-east-1.amazonaws.com/Prod/hello/
{ "message": "hello world", "location": "3.239.84.207" }%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we curl this, we should get the output - though note, as mentioned at the beginning of the post, that I often see a long delay on the first request as the Lambda pays the cold-start penalty.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>aws</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Quick Project: KubeconVibes</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Thu, 27 Aug 2020 21:16:35 +0000</pubDate>
      <link>https://dev.to/art_wolf/quick-project-kubeconvibes-3fei</link>
      <guid>https://dev.to/art_wolf/quick-project-kubeconvibes-3fei</guid>
      <description>&lt;p&gt;While attending &lt;a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/?"&gt;Kubecon EU 2020&lt;/a&gt;, I wanted to maximize my time by watching the most popular sessions. Which hopefully would also be the most useful. When the sessions go up on YouTube, I can look at the view counts to judge, but right now there didn't appear to be anything on the website that provides this information. Time to look behind the curtains!&lt;/p&gt;

&lt;p&gt;Turned out they have a nice little API running that you can access utilizing your session cookie. They return the sessions based on the tracks, so I was able to note down the URLs for each track:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;101 Track&lt;/li&gt;
&lt;li&gt;Application + Development&lt;/li&gt;
&lt;li&gt;Case Studies&lt;/li&gt;
&lt;li&gt;CI/CD&lt;/li&gt;
&lt;li&gt;Community&lt;/li&gt;
&lt;li&gt;Customizing + Extending Kubernetes&lt;/li&gt;
&lt;li&gt;Experiences&lt;/li&gt;
&lt;li&gt;Keynotes&lt;/li&gt;
&lt;li&gt;Lightning Talks&lt;/li&gt;
&lt;li&gt;Machine Learning + Data&lt;/li&gt;
&lt;li&gt;Maintainer Track&lt;/li&gt;
&lt;li&gt;Networking&lt;/li&gt;
&lt;li&gt;Observability&lt;/li&gt;
&lt;li&gt;Operations&lt;/li&gt;
&lt;li&gt;Performance&lt;/li&gt;
&lt;li&gt;Runtimes&lt;/li&gt;
&lt;li&gt;Security + Identity + Policy&lt;/li&gt;
&lt;li&gt;Serverless&lt;/li&gt;
&lt;li&gt;Service Mesh&lt;/li&gt;
&lt;li&gt;Storage&lt;/li&gt;
&lt;li&gt;Tutorials&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Hitting one of these tracks returned a ton of information on each session, including the number of likes and views! BINGO! &lt;/p&gt;

&lt;p&gt;The only authentication was my session cookie, and since this was only a quick project, I decided to pass that value in during the CI/CD deploy rather than coding up a login request... I didn't want to inadvertently check in my password during testing...&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  const request = require('request');

  let kubeConTracks = [
    {name: '101 Track', url: 'https://onlinexperiences.com/scripts/Server.nxp?LASCmd=AI:1;F:LBSEXPORT!JSON&amp;amp;SQLID=14523&amp;amp;EventPackageKey=55064&amp;amp;RandomValue=1597773745984'},
    ...
  ]

  let options = {
    json: true,
    headers: {
      'Cookie': kubeconCookie
    }
  };

  for(let track of kubeConTracks) {
    request(track.url, options, (error, res, body) =&amp;gt; {
      let sessionsArray = body.ResultSet[0];
      for (let session of sessionsArray) {
        console.log(session.NumViews.toString());
      }
    });
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the AWS CDK, I was able to quickly turn this into a cron job that pulls the information and stores it in a DynamoDB table.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const kubeconCookie = this.node.tryGetContext('kubecon_cookie');

    const consumeFunction = new lambda.Function(this, 'KubeConConsume', {
      runtime: lambda.Runtime.NODEJS_10_X,
      handler: 'index.handler',
      code: lambda.Code.asset('./resources/consumer'),
      description: 'Query KubeCon APIs to get the latest data and insert into DynamoDB',
      timeout: core.Duration.seconds(30),
      environment: {
        DYNAMODB_TABLE: table.tableName,
        DYNAMODB_HISTORY_TABLE: historyTable.tableName,
        KUBECON_COOKIE: kubeconCookie
      },
    });

    const lambdaTarget = new eventstargets.LambdaFunction(consumeFunction);

    new events.Rule(this, 'ScheduleRule', {
      schedule: events.Schedule.cron({ minute: '0' }),
      targets: [lambdaTarget],
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I set up two tables: one containing the latest information, and another grouping it by time - with the idea of maybe seeing how the popularity of sessions changed over the days.&lt;/p&gt;

&lt;p&gt;With the data stored, the only things remaining were slapping a quick API onto my DynamoDB table and throwing up a front-end website! I was able to structure everything in a single repo, which I liked, separating the lambdas, the front end, and the infrastructure pulling everything together.&lt;/p&gt;

&lt;p&gt;The AWS CDK has made it a lot easier to set up an API Gateway, certainly compared to building this up in straight CloudFormation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const listFunction = new lambda.Function(this, 'KubeConList', {
      runtime: lambda.Runtime.NODEJS_10_X,
      handler: 'index.list',
      code: lambda.Code.asset('./resources/api'),
      description: 'Used by the API Gateway to return the data.',
      environment: {
        DYNAMODB_TABLE: table.tableName
      },
    });

    table.grantReadData(listFunction);

    const api = new apigateway.RestApi(this, "kubeconvides-api", {
      restApiName: "KubeCon Vibes",
      description: "This service to access the kubecon data.",
      defaultCorsPreflightOptions: {
        allowOrigins: apigateway.Cors.ALL_ORIGINS
      }
    });

    const getIntegration = new apigateway.LambdaIntegration(listFunction, {
      requestTemplates: { "application/json": '{ "statusCode": "200" }' }
    });

    api.root.addMethod("GET", getIntegration);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Actually adding a custom domain to the API Gateway is more complicated than setting up the endpoint itself!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const domainName = 'kubeconvibes.com'

    const zone = route53.HostedZone.fromLookup(this, 'KubeConZone', {
      domainName: domainName
    });

    const cert = new acm.Certificate(this, 'Certificate', {
      domainName: 'api.' + domainName,
      validation: acm.CertificateValidation.fromDns(zone)
    });

    const customDomain = new apigateway.DomainName(this, 'customDomain', {
      domainName: 'api.' + domainName,
      certificate: cert,
      endpointType: apigateway.EndpointType.EDGE
    });

    new apigateway.BasePathMapping(this, 'CustomBasePathMapping', {
      domainName: customDomain,
      restApi: api
    });

    new route53.CnameRecord(this, 'ApiGatewayRecordSet', {
      zone: zone,
      recordName: 'api',
      domainName: customDomain.domainNameAliasDomainName
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In short, I enjoy throwing out quick projects - especially when I want to extend my understanding of some functionality. While none of this had anything to do with Kubernetes, I did learn more about CDK and React!&lt;/p&gt;

&lt;p&gt;The code for this project is available here: &lt;a href="https://github.com/johncolmdoyle/kubecon-eu-popular-sessions/"&gt;kubecon-eu-popular-sessions&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The website is: &lt;a href="https://kubeconvibes.com"&gt;KubeconVibes&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>AWS CDK RDS Backup At Silbo</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Fri, 21 Aug 2020 16:51:23 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cdk-rds-backup-at-silbo-47pc</link>
      <guid>https://dev.to/aws-builders/aws-cdk-rds-backup-at-silbo-47pc</guid>
      <description>&lt;p&gt;I was reading Trilochn Parida's article on &lt;a href="https://dev.to/techparida/how-to-schedule-backup-of-mysql-db-and-store-it-in-s3-using-cron-job-49o7"&gt;How to Schedule Backup of MySQL DB and Store it in S3 using Cron Job&lt;/a&gt;, and it got me thinking that I should show how we handle our own database backups at &lt;a href="https://silbo.ai" rel="noopener noreferrer"&gt;Silbo&lt;/a&gt;. We utilize &lt;a href="https://aws.amazon.com/rds/" rel="noopener noreferrer"&gt;AWS RDS&lt;/a&gt;, which can be configured to automate backups of our database, creating snapshots for the last 35 days.&lt;/p&gt;

&lt;p&gt;A rolling 35 days of snapshots is great for quickly restoring a short-term backup. Yet there are a number of limitations with this system. The 35-day window would be the major factor from an audit perspective. The automated backups are also not SQL exports that you can directly access - they are usable only within AWS RDS. And when you deal with disaster recovery, you need to plan for regions going down: we want to be able to recover in another region. &lt;/p&gt;

&lt;p&gt;To achieve this, I will walk through some &lt;a href="https://aws.amazon.com/cdk/" rel="noopener noreferrer"&gt;AWS CDK&lt;/a&gt; code to demonstrate how to schedule such a backup policy.&lt;/p&gt;

&lt;p&gt;The code for this is up on &lt;a href="https://github.com/johncolmdoyle/aws-rds-nightly-backup" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;With &lt;a href="https://aws.amazon.com/lambda/" rel="noopener noreferrer"&gt;AWS Lambda&lt;/a&gt; having &lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html" rel="noopener noreferrer"&gt;predefined quotas&lt;/a&gt; - a 15-minute timeout limit, memory limits, etc. - we know we will want to perform our backup on an &lt;a href="https://aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;Amazon EC2&lt;/a&gt; instance. To reduce our costs, we only want this instance to start up and shut down just for the period of the backup. So we will have a Lambda create an EC2 instance and configure its commands via its &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html" rel="noopener noreferrer"&gt;Instance User Data&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/..%2Fimages%2FAWS-CDK-RDS-Backup-At-Silbo%2Farchitecture.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/..%2Fimages%2FAWS-CDK-RDS-Backup-At-Silbo%2Farchitecture.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  EC2 User Data
&lt;/h3&gt;

&lt;p&gt;The most important aspect of this workflow is contained within the EC2 Instance User Data. Here we need to install any dependencies we have, execute the backup commands, and shut down the instance afterwards.&lt;/p&gt;

&lt;p&gt;For my example here, I created a &lt;a href="https://www.postgresql.org/" rel="noopener noreferrer"&gt;PostgreSQL&lt;/a&gt; AWS RDS instance, with the default version at the time of writing being &lt;code&gt;PostgreSQL 11.6-R1&lt;/code&gt;. PostgreSQL can export a database using &lt;a href="https://www.postgresql.org/docs/9.1/app-pgdump.html" rel="noopener noreferrer"&gt;pg_dump&lt;/a&gt;. Our first few commands build this from source on an &lt;a href="https://aws.amazon.com/amazon-linux-2/" rel="noopener noreferrer"&gt;Amazon Linux 2&lt;/a&gt; instance:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yum install -y wget tar gcc make
wget https://ftp.postgresql.org/pub/source/v11.6/postgresql-11.6.tar.gz
tar -zxvf postgresql-11.6.tar.gz
cd postgresql-11.6/
./configure --without-readline --without-zlib
make
make install
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We want to configure the username and password for &lt;code&gt;pg_dump&lt;/code&gt;, and PostgreSQL supports reading these from a &lt;a href="https://www.postgresql.org/docs/9.1/libpq-pgpass.html" rel="noopener noreferrer"&gt;Password File&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "hostname:port:database:username:password" &amp;gt; ~/.pgpass
chmod 600 ~/.pgpass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally we want to run &lt;code&gt;pg_dump&lt;/code&gt;, and upload the output to S3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/usr/local/pgsql/bin/pg_dump -h hostname \
           -U username \
           -w \
           -c \
           -f output.pgsql \
           databaseName
S3_KEY=S3BucketName/hostname/$(date "+%Y-%m-%d")-databaseName-backup.tar.gz
tar -cvzf output.tar.gz output.pgsql
aws s3 cp output.tar.gz s3://$S3_KEY --sse AES256
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's all the basic logic we need here - you could add more commands, or change this to use &lt;a href="https://www.postgresql.org/docs/9.2/app-pg-dumpall.html" rel="noopener noreferrer"&gt;pg_dumpall&lt;/a&gt; - whatever is needed!&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS CDK
&lt;/h3&gt;

&lt;p&gt;With the main logic complete, let's utilize the CDK to stand this all up!&lt;/p&gt;

&lt;p&gt;Our first step is to create a bucket where our exports will live:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const rdsBackupsBucket = new s3.Bucket(this, 'rdsBackupsBucket', {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is just a standalone bucket, but you could add replication logic across regions, archiving into Glacier, etc. &lt;/p&gt;

&lt;p&gt;Now that we have an &lt;a href="https://aws.amazon.com/s3/" rel="noopener noreferrer"&gt;Amazon S3&lt;/a&gt; bucket, we need to think about how the EC2 instance will be able to access it. We will want to ensure that the EC2 has an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html" rel="noopener noreferrer"&gt;Instance Profile&lt;/a&gt; with the appropriate permissions. This is broken into 3 parts:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="noopener noreferrer"&gt;IAM Role&lt;/a&gt; that can be used by an EC2 instance&lt;/li&gt;
&lt;li&gt;Create an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html" rel="noopener noreferrer"&gt;IAM Policy&lt;/a&gt; for the role&lt;/li&gt;
&lt;li&gt;Create an &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html" rel="noopener noreferrer"&gt;Instance Profile&lt;/a&gt; and attach the role
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const backupInstanceRole = new iam.Role(this, 'backupInstanceRole', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com')
    });

    backupInstanceRole.addToPolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        resources: [rdsBackupsBucket.bucketArn + '/*'],
        actions: [            
          's3:PutObject',
          's3:PutObjectAcl'
        ]
      })
    );

    new iam.CfnInstanceProfile(
      this,
      'backupInstanceProfile',
      {
        roles: [
          backupInstanceRole.roleName,
        ],
      }
    );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now, the User Data we reviewed above was pretty static - hard-coded values and so on. We want this stack to be more configurable in regards to the database configuration. For the purpose of this simple setup, I'll configure these as parameters within the AWS CDK and pass them as environment variables to the AWS Lambda. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&amp;lt;IMPORTANT SEGUE&amp;gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When dealing with actual environments, these values should be stored in &lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt;. We would then pass the secret name as a parameter via the AWS CDK and AWS Lambda, down into the User Data for the EC2 instance. By granting the EC2 Instance Profile access to the secrets, the instance would retrieve them directly and we wouldn't have usernames and passwords bouncing around.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&amp;lt;/IMPORTANT SEGUE&amp;gt;&lt;/strong&gt;&lt;/p&gt;
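
&lt;p&gt;Purely as an illustration of that segue (the secret's JSON shape and field names here are assumptions, not anything from the repo), the instance could fetch the credentials itself with boto3 and build its own &lt;code&gt;.pgpass&lt;/code&gt; entry:&lt;/p&gt;

```python
import json


def pgpass_entry(secret_string):
    # Turn a Secrets Manager SecretString (JSON) into a .pgpass line:
    # hostname:port:database:username:password
    s = json.loads(secret_string)
    return ":".join([s["host"], str(s["port"]), s["dbname"], s["username"], s["password"]])


def fetch_pgpass(secret_name, region):
    # Runs on the EC2 instance itself; the instance profile would grant
    # secretsmanager:GetSecretValue on this one secret.
    import boto3
    client = boto3.client("secretsmanager", region_name=region)
    resp = client.get_secret_value(SecretId=secret_name)
    return pgpass_entry(resp["SecretString"])
```

&lt;p&gt;Only the secret name would then appear in the CDK context and the Lambda environment.&lt;/p&gt;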

&lt;p&gt;Back to the CDK! With our variables being passed in via the &lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/get_context_var.html" rel="noopener noreferrer"&gt;CDK Context&lt;/a&gt;, we will want to retrieve them and set them as environment variables for the Lambda. Also important: we will need to grant the Lambda the required permissions to launch an EC2 instance!&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const ec2_region = this.node.tryGetContext('ec2_region');
    const ec2_type = this.node.tryGetContext('ec2_type');
    const db_host = this.node.tryGetContext('db_host');
    const db_user = this.node.tryGetContext('db_user');
    const db_pass = this.node.tryGetContext('db_pass');
    const db_database = this.node.tryGetContext('db_database');

    const launchingLambda = new lambda.Function(this, 'Lambda', {
      runtime: lambda.Runtime.PYTHON_3_7,
      handler: 'function.lambda_to_ec2',
      code: lambda.Code.asset('./resources'),
      description: 'Backup Database to S3',
      timeout: core.Duration.seconds(30),
      environment: {
        INSTANCE_REGION: ec2_region,
        INSTANCE_TYPE: ec2_type,
        INSTANCE_ROLE: backupInstanceRole.roleName,
        DB_HOST: db_host,
        DB_USER: db_user,
        DB_PASS: db_pass,
        DB_DATABASE: db_database,
        S3_BUCKET: rdsBackupsBucket.bucketName
      }
    });

    launchingLambda.addToRolePolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        resources: ['*'],
        actions: ['ec2:*']
      })
    );

    launchingLambda.addToRolePolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        resources: [backupInstanceRole.roleArn],
        actions: ['iam:PassRole']
      })
    );
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
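
&lt;p&gt;The launcher function itself (&lt;code&gt;function.lambda_to_ec2&lt;/code&gt;) isn't shown in this post. As a rough sketch under stated assumptions - the AMI id is a placeholder, and the user-data assembly and instance-profile naming are my guesses, not the repo's actual code - it might look something like this:&lt;/p&gt;

```python
import os


def build_user_data(db_host, db_user, db_pass, db_database, s3_bucket):
    # Render the boot script from the EC2 User Data section above,
    # plus a final shutdown so the instance doesn't linger.
    s3_key = s3_bucket + "/" + db_host + '/$(date "+%Y-%m-%d")-' + db_database + "-backup.tar.gz"
    return "\n".join([
        "#!/bin/bash",
        'echo "{0}:5432:{1}:{2}:{3}" > ~/.pgpass'.format(db_host, db_database, db_user, db_pass),
        "chmod 600 ~/.pgpass",
        "/usr/local/pgsql/bin/pg_dump -h {0} -U {1} -w -c -f output.pgsql {2}".format(db_host, db_user, db_database),
        "tar -cvzf output.tar.gz output.pgsql",
        "aws s3 cp output.tar.gz s3://" + s3_key + " --sse AES256",
        "shutdown -h now",
    ])


def lambda_to_ec2(event, context):
    import boto3  # bundled with the Lambda Python runtime
    ec2 = boto3.client("ec2", region_name=os.environ["INSTANCE_REGION"])
    ec2.run_instances(
        ImageId="ami-0abcdef1234567890",  # hypothetical Amazon Linux 2 AMI id
        InstanceType=os.environ["INSTANCE_TYPE"],
        MinCount=1,
        MaxCount=1,
        # terminate (rather than stop) when the script calls shutdown
        InstanceInitiatedShutdownBehavior="terminate",
        # instance profile assumed to share the role's name
        IamInstanceProfile={"Name": os.environ["INSTANCE_ROLE"]},
        UserData=build_user_data(
            os.environ["DB_HOST"],
            os.environ["DB_USER"],
            os.environ["DB_PASS"],
            os.environ["DB_DATABASE"],
            os.environ["S3_BUCKET"],
        ),
    )
```

&lt;p&gt;The &lt;code&gt;shutdown -h now&lt;/code&gt; at the end, combined with the terminate-on-shutdown behavior, is what keeps the instance from running longer than the backup itself.&lt;/p&gt;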



&lt;p&gt;Finally, we want to schedule our Lambda. In a real environment, a lot of consideration will need to go into the frequency of your backups. What level of reliability do you require? Have you worked out your &lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/recovery-time-objective-rto-and-recovery-point-objective-rpo.html" rel="noopener noreferrer"&gt;Disaster Recovery strategy&lt;/a&gt; and defined your RTOs and RPOs (Recovery Time and Recovery Point Objectives)? &lt;/p&gt;

&lt;p&gt;Again, to make this configurable, let's set the cron settings via our CDK Context, pulling out the required values and passing them into the CloudWatch Event Rule:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    const lambdaTarget = new eventstargets.LambdaFunction(launchingLambda);

    const cron_minute = this.node.tryGetContext('cron_minute');
    const cron_hour = this.node.tryGetContext('cron_hour');
    const cron_day = this.node.tryGetContext('cron_day');
    const cron_month = this.node.tryGetContext('cron_month');
    const cron_year = this.node.tryGetContext('cron_year');

    new events.Rule(this, 'ScheduleRule', {
      schedule: events.Schedule.cron({ 
        minute: cron_minute,
        hour: cron_hour,
        day: cron_day,
        month: cron_month,
        year: cron_year
      }),
      targets: [lambdaTarget],
    });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can then configure all these settings within &lt;code&gt;cdk.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "app": "npx ts-node bin/aws-rds-nightly-backup.ts",
  "context": {
    "@aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "@aws-cdk/core:stackRelativeExports": "true",
    "ec2_region": "us-east-1",
    "ec2_type": "t2.large",
    "db_host": "database-1.ct5iuxjgrvl6.us-east-1.rds.amazonaws.com",
    "db_user": "postgres",
    "db_pass": "exampledb",
    "db_database": "testdb",
    "cron_minute": "0",
    "cron_hour": "8",
    "cron_day": "1",
    "cron_month": "*",
    "cron_year": "*"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enjoy some happy backups!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>database</category>
    </item>
    <item>
      <title>KubeCon EU Day 1</title>
      <dc:creator>John Doyle</dc:creator>
      <pubDate>Thu, 20 Aug 2020 01:39:40 +0000</pubDate>
      <link>https://dev.to/art_wolf/kubecon-eu-day-1-3nfi</link>
      <guid>https://dev.to/art_wolf/kubecon-eu-day-1-3nfi</guid>
      <description>&lt;p&gt;I really enjoyed Day 1 of &lt;a href="https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/"&gt;KubeCon + CloudNativeCon Europe 2020 - Virtual&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;Having a convention go fully virtual gives it a very different feel from any convention I'd previously attended - though this is the first KubeCon I've attended! That said, KubeCon &amp;amp; CNCF have always been wonderful about putting all their sessions up on YouTube after the convention.&lt;/p&gt;

&lt;p&gt;The virtual aspect was very interesting. The sessions themselves had a Q&amp;amp;A feature, which was all fine, though I really liked that they encouraged folks to chat with the presenters in the CNCF Slack. I would rarely ask questions at a convention, or network with folks, so this felt far more accessible - jump in when you want, lurk the rest of the time! Now if only I could get all the usual swag from the expo hall ;)&lt;/p&gt;

&lt;h2&gt;
  
  
  Attended Sessions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  KubeCon + CloudNativeCon 101: A Beginner's Guide to The Conference
&lt;/h3&gt;

&lt;p&gt;This was the first talk I checked out, presented by &lt;a href="https://twitter.com/karenhchu"&gt;Karen Chu&lt;/a&gt; and &lt;a href="https://twitter.com/michellenoorali"&gt;Michelle Noorali&lt;/a&gt;. It was far better than I expected, as there was a lot of terminology I wasn't aware of - especially around the CNCF, as there were sessions aimed at SIGs (Special Interest Groups) and the TOC (Technical Oversight Committee). Details about the expo hall, and an introduction to the &lt;a href="https://blogs.vmware.com/opensource/2018/05/15/hallway-track-open-source-conferences/"&gt;Hallway Track&lt;/a&gt; - which sounded like Lightning Talks but more general? The talk itself was also an interesting combo of pre-recording and then switching over to live for the Q&amp;amp;A!&lt;/p&gt;

&lt;h3&gt;
  
  
  From Minikube to Production, Never Miss a Step in Getting Your K8s Ready
&lt;/h3&gt;

&lt;p&gt;Heh, this was an interesting talk, but as I've only started on my Kubernetes journey, it became apparent that it was a bit beyond my experience. Though I could definitely relate as &lt;a href="https://twitter.com/LostInBrittany"&gt;Horacio Gonzalez&lt;/a&gt; talked about testing Kubernetes locally with &lt;a href="https://kubernetes.io/docs/tasks/tools/install-minikube/"&gt;Minikube&lt;/a&gt; etc.: your laptop will be slow... it's going to get hot... but it's going to work :)&lt;/p&gt;

&lt;p&gt;It was great to see Horacio and &lt;a href="https://twitter.com/0xd33d33"&gt;Kevin Georges&lt;/a&gt; talk through all the steps to go from Minikube to Production.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Locally Running&lt;/li&gt;
&lt;li&gt;Security Team sets up ingress rules&lt;/li&gt;
&lt;li&gt;App breaks, and you figure out a solution&lt;/li&gt;
&lt;li&gt;Iterate over ..and over.. and over..&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This quickly got DEEP into the networking complexity! Bookmarked it to return to when I'll hopefully have a better idea of what's going on...&lt;/p&gt;

&lt;h3&gt;
  
  
  Tutorial: Hands-On Intro to Cloud-Native CI/CD with Tekton
&lt;/h3&gt;

&lt;p&gt;Welp, double down on Minikube, this time with a CI/CD pipeline using &lt;a href="https://tekton.dev/"&gt;Tekton&lt;/a&gt;, presented by &lt;a href="https://twitter.com/joel__lord"&gt;Joel Lord&lt;/a&gt; and &lt;a href="https://twitter.com/jankleinert"&gt;Jan Kleinert&lt;/a&gt;. I tend to like understanding a pipeline so that I can see where products come in or patterns can be used. The pipelines themselves were pretty straightforward - there are steps, with inputs and outputs. This talk came with &lt;a href="https://github.com/joellord/handson-tekton"&gt;actual code&lt;/a&gt;, which was great! Let me read the code while the talk is in the background.&lt;/p&gt;

&lt;p&gt;I was hoping to learn more about performing the actual deploys, though this really focused on building and pushing the images. The &lt;a href="https://hub-preview.tekton.dev/"&gt;Tekton Hub&lt;/a&gt; has a ton of resources with shared tasks and pipelines, which will be really helpful!&lt;/p&gt;

&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;A solid start to the first day :) I'll have to check out more of the expo hall, review my notes some more, etc... but I'm enjoying myself!&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
  </channel>
</rss>
