<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Tom Milner</title>
    <description>The latest articles on DEV Community by Tom Milner (@tom_millner).</description>
    <link>https://dev.to/tom_millner</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F311858%2F74d52a71-2385-4409-84c5-9becea50a557.jpg</url>
      <title>DEV Community: Tom Milner</title>
      <link>https://dev.to/tom_millner</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/tom_millner"/>
    <language>en</language>
    <item>
      <title>Passing the AWS Certified DevOps Engineer - Professional exam</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Tue, 13 Feb 2024 09:30:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/passing-the-aws-certified-devops-engineer-professional-exam-16ca</link>
      <guid>https://dev.to/aws-builders/passing-the-aws-certified-devops-engineer-professional-exam-16ca</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I recently passed the AWS Certified DevOps Engineer - Professional exam and I've put this post together to outline how I prepared and to share the notes I took along the way. My primary motivation for taking the exam was that my AWS Certified Developer - Associate certification was expiring in February 2024. Passing the DevOps Engineer - Professional exam would renew it, along with my AWS Certified SysOps Administrator - Associate certification, for another 3 years. I like this approach to certifications as it allows you to build on your experience and training as you progress on your AWS journey. Check out a previous article that I wrote on this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/which-aws-certification-exam-should-i-sit-hah"&gt;https://dev.to/aws-builders/which-aws-certification-exam-should-i-sit-hah&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In effect, passing this one exam would gain me one new certification and renew my Developer - Associate, SysOps Administrator - Associate and Cloud Practitioner certifications for another 3 years. &lt;/p&gt;

&lt;h1&gt;
  
  
  Where to begin
&lt;/h1&gt;

&lt;p&gt;Every AWS certification has a page on the AWS certification website and I always find it the best place to start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/certification/certified-devops-engineer-professional/" rel="noopener noreferrer"&gt;https://aws.amazon.com/certification/certified-devops-engineer-professional/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You will find details here about the exam with a study guide and sample questions. You'll also find links to the FAQs for the services and white-papers to read. You might be tempted to ignore these but they are worth reading.&lt;/p&gt;

&lt;p&gt;From there, you can attend the free &lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/16352/exam-prep-standard-course-aws-certified-devops-engineer-professional-dop-c02-english" rel="noopener noreferrer"&gt;Exam Prep Standard Course: AWS Certified DevOps Engineer - Professional (DOP-C02 - English)&lt;/a&gt; provided by AWS on their skillbuilder site.&lt;/p&gt;

&lt;p&gt;This course is not a complete guide to passing, but it is useful in giving you an outline of where you should focus. The exam breaks down into the following six domains, and the exam readiness course goes through the services that you need to study in each domain.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;% of Exam&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.0  SDLC Automation&lt;/td&gt;
&lt;td&gt;22%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2.0  Configuration Management and IaC&lt;/td&gt;
&lt;td&gt;17%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3.0  Resilient Cloud Solutions&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4.0  Monitoring and Logging&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5.0  Incident and Event Response&lt;/td&gt;
&lt;td&gt;14%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6.0  Security and Compliance&lt;/td&gt;
&lt;td&gt;17%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Most of the links posted below come from this course.&lt;/p&gt;

&lt;p&gt;The certification page also links to &lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/14673/aws-certified-devops-engineer-professional-official-practice-question-set-dop-c02-english" rel="noopener noreferrer"&gt;sample exam questions&lt;/a&gt;. Getting some practice with these questions will really help with your preparation for the exam.&lt;br&gt;
And the &lt;a href="https://aws.amazon.com/certification/certified-devops-engineer-professional/" rel="noopener noreferrer"&gt;AWS Certified DevOps Engineer - Professional Exam Guide&lt;/a&gt; also recommends these two courses. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/5741/advanced-testing-practices-using-aws-devops-tools" rel="noopener noreferrer"&gt;Advanced Testing Practices Using AWS DevOps Tools&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/113/advanced-cloudformation-macros" rel="noopener noreferrer"&gt;Advanced CloudFormation: Macros&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I found the first one really good if you are looking for an overview of the different AWS CI/CD services with some hands on exercises at the end.&lt;/p&gt;

&lt;p&gt;And if you are lucky enough to have a subscription to Skill Builder, this practice exam is really good.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/14810/aws-certified-devops-engineer-professional-official-practice-exam-dop-c02-english" rel="noopener noreferrer"&gt;Exam Prep Official Practice Exam: AWS Certified DevOps Engineer - Professional (DOP-C02 - English)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's 75 questions with a 3-hour timer to mimic the real exam. I sat it 2 weeks before my exam date and it gave a good indication of where I needed to focus more time. You can review your answers, and each answer links out to AWS resources where you can dive into what you got wrong.&lt;/p&gt;

&lt;p&gt;What follows are my notes and links to resources provided by AWS to help you pass. It is not a definitive guide to passing and should only be considered an aid. &lt;br&gt;
The most important parts of what follows are the links, which come recommended in AWS training materials. I believe that if you have relevant hands-on experience with AWS services and use the links below to go deep into the theory, you should have enough to pass. &lt;br&gt;
Everything outside the links is my own rough notes that I took as I studied. I have tidied them up as well as I can, but apologies if you find them confusing.&lt;br&gt;
My article covering the &lt;a href="https://aws.amazon.com/certification/certified-sysops-admin-associate/" rel="noopener noreferrer"&gt;AWS Certified SysOps Administrator - Associate&lt;/a&gt; exam is still very relevant to a lot of areas covered in this professional exam.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/how-i-passed-the-aws-certified-sysops-exam-3if"&gt;https://dev.to/aws-builders/how-i-passed-the-aws-certified-sysops-exam-3if&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 1 - SDLC Automation (22%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Services in scope
&lt;/h2&gt;

&lt;p&gt;AWS CodePipeline - understand each stage&lt;br&gt;
AWS CodeBuild&lt;br&gt;
AWS CodeDeploy&lt;br&gt;
AWS CodeCommit&lt;br&gt;
AWS CodeArtifact&lt;br&gt;
Amazon S3&lt;br&gt;
Amazon Elastic Container Registry (Amazon ECR)&lt;br&gt;
AWS Lambda&lt;br&gt;
EC2 Image Builder&lt;br&gt;
AWS CodeStar&lt;br&gt;
AWS Secrets Manager&lt;br&gt;
AWS Systems Manager Parameter Store&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 1.1: Implement CI/CD pipelines
&lt;/h2&gt;

&lt;p&gt;This domain is all about getting a good handle on the AWS CI/CD offerings. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5amz4cxnozu5qm0f6zz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5amz4cxnozu5qm0f6zz1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Source
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codecommit/" rel="noopener noreferrer"&gt;AWS CodeCommit&lt;/a&gt; is AWS's hosted GitHub in the cloud. It is integrated with IAM and you should understand how different levels of permissions that can be granted to users of the service. Know what the AWSCodeCommitPowerUser policy is for.&lt;br&gt;
You can receive notifications via SNS to invoke downstream actions in AWS CodeBuild, AWS CodePipeline or other competing services.&lt;/p&gt;

&lt;p&gt;S3 and ECR can also be sources for CodePipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Build &amp;amp; Test
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codebuild" rel="noopener noreferrer"&gt;AWS CodeBuild&lt;/a&gt; is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It is important to point out that you can use CodeBuild for running tests. Test results can be captured via report groups which can be accessed via the CodeBuild API or the AWS CodeBuild console. You can also export test results to S3.&lt;br&gt;
If there is any build output, the build environment uploads its output to an S3 bucket.&lt;br&gt;
CodeBuild can integrate with CodeCommit, GitHub, Bitbucket, GitHub Enterprise and S3 as sources.&lt;/p&gt;

&lt;p&gt;CodeBuild needs a build project to define how to gather application dependencies, run tests, and build the output to be used in preparing the deployment. A project includes information such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;source code location&lt;/li&gt;
&lt;li&gt;build environment to use&lt;/li&gt;
&lt;li&gt;build commands to run&lt;/li&gt;
&lt;li&gt;storage of build output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A build environment is the combination of operating system, programming language runtime, and tools used by CodeBuild to run a build. &lt;br&gt;
You use a buildspec file to specify build commands. The buildspec is a YAML-formatted file that includes a collection of build commands and related settings that CodeBuild uses to run a build.&lt;br&gt;
You can include a buildspec as part of the source code or define one when you create a build project.&lt;br&gt;
When included as part of the source code, the buildspec file is named buildspec.yml and is located in the root of the source directory. It can be overridden with the create-project or update-project commands. There are several sections in a buildspec file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version&lt;/li&gt;
&lt;li&gt;Phases

&lt;ul&gt;
&lt;li&gt;Install&lt;/li&gt;
&lt;li&gt;Pre-Build&lt;/li&gt;
&lt;li&gt;Build&lt;/li&gt;
&lt;li&gt;Post-build&lt;/li&gt;
&lt;li&gt;Finally blocks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Artifacts&lt;/li&gt;

&lt;li&gt;Reports &lt;/li&gt;

&lt;li&gt;Cache&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Here is an example buildspec.yml file:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

env:
  variables:
    JAVA_HOME: "/usr/lib/jvm/java-8-openjdk-amd64"
  parameter-store:
    LOGIN_PASSWORD: /CodeBuild/dockerLoginPassword

phases:
  install:
    commands:
      - echo Entered the install phase...
      - apt-get update -y
      - apt-get install -y maven
    finally:
      - echo This always runs even if the update or install command fails 
  pre_build:
    commands:
      - echo Entered the pre_build phase...
      - docker login -u User -p $LOGIN_PASSWORD
    finally:
      - echo This always runs even if the login command fails 
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn install
    finally:
      - echo This always runs even if the install command fails
  post_build:
    commands:
      - echo Entered the post_build phase...
      - echo Build completed on `date`

reports:
  arn:aws:codebuild:your-region:your-aws-account-id:report-group/report-group-name-1:
    files:
      - "**/*"
    base-directory: 'target/tests/reports'
    discard-paths: no
  reportGroupCucumberJson:
    files:
      - 'cucumber/target/cucumber-tests.xml'
    discard-paths: yes
    file-format: CucumberJson # default is JunitXml
artifacts:
  files:
    - target/messageUtil-1.0.jar
  discard-paths: yes
  secondary-artifacts:
    artifact1:
      files:
        - target/artifact-1.0.jar
      discard-paths: yes
    artifact2:
      files:
        - target/artifact-2.0.jar
      discard-paths: yes
cache:
  paths:
    - '/root/.m2/**/*'    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can use CloudWatch to monitor and troubleshoot progress of your CodeBuild project. And EventBridge and SNS for notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/concepts.html" rel="noopener noreferrer"&gt;CodeBuild concepts&lt;/a&gt; is a good resource to find out more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1760xw3qgnf0sl8x1muf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1760xw3qgnf0sl8x1muf.png" alt="CodeBuild Concepts"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codeartifact" rel="noopener noreferrer"&gt;AWS CodeArtifact&lt;/a&gt; is a fully managed artifact repository service that can be used by organizations to securely store, publish, and share software packages used in their software development process. CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories so developers have access to the latest versions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codedeploy/" rel="noopener noreferrer"&gt;AWS CodeDeploy&lt;/a&gt; is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. A compute platform is a platform on which CodeDeploy deploys an application. There are three compute platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2/On-Premises&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;Amazon ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CodeDeploy runs deployments under an &lt;strong&gt;application&lt;/strong&gt;, which functions as a container to ensure the correct combination of revision, deployment configuration, and deployment group is referenced during a deployment.&lt;br&gt;
A &lt;strong&gt;deployment configuration&lt;/strong&gt; is a set of deployment rules and deployment success and failure conditions used by CodeDeploy during a deployment. The predefined traffic-shifting configurations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Canary10Percent30Minutes&lt;/li&gt;
&lt;li&gt;Canary10Percent5Minutes&lt;/li&gt;
&lt;li&gt;Canary10Percent10Minutes&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery10Minutes&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery1Minute&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery2Minutes&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery3Minutes&lt;/li&gt;
&lt;li&gt;AllAtOnce&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A &lt;strong&gt;revision&lt;/strong&gt; is a version of your application.&lt;/p&gt;

&lt;p&gt;The storage location for files required by CodeDeploy is called a &lt;strong&gt;repository&lt;/strong&gt;. Use of a repository depends on which compute platform your deployment uses. You can use S3 for all three compute platforms, and GitHub or Bitbucket for EC2/On-Premises deployments.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;deployment group&lt;/strong&gt; is a set of individual instances: EC2 or on-premises instances in the case of the EC2/On-Premises compute platform, or an ECS cluster for ECS deployments. &lt;br&gt;
In an Amazon ECS deployment, a deployment group specifies the Amazon ECS service, load balancer, optional test listener, and two target groups.&lt;br&gt;
In an AWS Lambda deployment, a deployment group defines a set of CodeDeploy configurations for future deployments of an AWS Lambda function.&lt;br&gt;
In an EC2/On-Premises deployment, a deployment group is a set of individual instances targeted for a deployment. A deployment group contains individually tagged instances, Amazon EC2 instances in Amazon EC2 Auto Scaling groups, or both. For both EC2 and on-premises instances, the CodeDeploy agent needs to be installed and the instances tagged.&lt;br&gt;
For EC2 instances, an IAM role with the correct access permissions must be attached to the instance.&lt;br&gt;
For on-premises instances, you can use an IAM role or an IAM user to register the instance with CodeDeploy, with a role using AWS Security Token Service (AWS STS) being the preferred method. Once access is in place and the AWS CLI is installed, you register the instance with CodeDeploy.&lt;br&gt;
It is an involved process, so it is worth reading over the official AWS instructions:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/on-premises-instances-register.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/on-premises-instances-register.html&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Monitoring deployments in CodeDeploy
&lt;/h4&gt;

&lt;p&gt;You can monitor CodeDeploy deployments using the following CloudWatch tools: Amazon CloudWatch Events, CloudWatch alarms, and Amazon CloudWatch Logs.&lt;br&gt;
You can create a CloudWatch alarm for an instance or Amazon EC2 Auto Scaling group you are using in your CodeDeploy operations. An alarm watches a single metric over a time period you specify and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods.&lt;br&gt;
You can use Amazon EventBridge to detect and react to changes in the state of an instance or a deployment (an "event") in your CodeDeploy operations. Then, based on rules you create, EventBridge will invoke one or more target actions when a deployment or instance enters the state you specify in a rule.&lt;br&gt;
CodeDeploy is integrated with CloudTrail, a service that captures API calls made by or on behalf of CodeDeploy in your AWS account and delivers the log files to an Amazon S3 bucket you specify. CloudTrail captures API calls from the CodeDeploy console, from CodeDeploy commands through the AWS CLI, or from the CodeDeploy APIs directly. &lt;br&gt;
You can add triggers to a CodeDeploy deployment group to receive notifications about events related to deployments or instances in that deployment group. These notifications are sent to recipients who are subscribed to an Amazon SNS topic you have made part of the trigger's action.&lt;/p&gt;

&lt;p&gt;CodeDeploy rolling deployments for EC2/On-Premises use these deployment configurations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OneAtATime&lt;/li&gt;
&lt;li&gt;HalfAtATime&lt;/li&gt;
&lt;li&gt;Custom&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CodeDeploy can deploy your application on Amazon EC2 instances, Amazon Elastic Container Service (Amazon ECS) containers, Lambda functions, and even an on-premises environment.&lt;br&gt;
When using CodeDeploy, there are two types of deployments available to you: in-place and blue/green. All Lambda and Amazon ECS deployments are blue/green. An Amazon EC2 or on-premises deployment can be in-place or blue/green. &lt;br&gt;
When using a blue/green deployment, you have several options for shifting traffic to the new green environment. &lt;/p&gt;

&lt;p&gt;Canary - You can choose from predefined canary options that specify the percentage of traffic shifted to your updated application version in the first increment. Then the interval, specified in minutes, indicates when the remaining traffic is shifted in the second increment.&lt;br&gt;
Linear - Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.&lt;br&gt;
All-at-once - All traffic is shifted from the original environment to the updated environment at once.&lt;/p&gt;

&lt;p&gt;An AppSpec file is a YAML or JSON file used in CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.&lt;/p&gt;
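&lt;p&gt;To make the lifecycle-event idea concrete, here is a rough sketch of an AppSpec file for a Lambda deployment. The function name, version numbers and hook function ARNs are all made up for illustration.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative Lambda AppSpec sketch - names and ARNs are placeholders
version: 0.0
Resources:
  - myLambdaFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: myLambdaFunction
        Alias: live              # the alias that traffic is shifted on
        CurrentVersion: "1"
        TargetVersion: "2"
Hooks:
  # Lambda functions that validate the deployment before/after traffic shifts
  - BeforeAllowTraffic: "arn:aws:lambda:us-east-1:111122223333:function:preTrafficCheck"
  - AfterAllowTraffic: "arn:aws:lambda:us-east-1:111122223333:function:postTrafficCheck"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;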

&lt;p&gt;Canary deployments are a type of segmented deployment. You deploy a small part of your application (called the canary) with the rest of the application following later. What makes a canary deployment different is you test your canary with live production traffic. With canary deployments in CodeDeploy, traffic is shifted in two increments. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first shifts some traffic to the canary.&lt;/li&gt;
&lt;li&gt;The next shifts all traffic to the new application at the end of the selected interval.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Expect one or two questions on the Elastic Beanstalk deployment options, which are covered in the next section.&lt;/p&gt;

&lt;h4&gt;
  
  
  Elastic Beanstalk Deployments
&lt;/h4&gt;

&lt;p&gt;It is worth understanding the different Elastic Beanstalk deployment options as per this table:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Orchestration
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codepipeline/" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt; is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n8fp53zlhhhfv8pv944.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n8fp53zlhhhfv8pv944.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Concepts
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Pipelines
&lt;/h5&gt;

&lt;p&gt;A pipeline is a workflow construct that describes how software changes go through a release process. Each pipeline is made up of a series of stages.&lt;/p&gt;

&lt;h5&gt;
  
  
  Stages
&lt;/h5&gt;

&lt;p&gt;A stage is a logical unit you can use to isolate an environment and to limit the number of concurrent changes in that environment. Each stage contains actions that are performed on the application artifacts. Your source code is an example of an artifact. A stage might be a build stage, where the source code is built and tests are run. It can also be a deployment stage, where code is deployed to runtime environments. Each stage is made up of a series of serial or parallel actions.&lt;/p&gt;

&lt;h5&gt;
  
  
  Actions
&lt;/h5&gt;

&lt;p&gt;An action is a set of operations performed on application code and configured so that the actions run in the pipeline at a specified point. Valid CodePipeline action types are source, build, test, deploy, approval, and invoke. Each action can be fulfilled by multiple different providers. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt; can be Amazon S3, ECR, CodeCommit or other version control sources like Bitbucket Cloud, GitHub, GitHub Enterprise Server, or GitLab.com actions.&lt;br&gt;
&lt;strong&gt;Build&lt;/strong&gt; can be CodeBuild, Custom CloudBees, Custom Jenkins or Custom TeamCity.&lt;br&gt;
&lt;strong&gt;Test&lt;/strong&gt; can be CodeBuild, AWS Device Farm, ThirdParty GhostInspector, Custom Jenkins, ThirdParty Micro Focus StormRunner Load or ThirdParty Nouvola.&lt;br&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; can be S3, CloudFormation, CloudFormation StackSets, CodeDeploy, ECS, ECS (Blue/Green), Elastic Beanstalk, AppConfig, OpsWorks, Service Catalog, Alexa or Custom XebiaLabs.&lt;br&gt;
&lt;strong&gt;Approval&lt;/strong&gt; can be Manual.&lt;br&gt;
&lt;strong&gt;Invoke&lt;/strong&gt; can be AWS Lambda or AWS Step Functions.&lt;/p&gt;

&lt;p&gt;A crucial point to note here is that CodePipeline can be used without any of the other 3 aforementioned CI/CD services, CodeCommit, CodeBuild or CodeDeploy. Each can be an action provider in one or more stages but you can build a pipeline without any of them. A good example would be CloudFormation. You can use S3 as a source action and CloudFormation as a Deploy action.&lt;br&gt;
Another would be ECR to ECS. ECR can be a source action and ECS a deploy action. It may use some of the 3 services in the background but you don't need to be aware of it.&lt;/p&gt;
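&lt;p&gt;To illustrate a pipeline that uses none of the other three services, here is a trimmed CloudFormation sketch of an S3-source, CloudFormation-deploy pipeline. The bucket names, role ARNs and stack name are placeholders.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Trimmed CloudFormation sketch - bucket, role and stack names are placeholders
Pipeline:
  Type: AWS::CodePipeline::Pipeline
  Properties:
    RoleArn: arn:aws:iam::111122223333:role/PipelineRole
    ArtifactStore:
      Type: S3
      Location: my-artifact-bucket
    Stages:
      - Name: Source
        Actions:
          - Name: TemplateSource
            ActionTypeId:
              Category: Source
              Owner: AWS
              Provider: S3
              Version: "1"
            Configuration:
              S3Bucket: my-source-bucket
              S3ObjectKey: template-package.zip
            OutputArtifacts:
              - Name: SourceOutput
      - Name: Deploy
        Actions:
          - Name: DeployStack
            ActionTypeId:
              Category: Deploy
              Owner: AWS
              Provider: CloudFormation
              Version: "1"
            Configuration:
              ActionMode: CREATE_UPDATE
              StackName: my-app-stack
              TemplatePath: SourceOutput::template.yml
              RoleArn: arn:aws:iam::111122223333:role/CfnDeployRole
            InputArtifacts:
              - Name: SourceOutput
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;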

&lt;h4&gt;
  
  
  Monitoring deployments in CodePipeline
&lt;/h4&gt;

&lt;p&gt;EventBridge event bus events — You can monitor CodePipeline events in EventBridge, which detects changes in your pipeline, stage, or action execution status.&lt;br&gt;
Notifications for pipeline events in the Developer Tools console — You can monitor CodePipeline events with notifications that you set up in the console and then create an Amazon Simple Notification Service topic and subscription for.&lt;br&gt;
AWS CloudTrail — Use CloudTrail to capture API calls made by or on behalf of CodePipeline in your AWS account and deliver the log files to an Amazon S3 bucket.&lt;br&gt;
Note: no integration with CloudWatch Logs, Metrics or Alarms.&lt;/p&gt;
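&lt;p&gt;As a sketch of the EventBridge integration, here is a CloudFormation rule that notifies an SNS topic when any pipeline execution fails. The topic ARN is a placeholder.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative EventBridge rule - the SNS topic ARN is a placeholder
PipelineFailureRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source:
        - aws.codepipeline
      detail-type:
        - CodePipeline Pipeline Execution State Change
      detail:
        state:
          - FAILED
    Targets:
      - Arn: arn:aws:sns:us-east-1:111122223333:pipeline-alerts
        Id: PipelineAlertsTopic
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;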

&lt;h2&gt;
  
  
  Task Statement 1.2: Integrate automated testing into CI/CD pipelines
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn33zi242ziaxjhagr6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn33zi242ziaxjhagr6w.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll need to know when to run different types of tests. For example, you should run your unit tests before you open a PR. This is where a service like &lt;a href="https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html" rel="noopener noreferrer"&gt;Amazon CodeGuru Reviewer&lt;/a&gt; can be integrated. CodeGuru Reviewer provides automated code reviews based on static code analysis and works with Java and Python code. It also integrates with AWS Secrets Manager to use a secrets detector that finds unprotected secrets in your code.&lt;/p&gt;

&lt;p&gt;Do not confuse with &lt;a href="https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html" rel="noopener noreferrer"&gt;Amazon CodeGuru Profiler&lt;/a&gt; which provides visibility into and recommendations about application performance during runtime.&lt;/p&gt;

&lt;p&gt;CodeBuild can also be used to run tests and save the results in the location specified in the reports section of the buildspec.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 1.3: Build and manage artifacts
&lt;/h2&gt;

&lt;p&gt;Use CodeArtifact, S3 and ECR as artifact repositories.&lt;/p&gt;

&lt;p&gt;Know how to automate EC2 instances and container image build processes. &lt;/p&gt;

&lt;p&gt;EC2 Image Builder simplifies the building, testing, and deployment of virtual machine and container images for use on AWS or on-premises.&lt;br&gt;
It is a fully managed service to automate the creation, management and deployment of customized, secure and up-to-date server images. It also integrates with AWS Resource Access Manager (RAM) and AWS Organizations to share images within an account or organization. Understand these concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;managed image&lt;/li&gt;
&lt;li&gt;image recipe&lt;/li&gt;
&lt;li&gt;container recipe&lt;/li&gt;
&lt;li&gt;base image&lt;/li&gt;
&lt;li&gt;component document&lt;/li&gt;
&lt;li&gt;runtime stages&lt;/li&gt;
&lt;li&gt;configuration phases&lt;/li&gt;
&lt;li&gt;build and test phases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EC2 Image Builder in conjunction with AWS VM Import/Export (VMIE) allows you to create and maintain golden images for Amazon EC2 (AMI) as well as on-premises VM formats (VHDX, VMDK, and OVF).&lt;br&gt;
An Image Builder recipe is a file that represents the final state of the images produced by automation pipelines and enables you to deterministically repeat builds. Recipes can be shared, forked, and edited outside the Image Builder UI. You can use your recipes with your version control software to maintain version-controlled recipes that you can use to share and track changes.&lt;br&gt;
Images can be shared as AMIs and container images can be shared via ECR.&lt;/p&gt;
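&lt;p&gt;To make the component document concept concrete, here is an illustrative component showing the build, validate, and test phases. The component name, package and commands are made up.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative Image Builder component document - names and commands are made up
name: install-and-verify-nginx
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: InstallNginx
        action: ExecuteBash
        inputs:
          commands:
            - sudo yum install -y nginx
  - name: validate
    steps:
      - name: CheckNginxInstalled
        action: ExecuteBash
        inputs:
          commands:
            - rpm -q nginx
  - name: test
    steps:
      - name: SmokeTest
        action: ExecuteBash
        inputs:
          commands:
            - sudo systemctl start nginx
            - curl -sf http://localhost/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;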

&lt;h2&gt;
  
  
  Task Statement 1.4: Implement deployment strategies for instance, container, and serverless environments.
&lt;/h2&gt;

&lt;p&gt;I already covered a lot of this in the section on CodeDeploy but no harm to repeat myself.&lt;/p&gt;

&lt;p&gt;Understand these deployment strategies in theory. These terms, or some variant of them, recur across services. Understand which are mutable vs immutable deployments.&lt;/p&gt;

&lt;p&gt;Blue/green&lt;br&gt;
Canary&lt;br&gt;
Immutable rolling&lt;br&gt;
Rolling with additional batch&lt;br&gt;
In-Place&lt;br&gt;
Linear&lt;br&gt;
All-at-Once&lt;/p&gt;

&lt;p&gt;Deployments for ECS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rolling update&lt;/li&gt;
&lt;li&gt;Blue/green deployment with CodeDeploy&lt;/li&gt;
&lt;li&gt;External deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PreTraffic and PostTraffic hooks
&lt;/h3&gt;

&lt;p&gt;Understand how you can use lifecycle hooks during the deployments. These can be used to stop deployments and trigger a rollback. There are different lifecycle hooks depending on the service being deployed. EC2 vs Lambda vs ECS.&lt;/p&gt;

&lt;p&gt;Look at the BeforeAllowTraffic hook in the appspec.yml file.&lt;/p&gt;
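&lt;p&gt;For EC2/On-Premises deployments, the lifecycle hooks run scripts on the instances themselves. Here is a rough sketch of an appspec.yml; the script paths and timeouts are hypothetical. In an EC2 blue/green deployment behind a load balancer, the BeforeAllowTraffic and AfterAllowTraffic hooks are also available.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch of an EC2/On-Premises appspec.yml - script paths are hypothetical
version: 0.0
os: linux
files:
  - source: /build-output
    destination: /var/www/app
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    - location: scripts/health_check.sh
      timeout: 120
      runas: root
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;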

&lt;h3&gt;
  
  
  Deployment strategies for serverless
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CloudFormation&lt;/li&gt;
&lt;li&gt;AWS Serverless Application Model (SAM)&lt;/li&gt;
&lt;li&gt;All-at-once&lt;/li&gt;
&lt;li&gt;Blue/green&lt;/li&gt;
&lt;li&gt;Canary&lt;/li&gt;
&lt;li&gt;Linear&lt;/li&gt;
&lt;li&gt;Lambda versions and aliases&lt;/li&gt;
&lt;li&gt;CodeDeploy&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Dive deeper into Lambda versions and aliases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Every Lambda function can have a number of versions and aliases associated with it&lt;/li&gt;
&lt;li&gt;Versions are immutable snapshots of a function, including code and configuration&lt;/li&gt;
&lt;li&gt;Versions are used most effectively with an alias&lt;/li&gt;
&lt;li&gt;An alias is a pointer to a version&lt;/li&gt;
&lt;li&gt;An alias has a name and an ARN&lt;/li&gt;
&lt;/ul&gt;
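&lt;p&gt;To make this concrete, here is a hedged AWS SAM sketch (the resource name, handler, and runtime are illustrative) that publishes a new version on every deploy and shifts the alias to it gradually via CodeDeploy:&lt;/p&gt;

```yaml
# Illustrative SAM resource: AutoPublishAlias creates a new version on each
# deploy and points the named alias at it; DeploymentPreference has CodeDeploy
# shift alias traffic to the new version gradually.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    CodeUri: src/
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes   # 10% of traffic first, the rest after 5 minutes
```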

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/devops/continuous-integration/" rel="noopener noreferrer"&gt;https://aws.amazon.com/devops/continuous-integration/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/devops/continuous-delivery/" rel="noopener noreferrer"&gt;https://aws.amazon.com/devops/continuous-delivery/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-ecs" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-ecs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 2 - Configuration Management and IaC (17%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Services in scope
&lt;/h2&gt;

&lt;p&gt;AWS Serverless Application Model (AWS SAM)&lt;br&gt;
AWS CloudFormation&lt;br&gt;
AWS Cloud Development Kit (AWS CDK)&lt;br&gt;
AWS OpsWorks&lt;br&gt;
AWS Systems Manager&lt;br&gt;
AWS Config&lt;br&gt;
AWS AppConfig&lt;br&gt;
AWS Service Catalog&lt;br&gt;
AWS IAM Identity Center (formerly known as AWS SSO)&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 2.1: Define cloud infrastructure and reusable components to provision and manage systems throughout their lifecycle.
&lt;/h2&gt;

&lt;p&gt;Knowledge of infrastructure as code is essential for anyone sitting the AWS exams. CloudFormation is AWS's foundational IaC option, and most of the others build on it, so you need to understand the different sections of a CloudFormation template. The AWS recommended course will really help with that:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/113/advanced-cloudformation-macros" rel="noopener noreferrer"&gt;Advanced CloudFormation: Macros&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS SAM is a less verbose way of deploying, but it still uses CloudFormation in the background: when you deploy a SAM template, you will see it deployed via a CloudFormation stack. And for services that SAM does not support, you can add CloudFormation YAML directly into the same file.&lt;br&gt;
AWS Cloud Development Kit (CDK) is an option for generating CloudFormation from familiar programming languages such as TypeScript, Python, Java, or .NET.&lt;/p&gt;
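&lt;p&gt;A minimal sketch of mixing SAM shorthand and plain CloudFormation in one template (resource names and properties are illustrative):&lt;/p&gt;

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31   # expands SAM resources into CloudFormation
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function     # SAM shorthand for a Lambda function + role
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/
  StateTable:                           # plain CloudFormation in the same file
    Type: AWS::DynamoDB::Table
    Properties:
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
```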

&lt;h2&gt;
  
  
  Task Statement 2.2: Deploy automation to create, onboard, and secure AWS accounts in a multi-account/multi-Region environment.
&lt;/h2&gt;

&lt;p&gt;You need a good understanding of how to manage a multi-account setup in AWS. AWS Organizations is a good place to start, as it is the overall container for all the accounts. Understand that an Organizational Unit (OU) is a sub-division of accounts within an Organization, and that OUs can be arranged in a hierarchy. Service Control Policies (SCPs) are essential to understand. These are policies set at the organization and OU level to control which permissions are available to the accounts within the OU. SCPs never grant permissions; instead they specify the maximum permissions for the accounts in scope. If an SCP is set at a higher level in the OU hierarchy, it cannot be overridden at a lower level. For example, if you deny access to Amazon Redshift at the root level, adding an Allow further down will never apply.&lt;br&gt;
AWS Control Tower is AWS's golden path to a multi-account setup with AWS Organizations. It gives you out-of-the-box governance to enforce best practices and standards.&lt;br&gt;
AWS Service Catalog can be used in a multi-account setup to provide guardrails in child accounts via CloudFormation templates. These templates get published as products in Service Catalog that can be used in accounts as a standard for deploying infrastructure via IaC. Understand the different constraints that apply. For example, a launch constraint can allow users to run the template without having permissions to the services involved; that way, a user can deploy the stack without using their own IAM credentials.&lt;br&gt;
And understand how to apply AWS CloudFormation StackSets across multiple accounts and AWS Regions.&lt;br&gt;
AWS IAM Identity Center works well with AWS Organizations to manage multiple AWS accounts. You can define permission sets to limit users' permissions when they sign in to a role. You'll need to understand how it works with other IdPs like Microsoft Active Directory.&lt;br&gt;
AWS Config is a service that will come up a lot in the exam and is well worth going deep on. In the context of this domain, you need to understand how AWS Config works in a multi-account setup to continuously detect nonconformance with AWS Config rules.&lt;/p&gt;
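&lt;p&gt;The Redshift example above, written as an SCP (a sketch; you would attach it to the root or an OU):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRedshift",
      "Effect": "Deny",
      "Action": "redshift:*",
      "Resource": "*"
    }
  ]
}
```

&lt;p&gt;Because SCPs filter rather than grant, an Allow for redshift:* in a lower OU, or in an account's own IAM policies, can never override this Deny.&lt;/p&gt;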

&lt;h2&gt;
  
  
  Task Statement 2.3: Design and build automated solutions for complex tasks and large-scale environments.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-environment-variable-availability" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-environment-variable-availability&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html#field-level-encryption-setting-up" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html#field-level-encryption-setting-up&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.advantages" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.advantages&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/solutions/implementations/real-time-web-analytics-with-kinesis/" rel="noopener noreferrer"&gt;https://aws.amazon.com/solutions/implementations/real-time-web-analytics-with-kinesis/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 3 - Resilient Cloud Solutions (15%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Implement highly available solutions to meet resilience and business requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;multi-Availability Zone and multi-Region deployments&lt;/li&gt;
&lt;li&gt;replication and failover methods for your stateful services&lt;/li&gt;
&lt;li&gt;techniques to achieve high availability&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implement solutions that are scalable to meet business requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;determine appropriate metrics for scaling&lt;/li&gt;
&lt;li&gt;using loosely coupled and distributed architectures&lt;/li&gt;
&lt;li&gt;serverless architectures&lt;/li&gt;
&lt;li&gt;container platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Global Scalability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Route 53&lt;/li&gt;
&lt;li&gt;CloudFront&lt;/li&gt;
&lt;li&gt;Secrets Manager&lt;/li&gt;
&lt;li&gt;CloudTrail&lt;/li&gt;
&lt;li&gt;Security Hub&lt;/li&gt;
&lt;li&gt;Amazon ECR&lt;/li&gt;
&lt;li&gt;AWS Transit Gateway&lt;/li&gt;
&lt;li&gt;AWS IAM&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implement automated recovery processes to meet RTO/RPO requirements
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3etj5c93ss1mu5yh5fyh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3etj5c93ss1mu5yh5fyh.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;disaster recovery concepts&lt;/li&gt;
&lt;li&gt;choose backup and recovery strategies&lt;/li&gt;
&lt;li&gt;and needed recovery procedures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This domain will focus on DynamoDB, Amazon RDS, Route 53, Amazon S3, CloudFront, load balancers, Amazon ECS, Amazon EKS, API Gateway, Lambda, Fargate, AWS Backup, and Systems Manager.&lt;/p&gt;

&lt;p&gt;Route 53 Application Recovery Controller can help you manage and coordinate failover for application recovery across multiple Regions, Availability Zones, and even on-premises environments.&lt;/p&gt;

&lt;p&gt;AWS Elastic Disaster Recovery&lt;/p&gt;

&lt;h2&gt;
  
  
  Auto-scaling - &lt;a href="https://aws.amazon.com/ec2/autoscaling/" rel="noopener noreferrer"&gt;https://aws.amazon.com/ec2/autoscaling/&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Scaling options
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Manual Scaling
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Dynamic Scaling
&lt;/h4&gt;

&lt;p&gt;Check out how to &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html" rel="noopener noreferrer"&gt;scale based on Amazon SQS&lt;/a&gt;. The solution is to use a backlog-per-instance metric, with the target value being the acceptable backlog per instance to maintain.&lt;/p&gt;
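&lt;p&gt;The arithmetic behind that metric can be sketched in a few lines of Python (the numbers are made up for illustration):&lt;/p&gt;

```python
import math

def backlog_per_instance(queue_depth: int, instance_count: int) -> float:
    """ApproximateNumberOfMessagesVisible divided by the number of running instances."""
    return queue_depth / max(instance_count, 1)

def desired_capacity(queue_depth: int, latency_target_s: float,
                     avg_processing_time_s: float) -> int:
    """Acceptable backlog per instance = latency target / processing time per message;
    desired capacity is the queue depth divided by that acceptable backlog."""
    acceptable_backlog = latency_target_s / avg_processing_time_s
    return math.ceil(queue_depth / acceptable_backlog)

# 600 visible messages across 10 instances -> a backlog of 60 messages each
print(backlog_per_instance(600, 10))   # 60.0
# 10s latency target at 0.1s per message -> each instance can absorb 100 messages
print(desired_capacity(600, 10, 0.1))  # 6
```

&lt;p&gt;In practice you would publish the backlog-per-instance value as a custom CloudWatch metric and target-track against it.&lt;/p&gt;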

&lt;h5&gt;
  
  
  Step scaling
&lt;/h5&gt;

&lt;h5&gt;
  
  
  Target Tracking
&lt;/h5&gt;

&lt;h4&gt;
  
  
  Predictive Scaling
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Scheduled Scaling
&lt;/h4&gt;

&lt;p&gt;Scale out aggressively, scale back in slowly. This prevents thrashing.&lt;/p&gt;

&lt;p&gt;Lifecycle hooks can interrupt the scale-out and scale-in processes:&lt;br&gt;
Pending --&amp;gt; Pending:Wait --&amp;gt; Pending:Proceed --&amp;gt; InService&lt;br&gt;
InService --&amp;gt; Terminating --&amp;gt; Terminating:Wait --&amp;gt; Terminating:Proceed --&amp;gt; Terminated&lt;/p&gt;

&lt;p&gt;Scaling cooldowns - &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scaling-cooldowns.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scaling-cooldowns.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ASG Warm Pools&lt;br&gt;
Termination policies&lt;/p&gt;

&lt;h2&gt;
  
  
  Load-balancing
&lt;/h2&gt;

&lt;h2&gt;
  
  
  DynamoDB Global Tables
&lt;/h2&gt;

&lt;p&gt;RTO and RPO&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/aws-config-landing-page.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/config/latest/developerguide/aws-config-landing-page.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/dynamodb/global-tables/" rel="noopener noreferrer"&gt;https://aws.amazon.com/dynamodb/global-tables/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 4 - Monitoring and Logging (15%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Configure the collection, aggregation, and storage of logs and metrics
&lt;/h2&gt;

&lt;p&gt;CloudWatch - metrics, logs and events&lt;br&gt;
CloudWatch collects metrics, monitors those metrics and takes actions based on them.&lt;br&gt;
Custom metrics&lt;br&gt;
Namespaces - each namespace holds different data. All AWS service data is contained in a namespace named AWS/service, so for EC2 it is AWS/EC2.&lt;br&gt;
The default namespace for the CloudWatch agent is CWAgent.&lt;br&gt;
You cannot push custom metrics to CloudWatch Events.&lt;br&gt;
--statistic-values parameter&lt;br&gt;
The awslogs log driver passes logs from Docker to CloudWatch Logs.&lt;br&gt;
CloudWatch Logs subscriptions.&lt;br&gt;
CloudWatch Events cannot match API call events by itself; that is why you need a CloudTrail trail to receive those events.&lt;br&gt;
CloudWatch Logs Insights can be used to search CloudWatch Logs.&lt;br&gt;
CloudWatch log group retention.&lt;br&gt;
CloudTrail&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjj73ktefp7307rds3or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjj73ktefp7307rds3or.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All activity in your AWS account is logged by CloudTrail as a CloudTrail event. By default, CloudTrail stores the last 90 days of events in the event history. CloudTrail logs two types of events: management events and data events. By default it only logs management events, but you can choose to start logging data events if needed; there is an additional cost for storing data events. You can set up an S3 bucket to store your CloudTrail trail logs and retain more than 90 days of event history, and you can also send these logs to CloudWatch Logs. With AWS Organizations, you can create an organization trail from your AWS Organization's management account, which logs all events for the organization. A trail only logs events for the AWS Region it was created in unless you configure it as a multi-Region trail.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8wqqhc3mutk3cktpt5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8wqqhc3mutk3cktpt5r.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Audit, monitor, and analyze logs and metrics to detect issues
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl39ydj16g5fgnr5ihub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl39ydj16g5fgnr5ihub.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudWatch ServiceLens&lt;br&gt;
The DevOps Monitoring Dashboard solution automates a dashboard for monitoring and visualising CI/CD metrics&lt;br&gt;
CloudWatch Anomaly Detection&lt;/p&gt;

&lt;p&gt;X-Ray for distributed tracing&lt;br&gt;
Install the X-Ray daemon to capture tracing on ECS containers. The X-Ray daemon listens for traffic on UDP port 2000, uses the default network mode of bridge, gathers raw data and sends it to the X-Ray API.&lt;/p&gt;

&lt;p&gt;For end-to-end views, Amazon CloudWatch ServiceLens.&lt;/p&gt;

&lt;p&gt;To monitor sites, api endpoints, and web workflows, check out Amazon CloudWatch Synthetics.&lt;/p&gt;

&lt;p&gt;Ensure you know which CloudWatch metrics to track for different AWS services. For example, if you use Route 53 health checks, track the HealthCheckStatus metric.&lt;/p&gt;

&lt;p&gt;Exit codes - use Systems Manager, specifically Run Command, to specify exit codes in your scripts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automate monitoring and event management of complex environments.
&lt;/h2&gt;

&lt;p&gt;AWS CloudTrail log file integrity validation, which uses digest files.&lt;/p&gt;

&lt;p&gt;CloudWatch, EventBridge, Kinesis, CloudTrail&lt;br&gt;
You can only create a metric filter for CloudWatch log groups.&lt;br&gt;
AWS Config&lt;/p&gt;

&lt;p&gt;CloudWatch Logs agent to receive logs from on-premises servers&lt;br&gt;
AWS Systems Manager agent to manage on-premises servers&lt;/p&gt;

&lt;p&gt;Ensure you know how to automate health checks for your applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load balancers determine health before sending traffic.&lt;/li&gt;
&lt;li&gt;SQS consumers check health before pulling more work from the queue.&lt;/li&gt;
&lt;li&gt;Route 53 checks the health of your instance, the status of other health checks, the status of any CloudWatch alarms and the health of your endpoint.&lt;/li&gt;
&lt;li&gt;CodeDeploy can roll back when alarm thresholds are breached.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaFunctionExample" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaFunctionExample&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 5 - Incident and Event Response (14%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Manage event sources to process, notify, and take action in response to events.
&lt;/h2&gt;

&lt;p&gt;AWS Health, CloudTrail and EventBridge&lt;/p&gt;

&lt;h2&gt;
  
  
  Implement configuration changes in response to events.
&lt;/h2&gt;

&lt;p&gt;AWS Systems Manager, AWS Auto Scaling&lt;/p&gt;

&lt;p&gt;RDS event notifications - get notified about database instances being created, restarted or deleted, but also about low storage, Multi-AZ failovers and configuration changes.&lt;br&gt;
AWS Health --&amp;gt; CloudWatch&lt;br&gt;
AWS Config --&amp;gt; CloudWatch&lt;/p&gt;

&lt;p&gt;Auto Scaling emits events for instance launch and terminate lifecycle actions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Instance-launch Lifecycle Action&lt;/li&gt;
&lt;li&gt;EC2 Instance Launch Successful&lt;/li&gt;
&lt;li&gt;EC2 Instance Launch Unsuccessful&lt;/li&gt;
&lt;li&gt;EC2 Instance-terminate Lifecycle Action&lt;/li&gt;
&lt;li&gt;EC2 Instance Terminate Successful&lt;/li&gt;
&lt;li&gt;EC2 Instance Terminate Unsuccessful&lt;/li&gt;
&lt;/ul&gt;
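&lt;p&gt;These events surface in EventBridge. A sketch of a rule event pattern matching the terminate lifecycle action for a single group (the group name is a placeholder):&lt;/p&gt;

```json
{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance-terminate Lifecycle Action"],
  "detail": {
    "AutoScalingGroupName": ["my-asg"]
  }
}
```

&lt;p&gt;A rule with this pattern could target a Lambda function that does cleanup work before completing the lifecycle action.&lt;/p&gt;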

&lt;p&gt;Systems Manager Fleet Manager - remotely manage your nodes running on AWS or on premises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshoot system and application failures.
&lt;/h2&gt;

&lt;p&gt;Systems Manager, Kinesis, SNS, Lambda, CloudWatch (especially alarms), EventBridge, X-Ray&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vocf5kavnj1xzuwger8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vocf5kavnj1xzuwger8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use Amazon CloudWatch Synthetics to create canaries: configurable scripts that run on a schedule to monitor your endpoints and APIs. They follow the same routes and perform the same actions as a customer, so you can discover issues before your customers do. CloudWatch Synthetics canaries are Node.js scripts that create Lambda functions in your account.&lt;/p&gt;

&lt;p&gt;Understand CloudWatch Synthetics monitoring and how it integrates with CloudWatch ServiceLens to trace the cause of impacts, and how ServiceLens in turn integrates with AWS X-Ray to provide an end-to-end view of your application.&lt;/p&gt;

&lt;p&gt;Know how to implement real-time log processing: CloudWatch Logs subscription filters can deliver logs to an Amazon Kinesis data stream, a Kinesis Data Firehose delivery stream or Lambda for processing and analysis.&lt;/p&gt;

&lt;p&gt;You need to be able to analyze incidents involving failed processes for ECS and EKS.&lt;/p&gt;

&lt;p&gt;ECS &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service event log&lt;/li&gt;
&lt;li&gt;Fargate tasks&lt;/li&gt;
&lt;li&gt;Other ECS tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EKS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Container insights to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices.&lt;/li&gt;
&lt;li&gt;CloudWatch alarms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can view container logs in CloudWatch Logs if you have a log driver configured. For Fargate tasks, the awslogs log driver passes these logs from Docker to CloudWatch Logs.&lt;/p&gt;

&lt;p&gt;Container Insights is available for ECS, EKS and the Kubernetes platform on EC2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvkg7mgv4ivrlike9jks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvkg7mgv4ivrlike9jks.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Systems Manager OpsCenter is used to investigate and remediate operational issues or interruptions, which are tracked as OpsItems. You can then run Systems Manager Automation runbooks to resolve the OpsItems.&lt;/p&gt;

&lt;p&gt;Use a Guard custom policy to create an AWS Config custom rule.&lt;/p&gt;
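&lt;p&gt;As a very rough sketch (syntax from memory; the rule name and property path are illustrative, so check the AWS Config Guard documentation before relying on it), a Guard custom policy rule evaluating a resource's recorded configuration looks something like this:&lt;/p&gt;

```
# Illustrative Guard rule: flag DynamoDB tables without point-in-time recovery
rule dynamodb_pitr_enabled when
    resourceType == "AWS::DynamoDB::Table" {
    configuration.pointInTimeRecoveryEnabled == true
}
```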

&lt;h2&gt;
  
  
  Additional resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_RefreshCache.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_RefreshCache.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 6 - Security and Compliance (17%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Services in scope
&lt;/h2&gt;

&lt;p&gt;IAM&lt;br&gt;
AWS IAM Identity Center&lt;br&gt;
AWS Organizations&lt;br&gt;
Security Hub&lt;br&gt;
AWS WAF&lt;br&gt;
VPC Flow Logs&lt;br&gt;
AWS Certificate Manager&lt;br&gt;
AWS Config&lt;br&gt;
Amazon Inspector&lt;br&gt;
GuardDuty&lt;br&gt;
Macie&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM, managed policies, AWS Security Token Service, KMS, Organizations, AWS Config, AWS Control Tower, AWS Service Catalog, Systems Manager, AWS WAF, Security Hub, GuardDuty, security groups, network ACLs, Amazon Detective, Network Firewall, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implement techniques for identity and access management at scale
&lt;/h2&gt;

&lt;p&gt;Secrets Manager&lt;br&gt;
Permission boundaries&lt;br&gt;
Predictive scaling for service-linked roles&lt;/p&gt;
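&lt;p&gt;Permission boundaries set a ceiling rather than granting access. A sketch of a boundary policy (the service list is illustrative) that caps an identity to S3 and CloudWatch, even if its identity-based policies allow more:&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BoundaryCeiling",
      "Effect": "Allow",
      "Action": ["s3:*", "cloudwatch:*"],
      "Resource": "*"
    }
  ]
}
```

&lt;p&gt;The effective permissions of a user or role with this boundary are the intersection of the boundary and its identity-based policies.&lt;/p&gt;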

&lt;h2&gt;
  
  
  Apply automation for security controls and data protection.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kky0qi87ycrb4kj7g7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kky0qi87ycrb4kj7g7o.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security Hub sends all findings to EventBridge.&lt;/p&gt;

&lt;p&gt;Control Tower integrates with Organizations, Service Catalog and IAM Identity Center. &lt;/p&gt;

&lt;h2&gt;
  
  
  Implement security monitoring and auditing solutions.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z3409wk6lufl2ulmur7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z3409wk6lufl2ulmur7.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnodanh9pi9937n9jp854.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnodanh9pi9937n9jp854.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PII = keyword for Macie&lt;br&gt;
Audit or Auditing = keyword for AWS Config&lt;/p&gt;

&lt;p&gt;AWS Trusted Advisor&lt;/p&gt;

&lt;p&gt;AWS Systems Manager services and features&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-compliance-custom" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-compliance-custom&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_constraints_template-constraints.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_constraints_template-constraints.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Additional resources
&lt;/h1&gt;

&lt;p&gt;These are links to additional resources that AWS recommends to pass the exam. They are definitely worth going through. &lt;/p&gt;

&lt;p&gt;Whitepapers&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/running-containerized-microservices/introduction.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/running-containerized-microservices/introduction.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-multi-region-fundamentals/aws-multi-region-fundamentals.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/aws-multi-region-fundamentals/aws-multi-region-fundamentals.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FAQs&lt;br&gt;
&lt;a href="https://aws.amazon.com/ec2/autoscaling/faqs/?devops=sec&amp;amp;sec=prep" rel="noopener noreferrer"&gt;https://aws.amazon.com/ec2/autoscaling/faqs/?devops=sec&amp;amp;sec=prep&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/elasticloadbalancing/faqs/?devops=sec&amp;amp;sec=prep" rel="noopener noreferrer"&gt;https://aws.amazon.com/elasticloadbalancing/faqs/?devops=sec&amp;amp;sec=prep&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/elasticbeanstalk/faqs/?devops=sec&amp;amp;sec=prep" rel="noopener noreferrer"&gt;https://aws.amazon.com/elasticbeanstalk/faqs/?devops=sec&amp;amp;sec=prep&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/cloudwatch/faqs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/cloudwatch/faqs/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/eventbridge/faqs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/eventbridge/faqs/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other resources&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/apn/implementing-serverless-tiering-strategies-with-aws-lambda-reserved-concurrency/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/apn/implementing-serverless-tiering-strategies-with-aws-lambda-reserved-concurrency/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;I am glad I sat and passed the exam but it may not be for everyone. I have been working with AWS for over 5 years now and have sat and passed 8 different certification exams. Of those, I found this exam the toughest. It covers a huge number of services (58 at last count), many of which I do not work with and probably never will, so it can be difficult to know where to focus. Taking the practice exams really helped me find the areas I needed to drill into and put more structure around my revision.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>certification</category>
    </item>
    <item>
      <title>How I passed the AWS Certified DevOps Engineer - Professional exam</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Mon, 27 Nov 2023 18:58:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-passed-the-aws-certified-devops-engineer-professional-exam-3kdj</link>
      <guid>https://dev.to/aws-builders/how-i-passed-the-aws-certified-devops-engineer-professional-exam-3kdj</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I recently passed the AWS Certified DevOps Engineer - Professional exam and I've put this post together to outline how I prepared for the exam and to share the notes I took along the way. My primary motivation to take the exam was that my AWS Certified Developer - Associate certification was expiring in February 2024. Passing the DevOps Engineer - Professional exam would renew this and my AWS Certified SysOps Administrator - Associate certification for another 3 years. I like this approach to certifications as it allows you to build on your experience and training as you progress with AWS. Check out a previous article that I wrote on this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/which-aws-certification-exam-should-i-sit-hah"&gt;https://dev.to/aws-builders/which-aws-certification-exam-should-i-sit-hah&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In effect, passing this one exam would gain me one new certification and renew my Developer - Associate, SysOps Administrator - Associate and Cloud Practitioner certifications for another 3 years. &lt;/p&gt;

&lt;h1&gt;
  
  
  Where to begin
&lt;/h1&gt;

&lt;p&gt;Every AWS certification has a page on the AWS certification website and I always find it the best place to start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/certification/certified-devops-engineer-professional/" rel="noopener noreferrer"&gt;https://aws.amazon.com/certification/certified-devops-engineer-professional/&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You will find details here about the exam with a study guide and sample questions. You'll also find links to the FAQs for the services and white-papers to read. You might be tempted to ignore these but they are worth reading.&lt;/p&gt;

&lt;p&gt;From there, you can attend the free &lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/16352/exam-prep-standard-course-aws-certified-devops-engineer-professional-dop-c02-english" rel="noopener noreferrer"&gt;Exam Prep Standard Course: AWS Certified DevOps Engineer - Professional (DOP-C02 - English)&lt;/a&gt; provided by AWS on their skillbuilder site.&lt;/p&gt;

&lt;p&gt;This course is not a complete guide to passing but it is useful in giving you an outline of where you should focus. The exam breaks down into the following six domains, and the exam readiness course goes through the services that you need to study in each domain.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;% of Exam&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.0  SDLC Automation&lt;/td&gt;
&lt;td&gt;22%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2.0  Configuration Management and IaC&lt;/td&gt;
&lt;td&gt;17%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3.0  Resilient Cloud Solutions&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4.0  Monitoring and Logging&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5.0  Incident and Event Response&lt;/td&gt;
&lt;td&gt;14%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6.0  Security and Compliance&lt;/td&gt;
&lt;td&gt;17%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Most of the links posted below come from this course.&lt;/p&gt;

&lt;p&gt;The certification page also links to &lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/14673/aws-certified-devops-engineer-professional-official-practice-question-set-dop-c02-english" rel="noopener noreferrer"&gt;sample exam questions&lt;/a&gt;. Getting some practice with these questions will really help with your preparation for the exam.&lt;br&gt;
And the &lt;a href="https://aws.amazon.com/certification/certified-devops-engineer-professional/" rel="noopener noreferrer"&gt;AWS Certified DevOps Engineer - Professional Exam Guide&lt;/a&gt; also recommends these two courses. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/5741/advanced-testing-practices-using-aws-devops-tools" rel="noopener noreferrer"&gt;Advanced Testing Practices Using AWS DevOps Tools&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/113/advanced-cloudformation-macros" rel="noopener noreferrer"&gt;Advanced CloudFormation: Macros&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I found the first one really good if you are looking for an overview of the different AWS CI/CD services with some hands-on exercises at the end.&lt;/p&gt;

&lt;p&gt;And if you are lucky enough to have a subscription to Skillbuilder, this practice exam is really good. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/14810/aws-certified-devops-engineer-professional-official-practice-exam-dop-c02-english" rel="noopener noreferrer"&gt;Exam Prep Official Practice Exam: AWS Certified DevOps Engineer - Professional (DOP-C02 - English)&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's 75 questions with a 3-hour timer to mimic the exam. I sat it 2 weeks before my exam date and it gave a good indication of where I needed to focus more time. You can review your answers, and the answers link to AWS documentation where you can dive into what you got wrong.&lt;/p&gt;

&lt;p&gt;What follows are my notes and links to resources provided by AWS to help you pass. It is not a definitive guide to passing and should only be considered an aid by the reader.&lt;br&gt;
The most important parts of what follows are the links, which come recommended from AWS training materials. I believe if you have relevant hands-on experience with AWS services and use the links below to go deep into the theory, you should have enough to pass.&lt;br&gt;
Everything outside the links is my own rough notes that I took as I studied. I have tidied them up as well as I can but apologies if you find anything confusing.&lt;br&gt;
My article covering the &lt;a href="https://aws.amazon.com/certification/certified-sysops-admin-associate/" rel="noopener noreferrer"&gt;AWS Certified SysOps Administrator - Associate&lt;/a&gt; is still very relevant to a lot of areas covered in this professional exam.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/how-i-passed-the-aws-certified-sysops-exam-3if"&gt;https://dev.to/aws-builders/how-i-passed-the-aws-certified-sysops-exam-3if&lt;/a&gt;&lt;/p&gt;
&lt;h1&gt;
  
  
  Domain 1 - SDLC Automation (22%)
&lt;/h1&gt;
&lt;h2&gt;
  
  
  Services in scope
&lt;/h2&gt;

&lt;p&gt;AWS CodePipeline - understand each stage&lt;br&gt;
AWS CodeBuild&lt;br&gt;
AWS CodeDeploy&lt;br&gt;
AWS CodeCommit&lt;br&gt;
AWS CodeArtifact&lt;br&gt;
Amazon S3&lt;br&gt;
Amazon Elastic Container Registry [Amazon ECR]&lt;br&gt;
AWS Lambda&lt;br&gt;
EC2 Image Builder&lt;br&gt;
AWS CodeStar&lt;br&gt;
AWS Secrets Manager&lt;br&gt;
AWS Systems Manager Parameter Store&lt;/p&gt;
&lt;h2&gt;
  
  
  Task Statement 1.1: Implement CI/CD pipelines
&lt;/h2&gt;

&lt;p&gt;This domain is all about getting a good handle on the AWS CI/CD offerings. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5amz4cxnozu5qm0f6zz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5amz4cxnozu5qm0f6zz1.png" alt=" " width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Source
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codecommit/" rel="noopener noreferrer"&gt;AWS CodeCommit&lt;/a&gt; is AWS's hosted GitHub in the cloud. It is integrated with IAM and you should understand how different levels of permissions that can be granted to users of the service. Know what the AWSCodeCommitPowerUser policy is for.&lt;br&gt;
You can receive notifications via SNS to invoke downstream actions in AWS CodeBuild, AWS CodePipeline or other competing services.&lt;/p&gt;

&lt;p&gt;S3 and ECR can also be sources for CodePipeline.&lt;/p&gt;
&lt;h3&gt;
  
  
  Build &amp;amp; Test
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codebuild" rel="noopener noreferrer"&gt;AWS CodeBuild&lt;/a&gt; is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It is important to point out that you can use CodeBuild for running tests. Test results can be captured via report groups which can be accessed via the CodeBuild API or the AWS CodeBuild console. You can also export test results to S3.&lt;br&gt;
If there is any build output, the build environment uploads its output to an S3 bucket.&lt;br&gt;
CodeBuild can integrate with CodeCommit, GitHub, BitBucket, GitHub Enterprise and S3 as sources.&lt;/p&gt;

&lt;p&gt;CodeBuild needs a build project to define how to gather application dependencies, run tests, and build the output to be used in preparing the deployment. A project includes information such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;source code location&lt;/li&gt;
&lt;li&gt;build environment to use&lt;/li&gt;
&lt;li&gt;build commands to run&lt;/li&gt;
&lt;li&gt;storage of build output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A build environment is the combination of operating system, programming language runtime, and tools used by CodeBuild to run a build. &lt;br&gt;
You use a buildspec.yml file to specify build commands. The buildspec is a YAML-formatted file containing the collection of build commands and related settings that CodeBuild uses to run a build.&lt;br&gt;
You can include a buildspec as part of the source code or define a buildspec when you create a build project.&lt;br&gt;
When included as part of the source code, the buildspec file is named buildspec.yml and is located in the root of the source directory. It can be overridden with create-project or update-project commands. There are several sections of a buildspec file:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Version&lt;/li&gt;
&lt;li&gt;Phases

&lt;ul&gt;
&lt;li&gt;Install&lt;/li&gt;
&lt;li&gt;Pre-Build&lt;/li&gt;
&lt;li&gt;Build&lt;/li&gt;
&lt;li&gt;Post-build&lt;/li&gt;
&lt;li&gt;Finally blocks&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Artifacts&lt;/li&gt;
&lt;li&gt;Reports &lt;/li&gt;
&lt;li&gt;Cache&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here is an example buildspec.yml file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2

env:
  variables:
    JAVA_HOME: "/usr/lib/jvm/java-8-openjdk-amd64"
  parameter-store:
    LOGIN_PASSWORD: /CodeBuild/dockerLoginPassword

phases:
  install:
    commands:
      - echo Entered the install phase...
      - apt-get update -y
      - apt-get install -y maven
    finally:
      - echo This always runs even if the update or install command fails 
  pre_build:
    commands:
      - echo Entered the pre_build phase...
      - docker login -u User -p $LOGIN_PASSWORD
    finally:
      - echo This always runs even if the login command fails 
  build:
    commands:
      - echo Entered the build phase...
      - echo Build started on `date`
      - mvn install
    finally:
      - echo This always runs even if the install command fails
  post_build:
    commands:
      - echo Entered the post_build phase...
      - echo Build completed on `date`

reports:
  arn:aws:codebuild:your-region:your-aws-account-id:report-group/report-group-name-1:
    files:
      - "**/*"
    base-directory: 'target/tests/reports'
    discard-paths: no
  reportGroupCucumberJson:
    files:
      - 'cucumber/target/cucumber-tests.xml'
    discard-paths: yes
    file-format: CucumberJson # default is JunitXml
artifacts:
  files:
    - target/messageUtil-1.0.jar
  discard-paths: yes
  secondary-artifacts:
    artifact1:
      files:
        - target/artifact-1.0.jar
      discard-paths: yes
    artifact2:
      files:
        - target/artifact-2.0.jar
      discard-paths: yes
cache:
  paths:
    - '/root/.m2/**/*'    
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can use CloudWatch to monitor and troubleshoot progress of your CodeBuild project. And EventBridge and SNS for notifications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/concepts.html" rel="noopener noreferrer"&gt;CodeBuild concepts&lt;/a&gt; is a good resource to find out more.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1760xw3qgnf0sl8x1muf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1760xw3qgnf0sl8x1muf.png" alt="CodeBuild Concepts" width="663" height="415"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codeartifact" rel="noopener noreferrer"&gt;AWS CodeArtifact&lt;/a&gt; is a fully managed artifact repository service that can be used by organizations to securely store, publish, and share software packages used in their software development process. CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories so developers have access to the latest versions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deploy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codedeploy/" rel="noopener noreferrer"&gt;AWS CodeDeploy&lt;/a&gt; is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services. A compute platform is a platform on which CodeDeploy deploys an application. There are three compute platforms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2/On-Premises&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;Amazon ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CodeDeploy runs deployments under an &lt;strong&gt;application&lt;/strong&gt;, which functions as a container ensuring the correct combination of revision, deployment configuration, and deployment group is referenced during a deployment.&lt;br&gt;
A &lt;strong&gt;deployment configuration&lt;/strong&gt; is a set of deployment rules and deployment success and failure conditions used by CodeDeploy during a deployment. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Canary10Percent30Minutes&lt;/li&gt;
&lt;li&gt;Canary10Percent5Minutes&lt;/li&gt;
&lt;li&gt;Canary10Percent10Minutes&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery10Minutes&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery1Minute&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery2Minutes&lt;/li&gt;
&lt;li&gt;Linear10PercentEvery3Minutes&lt;/li&gt;
&lt;li&gt;AllAtOnce&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A &lt;strong&gt;revision&lt;/strong&gt; is a version of your application.&lt;/p&gt;

&lt;p&gt;The storage location for files required by CodeDeploy is called a &lt;strong&gt;repository&lt;/strong&gt;. Use of a repository depends on which compute platform your deployment uses. You can use S3 for all 3 compute platforms, and GitHub and Bitbucket for EC2/On-Premises deployments.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;deployment group&lt;/strong&gt; is a set of individual instances: EC2 or on-premises instances for the EC2/On-Premises compute platform, or an ECS cluster for ECS deployments. &lt;br&gt;
In an Amazon ECS deployment, a deployment group specifies the Amazon ECS service, load balancer, optional test listener, and two target groups.&lt;br&gt;
In an AWS Lambda deployment, a deployment group defines a set of CodeDeploy configurations for future deployments of an AWS Lambda function.&lt;br&gt;
In an EC2/On-Premises deployment, a deployment group is a set of individual instances targeted for a deployment. A deployment group contains individually tagged instances, Amazon EC2 instances in Amazon EC2 Auto Scaling groups, or both. For both EC2 and on-premises instances, the CodeDeploy agent needs to be installed and the instances tagged.&lt;br&gt;
For EC2 instances, an IAM role must be attached to the instance and it must have the correct access permissions.&lt;br&gt;
For on-premises instances, you can use an IAM role or an IAM user to register the instance with CodeDeploy; a role with AWS Security Token Service (AWS STS) is the preferred method. Once access is in place and the AWS CLI is installed, you register the instance with CodeDeploy.&lt;br&gt;
It is an involved process so worth reading over the official AWS instructions:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/on-premises-instances-register.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/on-premises-instances-register.html&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Monitoring deployments in CodeDeploy
&lt;/h4&gt;

&lt;p&gt;You can monitor CodeDeploy deployments using the following CloudWatch tools: Amazon CloudWatch Events, CloudWatch alarms, and Amazon CloudWatch Logs.&lt;br&gt;
You can create a CloudWatch alarm for an instance or Amazon EC2 Auto Scaling group you are using in your CodeDeploy operations. An alarm watches a single metric over a time period you specify and performs one or more actions based on the value of the metric relative to a given threshold over a number of time periods.&lt;br&gt;
You can use Amazon EventBridge to detect and react to changes in the state of an instance or a deployment (an "event") in your CodeDeploy operations. Then, based on rules you create, EventBridge will invoke one or more target actions when a deployment or instance enters the state you specify in a rule.&lt;br&gt;
CodeDeploy is integrated with CloudTrail, a service that captures API calls made by or on behalf of CodeDeploy in your AWS account and delivers the log files to an Amazon S3 bucket you specify. CloudTrail captures API calls from the CodeDeploy console, from CodeDeploy commands through the AWS CLI, or from the CodeDeploy APIs directly. &lt;br&gt;
You can add triggers to a CodeDeploy deployment group to receive notifications about events related to deployments or instances in that deployment group. These notifications are sent to recipients who are subscribed to an Amazon SNS topic you have made part of the trigger's action.&lt;/p&gt;

&lt;p&gt;CodeDeploy Rolling Deployments&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Oneatatime&lt;/li&gt;
&lt;li&gt;Halfatatime&lt;/li&gt;
&lt;li&gt;Custom&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;CodeDeploy can deploy your application on Amazon EC2 instances, Amazon Elastic Container Service (Amazon ECS) containers, Lambda functions, and even an on-premises environment.&lt;br&gt;
When using CodeDeploy, there are two types of deployments available to you: in-place and blue/green. All Lambda and Amazon ECS deployments are blue/green. An Amazon EC2 or on-premises deployment can be in-place or blue/green. &lt;br&gt;
When using a blue/green deployment, you have several options for shifting traffic to the new green environment. &lt;/p&gt;

&lt;p&gt;Canary - You can choose from predefined canary options that specify the percentage of traffic shifted to your updated application version in the first increment. Then the interval, specified in minutes, indicates when the remaining traffic is shifted in the second increment.&lt;br&gt;
Linear - Traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.&lt;br&gt;
All-at-once - All traffic is shifted from the original environment to the updated environment at once.&lt;/p&gt;

&lt;p&gt;An AppSpec file is a YAML or JSON file used in CodeDeploy. The AppSpec file is used to manage each deployment as a series of lifecycle event hooks, which are defined in the file.&lt;/p&gt;
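&lt;p&gt;For reference, here is a minimal, hypothetical appspec.yml for an EC2/On-Premises deployment. The hook names are real lifecycle events, but the script paths are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/configure_app.sh
  ApplicationStart:
    - location: scripts/start_server.sh
  ValidateService:
    - location: scripts/health_check.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;If the ValidateService hook script exits non-zero, the deployment is marked as failed, which is what triggers automatic rollback when it is enabled.&lt;/p&gt;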

&lt;p&gt;Canary deployments are a type of segmented deployment. You deploy a small part of your application (called the canary) with the rest of the application following later. What makes a canary deployment different is that you test your canary with live production traffic. With canary deployments in CodeDeploy, traffic is shifted in two increments. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first shifts some traffic to the canary.&lt;/li&gt;
&lt;li&gt;The next shifts all traffic to the new application at the end of the selected interval.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Expect one or two exam questions on the Elastic Beanstalk deployment options covered in the next section.&lt;/p&gt;

&lt;h4&gt;
  
  
  Elastic Beanstalk Deployments
&lt;/h4&gt;

&lt;p&gt;It is worth understanding the different Elastic Beanstalk deployment options as per this table:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.deploy-existing-version.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Orchestration
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/codepipeline/" rel="noopener noreferrer"&gt;AWS CodePipeline&lt;/a&gt; is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n8fp53zlhhhfv8pv944.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n8fp53zlhhhfv8pv944.png" alt=" " width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Concepts
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Pipelines
&lt;/h5&gt;

&lt;p&gt;A pipeline is a workflow construct that describes how software changes go through a release process. Each pipeline is made up of a series of stages.&lt;/p&gt;

&lt;h5&gt;
  
  
  Stages
&lt;/h5&gt;

&lt;p&gt;A stage is a logical unit you can use to isolate an environment and to limit the number of concurrent changes in that environment. Each stage contains actions that are performed on the application artifacts. Your source code is an example of an artifact. A stage might be a build stage, where the source code is built and tests are run. It can also be a deployment stage, where code is deployed to runtime environments. Each stage is made up of a series of serial or parallel actions.&lt;/p&gt;

&lt;h5&gt;
  
  
  Actions
&lt;/h5&gt;

&lt;p&gt;An action is a set of operations performed on application code and configured so that the actions run in the pipeline at a specified point. Valid CodePipeline action types are source, build, test, deploy, approval, and invoke. Each action can be fulfilled by multiple different providers. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt; can be Amazon S3, ECR, CodeCommit or other version control sources like Bitbucket Cloud, GitHub, GitHub Enterprise Server, or GitLab.com actions.&lt;br&gt;
&lt;strong&gt;Build&lt;/strong&gt; can be CodeBuild, Custom CloudBees, Custom Jenkins or Custom TeamCity.&lt;br&gt;
&lt;strong&gt;Test&lt;/strong&gt; can be CodeBuild, AWS Device Farm, ThirdParty GhostInspector, Custom Jenkins, ThirdParty Micro Focus StormRunner Load or ThirdParty Nouvola.&lt;br&gt;
&lt;strong&gt;Deploy&lt;/strong&gt; can be S3, CloudFormation, CloudFormation StackSets, CodeDeploy, ECS, ECS (Blue/Green), Elastic Beanstalk, AppConfig, OpsWorks, Service Catalog, Alexa or Custom XebiaLabs.&lt;br&gt;
&lt;strong&gt;Approval&lt;/strong&gt; can be Manual.&lt;br&gt;
&lt;strong&gt;Invoke&lt;/strong&gt; can be AWS Lambda or AWS Step Functions.&lt;/p&gt;

&lt;p&gt;A crucial point to note here is that CodePipeline can be used without any of the other 3 aforementioned CI/CD services, CodeCommit, CodeBuild or CodeDeploy. Each can be an action provider in one or more stages but you can build a pipeline without any of them. A good example would be CloudFormation. You can use S3 as a source action and CloudFormation as a Deploy action.&lt;br&gt;
Another would be ECR to ECS. ECR can be a source action and ECS a deploy action. It may use some of the 3 services in the background but you don't need to be aware of it.&lt;/p&gt;
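&lt;p&gt;To make that concrete, here is a hypothetical sketch of the Stages property of such a pipeline in CloudFormation, with an S3 source action feeding a CloudFormation deploy action (the bucket, stack and role names are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stages:
  - Name: Source
    Actions:
      - Name: TemplateSource
        ActionTypeId:
          Category: Source
          Owner: AWS
          Provider: S3
          Version: "1"
        Configuration:
          S3Bucket: my-template-bucket
          S3ObjectKey: template.zip
        OutputArtifacts:
          - Name: TemplateOutput
  - Name: Deploy
    Actions:
      - Name: DeployStack
        ActionTypeId:
          Category: Deploy
          Owner: AWS
          Provider: CloudFormation
          Version: "1"
        Configuration:
          ActionMode: CREATE_UPDATE
          StackName: my-stack
          TemplatePath: TemplateOutput::template.yml
          RoleArn: arn:aws:iam::111122223333:role/CfnDeployRole
        InputArtifacts:
          - Name: TemplateOutput
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Note that neither CodeCommit, CodeBuild nor CodeDeploy appears anywhere in this pipeline.&lt;/p&gt;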

&lt;h4&gt;
  
  
  Monitoring deployments in CodePipeline
&lt;/h4&gt;

&lt;p&gt;EventBridge event bus events — You can monitor CodePipeline events in EventBridge, which detects changes in your pipeline, stage, or action execution status.&lt;br&gt;
Notifications for pipeline events in the Developer Tools console — You can monitor CodePipeline events with notifications that you set up in the console and then create an Amazon Simple Notification Service topic and subscription for.&lt;br&gt;
AWS CloudTrail — Use CloudTrail to capture API calls made by or on behalf of CodePipeline in your AWS account and deliver the log files to an Amazon S3 bucket.&lt;br&gt;
Note: no integration with CloudWatch Logs, Metrics or Alarms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 1.2: Integrate automated testing into CI/CD pipelines
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn33zi242ziaxjhagr6w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmn33zi242ziaxjhagr6w.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You'll need to know when to run different types of tests. For example, you should run your unit tests before you open a PR. This is where a service like &lt;a href="https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html" rel="noopener noreferrer"&gt;Amazon CodeGuru Reviewer&lt;/a&gt; can be integrated. CodeGuru Reviewer provides automated code reviews based on static code analysis and works with Java and Python code. It also integrates with AWS Secrets Manager to run a secrets detector that finds unprotected secrets in your code.&lt;/p&gt;

&lt;p&gt;Do not confuse it with &lt;a href="https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html" rel="noopener noreferrer"&gt;Amazon CodeGuru Profiler&lt;/a&gt;, which provides visibility into and recommendations about application performance during runtime.&lt;/p&gt;

&lt;p&gt;CodeBuild can also be used to run tests and save the results in the location specified in the reports section of the buildspec.&lt;/p&gt;
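&lt;p&gt;As a rough sketch, a buildspec that runs tests and publishes the results as a report might look like the following; the phase commands, report group name and paths are illustrative, not taken from any real project:&lt;/p&gt;

```yaml
# buildspec.yml - minimal sketch of running tests and reporting results.
# The commands, report group name and file paths are hypothetical.
version: 0.2
phases:
  build:
    commands:
      - pip install -r requirements.txt
      - python -m pytest --junitxml=reports/junit.xml
reports:
  unit-tests:                 # report group, created on first run
    files:
      - junit.xml
    base-directory: reports
    file-format: JUNITXML
```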

&lt;h2&gt;
  
  
  Task Statement 1.3: Build and manage artifacts
&lt;/h2&gt;

&lt;p&gt;Use CodeArtifact, S3 and ECR as artifact repositories.&lt;/p&gt;

&lt;p&gt;Know how to automate EC2 instances and container image build processes. &lt;/p&gt;

&lt;p&gt;EC2 Image Builder simplifies the building, testing, and deployment of virtual machine and container images for use on AWS or on-premises.&lt;br&gt;
It is a fully managed service that automates the creation, management and deployment of customized, secure and up-to-date server images, and it integrates with AWS Resource Access Manager (RAM) and AWS Organizations to share images within an account or organisation. Understand these concepts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;managed image&lt;/li&gt;
&lt;li&gt;image recipe&lt;/li&gt;
&lt;li&gt;container recipe&lt;/li&gt;
&lt;li&gt;base image&lt;/li&gt;
&lt;li&gt;component document&lt;/li&gt;
&lt;li&gt;runtime stages&lt;/li&gt;
&lt;li&gt;configuration phases&lt;/li&gt;
&lt;li&gt;build and test phases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EC2 Image Builder in conjunction with AWS VM Import/Export (VMIE) allows you to create and maintain golden images for Amazon EC2 (AMI) as well as on-premises VM formats (VHDX, VMDK, and OVF).&lt;br&gt;
An Image Builder recipe is a file that represents the final state of the images produced by automation pipelines and enables you to deterministically repeat builds. Recipes can be shared, forked, and edited outside the Image Builder UI. You can use your recipes with your version control software to maintain version-controlled recipes that you can use to share and track changes.&lt;br&gt;
Images can be shared as AMIs and container images can be shared via ECR.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 1.4: Implement deployment strategies for instance, container, and serverless environments.
&lt;/h2&gt;

&lt;p&gt;I already covered a lot of this in the section on CodeDeploy, but there's no harm in repeating myself.&lt;/p&gt;

&lt;p&gt;Understand these deployment strategies in theory. These terms, or variants of them, recur across services. Understand which are mutable vs immutable deployments.&lt;/p&gt;

&lt;p&gt;Blue/green&lt;br&gt;
Canary&lt;br&gt;
Immutable rolling&lt;br&gt;
Rolling with additional batch&lt;br&gt;
In-Place&lt;br&gt;
Linear&lt;br&gt;
All-at-Once&lt;/p&gt;

&lt;p&gt;Deployments for ECS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rolling update&lt;/li&gt;
&lt;li&gt;Blue/green deployment with CodeDeploy&lt;/li&gt;
&lt;li&gt;External deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  PreTraffic and PostTraffic hooks
&lt;/h3&gt;

&lt;p&gt;Understand how you can use lifecycle hooks during the deployments. These can be used to stop deployments and trigger a rollback. There are different lifecycle hooks depending on the service being deployed. EC2 vs Lambda vs ECS.&lt;/p&gt;

&lt;p&gt;Look at the BeforeAllowTraffic hook in the appspec.yml file.&lt;/p&gt;
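&lt;p&gt;For a Lambda deployment, the hooks section of the appspec.yml names Lambda functions that CodeDeploy invokes before and after traffic shifting. A minimal sketch, with hypothetical function, alias and version names:&lt;/p&gt;

```yaml
# appspec.yml for a Lambda deployment - the names here are illustrative only.
version: 0.0
Resources:
  - MyFunction:
      Type: AWS::Lambda::Function
      Properties:
        Name: my-function
        Alias: live
        CurrentVersion: '1'
        TargetVersion: '2'
Hooks:
  - BeforeAllowTraffic: PreTrafficCheckFunction    # runs validation tests
  - AfterAllowTraffic: PostTrafficCheckFunction    # confirms the deployment
```

&lt;p&gt;If a hook function signals failure, CodeDeploy stops the deployment and can roll back.&lt;/p&gt;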

&lt;h3&gt;
  
  
  Deployment strategies for serverless
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;CloudFormation&lt;/li&gt;
&lt;li&gt;AWS Serverless Application Model (SAM)&lt;/li&gt;
&lt;li&gt;All-at-once&lt;/li&gt;
&lt;li&gt;Blue/green&lt;/li&gt;
&lt;li&gt;Canary&lt;/li&gt;
&lt;li&gt;Linear&lt;/li&gt;
&lt;li&gt;Lambda versions and aliases&lt;/li&gt;
&lt;li&gt;CodeDeploy&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Dive deeper into Lambda versions and aliases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Every Lambda function can have a number of versions and aliases associated with it&lt;/li&gt;
&lt;li&gt;Versions are immutable snapshots of a function, including code and configuration&lt;/li&gt;
&lt;li&gt;Versions can be used effectively with an alias&lt;/li&gt;
&lt;li&gt;An alias is a pointer to a version&lt;/li&gt;
&lt;li&gt;An alias has a name and an ARN&lt;/li&gt;
&lt;/ul&gt;
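&lt;p&gt;The points above come together in AWS SAM, where AutoPublishAlias publishes a new version on every deploy and shifts the alias to it gradually via CodeDeploy. A hedged sketch; the resource name, handler and runtime are assumptions:&lt;/p&gt;

```yaml
# Sketch of a SAM function combining versions, an alias and a canary deployment.
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler            # hypothetical handler
      Runtime: python3.12
      CodeUri: src/
      AutoPublishAlias: live          # publishes a new version, repoints the alias
      DeploymentPreference:
        Type: Canary10Percent5Minutes # 10% of traffic first, the rest after 5 minutes
```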

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/devops/continuous-integration/" rel="noopener noreferrer"&gt;https://aws.amazon.com/devops/continuous-integration/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/devops/continuous-delivery/" rel="noopener noreferrer"&gt;https://aws.amazon.com/devops/continuous-delivery/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codepipeline/latest/userguide/best-practices.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/continuous-delivery-codepipeline.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codecommit/latest/userguide/welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-ecs" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-ecs&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 2 - Configuration Management and IaC (17%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Services in scope
&lt;/h2&gt;

&lt;p&gt;AWS Serverless Application Model [AWS SAM]&lt;br&gt;
AWS CloudFormation&lt;br&gt;
AWS Cloud Development Kit [AWS CDK]&lt;br&gt;
AWS OpsWorks&lt;br&gt;
AWS Systems Manager&lt;br&gt;
AWS Config&lt;br&gt;
AWS AppConfig&lt;br&gt;
AWS Service Catalog&lt;br&gt;
AWS IAM Identity Center (formerly known as AWS SSO)&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 2.1: Define cloud infrastructure and reusable components to provision and manage systems throughout their lifecycle.
&lt;/h2&gt;

&lt;p&gt;Knowledge of infrastructure as code is essential for anyone sitting the AWS exams. CloudFormation is AWS's foundational IaC option, and most of the others build on it. You therefore need to understand the different sections of a CloudFormation template. The AWS recommended course will really help with that:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://explore.skillbuilder.aws/learn/course/113/advanced-cloudformation-macros" rel="noopener noreferrer"&gt;Advanced CloudFormation: Macros&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS SAM is a less verbose way of deploying, but it still uses CloudFormation in the background: when you deploy a SAM template, you will still see it deployed via CloudFormation. For services that SAM does not support, you can add CloudFormation YAML directly into the same file.&lt;br&gt;
The AWS Cloud Development Kit (CDK) is an option for generating CloudFormation using TypeScript, Python, Java or .NET.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 2.2: Deploy automation to create, onboard, and secure AWS accounts in a multi-account/multi-Region environment.
&lt;/h2&gt;

&lt;p&gt;You need a good understanding of how to manage a multi-account setup in AWS. AWS Organizations is a good place to start, as it is the overall container for all the accounts. Understand that an Organizational Unit (OU) is a sub-division of accounts within an Organization, and that OUs can be arranged in a hierarchy. Service Control Policies (SCPs) are essential to understand. These are policies set at the organization and OU level to manage the security and permissions that are available to the accounts within the OU. SCPs never grant permissions; instead they specify the maximum permissions for the accounts in scope. If an SCP is set at a higher level in the OU hierarchy, it cannot be overridden at a lower level. For example, if you deny access to Amazon Redshift at the root level, adding an Allow further down will never apply.&lt;br&gt;
AWS Control Tower is AWS's golden path to a multi-account setup with AWS Organizations. It gives you out-of-the-box governance to enforce best practices and standards.&lt;br&gt;
AWS Service Catalog can be used in a multi-account setup to provide guardrails in child accounts via CloudFormation templates. These templates are published as products in Service Catalog that can be utilised in accounts as a standard for deploying infrastructure via IaC. Understand the different constraints that apply. For example, a launch constraint can allow users to run the template without having permissions to the services involved; that way, a user can deploy the stack without using their own IAM credentials.&lt;br&gt;
Also understand how to apply AWS CloudFormation StackSets across multiple accounts and AWS Regions.&lt;br&gt;
AWS IAM Identity Center (formerly known as AWS SSO) works well with AWS Organizations to manage multiple AWS accounts. You can define permission sets to limit users' permissions when they sign in to a role. You'll need to understand how it works with other IdPs like Microsoft Active Directory.&lt;br&gt;
AWS Config is a service that will come up a lot in the exam and is well worth going deep on. In the context of this domain, you need to understand how AWS Config works in a multi-account setup to continuously detect nonconformance with AWS Config rules.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task Statement 2.3: Design and build automated solutions for complex tasks and large-scale environments.
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-environment-variable-availability" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#reference-appspec-file-structure-environment-variable-availability&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html#field-level-encryption-setting-up" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html#field-level-encryption-setting-up&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/configuration-aliases.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-managedinstances.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/application-discovery/latest/userguide/what-is-appdiscovery.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.advantages" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html#aurora-global-database.advantages&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/solutions/implementations/real-time-web-analytics-with-kinesis/" rel="noopener noreferrer"&gt;https://aws.amazon.com/solutions/implementations/real-time-web-analytics-with-kinesis/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_ous.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 3 - Resilient Cloud Solutions (15%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Implement highly available solutions to meet resilience and business requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;multi-Availability Zone and multi-Region deployments&lt;/li&gt;
&lt;li&gt;replication and failover methods for your stateful services&lt;/li&gt;
&lt;li&gt;techniques to achieve high availability&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implement solutions that are scalable to meet business requirements
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;determine appropriate metrics for scaling&lt;/li&gt;
&lt;li&gt;using loosely coupled and distributed architectures&lt;/li&gt;
&lt;li&gt;serverless architectures&lt;/li&gt;
&lt;li&gt;container platforms&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Global Scalability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Route 53&lt;/li&gt;
&lt;li&gt;CloudFront&lt;/li&gt;
&lt;li&gt;Secrets Manager&lt;/li&gt;
&lt;li&gt;CloudTrail&lt;/li&gt;
&lt;li&gt;Security Hub&lt;/li&gt;
&lt;li&gt;Amazon ECR&lt;/li&gt;
&lt;li&gt;AWS Transit Gateway&lt;/li&gt;
&lt;li&gt;AWS IAM&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implement automated recovery processes to meet RTO/RPO requirements
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3etj5c93ss1mu5yh5fyh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3etj5c93ss1mu5yh5fyh.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;disaster recovery concepts&lt;/li&gt;
&lt;li&gt;choose backup and recovery strategies&lt;/li&gt;
&lt;li&gt;and needed recovery procedures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This domain will focus on DynamoDB, Amazon RDS, Route 53, Amazon S3, CloudFront, load balancers, Amazon ECS, Amazon EKS, API Gateway, Lambda, Fargate, AWS Backup, Systems Manager.&lt;/p&gt;

&lt;p&gt;Route 53 Application Recovery Controller can help you manage and coordinate failover for your application recovery across multiple Regions, Availability Zones and on-premises too. &lt;/p&gt;

&lt;p&gt;AWS Elastic Disaster Recovery&lt;/p&gt;

&lt;h2&gt;
  
  
  Auto-scaling - &lt;a href="https://aws.amazon.com/ec2/autoscaling/" rel="noopener noreferrer"&gt;https://aws.amazon.com/ec2/autoscaling/&lt;/a&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Scaling options
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Manual Scaling
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Dynamic Scaling
&lt;/h4&gt;

&lt;p&gt;Check out how to &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-using-sqs-queue.html" rel="noopener noreferrer"&gt;scale based on Amazon SQS&lt;/a&gt;. The solution is to use a backlog-per-instance metric, with the target value being the acceptable backlog per instance to maintain. &lt;/p&gt;
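&lt;p&gt;In CloudFormation, such a policy might be sketched as follows, assuming you publish a hypothetical BacklogPerInstance custom metric yourself; the ASG reference, namespace and target value are all assumptions:&lt;/p&gt;

```yaml
# Sketch of a target tracking policy on a custom backlog-per-instance metric.
BacklogScalingPolicy:
  Type: AWS::AutoScaling::ScalingPolicy
  Properties:
    AutoScalingGroupName: !Ref WorkerGroup    # hypothetical ASG
    PolicyType: TargetTrackingScaling
    TargetTrackingConfiguration:
      TargetValue: 10                         # acceptable messages per instance
      CustomizedMetricSpecification:
        MetricName: BacklogPerInstance        # custom metric you publish yourself
        Namespace: MyApp
        Statistic: Average
```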

&lt;h5&gt;
  
  
  Step scaling
&lt;/h5&gt;

&lt;h5&gt;
  
  
  Target Tracking
&lt;/h5&gt;

&lt;h4&gt;
  
  
  Predictive Scaling
&lt;/h4&gt;

&lt;h4&gt;
  
  
  Scheduled Scaling
&lt;/h4&gt;

&lt;p&gt;Scale out aggressively, scale back in slowly. This prevents thrashing.&lt;/p&gt;

&lt;p&gt;Lifecycle hooks can interrupt the scale-out and scale-in processes. The instance states are:&lt;br&gt;
Pending --&amp;gt; Pending:Wait --&amp;gt; Pending:Proceed --&amp;gt; InService&lt;br&gt;
Terminating --&amp;gt; Terminating:Wait --&amp;gt; Terminating:Proceed --&amp;gt; Terminated&lt;/p&gt;
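&lt;p&gt;A lifecycle hook that holds terminating instances in the wait state can be sketched in CloudFormation like this; the ASG reference and timeout are assumptions:&lt;/p&gt;

```yaml
# Sketch: pause instances in Terminating:Wait, e.g. to drain connections or ship logs.
DrainLifecycleHook:
  Type: AWS::AutoScaling::LifecycleHook
  Properties:
    AutoScalingGroupName: !Ref WorkerGroup            # hypothetical ASG
    LifecycleTransition: 'autoscaling:EC2_INSTANCE_TERMINATING'
    HeartbeatTimeout: 300                             # seconds to stay in the wait state
    DefaultResult: CONTINUE                           # proceed if no response arrives
```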

&lt;p&gt;Scaling cooldowns - &lt;a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scaling-cooldowns.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-scaling-cooldowns.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;ASG Warm Pools&lt;br&gt;
Termination policies&lt;/p&gt;

&lt;h2&gt;
  
  
  Load-balancing
&lt;/h2&gt;

&lt;h2&gt;
  
  
  DynamoDB Global Tables
&lt;/h2&gt;

&lt;p&gt;RTO and RPO&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional Resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/aws-config-landing-page.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/config/latest/developerguide/aws-config-landing-page.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Replication.CrossRegion.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/dynamodb/global-tables/" rel="noopener noreferrer"&gt;https://aws.amazon.com/dynamodb/global-tables/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover-configuring.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 4 - Monitoring and Logging (15%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Configure the collection, aggregation, and storage of logs and metrics
&lt;/h2&gt;

&lt;p&gt;CloudWatch - metrics, logs and events.&lt;br&gt;
CloudWatch collects metrics, monitors those metrics and takes actions based on them.&lt;br&gt;
Custom metrics.&lt;br&gt;
Namespaces - each namespace holds its own distinct set of metric data. All AWS service data is contained in a namespace named AWS/service, so for EC2 it is AWS/EC2.&lt;br&gt;
The default namespace for the CloudWatch agent is CWAgent.&lt;br&gt;
You cannot push custom metrics to CloudWatch Events.&lt;br&gt;
--statistic-values parameter.&lt;br&gt;
The awslogs log driver passes logs from Docker to CloudWatch Logs.&lt;br&gt;
CloudWatch Logs subscriptions.&lt;br&gt;
CloudWatch Events cannot match API call events, which is why you need a CloudTrail trail to receive them.&lt;br&gt;
CloudWatch Logs Insights can be used to search CloudWatch Logs.&lt;br&gt;
CloudWatch Log Group retention.&lt;br&gt;
CloudTrail&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjj73ktefp7307rds3or.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjj73ktefp7307rds3or.png" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All activity in your AWS account is logged by CloudTrail as a CloudTrail event. By default, CloudTrail stores the last 90 days of events in the event history. There are two types of events that CloudTrail logs: management events and data events. By default, CloudTrail only logs management events, but you can choose to start logging data events if needed; there is an additional cost for storing data events. You can set up an S3 bucket to store your CloudTrail trail logs beyond the 90 days of event history, and you can also send these logs to CloudWatch Logs. With AWS Organizations, you can create an organization trail from your AWS Organization's management account, which logs all events for the organization. Unless you create a multi-Region trail, CloudTrail only logs events for the AWS Region the trail was created in.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8wqqhc3mutk3cktpt5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff8wqqhc3mutk3cktpt5r.png" alt=" " width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Audit, monitor, and analyze logs and metrics to detect issues
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl39ydj16g5fgnr5ihub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzl39ydj16g5fgnr5ihub.png" alt=" " width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudWatch ServiceLens&lt;br&gt;
DevOps monitoring dashboard automates the dashboard for monitoring and visualising CI/CD metrics&lt;br&gt;
CloudWatch Anomaly Detection&lt;/p&gt;

&lt;p&gt;X-Ray for distributed tracing.&lt;br&gt;
Install the X-Ray daemon to capture tracing on ECS containers. The X-Ray daemon listens for traffic on UDP port 2000, uses the default network mode of bridge, gathers raw segment data and relays it to the X-Ray API.&lt;/p&gt;
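&lt;p&gt;One common pattern is running the daemon as a sidecar container in the ECS task definition. A sketch, where the application image and names are hypothetical:&lt;/p&gt;

```yaml
# Sketch of an ECS task definition with an X-Ray daemon sidecar on UDP 2000.
AppTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: my-app
    ContainerDefinitions:
      - Name: app
        Image: my-app:latest               # hypothetical application image
        Essential: true
      - Name: xray-daemon
        Image: amazon/aws-xray-daemon      # public daemon image
        Essential: false
        PortMappings:
          - ContainerPort: 2000
            Protocol: udp
```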

&lt;p&gt;For end-to-end views, Amazon CloudWatch ServiceLens.&lt;/p&gt;

&lt;p&gt;To monitor sites, api endpoints, and web workflows, check out Amazon CloudWatch Synthetics.&lt;/p&gt;

&lt;p&gt;Ensure you know which CloudWatch metrics to track for different AWS services. For example, if you use Route 53 health checks, Route 53 publishes the HealthCheckStatus metric to CloudWatch.&lt;/p&gt;

&lt;p&gt;Exit codes - use Systems Manager, specifically Run Command, to specify exit codes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automate monitoring and event management of complex environments.
&lt;/h2&gt;

&lt;p&gt;AWS CloudTrail log file integrity validation uses digest files.&lt;/p&gt;

&lt;p&gt;CloudWatch, EventBridge, Kinesis, CloudTrail.&lt;br&gt;
You can only create a metric filter for CloudWatch log groups.&lt;br&gt;
AWS Config&lt;/p&gt;

&lt;p&gt;CloudWatch Logs agent to receive logs from on-premises servers.&lt;br&gt;
AWS Systems Manager agent to manage on-premises servers.&lt;/p&gt;

&lt;p&gt;Ensure you know how to automate health checks for your applications.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Load balancers determine the health of targets before sending traffic.&lt;/li&gt;
&lt;li&gt;SQS consumers can check health before pulling more work from the queue.&lt;/li&gt;
&lt;li&gt;Route 53 checks the health of your instance, the status of other health checks, the status of any CloudWatch alarms and the health of your endpoint.&lt;/li&gt;
&lt;li&gt;CodeDeploy can roll back when alarm thresholds are met.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Additional resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaFunctionExample" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#LambdaFunctionExample&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/health/latest/ug/cloudwatch-events-health.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/monitoring-cloudwatch-events.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 5 - Incident and Event Response (14%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Manage event sources to process, notify, and take action in response to events.
&lt;/h2&gt;

&lt;p&gt;AWS Health, CloudTrail and EventBridge&lt;/p&gt;

&lt;h2&gt;
  
  
  Implement configuration changes in response to events.
&lt;/h2&gt;

&lt;p&gt;AWS Systems manager, AWS Auto Scaling&lt;/p&gt;

&lt;p&gt;RDS event notifications - get notified about database instances being created, restarted or deleted, as well as about low storage, Multi-AZ failovers and configuration changes.&lt;br&gt;
AWS Health --&amp;gt; CloudWatch&lt;br&gt;
AWS Config --&amp;gt; CloudWatch&lt;/p&gt;

&lt;p&gt;Auto scaling events for EC2 Instance-launch lifecycle action:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EC2 Instance Launch Successful&lt;/li&gt;
&lt;li&gt;EC2 Instance Launch Unsuccessful&lt;/li&gt;
&lt;li&gt;EC2 Instance-terminate Lifecycle Action&lt;/li&gt;
&lt;li&gt;EC2 Instance Terminate Successful&lt;/li&gt;
&lt;li&gt;EC2 Instance Terminate Unsuccessful&lt;/li&gt;
&lt;/ul&gt;
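&lt;p&gt;Catching one of these events with an EventBridge rule can be sketched as follows; the Lambda target is hypothetical:&lt;/p&gt;

```yaml
# Sketch: route instance-terminate lifecycle actions to a Lambda function.
LifecycleEventRule:
  Type: AWS::Events::Rule
  Properties:
    EventPattern:
      source:
        - aws.autoscaling
      detail-type:
        - EC2 Instance-terminate Lifecycle Action
    Targets:
      - Arn: !GetAtt DrainFunction.Arn    # hypothetical Lambda function
        Id: drain-target
```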

&lt;p&gt;Systems Manager Fleet Manager - remotely manage your nodes running on AWS or on premises.&lt;/p&gt;

&lt;h2&gt;
  
  
  Troubleshoot system and application failures.
&lt;/h2&gt;

&lt;p&gt;Systems Manager, Kinesis, SNS, Lambda, CloudWatch (especially alarms), EventBridge, X-Ray&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vocf5kavnj1xzuwger8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vocf5kavnj1xzuwger8.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use Amazon CloudWatch Synthetics to create canaries: configurable scripts that run on a schedule to monitor your endpoints and APIs. They follow the same routes and perform the same actions as a customer, so by using canaries you can discover issues before your customers do. CloudWatch Synthetics canaries are Node.js scripts; they create Lambda functions in your account that use Node.js as a framework.&lt;/p&gt;

&lt;p&gt;Understand CloudWatch Synthetics monitoring and how it integrates with CloudWatch ServiceLens to trace the cause of impacts, and how ServiceLens integrates with AWS X-Ray to provide an end-to-end view of your application.&lt;/p&gt;

&lt;p&gt;Know how to implement real-time log processing with CloudWatch Logs subscription filters, which can deliver log events to an Amazon Kinesis data stream, a Kinesis Data Firehose delivery stream, or a Lambda function for processing and analysis.&lt;/p&gt;
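
&lt;p&gt;When Lambda is the subscription-filter target, it receives the log events as a gzipped, base64-encoded payload. A minimal sketch of decoding it (the log group and messages below are made-up sample data):&lt;/p&gt;

```python
import base64
import gzip
import json

def decode_subscription_event(event):
    """Decode the compressed payload CloudWatch Logs delivers to a Lambda target."""
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    return json.loads(payload)

# Build a sample payload in the documented shape to exercise the decoder.
sample = {
    "logGroup": "/aws/lambda/my-app",  # hypothetical log group
    "logStream": "2024/02/13/[$LATEST]abcd",
    "logEvents": [{"id": "1", "timestamp": 1707816612000, "message": "ERROR boom"}],
}
event = {
    "awslogs": {
        "data": base64.b64encode(gzip.compress(json.dumps(sample).encode())).decode()
    }
}

decoded = decode_subscription_event(event)
print(decoded["logEvents"][0]["message"])  # ERROR boom
```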

&lt;p&gt;You need to be able to analyze incidents involving failed processes on ECS and EKS. &lt;/p&gt;

&lt;p&gt;ECS &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Service event log&lt;/li&gt;
&lt;li&gt;Fargate tasks&lt;/li&gt;
&lt;li&gt;Other ECS tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;EKS&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Container insights to collect, aggregate, and summarize metrics and logs from your containerized applications and microservices.&lt;/li&gt;
&lt;li&gt;CloudWatch alarms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can view container logs in CloudWatch Logs if you have a log driver configured. For Fargate tasks, the awslogs log driver passes these logs from Docker to CloudWatch Logs.&lt;/p&gt;
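
&lt;p&gt;For example, a container definition's log configuration for the awslogs driver looks something like this (the log group, region, and stream prefix are placeholders):&lt;/p&gt;

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "/ecs/my-app",
    "awslogs-region": "eu-west-1",
    "awslogs-stream-prefix": "ecs"
  }
}
```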

&lt;p&gt;Container Insights is available for ECS, EKS, and Kubernetes platforms on EC2.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvkg7mgv4ivrlike9jks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvkg7mgv4ivrlike9jks.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS Systems Manager OpsCenter is used to investigate and remediate operational issues or interruptions, which are tracked as OpsItems. You can then run Systems Manager Automation runbooks to resolve OpsItems.&lt;/p&gt;

&lt;p&gt;Use a Guard custom policy to create an AWS Config custom rule.&lt;/p&gt;
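
&lt;p&gt;Guard custom policies are written in the Guard DSL. A minimal, illustrative rule (the resource type and desired value here are placeholders, not from the exam):&lt;/p&gt;

```
# Illustrative AWS Config custom policy rule written in the Guard DSL.
# The resource type and desired instance type are placeholders.
rule desired_instance_type when resourceType == "AWS::EC2::Instance" {
    configuration.instanceType == "t2.micro"
}
```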

&lt;h2&gt;
  
  
  Additional resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/lambda/latest/dg/services-cloudwatchevents.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codedeploy/latest/userguide/deployment-steps.html#deployment-steps-what-happens&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/welcome.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/aws-best-practices-ddos-resiliency/welcome.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_RefreshCache.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/storagegateway/latest/APIReference/API_RefreshCache.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 6 - Security and Compliance (17%)
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Services in scope
&lt;/h2&gt;

&lt;p&gt;IAM&lt;br&gt;
AWS IAM Identity Center&lt;br&gt;
AWS Organizations&lt;br&gt;
Security Hub&lt;br&gt;
AWS WAF&lt;br&gt;
VPC Flow Logs&lt;br&gt;
Certificate Manager&lt;br&gt;
AWS Config&lt;br&gt;
Amazon Inspector&lt;br&gt;
GuardDuty&lt;br&gt;
Macie&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM, managed policies, AWS Security Token Service, KMS, Organizations, AWS Config, AWS Control Tower, AWS Service Catalog, Systems Manager, AWS WAF, Security Hub, GuardDuty, security groups, network ACLs, Amazon Detective, Network Firewall, and more.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Implement techniques for identity and access management at scale
&lt;/h2&gt;

&lt;p&gt;Secrets Manager&lt;br&gt;
Permission boundaries&lt;br&gt;
Predictive scaling for service-linked roles&lt;/p&gt;
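
&lt;p&gt;A permissions boundary is just a managed policy used as a ceiling on the permissions an IAM principal can ever have; the principal's effective permissions are the intersection of its identity policies and the boundary. A minimal sketch (the allowed services are illustrative):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BoundaryCeiling",
      "Effect": "Allow",
      "Action": ["s3:*", "dynamodb:*"],
      "Resource": "*"
    }
  ]
}
```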

&lt;h2&gt;
  
  
  Apply automation for security controls and data protection.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kky0qi87ycrb4kj7g7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1kky0qi87ycrb4kj7g7o.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Security Hub sends all findings to EventBridge.&lt;/p&gt;
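
&lt;p&gt;That means you can automate responses with an EventBridge rule. The event pattern below matches imported Security Hub findings:&lt;/p&gt;

```json
{
  "source": ["aws.securityhub"],
  "detail-type": ["Security Hub Findings - Imported"]
}
```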

&lt;p&gt;Control Tower integrates with Organizations, Service Catalog and IAM Identity Center. &lt;/p&gt;

&lt;h2&gt;
  
  
  Implement security monitoring and auditing solutions.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z3409wk6lufl2ulmur7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1z3409wk6lufl2ulmur7.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnodanh9pi9937n9jp854.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnodanh9pi9937n9jp854.png" alt=" " width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PII = keyword for Macie&lt;br&gt;
Audit or Auditing = keyword for AWS Config&lt;/p&gt;

&lt;p&gt;AWS Trusted Advisor&lt;/p&gt;

&lt;p&gt;AWS Systems Manager services and features&lt;/p&gt;

&lt;h2&gt;
  
  
  Additional resources
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-compliance-custom" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-compliance-about.html#sysman-compliance-custom&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_constraints_template-constraints.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/servicecatalog/latest/adminguide/catalogs_constraints_template-constraints.html&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Additional resources
&lt;/h1&gt;

&lt;p&gt;These are links to additional resources that AWS recommends to pass the exam. They are definitely worth going through. &lt;/p&gt;

&lt;p&gt;Whitepapers&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/running-containerized-microservices/introduction.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/running-containerized-microservices/introduction.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/infrastructure-as-code.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/disaster-recovery-workloads-on-aws.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-multi-region-fundamentals/aws-multi-region-fundamentals.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/aws-multi-region-fundamentals/aws-multi-region-fundamentals.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;FAQs&lt;br&gt;
&lt;a href="https://aws.amazon.com/ec2/autoscaling/faqs/?devops=sec&amp;amp;sec=prep" rel="noopener noreferrer"&gt;https://aws.amazon.com/ec2/autoscaling/faqs/?devops=sec&amp;amp;sec=prep&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/elasticloadbalancing/faqs/?devops=sec&amp;amp;sec=prep" rel="noopener noreferrer"&gt;https://aws.amazon.com/elasticloadbalancing/faqs/?devops=sec&amp;amp;sec=prep&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/elasticbeanstalk/faqs/?devops=sec&amp;amp;sec=prep" rel="noopener noreferrer"&gt;https://aws.amazon.com/elasticbeanstalk/faqs/?devops=sec&amp;amp;sec=prep&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/cloudwatch/faqs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/cloudwatch/faqs/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/eventbridge/faqs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/eventbridge/faqs/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other resources&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/apn/implementing-serverless-tiering-strategies-with-aws-lambda-reserved-concurrency/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/apn/implementing-serverless-tiering-strategies-with-aws-lambda-reserved-concurrency/&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;I am glad I sat and passed the exam, but it may not be for everyone. I have been working with AWS for over 5 years now and have sat and passed 8 different certification exams. Of those, I found this exam to be the toughest. It covers a lot of services (58 at last count), many of which I do not work with and probably never will. Because it covers such a huge number of services, it can be difficult to know where to focus. Taking the practice exams really helped me find the areas I needed to drill into and put more structure around my revision.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Which AWS Certification exam should I sit?</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Mon, 20 Feb 2023 21:21:15 +0000</pubDate>
      <link>https://dev.to/aws-builders/which-aws-certification-exam-should-i-sit-hah</link>
      <guid>https://dev.to/aws-builders/which-aws-certification-exam-should-i-sit-hah</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I often get asked which is the best AWS certification to take and while I have a qualitative opinion on it, I wanted to use data to get to a better answer. I felt it would make a good blog post to help share my methodology and the outcome.&lt;/p&gt;

&lt;h1&gt;
  
  
  AWS Certifications
&lt;/h1&gt;

&lt;p&gt;AWS provides 12 different certifications for customers and IT professionals to learn and showcase their knowledge of AWS. There are 6 core certifications and 6 specialty certifications. The 6 core certifications are split between foundational, associate and professional. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/certification/exams/?nc2=sb_ce_exm" rel="noopener noreferrer"&gt;https://aws.amazon.com/certification/exams/?nc2=sb_ce_exm&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  A bit about myself.
&lt;/h1&gt;

&lt;p&gt;I hold 6 current AWS certifications. Personally, I have found them useful as an aid to support my own journey into AWS. I agree that certifications are not for everyone and they definitely don't indicate working knowledge of AWS by themselves. I think of them as scaffolding, useful to get the building up but the same building should be able to stand by itself. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.credly.com/badges/16129ed5-c168-4d5d-ab8e-af7f5399d1d1/public_url" rel="noopener noreferrer"&gt;AWS Certified Cloud Practitioner&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.credly.com/badges/10e89c73-9083-4c98-b20b-9f8b48ea5fea/public_url" rel="noopener noreferrer"&gt;AWS Certified Solutions Architect – Associate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.credly.com/badges/a0c991e4-b007-421b-a00b-403aa2c85903/public_url" rel="noopener noreferrer"&gt;AWS Certified Developer – Associate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.credly.com/badges/52523c37-c300-4695-89e8-1e4baece2819/public_url" rel="noopener noreferrer"&gt;AWS Certified SysOps Administrator – Associate&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.credly.com/badges/1479079d-1489-4163-9709-d2bfbfe35370/public_url" rel="noopener noreferrer"&gt;AWS Certified Solutions Architect – Professional&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.credly.com/badges/429e9873-1fd4-43df-b3c1-6b2205e01442/public_url" rel="noopener noreferrer"&gt;AWS Certified Data Analytics – Specialty&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Certification Overlap &amp;amp; Progression
&lt;/h1&gt;

&lt;p&gt;My experience of the AWS certification exams is that they do not exist in isolation and there is a lot of overlap between the services covered in each. The main purpose of this article is to quantify that overlap and help direct the student in creating a study plan. Another premise is that the student will do more than one certification and it is important to identify a path through them.&lt;/p&gt;

&lt;h1&gt;
  
  
  Methodology
&lt;/h1&gt;

&lt;p&gt;Each certification study guide lists the services that could appear on the exam. The lists are not exhaustive, in that other services could appear in the exam, but they give a very good basis for study. I have gone through each study guide and pulled out the list of services so that they can be compared between certifications. From this view, we can see that the Solutions Architect – Professional exam covers the most technologies at 98 while the Security - Specialty exam covers the least.&lt;/p&gt;
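
&lt;p&gt;The comparison itself is simple set arithmetic. As a sketch (the service lists here are tiny made-up stand-ins, not the real study guides):&lt;/p&gt;

```python
# Sketch of the overlap methodology: for each pair of study guides,
# count the services they share and express it as a percentage.
# These service lists are illustrative stand-ins, not the real guides.
services = {
    "Cloud Practitioner": {"EC2", "S3", "IAM", "Lambda"},
    "Solutions Architect - Associate": {"EC2", "S3", "IAM", "RDS"},
}

def overlap(a, b):
    """Number of services in guide a that also appear in guide b."""
    return len(services[a] & services[b])

def overlap_pct(a, b):
    """Percentage of guide a's services that also appear in guide b."""
    return 100 * overlap(a, b) / len(services[a])

print(overlap("Cloud Practitioner", "Solutions Architect - Associate"))      # 3
print(overlap_pct("Cloud Practitioner", "Solutions Architect - Associate"))  # 75.0
```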

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Certification&lt;/th&gt;
&lt;th&gt;Study Guide&lt;/th&gt;
&lt;th&gt;# Services&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Cloud Practitioner&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-cloud-practitioner/AWS-Certified-Cloud-Practitioner_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;63&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Solutions Architect – Associate&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sa-assoc/AWS-Certified-Solutions-Architect-Associate_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;63&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Solutions Architect – Professional&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;98&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Developer – Associate&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-dev-associate/AWS-Certified-Developer-Associate_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;33&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SysOps Administrator – Associate&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sysops-associate/AWS-Certified-SysOps-Administrator-Associate_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;65&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;DevOps Engineer - Professional&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-devops-pro/AWS-Certified-DevOps-Engineer-Professional_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;58&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Advanced Networking - Specialty&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-advnetworking-spec/AWS-Certified-Advanced-Networking-Specialty_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;43&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Analytics – Specialty&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-data-analytics-specialty/AWS-Certified-Data-Analytics-Specialty_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database - Specialty&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-database-specialty/AWS-Certified-Database-Specialty_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;41&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Machine Learning – Specialty&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-ml/AWS-Certified-Machine-Learning-Specialty_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;33&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security - Specialty&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-security-spec/AWS-Certified-Security-Specialty_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SAP on AWS - Specialty&lt;/td&gt;
&lt;td&gt;&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sap-on-aws-specialty/SAP-on-AWS-Specialty_Exam-Guide.pdf" rel="noopener noreferrer"&gt;link&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;60&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This view does not tell us much by itself; we need to analyse how much the services overlap between exams. To display this visually, I created the heatmap below. Reading each column down, each cell shows the number of services that overlap with the services listed in the first column. For example, we can see that 46 of the services covered in the Cloud Practitioner exam also appear in the Solutions Architect - Associate exam. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3yfkknvajibfh85cjj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3yfkknvajibfh85cjj8.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Converting these to percentages gives us a better insight into the overlap. For the same cell, we now see that 73.02% of the services covered in the Cloud Practitioner exam also appear in the Solution Architect - Associate exam. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7nnlj9mpbdsrs1fl32q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7nnlj9mpbdsrs1fl32q.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Ranking them by percentage of average overlap, we can see that the core certification exams rank higher. SAP on AWS - Specialty is the one exception among the specialty exams.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxham9kjm1zs05mrsw8fi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxham9kjm1zs05mrsw8fi.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As a final view, it is worth focusing on the 6 core exams to emphasise the higher level of overlap between them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxdkgtgqmla5kt6fyyzg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxdkgtgqmla5kt6fyyzg.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before going any further, it is worth keeping in mind that each exam will cover the services listed at different levels. If the student is sitting either of the Professional or Specialty exams they will need to know any service listed in the relevant guide to a much more detailed level than for the Associate exams. Similarly for the Associate exams versus the Cloud Practitioner. &lt;/p&gt;

&lt;h1&gt;
  
  
  Findings
&lt;/h1&gt;

&lt;p&gt;There are a few clear insights here. &lt;/p&gt;

&lt;p&gt;1) There is a clear synergy between the Cloud Practitioner, Solutions Architect – Associate and Solutions Architect – Professional exams. This best illustrates my point about progression and a path through the certification exams. Studying for and passing the Cloud Practitioner will give you a good foundation to pass the Solutions Architect - Associate. My certification journey started with the Cloud Practitioner exam and I have always recommended it to anyone starting out, primarily because it is the easiest and cheapest exam to start with. Most of the material for the exam can be covered in a couple of hours. I had not sat an exam in over 10 years, and I found that the Cloud Practitioner exam was a good on-ramp to the AWS exams and certification process in general. Success on this exam gave me the confidence to go forward.&lt;/p&gt;

&lt;p&gt;2) The core certification with the least amount of overlap is the Developer - Associate. However, I still think it is worth attempting. You can check out a previous &lt;a href="https://dev.to/aws-builders/how-i-passed-the-aws-certified-developer-associate-exam-1336"&gt;article&lt;/a&gt; I wrote when I sat and passed this exam. It is a great exam showcasing the best of developer services on AWS.&lt;/p&gt;

&lt;p&gt;3) The Specialty exams stand on their own. Aside from the SAP on AWS - Specialty, the overlap between these and the other exams is very low. In my experience of the Data Analytics – Specialty exam, you need to go very deep on the services listed. It was one of the toughest exams I have sat, and I believe harder than the Solutions Architect – Professional exam.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
    </item>
    <item>
      <title>Amazon Neptune Serverless - The Graph DB for Greek Gods</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Tue, 03 Jan 2023 11:50:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/amazon-neptune-serverless-the-graph-db-for-greek-gods-1j59</link>
      <guid>https://dev.to/aws-builders/amazon-neptune-serverless-the-graph-db-for-greek-gods-1j59</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I have been looking around for a while now for a project to help me learn about graph databases, specifically Amazon Neptune. My ten-year-old is obsessed with Greek gods and their families, and I was watching him trace out a Greek god family tree. It was so convoluted that I thought it could be a worthwhile project and also something we could work on together.&lt;/p&gt;

&lt;h1&gt;
  
  
  Graph databases
&lt;/h1&gt;

&lt;p&gt;Data in a graph database is primarily stored as 3 different objects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nodes - a node is the thing you are storing in the database. Thinking relationally, this is similar to a record in a table. If the table holds details on Gods, as in this case, each record about a single God will be its own node. A node can be an instance of any entity (a person, a place, a thing, etc.) and the same graph database can hold instances of multiple types of these entities.&lt;/li&gt;
&lt;li&gt;Edges - an edge is a relationship between nodes. Again, thinking relationally, edges are similar to foreign keys between records. Relationships are not mandatory, and they can be many-to-many. The same nodes can be related to each other in multiple different ways.&lt;/li&gt;
&lt;li&gt;Properties - extra, non-mandatory attributes that can be added to either a node or an edge.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Amazon Neptune supports both RDF and property graphs. The simplest explanation I can offer for the difference between the two is that with RDF, everything is a node: every property you add to a node is another node related to the original node. For example, if the original node was a God, the name of the God would be another node. In a property graph, properties can be saved on the node itself. For the sake of this article, I am going to stick with a property graph.&lt;/p&gt;

&lt;h1&gt;
  
  
  Amazon Neptune
&lt;/h1&gt;

&lt;blockquote&gt;
&lt;p&gt;Amazon Neptune is a fully managed database service built for the cloud that makes it easier to build and run graph applications.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A lot has been written lately about Amazon Neptune that I won't try to replicate here. &lt;br&gt;
Fellow &lt;a href="https://dev.to/aws-builders"&gt;AWS Community Builders&lt;/a&gt; &lt;a class="mentioned-user" href="https://dev.to/abc_wendsss"&gt;@abc_wendsss&lt;/a&gt; and &lt;a class="mentioned-user" href="https://dev.to/ymwjbxxq"&gt;@ymwjbxxq&lt;/a&gt; have written great resources on how to get started with the service.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://dev.to/abc_wendsss/series/19655"&gt;Getting started with Amazon Neptune graph database Series' Articles&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://dev.to/aws-builders/amazon-neptune-4j65"&gt;Amazon Neptune&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Since the launch of &lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-neptune-serverless-a-fully-managed-graph-database-that-adjusts-capacity-for-your-workloads/" rel="noopener noreferrer"&gt;Neptune Serverless&lt;/a&gt;, it is now even easier to get started. While there is debate as to whether this is truly serverless (my take, it isn't), it does make it easier to get started with the service. See Jeremy Daly's article, &lt;a href="https://www.jeremydaly.com/not-so-serverless-neptune/" rel="noopener noreferrer"&gt;Not so serverless Neptune&lt;/a&gt; for more detail on the debate.&lt;br&gt;
I will be using Neptune Serverless for this exercise.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;p&gt;1) Go to &lt;a href="https://eu-west-1.console.aws.amazon.com/neptune/home?region=eu-west-1#databases:" rel="noopener noreferrer"&gt;https://eu-west-1.console.aws.amazon.com/neptune/home?region=eu-west-1#databases:&lt;/a&gt; and click Create database&lt;br&gt;
2) Choose Serverless as your Engine type&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed42sis01v108bb3o7b2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fed42sis01v108bb3o7b2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) Be careful with the Templates. The default seems to be Production but I picked the Development and Testing option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb7i02xrsdnqd73oawry.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgb7i02xrsdnqd73oawry.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) A Jupyter notebook is created by default to help you run your queries against the database. For efficiency's sake, I have chosen to use this, but you can turn it off if you're looking to save costs. You'll have to specify a name for the notebook and an IAM role that gives the notebook access.&lt;br&gt;
5) I have left all the others on the default settings. &lt;br&gt;
6) Click Create database&lt;br&gt;
7) This will take a few minutes to launch both the database and the notebook.&lt;/p&gt;

&lt;h1&gt;
  
  
  Loading data
&lt;/h1&gt;

&lt;p&gt;Data can be inserted directly into the database using openCypher statements, like this to create nodes:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%%oc
CREATE  (g1:Uranus { name:"Uranus", branch: ""});
CREATE  (g2:Gaia { name:"Gaia", branch: ""});
CREATE  (g3:Cronus { name:"Cronus", branch: "Titan"});


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;or this to create a relationship between nodes&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%%oc
MATCH (a),(b),(c) WHERE a.name = "Uranus" 
AND b.name = "Cronus" AND c.name = "Gaia" 
create (a)-[r:parentOf]-&amp;gt;(b),(c)-[r1:parentOf]-&amp;gt;(b) 
RETURN type(r);


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can then use another query to return the relationship&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

%%oc
MATCH p = (a {name: 'Cronus'})-[:parentOf*1..2]-(b)
RETURN *;


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F156li9jb3w8jklprkrjd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F156li9jb3w8jklprkrjd.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bulk Loader
&lt;/h2&gt;

&lt;p&gt;However, if you have anything more than a handful of records, using the &lt;a href="https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load.html" rel="noopener noreferrer"&gt;Neptune Bulk Loader&lt;/a&gt; will work out quicker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdctf1rybfhwv62kb8aub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdctf1rybfhwv62kb8aub.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To make this work, you need an IAM role and an S3 VPC endpoint. The AWS documentation does a good job of detailing the steps needed&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-tutorial-IAM.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data formats
&lt;/h2&gt;

&lt;p&gt;I created two files to be loaded via the bulk loader, one for the nodes and one for edges.&lt;/p&gt;

&lt;p&gt;Nodes.csv&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

:ID,name:String,branch:String,:LABEL
g1,"Uranus","",Uranus
g2,"Gaia","",Gaia
g3,"Cronus","",Cronus


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Edges.csv&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

:ID,:START_ID,:END_ID,:TYPE
e1,g1,g3,parentOf
e2,g2,g3,parentOf


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;You can add more properties as column headers and these will be loaded onto the node or edge in the database.&lt;/p&gt;
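&lt;p&gt;If you are generating these files from another data source, a short script can write them in the loader's expected format. The sketch below uses Python's csv module; the rows mirror the Nodes.csv and Edges.csv files above, and the helper names are just illustrative.&lt;/p&gt;

```python
import csv
import io

# Rows mirroring Nodes.csv and Edges.csv above.
nodes = [
    ("g1", "Uranus", "", "Uranus"),
    ("g2", "Gaia", "", "Gaia"),
    ("g3", "Cronus", "", "Cronus"),
]
edges = [
    ("e1", "g1", "g3", "parentOf"),
    ("e2", "g2", "g3", "parentOf"),
]

def write_nodes(rows):
    """Render rows in the openCypher bulk-load node format."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([":ID", "name:String", "branch:String", ":LABEL"])
    writer.writerows(rows)
    return buf.getvalue()

def write_edges(rows):
    """Render rows in the openCypher bulk-load edge format."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([":ID", ":START_ID", ":END_ID", ":TYPE"])
    writer.writerows(rows)
    return buf.getvalue()

print(write_nodes(nodes))
print(write_edges(edges))
```

&lt;p&gt;Extra property columns, typed like &lt;code&gt;branch:String&lt;/code&gt;, can be appended to the header row in the same way.&lt;/p&gt;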

&lt;p&gt;Once you have the required policies and endpoints in place, the bulk loader is far easier to operate. I spun up a small t2.micro instance and used EC2 Instance Connect to execute the curl command to run the loader.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

curl -X POST \
    -H 'Content-Type: application/json' \
    https://database-1.cluster-cryicaski1uo.eu-west-1.neptune.amazonaws.com:8182/loader -d '
    {
      "source" : "s3://neptune-greek-gods/initial/nodes.csv",
      "format" : "opencypher",
      "userProvidedEdgeIds": "TRUE",
      "iamRoleArn" : "arn:aws:iam::565877345391:role/GreekGodsUploadfromS3",
      "region" : "eu-west-1",
      "failOnError" : "FALSE",
      "parallelism" : "MEDIUM"
    }'


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
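&lt;p&gt;The same request can be driven from a script. The sketch below only assembles the loader request body; the bucket, role ARN and parameters are the ones from the curl example above and would need replacing with your own.&lt;/p&gt;

```python
import json

def build_load_request(source, iam_role_arn, region):
    """Assemble the Neptune bulk loader request body (openCypher format),
    mirroring the curl example above."""
    return {
        "source": source,
        "format": "opencypher",
        "userProvidedEdgeIds": "TRUE",
        "iamRoleArn": iam_role_arn,
        "region": region,
        "failOnError": "FALSE",
        "parallelism": "MEDIUM",
    }

payload = build_load_request(
    "s3://neptune-greek-gods/initial/nodes.csv",
    "arn:aws:iam::565877345391:role/GreekGodsUploadfromS3",
    "eu-west-1",
)

# The loader endpoint only resolves inside the cluster's VPC, so the POST
# itself (e.g. via urllib.request) has to run from an instance in that VPC.
print(json.dumps(payload, indent=2))
```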
&lt;h1&gt;
  
  
  Querying
&lt;/h1&gt;

&lt;p&gt;Amazon Neptune supports Gremlin, openCypher and SPARQL for querying data. I have had some exposure to Neo4j and openCypher in my work, so it feels intuitive to me. It's a declarative query language like SQL, and if you have experience with SQL, it is easy to pick up the basics. Here are a few examples that I found useful.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Get count of all nodes&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
%%oc
MATCH (n)
RETURN COUNT(*);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- Return all nodes or a limited number. You can leave out the last line if you want to see all nodes.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;%%oc &lt;br&gt;
MATCH (n)&lt;br&gt;
RETURN n&lt;br&gt;
LIMIT 10;&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- Delete all nodes (useful if you need to clear down database before loading)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;%%oc&lt;br&gt;
MATCH (n)&lt;br&gt;
DETACH DELETE n&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
- Traverse the nodes to show relationships. The following query shows all of Zeus's immediate children.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;%%oc&lt;br&gt;
MATCH p = (a {name: 'Zeus'})-[:parentOf*1..1]-&amp;gt;(b)&lt;br&gt;
RETURN *&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8a60xionupabdffq7h3p.png)

- The parameters `1..1` set the number of hops to traverse the graph. So by changing these you can show more relationships between nodes beyond those to the original node. The following query shows all of Zeus's immediate children and then their children.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;%%oc&lt;br&gt;
MATCH p = (a {name: 'Zeus'})-[:parentOf*1..2]-&amp;gt;(b)&lt;br&gt;
RETURN *&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cpnrpo9pia9fjn7n09u8.png)

- The parameters `-&amp;gt;(b)` indicates to only return relationships that go one way, in this case from Zeus down. You can remove the `&amp;gt;` to return the other parents of Zeus' children. For example, Hera now appears as the mother of several gods with this query.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;%%oc&lt;br&gt;
MATCH p = (a {name: 'Zeus'})-[:parentOf*1..2]-(b)&lt;br&gt;
RETURN *&lt;/p&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vrh3wtl4lhyrqsbsdqjr.png)

#Closing thoughts
While working on this article, a few thoughts keep coming into head.

##Relationships
One of the great things about SQL and the relational data model is that you can use the data after it has been loaded to find relationships. In a graph database, you need to know those relationships beforehand. The power of a graph database is the ability to traverse the graph and find relationships between nodes through other nodes. It is far easier to do this in a graph database than in a normal relational database.

##Multi-tenant
I've worked on a number of data platform that can support multi-tenant patterns off a single database instance. This can be done by a number of ways, separate schemas, adjusting the primary keys on tables. However, I'm struggling to see how to do this on a single graph database instance.

##Graphical analysis
I always thought that the graphical analysis capabilities are an incredible selling point of Graph databases but after looking at 500 Greek god dots and how they relate to each other, now I'm not so sure. I guess I thought the answers would just jump out without asking. However, you still need to know your data, the questions to ask and how to interpret the results.

##Serverless
Why did AWS choose RDS to build a graph database? They have an excellent product in DynamoDB that I would think would be a better fit for graph data. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>graphdb</category>
      <category>neptune</category>
    </item>
    <item>
      <title>Passing the AWS Certified Solutions Architect - Professional exam</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Mon, 01 Aug 2022 15:40:05 +0000</pubDate>
      <link>https://dev.to/aws-builders/passing-the-aws-certified-solutions-architect-professional-exam-1d3j</link>
      <guid>https://dev.to/aws-builders/passing-the-aws-certified-solutions-architect-professional-exam-1d3j</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I recently passed the AWS Certified Solutions Architect - Professional exam. It is a tough one and I'd like to share my thoughts on what I did to pass and some notes I kept along the way. Before beginning, it's worth considering what this exam is about.&lt;br&gt;
It is an architect exam so you are not always asked the lower level implementation or configuration details of a service. You need to know more about how services integrate natively. You'll almost never be asked about a single service in a question. Nearly all questions will be about how two or more services can work together to solve a customer problem. However, two areas I did find I needed to know in-depth were for enabling cross-account access and for hybrid networking. As always with architecture, it depends.&lt;/p&gt;

&lt;h1&gt;
  
  
  Before you start
&lt;/h1&gt;

&lt;p&gt;The professional-level certs are the pinnacle of the AWS certs, the top of 3 levels: foundational, associate and professional.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s946x9e8xyiwrmdpzpq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8s946x9e8xyiwrmdpzpq.png" alt="AWS Certifications" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It used to be recommended that you have the related associate certs before attempting the professional cert, but that requirement seems to be gone. Personally I would still recommend sitting the associate exams before attempting the professional. They will give you a good idea of your level of knowledge and experience of the AWS certification process.&lt;br&gt;
Plus, passing one exam gives you a 50% discount on your next exam, so passing an associate exam ($150 fee) will entitle you to a 50% discount on the $300 fee for a professional exam. Therefore you'll have two exams for the price of one.&lt;br&gt;
All certifications need to be re-certified every 3 years by re-sitting the exam. Achieving the professional cert after the associate will automatically renew your associate for another 3 years. This is what spurred me to attempt the SA professional cert. My SA associate cert was due to be renewed and I knew that by passing the professional, both would be safe for another 3 years.&lt;/p&gt;

&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;p&gt;AWS provides good resources to get you started on your certification journey. You should start &lt;a href="https://aws.amazon.com/certification/certified-solutions-architect-professional/"&gt;here&lt;/a&gt; where you will find the &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Exam-Guide.pdf"&gt;official study guide&lt;/a&gt;, sample questions, links to white-papers and a link to their free &lt;a href="https://explore.skillbuilder.aws/learn/course/34/play/493/exam-readiness-aws-certified-solutions-architect-professional"&gt;Exam Readiness Course&lt;/a&gt;. &lt;/p&gt;

&lt;h1&gt;
  
  
  Exam Readiness course
&lt;/h1&gt;

&lt;p&gt;This course provides great detail on how and what to study, broken down by domain. The exam questions are split across the 5 domains below.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;% of Exam&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1.0  Design for Organizational Complexity&lt;/td&gt;
&lt;td&gt;12.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2.0  Design for New Solutions&lt;/td&gt;
&lt;td&gt;31%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3.0  Migration Planning&lt;/td&gt;
&lt;td&gt;15%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4.0  Cost Control&lt;/td&gt;
&lt;td&gt;12.5%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5.0  Continuous Improvement for Existing Solutions&lt;/td&gt;
&lt;td&gt;29%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I recommend starting with the AWS resources first as they are the ones setting the exam, and the resources above give their perspective on it. There are sample questions on the certification page and through the exam readiness course. How you rate yourself against these questions can give you a good indication of your readiness for the exam.&lt;br&gt;
At this stage, you'll probably need another resource to help you prepare for the exam. &lt;a href="https://learn.acloud.guru/course/aws-certified-solutions-architect-professional/dashboard"&gt;A Cloud Guru&lt;/a&gt; and &lt;a href="https://cloudacademy.com/learning-paths/solutions-architect-professional-certification-preparation-for-aws-2019-377/"&gt;Cloud Academy&lt;/a&gt; provide good courses to help you prepare. I went for Stephane Maarek's course on &lt;a href="https://tenable.udemy.com/course/aws-solutions-architect-professional/"&gt;Udemy&lt;/a&gt;, which I found up to date and engaging.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain Breakdown
&lt;/h1&gt;

&lt;p&gt;As stated, there are 5 domains that you are assessed on in the exam. The &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Exam-Guide.pdf"&gt;official study guide&lt;/a&gt; lists 64 tools and technologies that could appear on the exam. I found it difficult to get a mapping of tools and technologies to each domain, and the remainder of this post is my attempt to break them down by domain. Where possible, I will link to the official AWS service page or equivalent.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 1.0 - Design for Organizational Complexity
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Need to know
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/organizations/"&gt;AWS Organizations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudtrail/?org_product_ow_cloudtrail"&gt;AWS CloudTrail&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/backup/?org_product_ow_backup"&gt;AWS Backup&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ram/?org_product_ou_ram"&gt;AWS Resource Access Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/guardduty/?org_product_ow_guardduty"&gt;Amazon GuardDuty&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/?org_product_ow_costexplorer"&gt;AWS Cost Explorer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html"&gt;AWS CloudFormation StackSets&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/config/?org_product_ow_config"&gt;AWS Config&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/servicecatalog/?org_product_ow_servicecatalog"&gt;AWS Service Catalog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/compute-optimizer/?org_product_ow_computeoptimizer"&gt;AWS Compute Optimizer&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/license-manager/?org_product_ow_licensemanager"&gt;AWS License Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/iam/"&gt;AWS IAM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_simple_ad.html"&gt;Simple AD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_ad_connector.html"&gt;AD Connector&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/directoryservice/"&gt;AWS Managed Microsoft AD&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/storagegateway/"&gt;AWS Storage Gateway&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/storagegateway/file/?nc=sn&amp;amp;loc=2&amp;amp;dn=2"&gt;File Gateway&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/storagegateway/vtl/?nc=sn&amp;amp;loc=2&amp;amp;dn=3"&gt;Tape Gateway&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/storagegateway/volume/?nc=sn&amp;amp;loc=2&amp;amp;dn=4"&gt;Volume Gateway&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html?ref=wellarchitected"&gt;VPC Endpoints&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hybrid Networks
&lt;/h2&gt;

&lt;p&gt;You do need to know lower-level details about how to set up a hybrid network between on-premises and cloud networks. AWS calls this out specifically on the &lt;a href="https://aws.amazon.com/certification/certified-solutions-architect-professional/?ch=tile&amp;amp;tile=getstarted"&gt;exam homepage&lt;/a&gt; and they mean it.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Ability to design a hybrid architecture using key AWS technologies (e.g., VPN, AWS Direct Connect)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This &lt;a href="https://d1.awsstatic.com/whitepapers/aws-amazon-vpc-connectivity-options.pdf"&gt;whitepaper&lt;/a&gt; is helpful to understand the different options for VPC connectivity.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Organizations
&lt;/h2&gt;

&lt;p&gt;You need to understand AWS Organizations and how you can use all the associated services and features to provide a solution that a customer can use to manage multiple accounts. Bear in mind that AWS recommends a multi-account approach and you need to understand this. You will need to understand organizational units (OUs) and service control policies (SCPs). I found this &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/organizations-security.html"&gt;article&lt;/a&gt; helpful in understanding how AWS thinks about security in the context of an Organization, OUs and SCPs. One helpful tip to remember is that an SCP does not apply at the node of the Organization where it is attached; rather, it is applied to all the child accounts.&lt;br&gt;
In addition, you should know how existing AWS services like CloudTrail, Backup, Resource Access Manager, GuardDuty, Cost Explorer, CloudFormation StackSets, Config, Service Catalog, Compute Optimizer and License Manager work in conjunction with Organizations.&lt;br&gt;
One tip to remember is the differences between CloudFormation StackSets and Service Catalog. Both help to maintain a consistent infrastructure across all accounts, but Service Catalog has the ability to do this in a potentially more secure way with launch constraints. These enable a user to launch a stack in an account without having the enhanced permissions that would be needed to apply a CloudFormation stack.&lt;/p&gt;

&lt;h2&gt;
  
  
  IAM
&lt;/h2&gt;

&lt;p&gt;You must understand how IAM evaluates if a user has authorisation to perform an action. This &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic.html"&gt;article&lt;/a&gt; was very helpful for me, specifically this diagram.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmk1xthuzc1mgdgp3771.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjmk1xthuzc1mgdgp3771.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should understand the difference between Identity-based policies and Resource-based policies.&lt;/p&gt;

&lt;p&gt;'You can control access to resources using an identity-based policy or a resource-based policy. In an identity-based policy, you attach the policy to an identity and specify what resources that identity can access. In a resource-based policy, you attach a policy to the resource that you want to control. In the policy, you specify which principals can access that resource. '&lt;/p&gt;

&lt;p&gt;For example, S3 bucket policies let a user have access to a bucket outside of their IAM role.&lt;/p&gt;

&lt;p&gt;You should also understand how these policies work in conjunction with IAM permissions boundaries, AWS Organizations service control policies (SCPs) and session policies.&lt;/p&gt;
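&lt;p&gt;As an illustration, a bucket policy along these lines grants principals from another account read access to a bucket without them assuming a role in the bucket-owning account. This is a sketch; the account ID and bucket name are placeholders.&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```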

&lt;h3&gt;
  
  
  User federation (SAML 2.0 or OpenID Connect)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Federated users are users (or applications) who do not have AWS accounts. With roles, you can give federated users access to your AWS resources for a limited amount of time.
You must understand how to enable cross-account authentication and access strategies. This &lt;a href="https://dev.toCross-account%20authentication%20and%20access%20strategies"&gt;article&lt;/a&gt; should help.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F394ob7jiuwn8lul0pmo6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F394ob7jiuwn8lul0pmo6.png" alt="Image description" width="624" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When working with cross-account access, remember that an IAM user can only assume one role at a time. As soon as a user assumes a role in another account, they lose the access that their current role provides them.&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/access_controlling.html#access_controlling-resources"&gt;Resource-based access controls&lt;/a&gt; allow a user to have access outside their role but without assuming another role. &lt;/p&gt;

&lt;h2&gt;
  
  
  Directory Services
&lt;/h2&gt;

&lt;p&gt;Options for working with AD in AWS are also worth studying under this domain. You should understand the difference between each of these and their appropriate use cases.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Simple AD - provides low-scale, low-cost basic AD capability. It is the simplest way to get an AD experience on AWS but it does not connect with other AD domains.&lt;/li&gt;
&lt;li&gt;AD Connector - enables on-premises users to access AWS services via AD.&lt;/li&gt;
&lt;li&gt;Managed Microsoft AD - Enables use of managed AD in the AWS cloud.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Storage Gateway
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;File Gateway enables the storage and retrieval of objects in S3 and Glacier using file protocols such as NFS. Configure file shares that are mapped to selected S3 buckets using IAM roles.&lt;/li&gt;
&lt;li&gt;Tape Gateway provides backup application with an iSCSI VTL interface consisting of a virtual media changer, virtual tape drives, and virtual tapes. 

&lt;ul&gt;
&lt;li&gt;Virtual tape data is stored in Amazon S3 or can be archived to S3 Glacier.&lt;/li&gt;
&lt;li&gt;Monitor the status of data transfer and storage interfaces through the AWS Management Console.&lt;/li&gt;
&lt;li&gt;Additionally, use the API or SDK to programmatically manage an application's interaction with the gateway.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Volume Gateway 

&lt;ul&gt;
&lt;li&gt;Stored-Volume Gateway: data written to a stored volume gateway is saved on on-premises storage hardware and asynchronously backed up to S3 in the form of EBS snapshots. Storage volumes can be created up to 16 TB in size and are mounted as iSCSI devices from on-premises application servers.&lt;/li&gt;
&lt;li&gt;Cached-Volume Gateway: with the cached volume gateway, you can create storage volumes up to 32 TB in size and mount them as iSCSI devices from on-premises application servers.&lt;/li&gt;
&lt;li&gt;Data written to these volumes is stored in S3, with only a cache of recently written and recently read data stored locally on on-premises storage hardware.&lt;/li&gt;
&lt;li&gt;Point-in-time snapshots can be taken of volume data in S3 in the form of EBS snapshots. This provides space-efficient versioned copies of volumes for data protection and various data reuse needs.&lt;/li&gt;
&lt;li&gt;To prepare for upload to S3, a gateway stores incoming data in a staging area called an upload buffer.
Know what these gateway types are, the differences and the value of each.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  VPC Endpoints
&lt;/h2&gt;

&lt;p&gt;Virtual devices that enable instances using private IPs to connect to services without an internet gateway or virtual private gateway.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An interface VPC endpoint is an elastic network interface with a private IP address that serves as an entry point for traffic destined to services powered by AWS PrivateLink.&lt;/li&gt;
&lt;li&gt;A gateway endpoint is a gateway that is a target for a specified route in your route table. This type of endpoint is used for traffic destined to a supported AWS service, such as Amazon S3 or Amazon DynamoDB.
Access to VPC endpoints is managed via IAM Policies.&lt;/li&gt;
&lt;/ul&gt;
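&lt;p&gt;Those IAM policies attach to the endpoint itself. As a sketch, a gateway endpoint policy like the following restricts all traffic through the endpoint to a single bucket; the bucket name is a placeholder.&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSingleBucketOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```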

&lt;h1&gt;
  
  
  Domain 2.0 - Design for New Solutions
&lt;/h1&gt;

&lt;p&gt;This domain constitutes the largest part of the exam at 31%, so it is well worth digging into. This is the domain where the vast bulk of those 64 listed services become most relevant.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need to know
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ec2/autoscaling/"&gt;Amazon EC2 Auto Scaling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/elasticache/"&gt;Amazon ElastiCache&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/elasticache/redis/"&gt;Amazon ElastiCache for Redis&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/elasticache/memcached/"&gt;Amazon ElastiCache for Memcached&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/sns/"&gt;Amazon Simple Notification Service (SNS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/elasticloadbalancing/"&gt;Elastic Load Balancing (ELB)&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Internal load balancer&lt;/li&gt;
&lt;li&gt;how to register targets with an ELB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/sqs/"&gt;Amazon Simple Queue Service (SQS)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/kinesis/"&gt;Amazon Kinesis&lt;/a&gt; (many flavours)

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/kinesis/data-streams/?nc=sn&amp;amp;loc=2&amp;amp;dn=2"&gt;Kinesis Data Streams&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/kinesis/data-firehose/?nc=sn&amp;amp;loc=2&amp;amp;dn=3"&gt;Kinesis Data Firehose&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudwatch/"&gt;Amazon CloudWatch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudfront/"&gt;Amazon CloudFront&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/route53/"&gt;Amazon Route 53&lt;/a&gt; and routing options&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/s3/"&gt;Amazon S3&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/redshift/"&gt;Amazon Redshift&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/opensearch-service/"&gt;Amazon Elasticsearch Service (Amazon ES)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ebs/"&gt;Amazon EBS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/rds/"&gt;Amazon RDS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/dynamodb/"&gt;Amazon DynamoDB&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/dynamodb/dax/"&gt;Amazon DynamoDB Accelerator (DAX)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/iam/"&gt;AWS IAM&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html"&gt;AWS Secure Token Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cognito/"&gt;Amazon Cognito&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/directoryservice/"&gt;AWS Directory Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/config/"&gt;AWS Config&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudtrail/"&gt;AWS CloudTrail&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ecs/"&gt;Amazon ECS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/codedeploy/"&gt;AWS CodeDeploy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/opsworks/"&gt;AWS OpsWorks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;AWS Elastic Beanstalk&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understand how these services can interact with each other&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ELB and Auto Scaling&lt;/li&gt;
&lt;li&gt;CloudFront and ELB&lt;/li&gt;
&lt;li&gt;Route 53 and routing options&lt;/li&gt;
&lt;li&gt;SQS (assume standard over FIFO unless specified)

&lt;ul&gt;
&lt;li&gt;Asynchronous tasks&lt;/li&gt;
&lt;li&gt;Single direction only&lt;/li&gt;
&lt;li&gt;Unordered &lt;/li&gt;
&lt;li&gt;"At least once" delivery&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SNS

&lt;ul&gt;
&lt;li&gt;Fan out to SQS&lt;/li&gt;
&lt;li&gt;Asynchronous&lt;/li&gt;
&lt;li&gt;Batch Processing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Data Streams

&lt;ul&gt;
&lt;li&gt;Asynchronous tasks&lt;/li&gt;
&lt;li&gt;Single direction only&lt;/li&gt;
&lt;li&gt;ordered within a shard&lt;/li&gt;
&lt;li&gt;"at least once" semantics&lt;/li&gt;
&lt;li&gt;independent stream position
Know the difference between Kinesis Data Streams vs Kinesis Data Firehose.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Data streams has

&lt;ul&gt;
&lt;li&gt;Custom processing per incoming record&lt;/li&gt;
&lt;li&gt;Sub-1 second processing latency&lt;/li&gt;
&lt;li&gt;choice of stream processing frameworks&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Firehose has

&lt;ul&gt;
&lt;li&gt;Zero administration&lt;/li&gt;
&lt;li&gt;Processing latency of 60 seconds or higher&lt;/li&gt;
&lt;li&gt;Ability to use existing analytics tools based on S3, Redshift and Elasticsearch Service (Amazon ES).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Using SQS for responses&lt;/li&gt;
&lt;li&gt;Data scaling using S3

&lt;ul&gt;
&lt;li&gt;Put static content in S3&lt;/li&gt;
&lt;li&gt;Randomize key names&lt;/li&gt;
&lt;li&gt;Use appropriate storage classes&lt;/li&gt;
&lt;li&gt;Larger objects results in fewer Get and Put operations&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Data scaling using CloudFront

&lt;ul&gt;
&lt;li&gt;Reduce traffic costs&lt;/li&gt;
&lt;li&gt;Increase performance&lt;/li&gt;
&lt;li&gt;Origin can be from AWS or from an on-premises data center&lt;/li&gt;
&lt;li&gt;Access to S3 buckets can be restricted to Origin Access Identities&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Data scaling with EBS and instance stores

&lt;ul&gt;
&lt;li&gt;EBS volume size can be increased while attached to an EC2 instance&lt;/li&gt;
&lt;li&gt;EBS volume type and throughput can be changed while attached to an instance&lt;/li&gt;
&lt;li&gt;EC2 instances have a maximum EBS throughput rate&lt;/li&gt;
&lt;li&gt;Consider OS-based RAID sets&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Scaling and RDS

&lt;ul&gt;
&lt;li&gt;Two options: increase instance size or increase storage&lt;/li&gt;
&lt;li&gt;Read replicas improve read performance only&lt;/li&gt;
&lt;li&gt;asynchronous&lt;/li&gt;
&lt;li&gt;unique/different endpoints&lt;/li&gt;
&lt;li&gt;cross-region&lt;/li&gt;
&lt;li&gt;CloudWatch metric: ReplicaLag&lt;/li&gt;
&lt;li&gt;(watch out for differences between RDS flavours)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ElastiCache: Redis and Memcached

&lt;ul&gt;
&lt;li&gt;ElastiCache for Redis

&lt;ul&gt;
&lt;li&gt;Advanced data structures&lt;/li&gt;
&lt;li&gt;Persistent&lt;/li&gt;
&lt;li&gt;Automatic failover with Multi-AZ deployments&lt;/li&gt;
&lt;li&gt;Can scale using read replicas&lt;/li&gt;
&lt;li&gt;Can scale up, but not out. Once scaled up, cannot scale down. &lt;/li&gt;
&lt;li&gt;Supports backup and restore operations&lt;/li&gt;
&lt;li&gt;AOF (Append Only File) log can be enabled for recovery of nodes.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;ElastiCache for Memcached

&lt;ul&gt;
&lt;li&gt;Simple key-value storage&lt;/li&gt;
&lt;li&gt;Non-persistent, pure cache&lt;/li&gt;
&lt;li&gt;Can scale both up and out&lt;/li&gt;
&lt;li&gt;Scales out using multiple nodes&lt;/li&gt;
&lt;li&gt;Does not support backup and restore operations&lt;/li&gt;
&lt;li&gt;Supports multi-threaded operations&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Cache common requests

&lt;ul&gt;
&lt;li&gt;Applications read from and write to the cache&lt;/li&gt;
&lt;li&gt;Create appropriate cache timeouts&lt;/li&gt;
&lt;li&gt;Redis replication groups (read replicas)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;DynamoDB

&lt;ul&gt;
&lt;li&gt;NoSQL database great for unstructured data&lt;/li&gt;
&lt;li&gt;Don't need the same level of DBA oversight&lt;/li&gt;
&lt;li&gt;Caching for DynamoDB

&lt;ul&gt;
&lt;li&gt;ElastiCache&lt;/li&gt;
&lt;li&gt;DynamoDB Accelerator (DAX) Read-Through Cache&lt;/li&gt;
&lt;li&gt;DynamoDB Accelerator (DAX) Write-Through Cache&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Throughput

&lt;ul&gt;
&lt;li&gt;Using SQS with a queue draining application to throttle writes to DynamoDB if application can handle latency&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;DynamoDB and AWS Auto Scaling

&lt;ul&gt;
&lt;li&gt;Use a CloudWatch alarm to alert when it's time to scale&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;DynamoDB Global Tables

&lt;ul&gt;
&lt;li&gt;Fully managed replication&lt;/li&gt;
&lt;li&gt;Globally distributed&lt;/li&gt;
&lt;li&gt;Low latency reads/writes&lt;/li&gt;
&lt;li&gt;Multi-region redundancy&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
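&lt;p&gt;The caching bullets above (applications read from and write to the cache, with appropriate timeouts) describe the cache-aside pattern. A minimal in-memory sketch of it in Python — the &lt;code&gt;TTLCache&lt;/code&gt; class is an illustrative stand-in for ElastiCache, not an AWS API:&lt;/p&gt;

```python
import time

class TTLCache:
    """Minimal in-memory stand-in for a cache such as ElastiCache."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        value, expiry = entry
        if time.monotonic() > expiry:   # stale entry: evict and report a miss
            del self.store[key]
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

def get_user(cache, user_id, load_from_db):
    """Cache-aside: read from the cache first, fall back to the database."""
    user = cache.get(user_id)
    if user is None:
        user = load_from_db(user_id)    # cache miss: hit the database
        cache.set(user_id, user)        # populate for subsequent reads
    return user
```

&lt;p&gt;Repeated reads within the timeout are served from the cache; after the TTL expires the next read falls through to the database again, which is why choosing appropriate cache timeouts matters.&lt;/p&gt;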

&lt;h2&gt;
  
  
  Components of loosely coupled architectures
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;ELB

&lt;ul&gt;
&lt;li&gt;Two-way traffic&lt;/li&gt;
&lt;li&gt;Immediate request handling&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SQS

&lt;ul&gt;
&lt;li&gt;Clients poll SQS&lt;/li&gt;
&lt;li&gt;Persistent task storage&lt;/li&gt;
&lt;li&gt;Controlled completion mechanism&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SNS

&lt;ul&gt;
&lt;li&gt;SNS pushes to subscribers&lt;/li&gt;
&lt;li&gt;Bulk notification&lt;/li&gt;
&lt;li&gt;Mobile push capability&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Kinesis

&lt;ul&gt;
&lt;li&gt;Scalable event streaming&lt;/li&gt;
&lt;li&gt;Clients read and track stream position&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Auto Scaling

&lt;ul&gt;
&lt;li&gt;Scalable resources&lt;/li&gt;
&lt;li&gt;Manage cost&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
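&lt;p&gt;The SQS bullets above (clients poll, persistent task storage, controlled completion) amount to a receive/process/delete loop. A toy in-memory sketch of that contract — this is not the boto3 API; with real SQS the equivalents are &lt;code&gt;receive_message&lt;/code&gt; and &lt;code&gt;delete_message&lt;/code&gt;, plus a visibility timeout:&lt;/p&gt;

```python
class ToyQueue:
    """In-memory stand-in for SQS: messages persist until explicitly deleted."""
    def __init__(self):
        self.messages = {}
        self._next_id = 0

    def send(self, body):
        self._next_id += 1
        self.messages[self._next_id] = body
        return self._next_id

    def receive(self):
        # Hand back any pending message; real SQS would also hide it
        # from other consumers for the visibility timeout.
        for msg_id, body in self.messages.items():
            return msg_id, body
        return None

    def delete(self, msg_id):
        # Controlled completion: only an explicit delete removes the task.
        self.messages.pop(msg_id, None)

def drain(queue, handler):
    """Poll until empty, deleting each message only after it is processed."""
    while (msg := queue.receive()) is not None:
        msg_id, body = msg
        handler(body)
        queue.delete(msg_id)
```

&lt;p&gt;If the handler crashes before the delete, the message stays on the queue and is retried — that retry-by-default behaviour is what makes SQS-based designs loosely coupled.&lt;/p&gt;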

&lt;h2&gt;
  
  
  CloudFront
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What is a CloudFront behaviour? (A behaviour maps a request path pattern to an origin, with its own cache and request settings.)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Identity and access controls
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;IAM Users and Groups&lt;/li&gt;
&lt;li&gt;STS - temporary credentials for services and users interacting with the account&lt;/li&gt;
&lt;li&gt;Policies and Roles&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  AWS Service Roles
&lt;/h3&gt;

&lt;p&gt;AWS services interacting with the account&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;Amazon EC2&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Identity Providers
&lt;/h3&gt;

&lt;p&gt;SAML 2.0, single sign-on, OpenID Connect&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Cognito&lt;/li&gt;
&lt;li&gt;AWS Directory Service&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security and compliance controls
&lt;/h3&gt;

&lt;p&gt;Assuming a role&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-account&lt;/li&gt;
&lt;li&gt;Within the same account&lt;/li&gt;
&lt;/ul&gt;
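&lt;p&gt;Cross-account role assumption starts with a trust policy on the role being assumed. The document shape below is the standard IAM format; the account ID is a placeholder, and in practice you would trust a specific role or user ARN rather than the whole account:&lt;/p&gt;

```python
import json

def cross_account_trust_policy(trusted_account_id: str) -> str:
    """Trust policy letting principals in another account call sts:AssumeRole."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Trusting the account root delegates the decision of *who*
                # can assume the role to that account's own IAM policies.
                "Principal": {"AWS": f"arn:aws:iam::{trusted_account_id}:root"},
                "Action": "sts:AssumeRole",
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(cross_account_trust_policy("111122223333"))
```

&lt;p&gt;The caller in the trusted account then calls STS AssumeRole against this role's ARN and receives temporary credentials.&lt;/p&gt;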

&lt;h2&gt;
  
  
  Security logging
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Config&lt;/li&gt;
&lt;li&gt;AWS CloudTrail&lt;/li&gt;
&lt;li&gt;Segregated bucket&lt;/li&gt;
&lt;li&gt;Dedicated account&lt;/li&gt;
&lt;li&gt;Understand how to centralise logging into a single account or S3 bucket in a separate account
(Understand &lt;a href="https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html"&gt;CloudTrail log file integrity&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Amazon Cognito
&lt;/h3&gt;

&lt;p&gt;A fully managed solution providing access control and authentication for web/mobile apps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Supports MFA&lt;/li&gt;
&lt;li&gt;Data at-rest and in-transit encryption&lt;/li&gt;
&lt;li&gt;Log in via social identity providers&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Support for SAML&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;User Pools&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provides a user directory with a profile for every user, which you can access through an SDK.&lt;/li&gt;
&lt;li&gt;Supports user federation through a third-party identity provider.&lt;/li&gt;
&lt;li&gt;Signed-in users receive authentication tokens.&lt;/li&gt;
&lt;li&gt;Tokens can be exchanged for AWS access via Amazon Cognito identity pools.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Identity Pools&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Authenticates users with web identity providers, including Amazon Cognito user pools.&lt;/li&gt;
&lt;li&gt;Assigns temporary AWS credentials via AWS STS.&lt;/li&gt;
&lt;li&gt;Supports anonymous guest users.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understand the difference between User Pools and Identity Pools and how they can work together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr83pby60k5ezfwxotfyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr83pby60k5ezfwxotfyg.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Deployment strategies for business requirements
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwy1qkanx4zrtieue3ibp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwy1qkanx4zrtieue3ibp.png" alt="Image description" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runtime/container

&lt;ul&gt;
&lt;li&gt;Amazon ECS deploys Docker containers and provides container management and scheduling.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Application deployment

&lt;ul&gt;
&lt;li&gt;AWS CodeDeploy handles deployment of application artifacts to target systems. It can deploy to both Amazon EC2 instances and external systems. CodeDeploy can store multiple application versions and has powerful, customizable logic to control deployment behavior.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Code/deployment management

&lt;ul&gt;
&lt;li&gt;AWS CodeCommit is a managed Git code repository; the service can store multiple versions of code and deployment artifacts. CodeCommit doesn't compile or deploy code; it relies on other services or systems to do this.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Infrastructure deployment

&lt;ul&gt;
&lt;li&gt;AWS CloudFormation deploys environments based on a template. AWS CloudFormation doesn't have the ongoing configuration management capabilities of OpsWorks, but it supports most AWS services for deployment.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You'll also need to know when OpsWorks or Elastic Beanstalk is the better option for deploying your infrastructure or application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Development, testing, and staging environments
&lt;/h3&gt;

&lt;p&gt;Know how to set up different environments. You will have different requirements for availability, performance and cost depending on the environment type. RDS can be an interesting area for a question here, as AWS provides different RDS templates for environment types such as Production, Dev/Test, and Free tier.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Availability

&lt;ul&gt;
&lt;li&gt;Typically lower requirements&lt;/li&gt;
&lt;li&gt;May still need HA&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Performance

&lt;ul&gt;
&lt;li&gt;Smoke testing&lt;/li&gt;
&lt;li&gt;Load testing&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Similarity

&lt;ul&gt;
&lt;li&gt;Deployment process&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Cost&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Domain 3.0 - Migration Planning
&lt;/h1&gt;

&lt;p&gt;Existing workloads and processes for potential migration to the cloud&lt;/p&gt;

&lt;h2&gt;
  
  
  Need to know
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/application-discovery/"&gt;AWS Application Discovery Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/application-migration-service/"&gt;AWS Application Migration Service&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/disaster-recovery/"&gt;AWS Elastic Disaster Recovery&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/snow/"&gt;AWS Snow&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/snowcone/?nc=sn&amp;amp;loc=3"&gt;Snowcone&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/snowball/?nc=sn&amp;amp;loc=4"&gt;Snowball&lt;/a&gt;
 – Compute Optimized or Storage Optimized&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/snowmobile/?nc=sn&amp;amp;loc=5"&gt;Snowmobile&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/dms/"&gt;AWS Database Migration Service&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/dms/schema-conversion-tool/"&gt;Schema Conversion Tool&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6 Rs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Retain&lt;/strong&gt; - Leave it alone and revisit it in the future.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Re-host&lt;/strong&gt; - Lift and shift&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Refactor&lt;/strong&gt; - Architect applications to be cloud native.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Re-platform&lt;/strong&gt; - Lift, modify and shift&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Replace&lt;/strong&gt; - Buy/purchase solutions that already exist in the cloud&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retire&lt;/strong&gt; - Evaluate if an application/system provides value.
See this &lt;a href="https://aws.amazon.com/blogs/enterprise-strategy/6-strategies-for-migrating-applications-to-the-cloud/"&gt;blog post&lt;/a&gt; for more detail.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Migration tools or services for new and migrated solutions based on detailed AWS knowledge
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Strategies for migrating existing on-premises workloads to the cloud
&lt;/h2&gt;

&lt;h2&gt;
  
  
  New cloud architectures for existing solutions
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Application Migration process
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Plan, build and run
&lt;/h3&gt;

&lt;h2&gt;
  
  
  Tools for migration assistance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Application Discovery Service&lt;/li&gt;
&lt;li&gt;AWS Database Migration Service&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The AWS storage portfolio
&lt;/h2&gt;

&lt;p&gt;There is no single storage solution that solves every problem.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0f5hryelmbhcg45q0ev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg0f5hryelmbhcg45q0ev.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data migration
&lt;/h2&gt;

&lt;p&gt;Consider downtime and orchestration.&lt;br&gt;
Methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Image backup/restore&lt;/li&gt;
&lt;li&gt;File copy&lt;/li&gt;
&lt;li&gt;Replication&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Hybrid networks
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g8hdqek4s4mgpg76man.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g8hdqek4s4mgpg76man.png" alt="Image description" width="800" height="438"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost
&lt;/h2&gt;

&lt;p&gt;Cloud can be a variable cost as opposed to on-premises, which is generally a fixed cost. Cloud is pay as you go, whereas on-premises infrastructure generally takes the form of a large upfront investment.&lt;/p&gt;

&lt;p&gt;Be mindful of how you can use reserved, on-demand and spot instances to deliver the most cost-efficient solution.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 4.0 - Cost Control
&lt;/h1&gt;

&lt;p&gt;"Paying for what you think you need to paying for what you actually need."&lt;br&gt;
Be careful of over-provisioning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need to know
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/aws-cost-management/aws-cost-explorer/"&gt;AWS Cost Explorer&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Types of Tags
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Resource Tags&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide the ability to organise and search within and across resources&lt;/li&gt;
&lt;li&gt;Filterable and searchable&lt;/li&gt;
&lt;li&gt;Do not appear in detailed billing report&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Cost Allocation Tags&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Map AWS charges to organizational attributes for accounting purposes&lt;/li&gt;
&lt;li&gt;Information presented in the detailed billing report and Cost Explorer (must be explicitly selected)&lt;/li&gt;
&lt;li&gt;Only available on certain services or limited to components within a service (for example, S3 bucket but not objects)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Best practices of cost management
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Only allow specific groups or teams to deploy chosen AWS resources.&lt;/li&gt;
&lt;li&gt;Create policies for each environment.&lt;/li&gt;
&lt;li&gt;Require tags in order to instantiate resources.&lt;/li&gt;
&lt;li&gt;Monitor and send alerts or shut down instances that are improperly tagged.&lt;/li&gt;
&lt;li&gt;Use CloudWatch to send alerts when billing thresholds are met.&lt;/li&gt;
&lt;li&gt;Analyze spend using AWS or partner tools.&lt;/li&gt;
&lt;/ul&gt;
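&lt;p&gt;"Require tags in order to instantiate resources" is typically enforced with IAM &lt;code&gt;aws:RequestTag&lt;/code&gt; conditions or checked in a provisioning script. A minimal sketch of the script-side check — the required tag keys here are illustrative, not an AWS standard:&lt;/p&gt;

```python
# Illustrative set of mandatory tag keys for this organisation.
REQUIRED_TAGS = {"Environment", "Team", "CostCenter"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys absent from a resource's tags."""
    return REQUIRED_TAGS - resource_tags.keys()

def may_launch(resource_tags: dict) -> bool:
    """Refuse to instantiate a resource that is not fully tagged."""
    return not missing_tags(resource_tags)
```

&lt;p&gt;The same check run on a schedule against existing resources supports the "alert or shut down improperly tagged instances" practice above.&lt;/p&gt;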

&lt;h1&gt;
  
  
  Domain 5.0 - Continuous Improvement for Existing Solutions
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Need to know
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html"&gt;Amazon S3 Server Access Logs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html"&gt;Amazon ELB Access Logs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudtrail/"&gt;AWS CloudTrail&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html"&gt;Amazon VPC Flow Logs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html"&gt;Amazon CloudWatch Logs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/config/"&gt;AWS Config&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/premiumsupport/technology/trusted-advisor/"&gt;AWS Trusted Advisor&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudformation/"&gt;AWS CloudFormation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/systems-manager/"&gt;AWS Systems Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/cloudwatch/"&gt;AWS CloudWatch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/lambda/"&gt;AWS Lambda&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/opensearch-service/"&gt;Amazon Elasticsearch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/codecommit/"&gt;AWS CodeCommit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/certificate-manager/"&gt;AWS Certificate Manager&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/codedeploy/"&gt;AWS Code Deploy&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/elasticbeanstalk/"&gt;AWS Elastic Beanstalk&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/opsworks/"&gt;AWS OpsWorks&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/ecs/"&gt;Amazon ECS&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Troubleshooting solution architectures
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Amazon S3 Server Access Logs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Contains details about data requests, such as the request type, the resources requested, and the date and time the request was made.&lt;/li&gt;
&lt;li&gt;When to use: Troubleshoot bucket access issues and data requests.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Amazon ELB Access Logs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Capture detailed information about each request sent to your load balancer, including the client's IP address, latencies, and server responses.&lt;/li&gt;
&lt;li&gt;When to use: Analyze traffic patterns and troubleshoot network issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Amazon CloudTrail&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Provides a history of API calls to your account made via the AWS Management Console, AWS CLI, AWS SDKs, or other AWS services.&lt;/li&gt;
&lt;li&gt;When to use: Audit and determine who did what, when, and from where.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Amazon VPC Flow Logs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Capture information about the IP traffic going into or out of your network interfaces and subnets.&lt;/li&gt;
&lt;li&gt;When to use: Verify network access rules are properly configured and troubleshoot connectivity and security issues.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Amazon CloudWatch Logs&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Monitor, store, and access log data from Amazon EC2 instances, applications, and on-premises servers.&lt;/li&gt;
&lt;li&gt;When to use: Monitor and troubleshoot OS and applications running in your AWS environment.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;AWS Config&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Description: Provides an inventory of your AWS resources and records changes to the configuration of those resources.&lt;/li&gt;
&lt;li&gt;When to use: Troubleshoot outages and conduct security attack analyses.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
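&lt;p&gt;Flow log records in the default (version 2) format are space-separated fields in a documented order, and splitting one apart is a common troubleshooting step. A sketch of the parse — the sample record values are illustrative:&lt;/p&gt;

```python
# Default (version 2) VPC Flow Log fields, in documented order.
FLOW_LOG_FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(record: str) -> dict:
    """Split a default-format flow log record into named fields."""
    return dict(zip(FLOW_LOG_FIELDS, record.split()))

# Illustrative record: SSH traffic (dstport 22, protocol 6 = TCP) accepted.
sample = ("2 123456789010 eni-1235b8ca 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")
rec = parse_flow_log(sample)
```

&lt;p&gt;Filtering parsed records on &lt;code&gt;action == "REJECT"&lt;/code&gt; is a quick way to verify whether a security group or network ACL is dropping the traffic you expect to flow.&lt;/p&gt;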

&lt;h2&gt;
  
  
  Determining a strategy to improve an existing solution for operational excellence
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Well-Architected Framework&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understand business and customer needs&lt;/li&gt;
&lt;li&gt;Make frequent, small, and reversible changes&lt;/li&gt;
&lt;li&gt;Create and use procedures to respond to operational events&lt;/li&gt;
&lt;li&gt;Continuously improve supporting processes and procedures&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;AWS Trusted Advisor&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define Operational Priorities&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;AWS CloudFormation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Design for Operations&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;AWS Systems Manager&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operational Readiness&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;AWS CloudWatch&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operational Health&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;AWS Lambda&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event Response&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;Amazon Elasticsearch&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use analytics to learn from experience&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;

&lt;p&gt;AWS CodeCommit&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Share learnings with libraries, scripts, and documentation&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Determining a strategy to improve the reliability of an existing solution
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Architect for High Availability
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Appropriate level of availability

&lt;ul&gt;
&lt;li&gt;Availability levels are met&lt;/li&gt;
&lt;li&gt;Minimize cost and complexity&lt;/li&gt;
&lt;li&gt;Auto Scaling groups&lt;/li&gt;
&lt;li&gt;Instance auto recovery&lt;/li&gt;
&lt;li&gt;Route 53 resource record sets&lt;/li&gt;
&lt;li&gt;Amazon RDS Multi-AZ&lt;/li&gt;
&lt;li&gt;Amazon EBS snapshots&lt;/li&gt;
&lt;li&gt;Amazon EFS&lt;/li&gt;
&lt;li&gt;Replicated ElastiCache Redis&lt;/li&gt;
&lt;li&gt;Automate recovery steps&lt;/li&gt;
&lt;li&gt;Understand impact of a loss at peak load&lt;/li&gt;
&lt;li&gt;Beware of capacity constraints&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Best practices

&lt;ul&gt;
&lt;li&gt;Use Multi-AZ services (S3, DDB, SQS)&lt;/li&gt;
&lt;li&gt;Similar to multiple component failure, but plan for capacity constraints&lt;/li&gt;
&lt;li&gt;Use Reserved Instances for critical systems&lt;/li&gt;
&lt;li&gt;Identify all Availability Zone-specific services, noting which are regional/global&lt;/li&gt;
&lt;li&gt;Amazon EBS snapshots help minimize the Recovery Point Objective&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Determine a strategy to improve the Performance of an existing solution
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Shorten response times&lt;/li&gt;
&lt;li&gt;Increase throughput&lt;/li&gt;
&lt;li&gt;Lower the utilization of resources (efficiency)&lt;/li&gt;
&lt;li&gt;Scalability for workloads that burst&lt;/li&gt;
&lt;li&gt;Amazon S3 performance

&lt;ul&gt;
&lt;li&gt;Move static content to S3 buckets&lt;/li&gt;
&lt;li&gt;Use IA for infrequently accessed data&lt;/li&gt;
&lt;li&gt;Larger objects reduce PUT/GET requests&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Amazon EBS performance considerations

&lt;ul&gt;
&lt;li&gt;GP2 for system disks and SC1 for cold storage&lt;/li&gt;
&lt;li&gt;PIOPS (Provisioned IOPS) for high-performance random I/O&lt;/li&gt;
&lt;li&gt;ST1 for high-performance sequential I/O&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Amazon RDS performance considerations

&lt;ul&gt;
&lt;li&gt;Scale up instance size&lt;/li&gt;
&lt;li&gt;Increase storage size online&lt;/li&gt;
&lt;li&gt;Read replicas are

&lt;ul&gt;
&lt;li&gt;asynchronous&lt;/li&gt;
&lt;li&gt;served from their own endpoints, so the application must direct read queries to them&lt;/li&gt;
&lt;li&gt;cross-region (for some RDS engines)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Amazon ElastiCache performance considerations

&lt;ul&gt;
&lt;li&gt;App reads and writes from cache&lt;/li&gt;
&lt;li&gt;Cache timeouts/TTL&lt;/li&gt;
&lt;li&gt;Redis replication groups for availability&lt;/li&gt;
&lt;li&gt;Write-through for write spikes&lt;/li&gt;
&lt;li&gt;Memcached is single AZ and does not support encryption at rest&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Amazon DynamoDB performance considerations

&lt;ul&gt;
&lt;li&gt;Alter read/write capacity units&lt;/li&gt;
&lt;li&gt;Global or local secondary indexes&lt;/li&gt;
&lt;li&gt;Use SQS to 

&lt;ul&gt;
&lt;li&gt;handle write spikes &lt;/li&gt;
&lt;li&gt;write data during quiet periods (you must understand the data's latency tolerance)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
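&lt;p&gt;The "use SQS to handle write spikes" idea above amounts to buffering writes and draining them at the table's provisioned rate. A toy simulation of the drain — the capacity figure is illustrative, and real code would poll SQS and write to DynamoDB (for example via &lt;code&gt;batch_write_item&lt;/code&gt;) instead of using an in-memory deque:&lt;/p&gt;

```python
from collections import deque

def drain_at_capacity(buffer: deque, writes_per_tick: int, apply_write):
    """Drain buffered writes at a fixed per-tick rate, simulating
    a table with limited provisioned write capacity."""
    applied_per_tick = []
    while buffer:
        batch_size = min(writes_per_tick, len(buffer))
        for _ in range(batch_size):
            apply_write(buffer.popleft())   # one throttled write
        applied_per_tick.append(batch_size)
    return applied_per_tick

# A spike of 10 writes drained at 4 writes per tick.
buffer = deque(range(10))
table = []
ticks = drain_at_capacity(buffer, writes_per_tick=4, apply_write=table.append)
```

&lt;p&gt;The spike is absorbed by the queue and smoothed over three ticks — the trade-off, as the notes say, is that the application must tolerate the added write latency.&lt;/p&gt;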

&lt;h2&gt;
  
  
  Determine a strategy to improve the Security of an existing solution
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Restrict access to resources (least privilege)
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;User-based policies

&lt;ul&gt;
&lt;li&gt;What does a particular entity have access to?&lt;/li&gt;
&lt;li&gt;Attached to an IAM user&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Resource-based policies

&lt;ul&gt;
&lt;li&gt;Who has access to a particular resource?&lt;/li&gt;
&lt;li&gt;Grant access directly on the resource&lt;/li&gt;
&lt;li&gt;Not all services support resource-based policies&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Policy conditions for more control

&lt;ul&gt;
&lt;li&gt;Specify the conditions for when a policy is in effect&lt;/li&gt;
&lt;li&gt;Dates or IP addresses are examples of conditions that further restrict user access&lt;/li&gt;
&lt;li&gt;MFA can be enforced via policy conditions&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
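&lt;p&gt;The condition bullets above translate directly into a policy statement's &lt;code&gt;Condition&lt;/code&gt; block. A sketch of a statement requiring both MFA and a source IP range — the bucket name and CIDR are placeholders, while the condition keys (&lt;code&gt;aws:MultiFactorAuthPresent&lt;/code&gt;, &lt;code&gt;aws:SourceIp&lt;/code&gt;) are real IAM global condition keys:&lt;/p&gt;

```python
import json

statement = {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::example-bucket/*",   # placeholder bucket
    "Condition": {
        # Only allow when the caller authenticated with MFA...
        "Bool": {"aws:MultiFactorAuthPresent": "true"},
        # ...and the request originates from this CIDR (placeholder range).
        "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
    },
}
policy = json.dumps({"Version": "2012-10-17", "Statement": [statement]})
```

&lt;p&gt;All conditions in a statement must match for it to apply, so this statement grants nothing to a non-MFA session even from the allowed network.&lt;/p&gt;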

&lt;h3&gt;
  
  
  Data encryption
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7x5s8d91nxcts6qahjnm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7x5s8d91nxcts6qahjnm.png" alt="Image description" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;
TDE (Transparent Data Encryption) is only available for some RDS engines, such as Oracle and SQL Server.&lt;/p&gt;

&lt;h3&gt;
  
  
  Protect data in-transit
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;SSL termination at the load balancer

&lt;ul&gt;
&lt;li&gt;Certificates are stored in IAM&lt;/li&gt;
&lt;li&gt;Single certificate per load balancer&lt;/li&gt;
&lt;li&gt;Offload decryption work to the load balancer&lt;/li&gt;
&lt;li&gt;Re-encryption between load balancer and instances&lt;/li&gt;
&lt;li&gt;Supported by the Application Load Balancer and Classic Load Balancer&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SSL termination in CloudFront

&lt;ul&gt;
&lt;li&gt;SNI or non-SNI certificates (Server Name Indication (SNI) allows the server to safely host multiple TLS Certificates for multiple sites, all under a single IP address.)&lt;/li&gt;
&lt;li&gt;SSL connections to load balancer&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;AWS Certificate Manager

&lt;ul&gt;
&lt;li&gt;Manages and deploys public/private certificates&lt;/li&gt;
&lt;li&gt;Establish website identity&lt;/li&gt;
&lt;li&gt;Verify identity of resources within a company&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Improve network traffic security
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Network perimeter controls

&lt;ul&gt;
&lt;li&gt;Security groups

&lt;ul&gt;
&lt;li&gt;Per-ENI granularity. Can have multiple ENIs attached to an instance with a separate SG for each.&lt;/li&gt;
&lt;li&gt;Stateful = simpler to apply rules&lt;/li&gt;
&lt;li&gt;Inter-service communication&lt;/li&gt;
&lt;li&gt;Deny is not a part of security groups&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Network ACLs

&lt;ul&gt;
&lt;li&gt;Subnet boundaries only&lt;/li&gt;
&lt;li&gt;ALLOW and DENY rules&lt;/li&gt;
&lt;li&gt;IP ranges only&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Host firewalls

&lt;ul&gt;
&lt;li&gt;Central or distributed control&lt;/li&gt;
&lt;li&gt;Intrusion detection systems (IDS) and intrusion prevention systems (IPS) &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Determine a strategy to improve the Deployment of an existing solution
&lt;/h2&gt;

&lt;p&gt;Understand the differences between these services, where they are used, and why you would choose one over another to deploy a solution.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CloudFormation&lt;/li&gt;
&lt;li&gt;AWS CodeDeploy&lt;/li&gt;
&lt;li&gt;AWS Elastic Beanstalk&lt;/li&gt;
&lt;li&gt;AWS OpsWorks&lt;/li&gt;
&lt;li&gt;Amazon ECS&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Additional Resources
&lt;/h1&gt;

&lt;p&gt;&lt;del&gt;&lt;a href="https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf?refid=em_"&gt;Whitepaper: AWS Security Best Practices&lt;/a&gt;&lt;/del&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html"&gt;AWS Well-Architected Framework&lt;/a&gt;&lt;br&gt;
&lt;a href="https://d1.awsstatic.com/whitepapers/DevOps/practicing-continuous-integration-continuous-delivery-on-AWS.pdf"&gt;Whitepaper: Practicing Continuous Integration and Continuous Delivery on AWS: Accelerating Software Delivery with DevOps&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/microservices-on-aws.pdf"&gt;Whitepaper: Microservices on AWS&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/architecture/security-identity-compliance/?cards-all.sort-by=item.additionalFields.sortDate&amp;amp;cards-all.sort-order=desc&amp;amp;awsf.content-type=*all&amp;amp;awsf.methodology=*all"&gt;Best Practices for Security, Identity, &amp;amp; Compliance&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/"&gt;AWS Documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/architecture/"&gt;AWS Architecture Center&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>architecture</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How I passed the AWS Certified SysOps Administrator - Associate exam</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Sun, 10 Apr 2022 22:15:44 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-passed-the-aws-certified-sysops-exam-3if</link>
      <guid>https://dev.to/aws-builders/how-i-passed-the-aws-certified-sysops-exam-3if</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;I recently passed the AWS Certified SysOps Administrator - Associate exam and I've put together this post to outline how I prepared for the exam and notes I took along the way. If you want to learn more about the certification, check out this &lt;a href="https://aws.amazon.com/certification/certified-sysops-admin-associate" rel="noopener noreferrer"&gt;link&lt;/a&gt; from AWS.&lt;/p&gt;

&lt;h1&gt;
  
  
  About me
&lt;/h1&gt;

&lt;p&gt;Everyone who starts their preparation for this exam will have arrived there by their own unique journey. For myself, this is my 6th AWS certification, having already achieved the other two associate certifications, the Cloud Practitioner, and two data specialities. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3j6jv27nishsnl1vmec.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx3j6jv27nishsnl1vmec.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Of the exams I have studied for so far, the SysOps exam overlapped most with the Solutions Architect - Associate exam, mainly in the area of networking. Study for both certs covers all parts of cloud networking extensively. Study for the other certifications had also prepared me for questions on services such as S3 and DynamoDB. However, the SysOps exam covers a range of services that I had never encountered before. It is a tough exam to prepare for, requiring you to study a lot of different services; there are 65 services listed in the study guide.&lt;/p&gt;

&lt;h1&gt;
  
  
  Where to begin
&lt;/h1&gt;

&lt;p&gt;Every AWS certification has a page on the AWS certification website and I always find it the best place to start.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/certification/certified-sysops-admin-associate" rel="noopener noreferrer"&gt;https://aws.amazon.com/certification/certified-sysops-admin-associate&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You will find details here about the exam, along with a study guide and sample questions. You'll also find links to the FAQs for the services and whitepapers to read. You might be tempted to ignore these, but they are worth reading.&lt;/p&gt;

&lt;p&gt;From there, you can attend the free &lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/1328/exam-readiness-aws-certified-sysops-administrator-associate-digital?sysops=sec&amp;amp;sec=prep" rel="noopener noreferrer"&gt;Exam Readiness course&lt;/a&gt; provided by AWS on their skillbuilder site.&lt;/p&gt;

&lt;p&gt;This course is useful in that it gives you an outline of where you should focus. The exam breaks down into the following 6 domains, and the Exam Readiness course goes through the services that you need to study in each domain.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain&lt;/th&gt;
&lt;th&gt;% of Examination&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Domain 1: Monitoring, Logging, and Remediation&lt;/td&gt;
&lt;td&gt;20%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain 2: Reliability and Business Continuity&lt;/td&gt;
&lt;td&gt;16%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain 3: Deployment, Provisioning, and Automation&lt;/td&gt;
&lt;td&gt;18%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain 4: Security and Compliance&lt;/td&gt;
&lt;td&gt;16%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain 5: Networking and Content Delivery&lt;/td&gt;
&lt;td&gt;18%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain 6: Cost and Performance Optimization&lt;/td&gt;
&lt;td&gt;18%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The instructors also cover more sample questions and walk through the answers. One big plus is the sample labs they run you through as well. The SysOps exam is different from all other AWS certification exams in that it includes a practical component.&lt;/p&gt;

&lt;p&gt;Most of the remainder of this article is built around these domains and the services outlined in the Exam Readiness course.&lt;/p&gt;

&lt;p&gt;Unless you're using all 65 services every day in your role, you are going to need an additional course to help you pass. I used &lt;a href="https://www.udemy.com/course/ultimate-aws-certified-sysops-administrator-associate/" rel="noopener noreferrer"&gt;Stephane Maarek's Udemy&lt;/a&gt; course and found the content excellent. It doesn't have the sandbox or labs that the &lt;a href="https://cloudacademy.com/learning-paths/aws-sysops-administrator-associate-soa-c02-certification-preparation-for-aws-2876/" rel="noopener noreferrer"&gt;CloudAcademy&lt;/a&gt; or &lt;a href="https://acloudguru.com/course/aws-certified-sysops-administrator-associate-8Lkj" rel="noopener noreferrer"&gt;A Cloud Guru&lt;/a&gt; courses provide, but I liked the content and Stephane's style. He does a lot of walkthroughs and provides very detailed slides that you can go back to. If you have access to any of these courses or similar, they all cover much the same content and are essential to help with preparing for the exam.&lt;br&gt;
The excellent Andrew Brown makes his &lt;a href="https://www.youtube.com/watch?v=KX_AfyrhlgQ&amp;amp;t=3973s" rel="noopener noreferrer"&gt;course&lt;/a&gt; available on the freeCodeCamp channel on YouTube. This course is just as good as the others mentioned and can be helpful if you're operating on a budget. If you can afford it though, please remember to donate. freeCodeCamp is such a wonderful service and they appreciate all the help they can get.&lt;/p&gt;

&lt;p&gt;Before we dive in, it's worth considering what this exam is about. Obvious as it might seem, it is a SysOps exam, therefore services that are marketed as serverless will feature a lot less than services that need Ops support. You will not be asked about the inner workings of Lambda, but you will need to know EC2 very well. You will need to know a lot more about RDS than DynamoDB. I think it's worth keeping that frame of reference in your mind while studying for this exam. Personally, I have more experience with the managed and serverless services of AWS, so this was my first time having to get to grips with services like AWS Systems Manager and Config while having to dust off my notes on VPCs.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 1: Monitoring, Logging, and Remediation (20%)
&lt;/h1&gt;

&lt;p&gt;For domain 1, you need to be able to collect metrics and logs from your applications and infrastructure, and to create alarms and notifications based on what you collect. You should use all of these metrics, logs and alarms to monitor and troubleshoot your applications and infrastructure, and you also need to be able to fix any issues you find. Automation is a big factor: you should be thinking about how you can remediate issues automatically before they become bigger problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWatch
&lt;/h2&gt;

&lt;p&gt;The primary monitoring service in AWS is Amazon CloudWatch and you should take a look at the &lt;a href="https://aws.amazon.com/cloudwatch/faqs" rel="noopener noreferrer"&gt;FAQs&lt;/a&gt; to get started. Amazon CloudWatch consists of several individual services, but the main ones you need to know for the exam are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CloudWatch Metrics&lt;/li&gt;
&lt;li&gt;CloudWatch Agent&lt;/li&gt;
&lt;li&gt;CloudWatch Logs&lt;/li&gt;
&lt;li&gt;CloudWatch Alarms&lt;/li&gt;
&lt;li&gt;CloudWatch Dashboards&lt;/li&gt;
&lt;li&gt;Amazon EventBridge (aka CloudWatch Events)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  CloudWatch Metrics
&lt;/h3&gt;

&lt;p&gt;Most services push metrics to CloudWatch by default. EC2 metrics are published every 5 minutes by default, and you can enable detailed monitoring (at a cost) to bring this down to every minute. CloudWatch monitors CPU, network and a status check of an EC2 instance by default, but it does not monitor the memory of your instance. If you need custom metrics or more fine-grained detail from your EC2 instances, you need to install the CloudWatch agent on your instance. Namespaces are used for storing metrics. CloudWatch metrics cannot be deleted; they expire after 15 months.&lt;/p&gt;
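&lt;p&gt;As a sketch of what a custom metric looks like, the parameter names below follow the CloudWatch API (this is the shape you would pass to the put_metric_data call, for instance via boto3), but the namespace, instance ID and value are purely illustrative:&lt;/p&gt;

```python
# Sketch: parameters for pushing a custom memory metric to CloudWatch.
# Shown as plain data so the shape is clear; in practice the agent (or
# your own code) sends this via the put_metric_data API. The namespace
# and dimension values here are hypothetical.
params = {
    "Namespace": "MyApp/EC2",  # custom metrics live in your own namespace
    "MetricData": [
        {
            "MetricName": "MemoryUtilization",
            "Dimensions": [
                {"Name": "InstanceId", "Value": "i-0123456789abcdef0"}
            ],
            "Unit": "Percent",
            "Value": 72.5,
        }
    ],
}

def metric_names(p):
    """Return the metric names contained in a put_metric_data payload."""
    return [m["MetricName"] for m in p["MetricData"]]

print(metric_names(params))  # ['MemoryUtilization']
```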

&lt;h3&gt;
  
  
  CloudWatch Agent
&lt;/h3&gt;

&lt;p&gt;You should also understand the CloudWatch agent. The agent allows you to collect metrics and logs from Amazon EC2 instances and on-premises servers; if you need either of those, you'll need to install it. When using the CloudWatch agent, make sure you attach an IAM role to your instance with permissions to push to CloudWatch. Using the agent, you can capture metrics at a maximum frequency of 1 second. &lt;/p&gt;

&lt;h3&gt;
  
  
  CloudWatch Logs
&lt;/h3&gt;

&lt;p&gt;You can think of CloudWatch Logs as the data store for your application logs. You should understand how data gets in here, either by default or by being pushed in. This isn't just for AWS services: you can also push logs from your on-premises applications and infrastructure using the SDK.&lt;/p&gt;

&lt;p&gt;A log event is a single line item detailing the event. This is what is pushed to CloudWatch. A log stream is a sequence of log events that share the same source. Each separate source of logs in CloudWatch Logs makes up a separate log stream. And a log group is a group of log streams that share the same retention, monitoring, and access control settings. You can define log groups and specify which streams to put into each group. There is no limit on the number of log streams that can belong to one log group.&lt;/p&gt;

&lt;p&gt;You can use CloudWatch Logs Insights to query your logs with a custom AWS query language. Every log sent to CloudWatch comes with 3 system fields: @message, @logStream, and @timestamp. @message contains the raw unparsed log event, @logStream contains the name of the source that generated the log event, and @timestamp contains the time at which the log event was added to CloudWatch. Logs Insights can also generate visualizations such as bar charts, line charts, and stacked area charts from the output of your queries.&lt;/p&gt;
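&lt;p&gt;A minimal Logs Insights query using those three system fields might look like the string below (the ERROR filter and the limit are illustrative; the query string is what you would submit through the start_query API):&lt;/p&gt;

```python
# Sketch: a CloudWatch Logs Insights query built on the three system
# fields described above. The filter pattern is a made-up example.
INSIGHTS_QUERY = """
fields @timestamp, @logStream, @message
| filter @message like /ERROR/
| sort @timestamp desc
| limit 20
""".strip()

# All three system fields appear in the query.
system_fields = ["@message", "@logStream", "@timestamp"]
present = [f for f in system_fields if f in INSIGHTS_QUERY]
print(present)  # ['@message', '@logStream', '@timestamp']
```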

&lt;p&gt;You can monitor log events as they are sent to CloudWatch Logs by creating Metric Filters. Metric Filters turn log data into CloudWatch Metrics for graphing or alarming.&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudWatch Dashboards
&lt;/h3&gt;

&lt;p&gt;CloudWatch Dashboards are a good way to get a visual overview of your metrics. You can centralise metrics from multiple regions into a single dashboard.&lt;/p&gt;

&lt;h3&gt;
  
  
  CloudWatch Alarms
&lt;/h3&gt;

&lt;p&gt;You can create a CloudWatch Alarm to monitor any CloudWatch metric in your account, including custom metrics. When you create an alarm, you choose: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the metric you want it to monitor&lt;/li&gt;
&lt;li&gt;the evaluation period (e.g., five minutes or one hour) &lt;/li&gt;
&lt;li&gt;a statistical value to measure (e.g., Average or Maximum)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To set a threshold, set a target value and choose whether the alarm will trigger when the value is greater than (&amp;gt;), greater than or equal to (&amp;gt;=), less than (&amp;lt;), or less than or equal to (&amp;lt;=) that value.&lt;br&gt;
One thing to consider is that if you want your alarm to evaluate more frequently than the default collection frequency of the metric, you may need to enable detailed monitoring of the metric.&lt;br&gt;
An alarm can be in 3 states: OK, INSUFFICIENT_DATA and ALARM. You should understand how an alarm moves between these 3 states.&lt;br&gt;
An alarm can be used to trigger an auto-scaling action, be sent to an SNS topic, or trigger an EC2 action such as terminate, reboot or recover.&lt;br&gt;
A composite alarm combines multiple alarms (and therefore metrics) into alarm hierarchies. You can then choose to attach an action or notification at any level of the hierarchy.&lt;br&gt;
Alarm history is available for 14 days.&lt;/p&gt;
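&lt;p&gt;Putting those choices together, here is a sketch of the parameters an alarm definition takes (this is the shape of the put_metric_alarm API call; the alarm name, threshold and SNS topic ARN are hypothetical):&lt;/p&gt;

```python
# Sketch: the choices described above (metric, evaluation period,
# statistic, threshold and comparison operator) expressed as a
# put_metric_alarm parameter dict. Name and ARN are made up.
alarm = {
    "AlarmName": "high-cpu",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",      # statistical value to measure
    "Period": 300,               # evaluation period in seconds (5 minutes)
    "EvaluationPeriods": 2,      # breach must persist for 2 periods
    "Threshold": 80.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],
}

# Minimum time before this alarm can fire: period x evaluation periods.
seconds_to_alarm = alarm["Period"] * alarm["EvaluationPeriods"]
print(seconds_to_alarm)  # 600
```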

&lt;h3&gt;
  
  
  EventBridge/CloudWatch Events
&lt;/h3&gt;

&lt;p&gt;Amazon EventBridge used to be known as CloudWatch Events; EventBridge is CloudWatch Events repackaged and supercharged. In the exam, the two service names could be referenced interchangeably, but generally it's referred to as EventBridge. EventBridge is an event bus and is a great option for integrating different services and applications. Some services have direct integrations, like AWS Config and AWS Systems Manager, but EventBridge can be a good catch-all option for integrating services. In the context of the SysOps certification, all CloudWatch Alarm state changes are sent to EventBridge. From there you can create an EventBridge rule to trigger a Lambda function or Step Functions state machine to remediate the issue.&lt;/p&gt;
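&lt;p&gt;As a sketch, an EventBridge rule that reacts to alarms entering the ALARM state would use an event pattern like the one below (the pattern is attached to a rule via the put_rule API, which expects it as a JSON string; the rule's target would then be your remediation Lambda or state machine):&lt;/p&gt;

```python
import json

# Sketch: an EventBridge event pattern matching CloudWatch alarm state
# changes into ALARM, as described above.
event_pattern = {
    "source": ["aws.cloudwatch"],
    "detail-type": ["CloudWatch Alarm State Change"],
    "detail": {"state": {"value": ["ALARM"]}},
}

# put_rule expects the pattern serialised as a JSON string.
pattern_json = json.dumps(event_pattern)
print("aws.cloudwatch" in pattern_json)  # True
```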

&lt;h3&gt;
  
  
  SNS (for sending alerts)
&lt;/h3&gt;

&lt;p&gt;Use Amazon SNS to deliver emails or SMS messages to people about a specific alarm state change. People and groups can subscribe to an SNS topic so that they will be notified when an alarm's state changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Services
&lt;/h2&gt;

&lt;h3&gt;
  
  
  CloudTrail
&lt;/h3&gt;

&lt;p&gt;AWS CloudTrail provides visibility into user activity and API activity by recording actions taken on your account. Basically, if CloudWatch tracks what is happening in your system, CloudTrail tracks who is performing actions in your system.&lt;br&gt;
CloudTrail records information about each action, including who made the request, the services used, the actions performed, parameters for the actions, and the response elements returned by the AWS service. CloudTrail is enabled by default on your account for management events, covering create, modify, and delete API calls and account activity. If you need more detailed events, you'll need to create a trail and save it to S3. You can choose which events you want to include in the trail.&lt;br&gt;
CloudTrail stores 90 days of activity. Again, if you need to store specific events for longer, you'll need to create a Trail and save it to S3.&lt;br&gt;
Logs from CloudTrail can be sent to CloudWatch Logs where CloudWatch metrics and alarms can be built against them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Config
&lt;/h2&gt;

&lt;p&gt;AWS Config can also be considered a monitoring service. Config enables you to assess, audit, and evaluate the configurations of your AWS resources. It continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. You configure Config by setting up rules with the desired configuration for a resource. Config will then track and record compliance with these rules over time. You can configure CloudWatch Events or SNS to alert you when a resource breaches a rule. AWS supplies pre-configured rules that you can use or you can write your own.&lt;br&gt;
For example, you can use Config to track changes to CloudFormation stacks, EC2 instances and EBS volumes.&lt;br&gt;
You can also use AWS Systems Manager Automation documents to take action based on AWS Config rules to remediate non-compliance with Config rules. This &lt;a href="https://aws.amazon.com/blogs/mt/remediate-noncompliant-aws-config-rules-with-aws-systems-manager-automation-runbooks/" rel="noopener noreferrer"&gt;AWS blog&lt;/a&gt; gives a detailed overview of how these two services work together.&lt;/p&gt;
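&lt;p&gt;To make this concrete, here is a sketch of enabling one of the AWS-managed rules mentioned above (this dict is the shape of the put_config_rule API call; the rule name is hypothetical, while ENCRYPTED_VOLUMES is one of the pre-configured rules AWS supplies for checking that attached EBS volumes are encrypted):&lt;/p&gt;

```python
# Sketch: enabling an AWS-managed Config rule. The rule name is made up;
# ENCRYPTED_VOLUMES is an AWS-supplied managed rule identifier.
rule = {
    "ConfigRule": {
        "ConfigRuleName": "ebs-volumes-encrypted",
        "Source": {
            "Owner": "AWS",  # AWS-managed rule, not a custom Lambda rule
            "SourceIdentifier": "ENCRYPTED_VOLUMES",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
}
print(rule["ConfigRule"]["Source"]["Owner"])  # AWS
```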

&lt;h3&gt;
  
  
  Health
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://health.aws.amazon.com/health/status" rel="noopener noreferrer"&gt;AWS Health Dashboard&lt;/a&gt; is a single place to learn about the availability and operations of AWS services. You can view the overall status of AWS services, and you can sign in to view personalized communications about your particular AWS account or organization. While the AWS Health Dashboard is a good way to view the overall status of each AWS service, it provides little in terms of how the health of those services is impacting your resources. The AWS Personal Health Dashboard provides a personalized view of the health of the specific services that are powering your workloads and applications. You can configure EventBridge to get notifications for events that might affect your services and resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Quotas
&lt;/h3&gt;

&lt;p&gt;Service Quotas is an AWS service that helps you manage your quotas for many AWS services, from one location. Along with looking up the quota values, you can also request a quota increase from the Service Quotas console.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 2: Reliability and Business Continuity (16%)
&lt;/h1&gt;

&lt;p&gt;For this domain, AWS wants the student to show that they know how to use the features of individual services to build a system that can remain in operation after an incident or in response to extra load on the system. Having a system that can scale out during times of peak load and scale back in when the load subsides is important. This is where courses like Stephane Maarek's really start to come into their own.&lt;br&gt;
You should also understand why it is important to architect your system so that it can run in a multi-AZ configuration. Can your system remain online if an AWS Availability Zone goes offline within a region?&lt;/p&gt;

&lt;p&gt;EC2 Auto Scaling and Elastic Load Balancers are essential to know for the exam. You must understand them in theory and in practice, and you should spend time in the console setting up and integrating the two services.&lt;/p&gt;

&lt;h2&gt;
  
  
  EC2 Auto Scaling
&lt;/h2&gt;

&lt;p&gt;Auto Scaling groups (ASGs) help you keep your application available by automatically adding or removing EC2 instances according to conditions you define. They also help with fault tolerance, as an unhealthy EC2 instance can be terminated and replaced with a new one. ASGs are composed of several elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Launch configuration - AMI, Instance Type, Key pair, security groups, EBS volumes and EC2 User Data (if using).&lt;/li&gt;
&lt;li&gt;Auto scale group - minimum, maximum and desired number of instances&lt;/li&gt;
&lt;li&gt;Scaling plan - when and how to scale out and in. &lt;/li&gt;
&lt;li&gt;Network and Subnets. Use these to specify how many subnets you want to spread your instances across.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;ASGs scale based on CloudWatch alarms, and you can set up the group to scale in or out based on a CloudWatch metric. Auto scaling will always try to ensure capacity is balanced across AZs.&lt;br&gt;
You can set up simple scaling policies that add or remove instances when scaling in or out. You can also use target tracking to scale: this works by tracking a metric like CPU and adding or removing nodes to keep the instances in the group at a certain level. It's definitely worth diving deeper into these for the exam.&lt;br&gt;
The termination policy determines which instances to terminate and in what order. You can terminate the oldest instance and/or instances with the oldest launch configuration, and you can use the default termination policy or set your own.&lt;br&gt;
Instance protection does not protect instances from manual termination initiated via the console or the API. &lt;br&gt;
Health checks are how instances get removed from the group. You can use the EC2 status check or, if integrated with an ELB, the ELB health check.&lt;/p&gt;
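&lt;p&gt;A target-tracking policy, as described above, boils down to a small configuration block (this is the shape of the TargetTrackingConfiguration argument to the put_scaling_policy API; the 50% CPU target is an illustrative value, not a recommendation):&lt;/p&gt;

```python
# Sketch: a target-tracking scaling policy configuration. The group adds
# or removes instances to hold average CPU near the target value.
target_tracking = {
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 50.0,      # keep average CPU across the group near 50%
    "DisableScaleIn": False,  # allow the group to scale back in as load drops
}
print(target_tracking["TargetValue"])  # 50.0
```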

&lt;h2&gt;
  
  
  Elastic Load Balancers
&lt;/h2&gt;

&lt;p&gt;An ELB distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs). It exposes a single DNS endpoint for clients to access your application. You can set up ELBs so that they serve clients internal to your AWS network or clients outside your network.&lt;/p&gt;

&lt;p&gt;There are 3 types of ELBs that you should know about for the exam. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Classic Load Balancer - older generation and deprecated. You shouldn't be asked about these in the exam, but it's good to know they exist.&lt;/li&gt;
&lt;li&gt;Network Load Balancer - operates at Layer 4 network level and at extremely low latency. If you see something calling for low latency and TCP traffic, think Network Load Balancer.&lt;/li&gt;
&lt;li&gt;Application Load Balancer - operates at Layer 7 for HTTPS and HTTP traffic. The ALB sacrifices latency for features: it can provide SSL termination (using AWS Certificate Manager), stickiness to underlying nodes (with cookies), and routing based on hostname and URL path.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you use an ELB, the underlying application won't know where the request originated from unless they can reference the &lt;a href="https://docs.aws.amazon.com/elasticloadbalancing/latest/application/x-forwarded-headers.html#x-forwarded-for" rel="noopener noreferrer"&gt;x-forwarded-for&lt;/a&gt; header in the request. It's worth understanding this concept for the exam.&lt;/p&gt;

&lt;p&gt;You should also understand the error codes that an ELB can return.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;200 - all good&lt;/li&gt;
&lt;li&gt;4xx - client-side errors. The 4xx errors are front-end errors that pertain to access to the object.&lt;/li&gt;
&lt;li&gt;5xx -  server-side errors, not access or authorization errors.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Health checks are an integral part of ELBs. They are used to decide if traffic can be routed to an instance. They work differently from EC2 health checks in that they check whether traffic is actually getting through to the instance, not just that the instance is healthy. You can specify the ELB health check on an ASG, which can be more useful than the EC2 check.&lt;/p&gt;
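&lt;p&gt;For a feel of the knobs involved, here is a sketch of health-check settings on an ALB target group (these are the field names used by the create_target_group API; the path and thresholds are illustrative):&lt;/p&gt;

```python
# Sketch: target group health-check settings. An instance must pass
# HealthyThresholdCount consecutive checks before the load balancer
# routes traffic to it. The /health path is a hypothetical endpoint.
health_check = {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/health",
    "HealthCheckIntervalSeconds": 30,
    "HealthCheckTimeoutSeconds": 5,
    "HealthyThresholdCount": 3,
    "UnhealthyThresholdCount": 2,
    "Matcher": {"HttpCode": "200"},  # the "all good" code from above
}

# Minimum time for a fresh instance to be marked healthy.
warmup_seconds = (health_check["HealthCheckIntervalSeconds"]
                  * health_check["HealthyThresholdCount"])
print(warmup_seconds)  # 90
```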

&lt;h2&gt;
  
  
  Caching
&lt;/h2&gt;

&lt;p&gt;Caching is also covered in this domain, and it's worth understanding where you would use ElastiCache and also caching with CloudFront. You could also be asked a question on caching for DynamoDB with DAX.&lt;/p&gt;

&lt;h2&gt;
  
  
  Route 53 routing
&lt;/h2&gt;

&lt;p&gt;While we cover Route 53 in more depth in Domain 5: Networking and Content Delivery, it is also relevant for this domain. Using Route 53, you can architect a multi-region solution to failover when an application in one region becomes unavailable. You can do this by combining a Route 53 health check and an Alias record. Alias records allow you to point to an AWS Resource (ELB, S3 hosted Website, CloudFront distribution, Elastic Beanstalk, API Gateway, VPC Endpoint). Queries to alias records are free of charge.&lt;/p&gt;
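&lt;p&gt;A failover setup like the one described combines two alias records, one PRIMARY with a health check and one SECONDARY. As a sketch (the domain names, hosted-zone IDs and health-check ID below are hypothetical; this is the ChangeBatch shape used by the change_resource_record_sets API):&lt;/p&gt;

```python
# Sketch: a failover record pair. Route 53 serves the SECONDARY record
# only while the PRIMARY's health check is failing. All IDs are made up.
change_batch = {
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "primary-eu-west-1",
                "Failover": "PRIMARY",
                "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                "AliasTarget": {
                    "HostedZoneId": "Z0000000000000000000A",  # illustrative
                    "DNSName": "primary-alb.eu-west-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": "secondary-us-east-1",
                "Failover": "SECONDARY",
                "AliasTarget": {
                    "HostedZoneId": "Z0000000000000000000B",  # illustrative
                    "DNSName": "standby-alb.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": True,
                },
            },
        },
    ]
}
roles = [c["ResourceRecordSet"]["Failover"] for c in change_batch["Changes"]]
print(roles)  # ['PRIMARY', 'SECONDARY']
```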

&lt;h2&gt;
  
  
  RDS
&lt;/h2&gt;

&lt;p&gt;RDS read replicas are for increased scalability. You can run read-only workloads against read replicas. You can enable automatic backups on a MySQL or MariaDB read replica, but not on SQL Server, Oracle or PostgreSQL replicas. You can take a manual snapshot of a PostgreSQL read replica.&lt;br&gt;
A read replica can be promoted to its own standalone database instance, but once promoted, it is no longer linked to the primary instance.&lt;/p&gt;

&lt;p&gt;RDS Multi-AZ is for increased availability. The database copies created in a Multi-AZ setup are not readable. However, automated backups and DB snapshots are taken from the standby to avoid I/O suspension on the primary.&lt;/p&gt;

&lt;p&gt;In the event of a failure, you should understand how you can restore your database and to which point you can recover it. RTO (Recovery Time Objective) is the amount of time you can take to restore your database. RPO (Recovery Point Objective) is the point in time to which you can restore your database. You should also understand how to use replication to enable a restore in another region, and the difference between automated backups and snapshots. With automated backups, RDS takes a full backup of your database at a regular frequency (generally daily) and then backs up the transaction logs (generated when updates are made on the database) at a more regular frequency. Generally, your RPO is tied to how often you back up your transaction logs. A point-in-time recovery restores the latest full backup and then applies all transaction logs up to the chosen point. You'll never get this down to realtime; it's generally within minutes.&lt;br&gt;
Snapshots are taken manually and allow you to restore a copy of your data in a separate RDS instance.&lt;/p&gt;
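&lt;p&gt;The RPO arithmetic is worth internalising. A rough worked example, under assumed backup settings (a daily full backup and transaction-log backups roughly every 5 minutes, which is the order of magnitude RDS uses):&lt;/p&gt;

```python
# Sketch: worst-case RPO under assumed backup settings. With only a
# daily full backup you could lose up to a day of changes; frequent
# transaction-log backups plus point-in-time recovery bring the worst
# case down to roughly the log-backup interval.
full_backup_interval_min = 24 * 60  # daily full backup, in minutes
log_backup_interval_min = 5         # assumed transaction-log interval

rpo_full_only_min = full_backup_interval_min  # restore to last full backup
rpo_with_logs_min = log_backup_interval_min   # point-in-time recovery

print(rpo_full_only_min, rpo_with_logs_min)  # 1440 5
```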

&lt;h2&gt;
  
  
  S3
&lt;/h2&gt;

&lt;p&gt;For S3, you should understand how you can protect against accidental deletion of objects. To do this, you can use versioning, MFA Delete and/or Object Lock. Cross-region replication can also help; it requires versioning to be enabled, and deletes are not replicated to the secondary region. These are all features of S3 worth knowing for this part of the exam.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Lifecycle Manager
&lt;/h2&gt;

&lt;p&gt;You can use &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/snapshot-lifecycle.html" rel="noopener noreferrer"&gt;Amazon Data Lifecycle Manager&lt;/a&gt; to protect your EBS snapshots and EBS-backed AMIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Services
&lt;/h2&gt;

&lt;p&gt;While you won't need to know serverless services like Lambda, DynamoDB, SQS and others in depth, they may come up in the exam as they can help address different use cases. For this domain, you should know how to use an SQS queue to decouple components and how a queue can be used as a buffer to handle extra load on an application.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 3: Deployment, Provisioning and Automation (18%)
&lt;/h1&gt;

&lt;p&gt;With this domain, AWS is testing your knowledge of how to deploy and run your AWS systems hands-free. You need a good understanding of the different services that can be used to deploy infrastructure across your accounts, and of how you can keep that infrastructure up to date with patches and changes.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudFormation
&lt;/h2&gt;

&lt;p&gt;CloudFormation is AWS's Infrastructure as Code service and will more than likely come up in the exam. You should understand how to create an EC2 instance in CloudFormation and how to specify the different networking attributes. You specify the resources for CloudFormation to create within a template. A template can be in either JSON or YAML format and consists of several sections. The top 3 sections don't do much and you just need to know that they exist.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWSTemplateFormatVersion: always set to "2010-09-09"&lt;/li&gt;
&lt;li&gt;Description: a description of the template&lt;/li&gt;
&lt;li&gt;Metadata: template metadata&lt;/li&gt;
&lt;li&gt;Parameters: use this section to input custom values to your template each time you create or update a stack. Parameters without defaults require input each time you run the stack.&lt;/li&gt;
&lt;li&gt;Rules: you can use this section to validate input from the parameters section.&lt;/li&gt;
&lt;li&gt;Mappings: useful for making stacks regionally agnostic by mapping regional data like AMI keys in a set of named values.&lt;/li&gt;
&lt;li&gt;Conditions: use this section to generate a flag that can be referred to in later sections. For example, you might want to know if your stack is running in a production account, and this section can set a flag based on a passed-in parameter.&lt;/li&gt;
&lt;li&gt;Transform: use this section to run macros within your stack.&lt;/li&gt;
&lt;li&gt;Resources: this section is the only mandatory section and where you specify details of the resources you wish to create.&lt;/li&gt;
&lt;li&gt;Outputs: output values (such as the ARNs) of the resources created within the stack. Can be useful for passing information to nested stacks.&lt;/li&gt;
&lt;/ul&gt;
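&lt;p&gt;To see how these sections fit together, here is a minimal template sketch built as a Python dict and serialised to JSON (the AMI ID, instance types and logical names are illustrative, not real values):&lt;/p&gt;

```python
import json

# Sketch: a minimal CloudFormation template exercising the sections
# described above. AMI ID and instance types are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single EC2 instance (illustrative)",
    "Parameters": {
        "EnvType": {"Type": "String",
                    "AllowedValues": ["prod", "dev"],
                    "Default": "dev"}
    },
    "Mappings": {
        "RegionMap": {"eu-west-1": {"AMI": "ami-00000000000000000"}}
    },
    "Conditions": {
        "IsProd": {"Fn::Equals": [{"Ref": "EnvType"}, "prod"]}
    },
    "Resources": {  # the only mandatory section
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": {"Fn::FindInMap":
                            ["RegionMap", {"Ref": "AWS::Region"}, "AMI"]},
                "InstanceType": {"Fn::If": ["IsProd", "m5.large", "t3.micro"]},
            },
        }
    },
    "Outputs": {
        "InstanceId": {"Value": {"Ref": "WebServer"}}
    },
}

template_json = json.dumps(template)
print(sorted(template["Resources"]))  # ['WebServer']
```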

&lt;p&gt;Nested Stacks are stacks that are used in other stacks. They enable you to standardise the creation of common resources within an account.&lt;/p&gt;

&lt;p&gt;StackSets work within an AWS Organisation to standardise resource creation across accounts. You define a StackSet in an administration account and then use it as a basis for deploying resources in all target accounts. The important thing to remember is that the deployment is controlled from a single administration account and used to deploy resources in one or more target accounts.&lt;/p&gt;

&lt;p&gt;Before you deploy a stack, you can use a change set to see what changes will happen before the stack is deployed.&lt;/p&gt;

&lt;p&gt;With the DeletionPolicy attribute you can preserve, and in some cases, backup a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default.&lt;/p&gt;

&lt;p&gt;Using the UpdatePolicy attribute you can specify how AWS CloudFormation handles updates to a number of services including Auto Scaling Groups, Elasticache, OpenSearch, Elasticsearch and Lambda. For the exam, it's good to know how it works with ASGs. For ASGs, this attribute can be set to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AutoScalingReplacingUpdate&lt;/li&gt;
&lt;li&gt;AutoScalingRollingUpdate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You should know how to use the UpdatePolicy attribute to do blue/green, rolling and canary deployments.&lt;/p&gt;

&lt;p&gt;A stack policy is a policy attached to a CloudFormation stack that controls if and how a resource can be updated. For example, if your stack creates a production database, you may want to prevent CloudFormation from changing the name of the database after it has been created. A stack policy can be added to prevent stack resources from being unintentionally updated or deleted during a stack update.&lt;/p&gt;

&lt;p&gt;And finally, you'll need to know how to troubleshoot if your CloudFormation stack fails.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Services
&lt;/h2&gt;

&lt;p&gt;There are other services within AWS that are also relevant when studying for this domain.&lt;br&gt;
You can use EC2 Image Builder to create and manage AMIs. It simplifies the building, testing, and deployment of virtual machine and container images for use on AWS or on-premises, and it integrates with AWS Resource Access Manager, AWS Organizations, and Amazon ECR to enable sharing of automation scripts, recipes, and images across AWS accounts, which helps with maintaining standards across all accounts.&lt;br&gt;
With AWS OpsWorks, you can use a managed Puppet or Chef service to manage your instances.&lt;br&gt;
Elastic Beanstalk can be used to deploy your application code. By using Elastic Beanstalk, you push responsibility for managing the OS to AWS. As with CloudFormation, you should understand the different deployment options for Elastic Beanstalk: all at once, rolling, rolling with additional batches and immutable deployments are all supported.&lt;br&gt;
You can automate patching across your EC2 instances with AWS Systems Manager Patch Manager. &lt;br&gt;
And to schedule automated updates or tasks, you can use EventBridge or AWS Config.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fix deployment issues
&lt;/h2&gt;

&lt;p&gt;You may get several questions in the exam concerning failed deployments, and you will have to be able to troubleshoot them and pick the correct answer. Errors with deployments may have nothing to do with your application; they could be region-specific service quota issues. You should understand what service quotas are, why they exist and how they can be changed. For example, the default limit on the number of instances of a particular type in each region is 20. If you get an InstanceLimitExceeded error when spinning up a new instance, it means that you are over your limit and will need to either terminate older instances or request an increase in your quota. If you get an InsufficientInstanceCapacity error, it means that AWS does not have enough instances of that type in the AZ you are trying to launch into. &lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 4: Security and Compliance (16%)
&lt;/h1&gt;

&lt;p&gt;With this domain, AWS wants the student to show how they can utilise AWS services to implement policies to control access and ensure compliance. They also want to see how you can protect data at rest and in transit.&lt;br&gt;
There are a lot of separate security services in AWS and I found the table on this &lt;a href="https://aws.amazon.com/products/security/" rel="noopener noreferrer"&gt;page&lt;/a&gt; gives a very good overview of them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity and Access Management (IAM)
&lt;/h2&gt;

&lt;p&gt;IAM is at the core of all AWS services, working to ensure that only those who have been granted access can execute API calls and actions. Generally, when you hear someone talking about implementing least privilege, it is with IAM. I cannot hope to cover it in this article, and you will need to study it in depth to pass the exam. You should start by reading the IAM &lt;a href="https://aws.amazon.com/iam/faqs/" rel="noopener noreferrer"&gt;FAQs&lt;/a&gt;. There is a lot of good material in there. The following videos also give a very good overview of the service.&lt;/p&gt;

&lt;p&gt;AWS re:Inforce 2019: The Fundamentals of AWS Cloud Security (FND209-R)&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=-ObImxw1PmI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=-ObImxw1PmI&lt;/a&gt;&lt;br&gt;
AWS re:Invent 2019: [REPEAT 1] Getting started with AWS identity (SEC209-R1)&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=Zvz-qYYhvMk&amp;amp;secd_iam5" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=Zvz-qYYhvMk&amp;amp;secd_iam5&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the exam, you will need to be able to look at the JSON of an IAM policy and understand what it does. With the IAM policy simulator, you can test and troubleshoot identity-based policies, IAM permissions boundaries, Organizations service control policies (SCPs), and resource-based policies.&lt;/p&gt;
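&lt;p&gt;When reading policy JSON, the rule to remember is that an explicit Deny always wins over an Allow. A toy Python evaluator (deliberately ignoring conditions, NotAction and everything else real IAM supports) makes the point:&lt;/p&gt;

```python
import fnmatch
import json

# A minimal identity-based policy like those shown in exam questions.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:Get*", "Resource": "arn:aws:s3:::example-bucket/*"},
    {"Effect": "Deny",  "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/secret/*"}
  ]
}
""")

def is_allowed(action: str, resource: str) -> bool:
    """Toy evaluator: an explicit Deny overrides any Allow."""
    allowed = False
    for stmt in policy["Statement"]:
        actions = stmt["Action"] if isinstance(stmt["Action"], list) else [stmt["Action"]]
        action_match = any(fnmatch.fnmatchcase(action, a) for a in actions)
        if action_match and fnmatch.fnmatchcase(resource, stmt["Resource"]):
            if stmt["Effect"] == "Deny":
                return False  # explicit deny wins immediately
            allowed = True
    return allowed
```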

&lt;h2&gt;
  
  
  Working with AWS Organizations
&lt;/h2&gt;

&lt;p&gt;For the exam, you must understand AWS Organizations and how it helps centrally manage and govern a multi-account environment.&lt;br&gt;
The &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/organizations-security.html" rel="noopener noreferrer"&gt;Using AWS Organizations for security&lt;/a&gt; page gives a very good overview of how it works with Control Tower, service control policies and other services like CloudTrail, Config and CloudFormation StackSets.&lt;br&gt;
AWS Resource Access Manager (RAM) helps you securely share your resources across AWS accounts, within your organization or organizational units (OUs) in AWS Organizations, and with IAM roles and IAM users for supported resource types. IAM Access Analyzer, meanwhile, helps identify resources in your organization and accounts that are shared with an external entity.&lt;/p&gt;

&lt;h2&gt;
  
  
Trusted Advisor
&lt;/h2&gt;

&lt;p&gt;The Trusted Advisor service includes several security checks. It's a passive service, only reporting on what it sees, but it can be useful. It can be configured to run at the organization level, so it is good for checking compliance across all accounts.&lt;/p&gt;

&lt;h2&gt;
  
  
  GuardDuty
&lt;/h2&gt;

&lt;p&gt;Amazon GuardDuty is a threat detection service that continuously monitors your AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility and remediation. It integrates with Amazon Detective. You can also integrate GuardDuty with Amazon EventBridge to automate best practices for GuardDuty, such as automating responses to new GuardDuty findings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Detective
&lt;/h2&gt;

&lt;p&gt;Amazon Detective helps customers conduct security investigations by distilling and organizing data from sources such as AWS CloudTrail, Amazon VPC Flow Logs, and Amazon GuardDuty into a graph model that summarizes resource behaviors and interactions observed across a customer's AWS environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inspector
&lt;/h2&gt;

&lt;p&gt;Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity. Inspector does not automatically remediate issues but can integrate with Systems Manager to do so. One important thing to note about Inspector is that it works primarily with EC2 instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Parameter Store
&lt;/h2&gt;

&lt;p&gt;AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration and secrets. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can encrypt the parameters using KMS. It also integrates with CloudFormation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets Manager
&lt;/h2&gt;

&lt;p&gt;AWS Secrets Manager is the go-to service to help you protect secrets needed to access your applications, services, and IT resources. It integrates with database services like RDS and Redshift to rotate credentials and keep them in sync.&lt;/p&gt;

&lt;p&gt;If you're wondering if you should use Parameter Store instead of Secrets Manager, have a read of this &lt;a href="https://www.lastweekinaws.com/blog/handling-secrets-with-aws/" rel="noopener noreferrer"&gt;Handling Secrets with AWS article&lt;/a&gt; from Corey Quinn.&lt;/p&gt;

&lt;h2&gt;
  
  
  KMS
&lt;/h2&gt;

&lt;p&gt;You must know this service for the exam. Questions will generally be on how you can use KMS with different services rather than on KMS itself. That makes sense when you think about it, as KMS is rarely used in isolation. It is the go-to service for encryption in AWS, so it will come up in the exam.&lt;br&gt;
Start with the &lt;a href="https://aws.amazon.com/kms/faqs/" rel="noopener noreferrer"&gt;FAQs&lt;/a&gt;. I also found this video very useful for understanding the service.&lt;/p&gt;

&lt;p&gt;AWS re:Inforce 2019: How Encryption Works in AWS (FND310-R)&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=plv7PQZICCM" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=plv7PQZICCM&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  S3 Encryption
&lt;/h2&gt;

&lt;p&gt;You should understand the 4 methods for encrypting data at rest in S3. These are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SSE-S3 - native encryption using keys managed by S3.&lt;/li&gt;
&lt;li&gt;SSE-KMS - use KMS to manage encryption.&lt;/li&gt;
&lt;li&gt;SSE-C - use your own encryption keys.&lt;/li&gt;
&lt;li&gt;Client side encryption - the client application controls encryption and decryption of the object.&lt;/li&gt;
&lt;/ul&gt;
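&lt;p&gt;If it helps to anchor these options, here is a hypothetical helper showing the upload parameters each one maps to. The parameter names follow the S3 PutObject API as exposed by boto3, but the helper itself is purely illustrative:&lt;/p&gt;

```python
# Illustrative mapping from S3 encryption option to the request
# parameters an upload would carry (names follow the S3 PutObject API).
def sse_args(mode: str, kms_key_id: str = "") -> dict:
    if mode == "SSE-S3":
        return {"ServerSideEncryption": "AES256"}
    if mode == "SSE-KMS":
        args = {"ServerSideEncryption": "aws:kms"}
        if kms_key_id:
            args["SSEKMSKeyId"] = kms_key_id  # omit to use the AWS managed key
        return args
    if mode == "SSE-C":
        # You supply and safeguard the key; S3 keeps only a hash of it.
        return {"SSECustomerAlgorithm": "AES256"}
    return {}  # client-side encryption happens before the upload
```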

&lt;h2&gt;
  
  
  Certificate Manager
&lt;/h2&gt;

&lt;p&gt;AWS Certificate Manager (ACM) handles the complexity of creating and managing public SSL/TLS certificates for your AWS-based websites and applications. ACM can also be used to issue private SSL/TLS X.509 certificates that identify users, computers, applications, services, servers, and other devices internally. When you think of encrypting data in flight, ACM is a major part of it.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudHSM
&lt;/h2&gt;

&lt;p&gt;AWS CloudHSM provides hardware security modules (HSM) in the AWS Cloud. An HSM is a computing device that processes cryptographic operations and provides secure storage for cryptographic keys. CloudHSM allows you to generate, store, import, export, and manage cryptographic keys, including symmetric keys and asymmetric key pairs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Macie
&lt;/h2&gt;

&lt;p&gt;Amazon Macie is a service that will scan data in S3 and discover any PII or sensitive fields that may be contained therein. If you get a question about identifying PII data or sensitive fields, chances are that Macie will be an option and a worthy candidate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Firewall Manager
&lt;/h2&gt;

&lt;p&gt;AWS Firewall Manager is a security management service you use to centrally configure and manage firewall rules and other protections across the AWS accounts and applications in your organization. Using Firewall Manager, you can roll out AWS WAF rules, create AWS Shield Advanced protections, configure and audit Amazon Virtual Private Cloud (Amazon VPC) security groups, and deploy AWS Network Firewalls. Use Firewall Manager to set up your protections just once and have them automatically applied across all accounts and resources within your organization, even as new resources and accounts are added.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 5: Network and Content Delivery
&lt;/h1&gt;

&lt;p&gt;This is a very important domain and you'll definitely need a separate resource to help study it. AWS is testing the student to see how they can implement networking features and connectivity, configure domains, DNS services, and content delivery and troubleshoot network connectivity issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  VPC Configuration
&lt;/h2&gt;

&lt;p&gt;This is a very large topic and impossible to cover in detail here. All of the aforementioned cloud training providers cover it in depth. To get started, you could take a look at the VPC &lt;a href="https://aws.amazon.com/vpc/faqs/" rel="noopener noreferrer"&gt;FAQs&lt;/a&gt;. These are some of the better FAQs on AWS.&lt;/p&gt;

&lt;p&gt;A VPC can span multiple Availability Zones but a subnet must reside within a single Availability Zone. When you launch an Amazon EC2 instance, you must specify the subnet in which to launch the instance. The instance will be launched in the Availability Zone associated with the specified subnet. You are initially limited to launching 20 Amazon EC2 instances at any one time and a maximum VPC size of /16 (65,536 IP addresses) and a minimum of /28 (16 IP addresses).&lt;/p&gt;

&lt;p&gt;Network ACLs are stateless. If you allow traffic in, you must also explicitly allow the return traffic out. NACLs operate at the subnet level.&lt;br&gt;
Security Groups are stateful. If you allow traffic in, the return traffic is automatically allowed out. Security Groups only support allow rules (there are no explicit deny rules) and operate at the instance level.&lt;br&gt;
You may get asked to troubleshoot a scenario where an EC2 instance is not reachable or communication between two components is blocked. Because they operate at different levels of the VPC, you will need to ensure that the NACLs and Security Groups are working together.&lt;/p&gt;

&lt;h3&gt;
  
  
  CIDR blocks
&lt;/h3&gt;

&lt;p&gt;When you set up a VPC and subnets, you must specify a CIDR block of IP addresses for each. Your subnet CIDR blocks will be subsets of your VPC CIDR block. Remember that an octet covers 256 addresses, so if you need more than that, you need to shift left to the next octet.&lt;/p&gt;

&lt;p&gt;*/28 gives 16 IP Addresses&lt;br&gt;
 */24 gives 256 IP Addresses (1 octet)&lt;br&gt;
 */16 gives 65636 IP Addresses (2 octets)&lt;/p&gt;

&lt;p&gt;When assigning CIDR blocks to subnets, they cannot overlap. For each CIDR block assigned to a subnet, AWS reserves 5 IP addresses for its own use.&lt;/p&gt;
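&lt;p&gt;The CIDR arithmetic above (block sizes, subnet containment and the 5 reserved addresses) is easy to check with Python's standard ipaddress module:&lt;/p&gt;

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")     # 65,536 addresses: the largest VPC allowed
subnet = ipaddress.ip_network("10.0.1.0/24")  # 256 addresses

# A subnet's CIDR must fall inside the VPC's CIDR.
assert subnet.subnet_of(vpc)

# AWS reserves 5 addresses in every subnet (network address, VPC router,
# DNS, future use, broadcast), so usable hosts = total - 5.
usable = subnet.num_addresses - 5
print(vpc.num_addresses, subnet.num_addresses, usable)  # 65536 256 251
```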

&lt;h2&gt;
  
  
  VPC Connectivity Options
&lt;/h2&gt;

&lt;p&gt;You should also understand the different connectivity options from and to VPCs and this &lt;a href="https://d1.awsstatic.com/whitepapers/aws-amazon-vpc-connectivity-options.pdf" rel="noopener noreferrer"&gt;whitepaper&lt;/a&gt; gives you a good place to start. It covers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Network-to-Amazon VPC Connectivity - How to connect remote networks (such as an existing data center or office network) with your Amazon VPC environment.&lt;/li&gt;
&lt;li&gt;Amazon VPC-to-Amazon VPC Connectivity - How to connect VPCs together. These can be with your own account or other accounts.&lt;/li&gt;
&lt;li&gt;Internal User-to-Amazon VPC Connectivity - Allow users access to your VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You need to know the difference between a VPN, Direct Connect and a Transit VPC and also how they can complement each other. Also know the difference between a Virtual Private Gateway and a Customer Gateway.&lt;/p&gt;

&lt;p&gt;AWS PrivateLink is a highly available, scalable technology that enables you to privately connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services. &lt;/p&gt;

&lt;h2&gt;
  
  
  Route 53
&lt;/h2&gt;

&lt;p&gt;Amazon Route 53 provides highly available and scalable Domain Name System (DNS), domain name registration, and health-checking web services. You could start with the &lt;a href="https://aws.amazon.com/route53/faqs/" rel="noopener noreferrer"&gt;FAQs&lt;/a&gt;.&lt;br&gt;
A hosted zone in Route 53 is analogous to a traditional DNS zone file; it represents a collection of records that can be managed together, belonging to a single parent domain name. There are several record types that you can use within a hosted zone but these are the most important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zone apex - the root of the domain, e.g. amazon.com or google.com&lt;/li&gt;
&lt;li&gt;A - domain name to IPv4 address&lt;/li&gt;
&lt;li&gt;AAAA - domain name to IPv6 address&lt;/li&gt;
&lt;li&gt;CNAME - domain name to another domain name&lt;/li&gt;
&lt;li&gt;ALIAS - domain name to an AWS resource (ELB, S3-hosted website, CloudFront distribution, Elastic Beanstalk, API Gateway, VPC endpoint)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An alias record is a Route 53 extension to DNS. It's similar to a CNAME record, but you can create an alias record both for the root domain, such as example.com, and for subdomains, such as &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt;. You cannot create CNAME records for the zone apex.&lt;br&gt;
If you see a question asking which type of Route 53 record you should use for pointing to an AWS service such as ELB, an S3-hosted website, a CloudFront distribution, Elastic Beanstalk, API Gateway or a VPC endpoint, the correct option is an alias record. Route 53 doesn't charge for alias queries to ELB load balancers or other AWS resources. Also, alias records support native health checks.&lt;/p&gt;

&lt;h3&gt;
  
  
  Routing policies
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Simple routing policy maps a domain to a single resource, such as a web server.&lt;/li&gt;
&lt;li&gt;Weighted Routing Policy can route traffic to multiple resources in proportions that you specify.&lt;/li&gt;
&lt;li&gt;Latency Based Routing (LBR) can direct traffic to the region with the best latency based on network round-trip.&lt;/li&gt;
&lt;li&gt;Failover routing allows you to configure active-passive failover by enabling you to route traffic to a primary resource when the resource is healthy and to a secondary resource when the primary is unhealthy.&lt;/li&gt;
&lt;li&gt;Geolocation routing lets you choose the resources that serve your traffic based on the geographic location of your users, meaning from the location that the DNS queries originate. For example, you might want all queries from Europe to be routed to an ELB load balancer in the Frankfurt region. Geolocation routing policies route based on the physical location of a user, whereas latency-based routing selects the AWS Region with the lowest latency.&lt;/li&gt;
&lt;li&gt;Geoproximity routing enables you to route traffic to your resources based on the distance between your users and your resources. Route 53 calculates which resource is closer to the source of the query and routes requests accordingly.&lt;/li&gt;
&lt;li&gt;Multivalue answer routing is similar to simple routing but lets Route 53 respond to DNS queries with up to eight healthy records chosen at random, with an optional health check on each record.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  TTL
&lt;/h3&gt;

&lt;p&gt;The time for which a DNS resolver caches a response is set by a value called the time to live (TTL) associated with every record. Route 53 does not have a default TTL for any record type. You must always specify a TTL for each record so that caching DNS resolvers cache your DNS records for the length of time specified by the TTL.&lt;/p&gt;

&lt;h3&gt;
  
  
  Healthchecks
&lt;/h3&gt;

&lt;p&gt;You can use Route 53 health checks to monitor the health of your resources and route traffic accordingly. A health check can monitor a specified resource, such as a web server, the status of other health checks, or the status of an Amazon CloudWatch alarm.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudFront for content delivery
&lt;/h2&gt;

&lt;p&gt;Amazon CloudFront is a content delivery network (CDN) that can deliver your content across the globe using the AWS network. Your read-only content can be cached at the edge, in any of AWS's points of presence.&lt;br&gt;
Your content can come from origin servers such as an Amazon S3 bucket, an Amazon Elastic Compute Cloud (EC2) instance, an Elastic Load Balancing (ELB) load balancer, or a custom server outside of AWS.&lt;br&gt;
An Origin Access Identity (OAI) allows you to restrict access to the contents of a bucket so that all users must use the CloudFront URL instead of a direct S3 URL. An OAI is a special CloudFront user that can access the files in the bucket and serve them to users. You should block public access to the S3 bucket to prevent users from accessing the files directly.&lt;br&gt;
You can use an origin request policy to configure CloudFront to include cookies and HTTP request headers in origin requests.&lt;br&gt;
You should understand how long it takes for new content to roll out to the different points of presence and what you can do to speed it up. In short, you can invalidate the cache, which forces CloudFront to fetch fresh content from the origin.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3 static website hosting
&lt;/h2&gt;

&lt;p&gt;You can host a static website on S3. The website endpoint URL is derived from the bucket name. You must update the bucket policy to allow public reads. You should also understand CORS in relation to allowing access to files in your S3 bucket.&lt;/p&gt;
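&lt;p&gt;As a sketch, the public-read bucket policy for static website hosting looks like the following (the bucket name is a placeholder; policy fields follow the standard IAM policy grammar):&lt;/p&gt;

```python
import json

# A minimal bucket policy that lets anyone read objects from a
# static-website bucket. The bucket name is a placeholder.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",          # anonymous, public access
        "Action": "s3:GetObject",  # read-only: no list, write or delete
        "Resource": "arn:aws:s3:::my-website-bucket/*",
    }],
}
print(json.dumps(bucket_policy, indent=2))
```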

&lt;h2&gt;
  
  
  Troubleshooting networking and connectivity issues
&lt;/h2&gt;

&lt;p&gt;You can collect and evaluate logs from VPC flow logs, ELB access logs, AWS WAF web ACL logs and CloudFront logs to help troubleshoot network issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS WAF vs AWS Shield
&lt;/h2&gt;

&lt;p&gt;AWS WAF is a web application firewall that helps protect web applications or APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by offering you the ability to create security rules that block common attack patterns, such as SQL injection or cross-site scripting. You also can create rules that filter out specific traffic patterns that you define.&lt;br&gt;
AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection. There are two tiers of AWS Shield - Standard and Advanced. Standard operates at layer 3/4 whereas Advanced can operate at layer 7. You can use AWS Shield Standard with Amazon CloudFront and Amazon Route 53. &lt;br&gt;
AWS Shield Advanced is available globally on all Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53 edge locations. &lt;br&gt;
AWS Shield Advanced also gives you 24x7 access to the AWS Shield Response Team (SRT) and protection against DDoS-related spikes in your Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator and Amazon Route 53 charges.&lt;/p&gt;

&lt;h1&gt;
  
  
  Domain 6: Cost and Performance Optimisation
&lt;/h1&gt;

&lt;p&gt;I really like the way AWS puts both cost and performance optimisation in the same domain. It reinforces that performance optimisation in the cloud can lead to cost optimisation. Corey Quinn outlines this well in his article &lt;a href="https://www.lastweekinaws.com/blog/the-key-to-unlock-the-aws-billing-puzzle-is-architecture/" rel="noopener noreferrer"&gt;The Key to Unlock the AWS Billing Puzzle is Architecture&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The first part of this domain will focus on implementing cost optimization strategies, and the first place to start is understanding your existing costs. To do this you need data and a way to report on it. Tags are useful for categorising your resources so that you can report on their costs accurately, and AWS uses them to drive a lot of its reporting. If you wish to use a tag for cost reporting, you must activate it as a cost allocation tag in the Billing and Cost Management console.&lt;/p&gt;

&lt;h2&gt;
  
  
  Resource Groups Tag Editor
&lt;/h2&gt;

&lt;p&gt;With AWS Resource Groups, you can create, maintain, and view a collection of resources that share common tags. Tag Editor manages tags across services and AWS Regions. Tag Editor can perform a global search and can edit a large number of tags at one time. Resource Groups work within an AWS Organization and you can use Tag Editor to tag resources across all accounts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Explorer
&lt;/h2&gt;

&lt;p&gt;AWS Cost Explorer is a reporting tool built into the AWS console that helps you view and analyze your costs over a time period. You can slice and dice your data by many dimensions, including cost allocation tags. Within the Cost Explorer console, you will also get a forecast of how much you will spend in the coming months and recommendations on how to cut costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost and usage report
&lt;/h2&gt;

&lt;p&gt;For more comprehensive cost and usage data, you can enable the cost and usage report to run and save its output to S3. You can receive reports that break down your costs by the hour, day, or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format.&lt;/p&gt;

&lt;h2&gt;
  
  
  Budgets and billing alarms
&lt;/h2&gt;

&lt;p&gt;You can set custom budgets with the AWS Budgets service that alert you when you exceed your budgeted thresholds. You can also set up a billing alarm in CloudWatch that fires if you breach a certain threshold within a period of time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Trusted Advisor
&lt;/h2&gt;

&lt;p&gt;In addition to the security checks already mentioned, AWS Trusted Advisor also includes checks for costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Compute Optimizer
&lt;/h2&gt;

&lt;p&gt;AWS Compute Optimizer recommends optimal AWS resources for your EC2 instances, EBS volumes and Lambda functions based on their usage data. For example, Optimizer can tell if you have over-provisioned an EC2 instance and may recommend that you save costs by right-sizing the instance to a smaller one.&lt;/p&gt;

&lt;h2&gt;
  
  
  EC2 Spot instances
&lt;/h2&gt;

&lt;p&gt;If your workload allows it, EC2 Spot Instances can be a very cost-efficient way to run it. If you can run your jobs at off-peak times or start and stop them easily, you can pay a fraction of what on-demand or even reserved instances would cost. Spot can be a very good way to run background processes where latency is not so important.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3 Lifecycle Management
&lt;/h2&gt;

&lt;p&gt;You should understand the different storage classes in S3 and where it makes sense to use them. Generally the cheaper the storage, the more expensive it is to access. Therefore you need to align the access patterns to the correct storage. Use cheaper storage for data that is not accessed often and more expensive storage for data that is accessed regularly.&lt;/p&gt;

&lt;p&gt;The second part of this domain is to implement performance optimization strategies. This section is not about getting the cheapest performance but about getting the best value for money. For example, if you need to reduce latency between applications running on separate EC2 instances, it may be worth paying for a placement group that ensures your instances are located close to one another. The key is being aware that it is an option and understanding the costs associated with it.&lt;br&gt;
Other examples are utilising the right EBS volumes to match your use case. Would paying more for Provisioned IOPS SSD volumes be a better choice for your application?&lt;br&gt;
Turning on S3 Transfer Acceleration will cost you more but will deliver objects into your bucket at a much faster rate. Splitting an object into parts and using multipart upload to send it to S3 can also increase the transfer rate and make the transfer more fault-tolerant: if one part fails, that part can be retried rather than the entire object.&lt;br&gt;
With RDS, you can use metrics to identify any processes that are consuming resources beyond what you expect. A badly performing query can consume a disproportionate amount of resources and reduce performance of the database. Use RDS Proxy to more efficiently reuse and balance open database connections across all clients.&lt;/p&gt;
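&lt;p&gt;The part arithmetic behind a multipart upload can be sketched in Python. This is illustrative only; in real uploads S3 requires every part except the last to be at least 5 MiB:&lt;/p&gt;

```python
# Sketch: split an object of `size` bytes into inclusive (start, end)
# byte ranges of `part_size` for a multipart-style upload.
def part_ranges(size: int, part_size: int) -> list:
    ranges = []
    start = 0
    while start != size:
        end = min(start + part_size, size)  # last part may be shorter
        ranges.append((start, end - 1))
        start = end
    return ranges

# A 100 MiB object with 8 MiB parts needs 13 parts; each part can be
# retried independently if its transfer fails.
mib = 1024 * 1024
parts = part_ranges(100 * mib, 8 * mib)
print(len(parts))  # 13
```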

&lt;h1&gt;
  
  
  Exam Labs
&lt;/h1&gt;

&lt;p&gt;The SysOps exam is the only AWS exam that has a practical element. I practised the labs and watched instructors going through them. If you're used to working in the AWS console, then combined with the instructions for each lab, you should be able to handle this part of the exam. My exam had three labs and I had never touched the services involved in two of them before. I know AWS gets a hard time for inconsistent UX, but I have a different perspective after the exam. There is a general consistency between services, and practice with one will help you familiarise yourself with others.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;I think this is the longest article I have ever written and it reflects the breadth of services that the exam covers. I enjoyed the exam and learned a lot from studying for it. I hope this article can help you with your own study. Please comment or ask questions. I'm generally pretty good at getting back to people.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>certification</category>
      <category>devops</category>
    </item>
    <item>
      <title>Wardley mapping the Modern Data Stack</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Sun, 09 Jan 2022 19:18:06 +0000</pubDate>
      <link>https://dev.to/aws-builders/wardley-mapping-the-modern-data-stack-1h6g</link>
      <guid>https://dev.to/aws-builders/wardley-mapping-the-modern-data-stack-1h6g</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;The last 10 to 15 years have seen an explosion in data volumes that traditional on-premise relational solutions have been unable to handle. No platform bound to a single machine could hope to do so and a system distributed across multiple compute and storage nodes was necessary. Hadoop emerged as the premier choice to tackle the problem and became synonymous with the term Big Data. However, it was primarily an on-premise solution and while organisations built their data stacks on Hadoop, AWS and Snowflake began to rebuild the data warehouse to take advantage of the distributed compute and storage of the cloud. The rise of the cloud data warehouses has seen a swing back to a SQL experience built on a relational storage engine. As these data warehouses became popular, vendors were incentivised to build new tools and services to work with them. Vendors of existing tools and services also added support for these new data warehouses. In aggregate, this stack of offerings is called the Modern Data Stack. Every week, a new company with a data solution competes to become the latest member of the stack. It's easy to see this as a new era in data but as I try to make sense of where it is going, I thought it was worthwhile looking at how we got here. Using Wardley maps, I will demonstrate how the landscape has changed in the last decade and how commoditisation of different components of the stack is allowing new patterns to evolve.&lt;/p&gt;

&lt;p&gt;If you don't know what Wardley maps are, you should stop reading this article and go and study them &lt;a href="https://github.com/wardley-maps-community/awesome-wardley-maps" rel="noopener noreferrer"&gt;here&lt;/a&gt;. You'll get more out of learning about maps than reading this article.&lt;/p&gt;

&lt;p&gt;Before continuing, I want to point out these changes are not laid out in chronological order. They may have happened at the same time or evolved closely together. The point of this article is to show that they happened and I've tried to show how they are linked together.&lt;/p&gt;

&lt;h2&gt;
  
  
  0) B.C. - Before Cloud
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnas0w522nhaaec85imb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxnas0w522nhaaec85imb.png" alt="Before Cloud"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Back in the ancient times before cloud, the data setup in most companies was very simple. The number of data sources was limited to an operational database or two that supported your company's bespoke applications. If you were lucky, you could pull data out of the backend of any purchased software packages that your company operated. However, a lot of these systems expressly forbade you from extracting data as part of their licensing. If you could access the data, it was pulled into another database called a data warehouse. This could be a standard Oracle or SQL Server cluster or something more targeted at being a data warehouse like a Teradata or later a Vertica or Netezza appliance. &lt;br&gt;
Bash scripts could be used to move data around or there were ETL tools like Informatica, IBM Datastage or Oracle ODI. Microsoft SQL Server Integration Services came later. Streaming technologies were rare and primitive.&lt;br&gt;
Reports were built on top of the data warehouse with tools like Business Objects or MicroStrategy. These tools generally had their own data modelling layer that was used to define and compute KPIs and measures. These definitions were generally hard to access and couldn't be used by other applications. Your customer for these reports and dashboards were senior or executive level managers. Adoption and active users were generally low. These stacks were called Decision Support Systems (DSS) or Executive Support Systems (ESS) indicating that they were built for a small number of senior managers to support high level decision making.&lt;br&gt;
OLAP cubes could also be built to extract and model data from these data warehouses. This also served the purpose of removing complicated and CPU-intensive queries from the data warehouse. Excel was used extensively to interact with these OLAP cubes. Logic to calculate KPIs was generally duplicated between the OLAP cube and the data visualisation data models, leading to a lot of 'why is this number different from that number?' questions.&lt;br&gt;
Overall, your data architecture could look something like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjo652i7ej990n07u8ig2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjo652i7ej990n07u8ig2.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CPU and storage were expensive and difficult to scale in an on-premise environment. DBAs were protective of their databases and who could run what on them. Therefore access was generally restricted to a few trusted individuals and systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  1) Cloud data warehouses
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvemmt3220620mf29a5xy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvemmt3220620mf29a5xy.png" alt="Evolution1"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first major step in the Modern Data Stack was the cloud data warehouse. Amazon Redshift was the first but was followed closely by Snowflake. Snowflake was also hosted on AWS, although it has recently expanded to include Azure and GCP offerings. Azure Synapse and GCP BigQuery also compete in this space. The unifying factor for all of these is a fully ACID-compliant system with a SQL query engine on top of distributed data storage. More recently, Databricks has started to pivot from the Spark company to the Data Lakehouse company and is pushing its SQL interface too.&lt;br&gt;
The ability to provision a data warehouse in the cloud was, and is, a game changer. Previously, it would take months to order and install the necessary hardware and licenses before you could write a SQL statement. Now, it can be done in minutes. With effectively unlimited CPU and storage, data teams no longer had to be so protective of their hardware. As a result, access to the data started to open up to more and more people and use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  2) Data Visualisation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmuiubvteuuuvrwsas13.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmuiubvteuuuvrwsas13.png" alt="Evolution2"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Separately, data visualisation tools built in the cloud began to appear. Software vendors took two approaches. Newer players like Looker and Mode targeted the cloud data warehouses as their primary data source. Microsoft and Tableau looked to host their existing offerings in the cloud and let customers connect to a myriad of data sources, both cloud and on-premises, treating cloud data warehouses as just another source to be supported. Power BI was built for the cloud, and with its low-cost entry pricing it was a big step in democratising data analysis. Tableau chose a lift and shift of their existing on-premises product to the cloud. Others followed with Google Data Studio and smaller players like Sisense and Yellowfin. Tableau's offering has adapted and is still one of the largest players.&lt;br&gt;
For all the advancements in the Modern Data Stack, this is the one area that has changed the least. We've seen no major evolutions in this space in years. Sisense and Domo will push augmented BI. Sisu will push their ML capabilities. I admit that I am cynical after seeing so many dashboards and reports gather cobwebs once they had served their initial use case. For the most part, 90% of what a data visualisation tool can do has been commoditised for a long time.&lt;br&gt;
The fact that Excel is still the most popular data analysis tool shows how far these tools have fallen short.&lt;/p&gt;

&lt;h2&gt;
  
  
  3) Data Sources (including Streaming)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0kc6rvhogsewbkb9daq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr0kc6rvhogsewbkb9daq.png" alt="Evolution3"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Outside of the data platform, the other big shift happening was the rise of SaaS. Salesforce, Zendesk, Google Analytics and Hubspot are all examples of this trend. There is now a service for every business process, and they all hold an organisation's data outside of its network. These vendors sold their APIs and the ability to access data as a competitive advantage. Rather than blocking organisations from accessing their own data, they actively provided ways to do so. It was a new paradigm, and the need for a platform to consolidate data from multiple disparate sources became more urgent. Streaming data technology also matured and consolidated around technologies such as Kafka.&lt;/p&gt;

&lt;h2&gt;
  
  
  4) ETL to EL + Transform (plus Streaming)
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa2a9dx2q8l7htvsb4hk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa2a9dx2q8l7htvsb4hk.png" alt="Evolution4"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With business data consolidating into SaaS tools and data warehouses consolidating around a small number of players, the commoditisation of tools and connectors to extract and load data was inevitable. Tools like Fivetran, Stitch Data and Airbyte have made it easy for data teams to extract and load data from SaaS tools like Salesforce into their cloud data warehouse.&lt;br&gt;
Data sharing is another data integration capability that Redshift, Snowflake and Databricks now support. The name is a little confusing, but it allows users to access their data without moving it. For example, if one of the SaaS providers is set up within your data warehouse vendor's marketplace, you can subscribe to your own data from within your data warehouse and maintain an up-to-date view of it without using any data integration tools. Imagine having your Amplitude data available as a table in your Snowflake instance, automatically updating without a line of code for you to maintain. As this is a recent feature, I haven't used it this way or talked to anyone who has, but it sounds extremely powerful.&lt;br&gt;
However the data is loaded into your data warehouse, it arrives in the same structure in which it was extracted from the source system. This is useful, and the extra compute and storage available with the cloud data warehouses means it can be processed there. But the ability to join data from these disparate sources together and transform it into a persisted table or view is still the very essence of a data warehouse, especially if you are paying for compute by the minute. In that case, you'll want to run your transform once and persist the results for all your consumers. This is where dbt comes in and changes the game. dbt has democratised this part of the data cycle so much that it has led to the rise of a new role: the Analytics Engineer.&lt;br&gt;
As the modern data platform has shifted from one all-encompassing ETL tool to separate EL and Transform steps, you may need to orchestrate between them. For example, once Fivetran or Airbyte finishes loading the source data, you would want to kick off a dbt job to transform it into a more valuable data model for your reporting and insights. You may have subsequent steps to send out emails and alerts, or push data back to consumer tools via Reverse ETL. This is why we have seen the rise of tools like Airflow, Dagster, Prefect and others to orchestrate the running of pipelines end-to-end.&lt;br&gt;
We are also starting to see standardisation of transforms, where a repeatable process can be applied to everyone's Salesforce or other SaaS data. Fivetran will now supply dbt models for customers to run once they have loaded their data. There are even models that join data from different SaaS products together to create a composite view of customer data.&lt;br&gt;
Companies like Trifacta and Mozartdata are taking this a step further with integrated tools that combine the EL and Transform steps.&lt;br&gt;
The transform stage is also where ML integrates into the Modern Data Stack. As part of your transform, you can call out to ML models: sentiment analysis on surveys, translations, churn predictions, and so on. These could be bespoke models developed in house or existing ML services like Google Translate or Amazon Forecast. Most of the cloud data warehouses will now let you call inference from within a SQL statement, passing features from each row of the query out to an externally hosted model. Alternatively, vendors like Continual AI will run your ML inference for you as data lands in your cloud data warehouse.&lt;/p&gt;
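&lt;p&gt;As an illustration of the EL + Transform split described above, here is a minimal Python sketch of the pattern. The step names, data and aggregate model are hypothetical stand-ins; a real pipeline would trigger Fivetran or Airbyte and dbt through their own APIs or an orchestrator such as Airflow.&lt;/p&gt;

```python
def extract_load():
    # Pretend EL has landed raw SaaS data in the warehouse as-is.
    return {"raw_orders": [{"id": 1, "amount": 120}, {"id": 2, "amount": 80}]}

def transform(raw):
    # dbt-style step: build an aggregate model once for all consumers.
    orders = raw["raw_orders"]
    total = sum(row["amount"] for row in orders)
    return {"orders_summary": {"order_count": len(orders), "revenue": total}}

def run_pipeline():
    # Orchestrator: EL first, then Transform, in dependency order.
    return transform(extract_load())

print(run_pipeline())
# {'orders_summary': {'order_count': 2, 'revenue': 200}}
```

&lt;p&gt;The point is the shape: extract-and-load lands data as-is, and a separate transform step runs once and persists the result for every consumer.&lt;/p&gt;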

&lt;h2&gt;
  
  
  5) Operations to DevOps
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mm4wcqxqz8fd88wew6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6mm4wcqxqz8fd88wew6d.png" alt="Evolution5"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the past, data teams typically split into developers and administrators (DBAs). DBAs would take the code from developers and get it running in production. They would be kept busy trying to keep this code from bringing production down, making sure there was enough space for growth, maintaining indexes and, if they had time, archiving data. If your organisation was trying to implement a business continuity plan, your DBAs might also be occupied maintaining a disaster recovery copy of your database. Everything was on-premises, so DBAs had to deal with other operational teams for OS, network and storage support. It was a very specialised role and not to be taken lightly. Every DBA and database instance was different, which made the role harder to commoditise. DBAs would often act as protectors and gatekeepers of the data system.&lt;br&gt;
Cloud data warehouses have reduced the operational burden so much that the traditional DBA role is no longer necessary. In addition, the rise of the DevOps movement has seen developers take on the "you build it, you run it" mentality. As such, developers have moved to take over any remaining operational tasks still needed for a cloud data warehouse.&lt;br&gt;
Data teams have also embraced automation around CI/CD and monitoring, reducing or eliminating the need for manual intervention in deployments.&lt;/p&gt;

&lt;h2&gt;
  
  
  6) New patterns
&lt;/h2&gt;

&lt;p&gt;Simon Wardley contends that the commoditisation of existing technologies leads to further innovation, as new solutions are built on top of those technologies. The commoditisation of data storage and compute, data integration and transformation, and data visualisation tools is now leading to new means of extracting value from data. Up to this point, most of the successful parts of the data stack have been commoditised versions of older patterns. Snowflake, Fivetran and dbt are all new, albeit better, ways of doing the old things. Therefore I contend that we are only at the start of this innovation cycle.&lt;br&gt;
We are already seeing new patterns emerge. Three examples are Reverse ETL, the metrics layer and a whole plethora of new tools dealing with data trust.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ttjjjun2im9f4cotsn0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ttjjjun2im9f4cotsn0.png" alt="Evolution 6"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  6.1) Reverse ETL
&lt;/h3&gt;

&lt;p&gt;Similar to how Fivetran and Airbyte commoditised data ingestion from SaaS tools into the cloud data warehouse, Reverse ETL tools like Hightouch and Census allow organisations to push data from the cloud data warehouse back into their SaaS systems. I debated whether this was actually a new pattern or just another form of data integration. The primary reason I chose to see it as a new pattern is that these tools are getting data into the hands of a new set of customers. Customer service reps who use Salesforce throughout their day can now see details such as customer revenue or NPS that help them in their day-to-day communication with the organisation's customers. Direct marketers can see more details and metrics about their customers directly within the tools they use to target them. Personally, I think this is a great evolution. Organisations can get more return from their data, and data teams can see their products being used every day, contributing directly to the organisation's bottom line.&lt;br&gt;
It has so quickly become an established part of the stack that I have shown it as commoditised already.&lt;/p&gt;

&lt;h3&gt;
  
  
  6.2) Metrics Layer/Headless BI
&lt;/h3&gt;

&lt;p&gt;The metrics layer has been getting a lot of press recently. A metrics layer can be defined as a centralised store of metric definitions that can be accessed through an API, and therefore by any tool within an organisation.&lt;br&gt;
As I referenced earlier, these definitions were often hidden in a data visualisation tool's data model, within an OLAP cube or in Excel. Separating them out into their own accessible layer is a great idea. Transform.co, Metriql and Supergrain are early innovators in this space. LookML from Looker has been around for a number of years, but wasn't seen as separate from the core Looker data visualisation tool. And dbt Labs announced their own version as part of dbt v1.0 at the Coalesce 2021 conference.&lt;br&gt;
These products still belong to a very nascent part of the stack and need to prove their value to data consumers.&lt;/p&gt;
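&lt;p&gt;A toy Python sketch of the idea: metric definitions live in one central store, and any tool compiles them through a single function. The metric names, store layout and SQL template are invented for illustration and don't follow any vendor's API.&lt;/p&gt;

```python
# Central store of metric definitions, shared by every consuming tool.
METRICS = {
    "revenue": {"agg": "SUM", "column": "amount", "table": "orders"},
    "order_count": {"agg": "COUNT", "column": "*", "table": "orders"},
}

def compile_metric(name, group_by=None):
    # Compile one definition to SQL so every tool computes it the same way.
    m = METRICS[name]
    select = "{agg}({column}) AS {name}".format(agg=m["agg"], column=m["column"], name=name)
    if group_by:
        return "SELECT " + group_by + ", " + select + " FROM " + m["table"] + " GROUP BY " + group_by
    return "SELECT " + select + " FROM " + m["table"]

print(compile_metric("revenue", group_by="region"))
# SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region
```

&lt;p&gt;Whether a dashboard, a notebook or a Reverse ETL job asks for "revenue", each gets the same definition from the same place.&lt;/p&gt;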

&lt;h3&gt;
  
  
  6.3) Data Trust
&lt;/h3&gt;

&lt;p&gt;There are a lot of new vendors working on solutions to tackle data discoverability, quality, observability, reliability and lineage. Regardless of which question these services propose to tackle, I see them all working to improve trust in an organisation's data. For as long as there have been data systems producing insights, data teams have been asked to defend their numbers and ensure they are correct. These questions are valid and necessary, as important decisions are often made based on those numbers. However, they take time to answer, and issues are often caused by data changing rather than by any software bug. As data volumes grew and all parts of the data stack expanded, it became more difficult for teams to keep track of all the data in their platform. Companies like Monte Carlo, Datafold, Bigeye and Metaplane all have products to help data teams keep on top of the state of the data in their data warehouses.&lt;br&gt;
These tools all operate by tracking where data is sourced from, how it gets into the data warehouse and how it is transformed, and then profiling it at rest. Combined with open frameworks like &lt;a href="https://openlineage.io/" rel="noopener noreferrer"&gt;Open Lineage&lt;/a&gt; and &lt;a href="https://open-metadata.org/" rel="noopener noreferrer"&gt;Open Metadata&lt;/a&gt;, these tools have the potential to improve organisations' confidence in their data.&lt;br&gt;
Up to now, the vendors of these tools have targeted data teams as their customer. Data teams understand the problem and are happy to outsource it if possible. I think the real potential for these tools will come when data customers are the ones using them. If data customers can bypass the data teams and check how much trust they can put in the insights being generated, that would be a game-changer. Self-service data trust, if you will.&lt;br&gt;
Taking this a step further, if our industry ever gets to the point where the modern data stack starts to drive customer-facing applications, tools that automate and verify data trust will become essential.&lt;/p&gt;
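&lt;p&gt;To make one of these checks concrete, here is a Python sketch of a basic observability test: flag a table whose daily row count drifts too far from a recent baseline. The threshold and history are invented; real tools learn these profiles over time.&lt;/p&gt;

```python
def row_count_alert(history, today, tolerance=0.5):
    # history: recent daily row counts for a table. Alert when the
    # relative drift from the baseline exceeds the tolerance.
    baseline = sum(history) / len(history)
    drift = abs(today - baseline) / baseline
    # max(...) keeps this char-safe: nonzero only when drift exceeds tolerance.
    return {"baseline": baseline, "drift": drift, "alert": bool(max(drift - tolerance, 0))}

# A table that usually lands ~1000 rows suddenly receives 1600.
print(row_count_alert([1000, 1020, 980], 1600))
```

&lt;p&gt;Chaining checks like this across every table, plus lineage to trace a failure back to its source, is essentially what the observability vendors productise.&lt;/p&gt;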

&lt;h2&gt;
  
  
  Why are you ignoring Hadoop?
&lt;/h2&gt;

&lt;p&gt;The main reason Hadoop is excluded from the Modern Data Stack is that it hasn't enabled the new set of data tooling and processes that the cloud data warehouses have. We can draw a direct line from the introduction of Amazon Redshift and Snowflake to where the Modern Data Stack is now. Hadoop did not enable that. There was definitely some cross-pollination between the two: Spark and Presto both came from the Hadoop ecosystem and are used widely in data stacks. But even as Hadoop was commoditised for the cloud with AWS EMR, Azure HDInsight and GCP Dataproc, it hasn't had the same impact as Snowflake, Amazon Redshift and the other cloud data warehouses.&lt;/p&gt;

&lt;h1&gt;
  
  
  The future
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Continued commoditisation and convergence at the data storage layer
&lt;/h2&gt;

&lt;p&gt;We have seen how the commoditisation of OLAP data warehouse technology has caused an explosion in new data tools and productivity. My wish for the future is for other data storage technologies to converge under a single endpoint. We have already seen this with most of the cloud data warehouses consolidating data lake storage into the data warehouse by supporting semi-structured data types. This enables organisations to store this type of data alongside their relational data. Previously, you would have had to store it on object storage, making it harder to join with.&lt;br&gt;
If a customer could use this one endpoint for multiple data uses, that could drive the post-Modern Data Platform: build a search index or a graph endpoint all on the one platform.&lt;/p&gt;

&lt;p&gt;TDWI have called out Multimodel platforms as their number 1 trend in their &lt;a href="https://tdwi.org/Articles/2021/12/20/DIQ-ALL-Data-Management-Look-Back-At-2021.aspx?Page=1" rel="noopener noreferrer"&gt;Data Management: A Look Back At 2021&lt;/a&gt; article. They define multimodel as&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Multimodel databases combine relational and nonrelational data and seamlessly execute analytics, transactions, and other workloads in a single platform with scalability, performance, high availability, and unified management.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;They reference the NoSQL vendors driving into this space, but Snowflake is also driving at this: you can seamlessly use SQL for search with their &lt;a href="https://docs.snowflake.com/en/user-guide/search-optimization-service.html" rel="noopener noreferrer"&gt;Search Optimisation Service&lt;/a&gt; feature. TigerGraph provides a &lt;a href="https://info.tigergraph.com/tigergraph-snowflake-connector" rel="noopener noreferrer"&gt;Snowflake Connector&lt;/a&gt; that allows customers to analyse their Snowflake data with a graph. Imagine now that you could run a graph workload through that same endpoint.&lt;/p&gt;

&lt;p&gt;AWS should be the obvious frontrunner in this space given that they have a managed service for every &lt;a href="https://aws.amazon.com/products/databases/" rel="noopener noreferrer"&gt;database&lt;/a&gt; engine known to humanity. Their Amplify service allows application developers to model data for transactions and for search by specifying the @searchable directive as part of a GraphQL schema. This will automatically deploy an OpenSearch index fed from your DynamoDB table. However, their official approach is purpose-built databases, allowing their customers to choose the right database engine for their use case. Amplify is for application development, but if we could see something similar for data analytics, I think it could be a massive win. I believe developers should only have to deal with this complexity if they want to or need to, not because the platform demands it by design.&lt;br&gt;
Any of the major cloud vendors could provide this type of solution. They have all the different database technologies, and a single abstraction layer over them could in theory work. For the likes of Snowflake or Firebolt to build this, they would need to stand up these different technologies themselves. Databricks also has a lot of experience building low-level technology and has shown that they are far more than the Spark company.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ability to join streaming data to other state data
&lt;/h2&gt;

&lt;p&gt;I see great potential in streaming but in reality most streaming systems have limited use cases. The inability of streaming systems to join to other state data is a severe limiting factor. Most streaming systems are built against a single event and have limited scope. An event may be decorated with extra information before it is published onto a stream but this can slow the publishing process down. The real power of a data warehouse is its ability to join state from multiple different sources and create new information and insights. If a streaming platform can be built that could join quickly and easily to all state within a data warehouse, that would be a game changer. Or it could happen the other way around. A process from within a data warehouse could read a stream of data in realtime and look up state within the wider data warehouse and take appropriate actions within milliseconds. I see Materialize, Upsolver and Clickhouse going in this direction and it could eventually bring a realtime data warehouse into existence.&lt;/p&gt;
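&lt;p&gt;The pattern I have in mind can be sketched in a few lines of Python: each event on a stream is joined to dimension state before any action is taken. The customer lookup and event shape here are invented stand-ins for warehouse state.&lt;/p&gt;

```python
# Stand-in for state held in the data warehouse.
CUSTOMER_STATE = {
    "c1": {"segment": "enterprise", "churn_risk": "low"},
    "c2": {"segment": "self-serve", "churn_risk": "high"},
}

def enrich(stream):
    # Join each incoming event to the current customer state as it arrives.
    for event in stream:
        state = CUSTOMER_STATE.get(event["customer_id"], {})
        yield {**event, **state}

events = [{"customer_id": "c2", "action": "cancel_clicked"}]
print(list(enrich(events)))
```

&lt;p&gt;The hard part, and the game changer, is doing exactly this lookup against the full warehouse with millisecond latency rather than against an in-memory dict.&lt;/p&gt;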

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;To be honest, before I started this exercise I was filled with a lot of excitement for the Modern Data Stack and the potential around it. I still am but mapping has made it clear to me that we are still only in the early days of seeing any new patterns emerge. Up to now, we have seen the commoditisation of existing patterns rather than new patterns. As more and more customers move onto the commoditised platforms, we'll see new patterns emerge and start to move from genesis to custom built to product and eventually commoditised themselves.&lt;/p&gt;

</description>
      <category>analytics</category>
      <category>wardleymaps</category>
      <category>moderndata</category>
      <category>data</category>
    </item>
    <item>
      <title>Filtering DynamoDB Streams before Lambda</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Mon, 29 Nov 2021 20:34:15 +0000</pubDate>
      <link>https://dev.to/aws-builders/filtering-dynamodb-streams-before-lambda-p5p</link>
      <guid>https://dev.to/aws-builders/filtering-dynamodb-streams-before-lambda-p5p</guid>
      <description>&lt;h1&gt;
  
  
  Event Filtering
&lt;/h1&gt;

&lt;p&gt;AWS recently released a feature that could dramatically reduce your Lambda costs and improve the sustainability of your code. This new feature allows you to filter event sources from SQS, Kinesis Data Streams and DynamoDB Streams before they invoke the Lambda. See the official announcement &lt;a href="https://aws.amazon.com/about-aws/whats-new/2021/11/aws-lambda-event-filtering-amazon-sqs-dynamodb-kinesis-sources/"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Before the release of this feature, every insert, update and delete operation on items in the source DynamoDB table caused the Lambda function to be invoked. Filtering would have to be applied to the events within the function to ascertain whether further processing should occur. This always seemed inherently wasteful to me and this new feature is most definitely welcome. &lt;/p&gt;

&lt;h1&gt;
  
  
  Example
&lt;/h1&gt;

&lt;p&gt;I have a use case where I maintain a running count of items per partition key on a DynamoDB table. To do this, I enabled the Streams feature on the source table and used it to trigger a Lambda function. As I am just counting items, the function only processes INSERT and REMOVE events and does not process MODIFY events. You can read more about it &lt;a href="https://dev.to/aws-builders/select-count-from-dynamodb-group-by-pk1-sk1-with-streams-43dj"&gt;here&lt;/a&gt;. I should say that the item holding the counts is on the same table, and every update to that item puts a MODIFY event on the stream. This event in turn triggers the Lambda again, which the function then ignores.&lt;br&gt;
One of the earliest steps in the function is to check for an INSERT or REMOVE event; if the event is neither, the function exits. With the new ability to filter for INSERT and REMOVE events on the stream, MODIFY events no longer invoke the Lambda function at all. In my case, this will reduce the number of invocations by roughly 50%.&lt;/p&gt;
&lt;h1&gt;
  
  
  Implementation
&lt;/h1&gt;

&lt;p&gt;It was surprisingly easy to implement the filter: it meant the addition of just three lines of YAML to my SAM template.&lt;/p&gt;
&lt;h2&gt;
  
  
  Without filtering
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="na"&gt;Events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;DynamoDBEvent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DynamoDB&lt;/span&gt;
          &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;Stream&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;DynamoDBTable.StreamArn&lt;/span&gt;
            &lt;span class="na"&gt;StartingPosition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TRIM_HORIZON&lt;/span&gt;
            &lt;span class="na"&gt;BatchSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  With filtering
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="na"&gt;Events&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;DynamoDBEvent&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DynamoDB&lt;/span&gt;
          &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;Stream&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;DynamoDBTable.StreamArn&lt;/span&gt;
            &lt;span class="na"&gt;StartingPosition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;TRIM_HORIZON&lt;/span&gt;
            &lt;span class="na"&gt;BatchSize&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
            &lt;span class="na"&gt;FilterCriteria&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;Filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
                &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{"eventName":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;["INSERT","REMOVE"]}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
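&lt;p&gt;To show what the filter does, here is a plain-Python sketch of the matching rule, not the Lambda service's actual implementation: a record passes when its eventName is in the allowed list.&lt;/p&gt;

```python
import json

# The same pattern string as in the SAM template above.
PATTERN = json.loads('{"eventName": ["INSERT", "REMOVE"]}')

def matches(record, pattern=PATTERN):
    # Every pattern key must match; a list means "any of these values".
    return all(record.get(key) in allowed for key, allowed in pattern.items())

records = [{"eventName": "INSERT"}, {"eventName": "MODIFY"}, {"eventName": "REMOVE"}]
print([r["eventName"] for r in records if matches(r)])
# ['INSERT', 'REMOVE']
```

&lt;p&gt;The MODIFY record is dropped before the function is ever invoked, which is exactly the invocation saving described above.&lt;/p&gt;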

&lt;h1&gt;
  
  
  Single Table Design
&lt;/h1&gt;

&lt;p&gt;This feature has positive benefits where you are using single table design in conjunction with DynamoDB Streams. In a single table design, you record several different entities within the one table. If you have a Lambda that is targeted at only one entity in the table, you can now filter to just the events belonging to that entity and ignore the others. The feature allows you to filter on patterns that include the data item being written to the stream. Depending on how you designed your partition key or other fields, you can reference them within the data field labelled "dynamodb".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;      &lt;span class="na"&gt;FilterCriteria&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="na"&gt;Filters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
           &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Pattern&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;{"dynamodb":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;{"pk1":&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;[{"prefix":"ANIMAL#"}]}}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
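&lt;p&gt;The prefix rule above can be sketched the same way in plain Python (again, not the service's real matcher, and with the record shape simplified from a real stream record):&lt;/p&gt;

```python
def prefix_match(record, path, prefix):
    # Walk the nested keys, then test the leaf string's prefix.
    value = record
    for key in path:
        value = value.get(key, {})
    return isinstance(value, str) and value.startswith(prefix)

# Simplified stream record: only ANIMAL# items should reach the Lambda.
record = {"dynamodb": {"pk1": "ANIMAL#dog"}}
print(prefix_match(record, ["dynamodb", "pk1"], "ANIMAL#"))  # True
```

&lt;p&gt;An item keyed &lt;code&gt;PERSON#...&lt;/code&gt; on the same table would fail the prefix test and never invoke the function.&lt;/p&gt;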



&lt;h1&gt;
  
  
  Sustainability
&lt;/h1&gt;

&lt;p&gt;I love that this feature reduces the carbon footprint of your code while also saving you money. This is sustainability in action. In my use case, there wasn't any code that could be removed, but I can see cases where code in a Lambda function could shift left into a filter. That would further reduce the footprint of the function and make it faster to load.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>dynamodb</category>
      <category>lambda</category>
    </item>
    <item>
      <title>DynamoDB Single Table Design - How to use a GSI?</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Fri, 05 Nov 2021 22:20:11 +0000</pubDate>
      <link>https://dev.to/aws-builders/dynamodb-single-table-design-how-to-use-a-gsi-26eo</link>
      <guid>https://dev.to/aws-builders/dynamodb-single-table-design-how-to-use-a-gsi-26eo</guid>
      <description>&lt;p&gt;Single table design is a data modelling approach for NoSQL where you group different data items by partition key on the same table. When you use this approach with DynamoDB, Global Secondary Indexes (GSIs) are an important component to ensure data can be accessed efficiently.&lt;/p&gt;

&lt;h1&gt;
  
  
  Access Patterns
&lt;/h1&gt;

&lt;p&gt;I have put together a simple, hands-on example to illustrate how and why you should use a GSI with a single table design, and how GSIs can improve latency and reduce cost.&lt;br&gt;
My sample application tracks the players of different sports teams and a count of players in each team. These are the two access patterns I will support:&lt;/p&gt;

&lt;p&gt;1) List all sports teams and the count of players per team&lt;br&gt;
2) List all players of a team plus a count of players in that team&lt;/p&gt;

&lt;p&gt;In a traditional DBMS, you would store these items in two separate tables and run two operations to retrieve the data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xjwdk568tp8jbay5rgp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xjwdk568tp8jbay5rgp.png" alt=" " width="784" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With a single table design, you can use the same table to store these items and retrieve both of them in one operation. To get the most from this article, it would be good to understand the 3 options DynamoDB supports for retrieving data. These options are:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GetItem&lt;/strong&gt; - returns the attributes for a given primary key. Only one item is returned at a time; if no item is found, nothing is returned.&lt;br&gt;
&lt;strong&gt;Query&lt;/strong&gt; - returns all items for a given partition key. You can optionally narrow the results by providing a sort key condition and a filter expression.&lt;br&gt;
&lt;strong&gt;Scan&lt;/strong&gt; - returns one or more items and their attributes by accessing every item in a table or secondary index.&lt;/p&gt;

&lt;p&gt;These operations can be executed against both a table and any indexes associated with it. As a rule of thumb, you want to avoid Scan operations as they are very costly: every item scanned counts towards your read capacity unit (RCU) consumption. The more targeted you can make the operation, the fewer items will be read and the more efficient it will be. Therefore, GetItem will always be the most efficient, followed by Query.&lt;/p&gt;
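&lt;p&gt;Some back-of-the-envelope arithmetic shows why. With eventually consistent reads, one RCU covers two 4 KB read units, and a Scan is billed on everything it touches while a Query is billed only on its partition. The item counts and sizes below are made up for illustration.&lt;/p&gt;

```python
import math

def rcus(total_kb):
    # Eventually consistent reads: 0.5 RCU per 4 KB read unit consumed.
    return math.ceil(total_kb / 4) * 0.5

table_items, item_kb, team_items = 100_000, 1, 30

scan_cost = rcus(table_items * item_kb)   # Scan touches every item
query_cost = rcus(team_items * item_kb)   # Query touches one partition
print(scan_cost, query_cost)  # 12500.0 4.0
```

&lt;p&gt;On this hypothetical table, a single Scan costs thousands of times more read capacity than the Query that answers the same question.&lt;/p&gt;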

&lt;h1&gt;
  
  
  Table Model Options
&lt;/h1&gt;

&lt;p&gt;Without a GSI, this is how the data could look within our DynamoDB table. As you can see, there are two types of items here: one item per player per team, and an aggregate item per team that stores the count of players.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg7vhy98owfg3yc0kodi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzg7vhy98owfg3yc0kodi.png" alt=" " width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each team is grouped by the partition key, pk1, and all records for a team are stored together, allowing them to be retrieved quickly in a single Query operation. This addresses our second access pattern.&lt;br&gt;
However, to address our first access pattern without a Scan, we would need to swap the values in the partition key and sort key fields.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26sxv724teddsc4tsexr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F26sxv724teddsc4tsexr.png" alt=" " width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But in this scenario you would need a scan operation to address the second access pattern.&lt;/p&gt;

&lt;p&gt;To avoid using a scan for either operation, a GSI can be implemented. A GSI effectively creates a sidecar table with the data rewritten in a way that best supports your secondary access pattern. GSIs don't need to be maintained separately: they are populated by the same actions that populate the table, with items inserted, updated and deleted asynchronously from the table's own writes. GSIs also have their own RCUs and WCUs, provisioned separately from the table, which means that operations executed against a GSI are isolated from operations against the table.&lt;/p&gt;

&lt;p&gt;For our example, creating a GSI with the sk1 and pk1 fields from scenario 1 reversed will group all of the count records (where sk1=0 from the table) on the same partition key and ensure the items can be accessed efficiently.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3xy27m9hmr16khalo2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk3xy27m9hmr16khalo2r.png" alt=" " width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I can optimize this further by creating separate fields for the GSI partition key and sort key values on the table. I can then create a GSI based on these fields.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ukhcy23g7i0sizy4xzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ukhcy23g7i0sizy4xzv.png" alt=" " width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now the GSI only contains items where the gsi_pk1 and gsi_sk1 fields are populated. This reduces the size of the index to hold only the count records.&lt;/p&gt;
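&lt;p&gt;The "filtered" (sparse) GSI technique can be sketched as follows: only the aggregate items are written with the GSI key attributes, so only they appear in the index. The field conventions (sk1="0" for counts) follow the tables above; the helper names are assumptions:&lt;/p&gt;

```python
def player_item(team: str, player: str) -> dict:
    # A regular player item carries no GSI key attributes,
    # so it never appears in the sparse GSI.
    return {"pk1": team, "sk1": player}

def count_item(team: str, player_count: int) -> dict:
    # The aggregate item carries gsi_pk1/gsi_sk1, so it is indexed.
    return {
        "pk1": team,
        "sk1": "0",            # "0" marks the aggregate record
        "player_count": player_count,
        "gsi_pk1": "0",        # groups all count items on one partition
        "gsi_sk1": team,       # identifies the team within that partition
    }

items = [player_item("TeamA", "Player1"), count_item("TeamA", 999)]
# What the filtered GSI would contain: only items with the GSI keys.
indexed = [i for i in items if "gsi_pk1" in i]
print(len(indexed))  # prints 1
```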

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65tmoozvm6mrq2d0nqyw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F65tmoozvm6mrq2d0nqyw.png" alt=" " width="800" height="126"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Physical Tables
&lt;/h1&gt;

&lt;p&gt;This gives us four possible physical table options to address both access patterns.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Table 1&lt;/th&gt;
&lt;th&gt;Table 2&lt;/th&gt;
&lt;th&gt;Table 3&lt;/th&gt;
&lt;th&gt;Table 4&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;No Index&lt;/td&gt;
&lt;td&gt;No Index&lt;/td&gt;
&lt;td&gt;GSI&lt;/td&gt;
&lt;td&gt;Filtered GSI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Access Pattern 1&lt;/td&gt;
&lt;td&gt;Scan&lt;/td&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Access Pattern 2&lt;/td&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;td&gt;Scan&lt;/td&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;td&gt;Query&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To further illustrate the usefulness of GSIs, I created the four tables and performance tested them. I loaded 999 player items for each of 5 teams which, once the aggregation items were inserted, gave 5,000 items in total. I monitored the following two metrics in CloudWatch during the tests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SuccessfulRequestLatency&lt;/strong&gt; - the response time in milliseconds of successful requests&lt;br&gt;
&lt;strong&gt;ConsumedReadCapacityUnits&lt;/strong&gt; - the number of read capacity units consumed&lt;/p&gt;

&lt;p&gt;I used an average of these metrics for each run in the graphs below.&lt;/p&gt;
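&lt;p&gt;As a sketch of how such averages can be pulled, these are the parameters for a CloudWatch GetMetricStatistics call for the latency metric above. The operation dimension and the time window here are assumptions, not the exact values from my test harness:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def latency_stats_request(table_name: str, operation: str) -> dict:
    # Parameters for cloudwatch.get_metric_statistics(**request).
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/DynamoDB",
        "MetricName": "SuccessfulRequestLatency",
        "Dimensions": [
            {"Name": "TableName", "Value": table_name},
            {"Name": "Operation", "Value": operation},  # e.g. "Query" or "Scan"
        ],
        "StartTime": end - timedelta(minutes=15),  # window of the test run
        "EndTime": end,
        "Period": 60,               # one datapoint per minute
        "Statistics": ["Average"],  # average over each period
        "Unit": "Milliseconds",
    }
```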

&lt;h2&gt;
  
  
  Access Pattern 1
&lt;/h2&gt;

&lt;p&gt;Here we can see how inefficient a Scan operation can be. To retrieve the count of players per team, a Scan had to read all 5000 items in the table, which were then filtered down to just 5 records.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbl0jvv95dt07u1cenzsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbl0jvv95dt07u1cenzsy.png" alt=" " width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Taking out the Scan on table 1, we can see that the table 4 design with the filtered index has the lowest latency. RCUs consumed are consistent across the other tables, as the same 5 items are retrieved with a Query.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wic2ic25jpb1i0otxy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1wic2ic25jpb1i0otxy2.png" alt=" " width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Access Pattern 2
&lt;/h2&gt;

&lt;p&gt;Here again we see the inefficiency of the Scan operation, this time against table 2. We are also retrieving a larger dataset: 1,000 items per team (999 players plus 1 aggregation item).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf2gqvngx8hr0v06a8hy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjf2gqvngx8hr0v06a8hy.png" alt=" " width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Taking out the Scan on table 2, we see that the RCUs consumed are consistent due to the same Query operation being executed. Latency is slightly lower for table 4, but the difference is not substantial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb9ec9c4yhgach7rnq66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwb9ec9c4yhgach7rnq66.png" alt=" " width="600" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;I hope this helps you better understand how a GSI can be implemented to support a successful single table design. All the code I used for my tests is available in this &lt;a href="https://github.com/thomasmilner/ddbgsidemo" rel="noopener noreferrer"&gt;repo&lt;/a&gt;. It's not production ready, just good enough for my tests, but there might be something useful in there for you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>database</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Decouple your DAGs with an event-driven architecture on AWS</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Mon, 13 Sep 2021 20:08:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/decouple-your-dags-with-an-event-driven-architecture-on-aws-bk</link>
      <guid>https://dev.to/aws-builders/decouple-your-dags-with-an-event-driven-architecture-on-aws-bk</guid>
      <description>&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;Applying domain-driven design and an event-driven architecture to the orchestration of our services has given our teams some very practical benefits in their day-to-day work on development and support. &lt;/p&gt;

&lt;h1&gt;
  
  
  The Problem
&lt;/h1&gt;

&lt;p&gt;Up until recently, we ran our main scoring job in one big DAG running in Airflow. This DAG calls services developed and maintained by at least 3 separate teams. With this setup, we were tightly coupling our systems, our processes (on-call, support) and our development and technology choices.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fds0z3wyluo28u83p0tp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fds0z3wyluo28u83p0tp6.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In practice, here are a couple of real world problems we were running into:&lt;/p&gt;

&lt;p&gt;1) The upstream team had several variations of their steps within the DAG. Each variation needed our team's steps copied and maintained in separate DAGs. Decoupling allows us to keep all our steps in a single DAG and to know exactly where our services are being orchestrated from.&lt;br&gt;
2) The decision to use Airflow was made by the upstream team as it made sense for their skills and technologies. Decoupling also allows us to use a technology that may be better suited to our own team's skills; for example, we could move to Step Functions if we wanted. We are no longer bound to another team's or domain's technology choices.&lt;br&gt;
3) In addition to being on a mailing list for all alerts from the DAG, troubleshooting any failure could involve going through the larger DAG. While this may seem minor, situations like these can take a toll on a developer's productivity. Having our own separate DAG allows us to focus on our own services.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Solution
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Domain Driven Design
&lt;/h2&gt;

&lt;p&gt;The first step was to identify the different domains represented within the larger DAG. From the outside, this may seem simple: the core services are easy to identify, but the boundaries between them are harder to draw. Where does one domain end and another begin? Our criterion for each domain revolved around which team supported the service being called. In addition, it was agreed that the upstream domain was responsible for publishing an event when a material step in the DAG had completed, and the downstream domain was responsible for consuming that event. Using these guidelines, we were able to split the DAG into 3 separate domains.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76e2s6gtmh7xlypd6xs4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F76e2s6gtmh7xlypd6xs4.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Communication between domains
&lt;/h2&gt;

&lt;p&gt;We knew we needed to communicate between domains. This communication would involve more than just a marker to say that an event had happened. We also needed to pass some parameters between domains. These parameters were necessary to the execution of the end-to-end flow and needed to be passed from domain to domain. &lt;/p&gt;

&lt;p&gt;The term event-driven has become ubiquitous in modern software development, but what does it mean? What exactly is an event? According to AWS:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;An event is a change in state, or an update, like an item being placed in a shopping cart on an e-commerce website. Events can either carry the state (the item purchased, its price, and a delivery address) or events can be identifiers (a notification that an order was shipped).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Using this definition, we would be able to use the event to pass information from one domain to another. &lt;/p&gt;

&lt;h2&gt;
  
  
  Technical solution
&lt;/h2&gt;

&lt;p&gt;While our Airflow clusters are hosted on-premises, we decided early on that we wanted to use AWS services to publish and subscribe to events. We have an internal goal to host our services on AWS and to use serverless services where we can. Ultimately, the SNS fanout to SQS pattern fitted our requirements well. For more information on this pattern, see this post on the AWS blog.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/compute/enriching-event-driven-architectures-with-aws-event-fork-pipelines/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/compute/enriching-event-driven-architectures-with-aws-event-fork-pipelines/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This pattern allows us to separate the publisher and subscriber into distinct services. The upstream service publishes to an SNS topic with the event details. Each downstream service owns a separate SQS queue that subscribes to that topic. A JSON document can be passed between both services to communicate any necessary parameters.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86o179nxa2ywdxa1umxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F86o179nxa2ywdxa1umxb.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;1) This is the upstream Airflow DAG. Once it has passed a certain point, a JSON document is passed via API Gateway to an SNS topic. &lt;br&gt;
2) SNS immediately informs all subscribers that a new event has been received. The JSON document is passed along to all subscribers.&lt;br&gt;
3) In the downstream domain, a FIFO SQS queue is subscribed to the SNS topic. &lt;br&gt;
4) The first step in the downstream DAG polls the SQS queue on a regular interval for messages using API Gateway. If a message is on the queue, the step validates the message to see if it is properly formed. If so, it kicks off the DAG with the parameters from the JSON document and deletes the message from the queue via API Gateway.&lt;/p&gt;

&lt;p&gt;An obvious advantage of this design is that multiple SQS queues can subscribe to the SNS topic without impacting the upstream DAG or the other subscribed SQS queues.&lt;/p&gt;
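&lt;p&gt;As a minimal sketch of step 4, this is roughly the validation the downstream polling step could perform on a received message before kicking off the DAG. The parameter names (run_id, scoring_date) are hypothetical; the actual fields in our JSON document are not shown in this post:&lt;/p&gt;

```python
import json

# Hypothetical required fields; ours differ in practice.
REQUIRED_PARAMS = {"run_id", "scoring_date"}

def parse_event(sqs_body: str):
    """Return the event parameters if the message is well formed, else None."""
    try:
        # SNS wraps the published JSON document in its own envelope.
        envelope = json.loads(sqs_body)
        event = json.loads(envelope["Message"])
    except (ValueError, KeyError, TypeError):
        return None
    if not REQUIRED_PARAMS.issubset(event):
        return None   # malformed: do not trigger the DAG
    return event      # well formed: trigger the DAG, then delete the message

body = json.dumps({"Message": json.dumps(
    {"run_id": "42", "scoring_date": "2021-09-13"})})
print(parse_event(body)["run_id"])  # prints 42
```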

&lt;p&gt;Note: No Lambdas were harmed in the development of this application. Serverless is about more than Lambda.&lt;/p&gt;

&lt;h2&gt;
  
  
  CDK
&lt;/h2&gt;

&lt;p&gt;We used CDK to deploy our services. This construct is very similar to what we used. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://constructs.dev/packages/@aws-solutions-constructs/aws-sns-sqs/v/1.120.0?lang=python" rel="noopener noreferrer"&gt;https://constructs.dev/packages/@aws-solutions-constructs/aws-sns-sqs/v/1.120.0?lang=python&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, you will need to split the SQS queue out into the downstream domain's code base, parameterized with the name of the SNS topic. This is still a manual step for us, but we are investigating the use of AWS Systems Manager Parameter Store to store and retrieve the name of the relevant topic within the CI/CD process.&lt;/p&gt;
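&lt;p&gt;For illustration, a deployment sketch using that construct might look like this (CDK v1 Python, matching the linked construct version). The stack and construct names, and the FIFO topic/queue settings, are assumptions rather than our exact code:&lt;/p&gt;

```python
from aws_cdk import core
from aws_cdk import aws_sns as sns, aws_sqs as sqs
from aws_solutions_constructs.aws_sns_sqs import SnsToSqs

class ScoringFanoutStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # FIFO topic and FIFO queue, wired together by the construct.
        SnsToSqs(
            self, "ScoringFanout",
            topic_props=sns.TopicProps(fifo=True),
            queue_props=sqs.QueueProps(
                fifo=True,
                content_based_deduplication=True,
            ),
        )
```

In practice, only the topic would live with the upstream domain; the subscribing queue would move into each downstream domain's own stack, parameterized with the topic, as described above.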

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;Utilizing AWS services to facilitate an event-driven architecture  has been a game-changer for us. It is a relatively simple change in our case but provides several powerful benefits.&lt;/p&gt;

&lt;p&gt;To find out more about how AWS can help you decouple your applications and take advantage of event driven architectures, check out this link:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/event-driven-architecture/" rel="noopener noreferrer"&gt;https://aws.amazon.com/event-driven-architecture/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To check out the individual services used, use the links below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/sqs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/sqs/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/sns/" rel="noopener noreferrer"&gt;https://aws.amazon.com/sns/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/api-gateway/" rel="noopener noreferrer"&gt;https://aws.amazon.com/api-gateway/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eventdriven</category>
      <category>serverless</category>
    </item>
    <item>
      <title>From zero to AWS Community Builder in 2 years</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Mon, 16 Aug 2021 20:27:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/from-zero-to-aws-community-builder-in-2-years-4g8c</link>
      <guid>https://dev.to/aws-builders/from-zero-to-aws-community-builder-in-2-years-4g8c</guid>
      <description>&lt;h2&gt;
  
  
  How I Learned to Stop Worrying and Love the Cloud!
&lt;/h2&gt;

&lt;h1&gt;
  
  
  Introduction
&lt;/h1&gt;

&lt;p&gt;In April of 2021, I was accepted into the AWS Community Builders program. This was the latest milestone in a 2 year journey (so far) to upskill myself on cloud technologies. I think now is a good time to pause and share how I was able to achieve this.&lt;/p&gt;

&lt;h1&gt;
  
  
  AWS Community Builders
&lt;/h1&gt;

&lt;p&gt;AWS Community Builders is a program designed to encourage individuals to share their own learnings and experience with others within their network. This can be done via social network platforms, blogging, presenting at meetups or any other method of putting yourself out to the world. If you are doing any of these, I would encourage you to apply via the link below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/developer/community/community-builders/"&gt;https://aws.amazon.com/developer/community/community-builders/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You get a lot of benefits if you get accepted into the program:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access to AWS product teams and information about new services and features&lt;/li&gt;
&lt;li&gt;Cool swag kit (pictured above)&lt;/li&gt;
&lt;li&gt;Promotional credits to be used for AWS development&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are all great but after being a member for 3 months now, I can say the single biggest benefit to me has been access to the AWS Community Builders Slack channel. There you will find a great group of individuals who can really help you up your AWS game. And who you can help too. It is a community and you get back as much as you put in.&lt;/p&gt;

&lt;h1&gt;
  
  
  Who
&lt;/h1&gt;

&lt;p&gt;A bit of background on myself. I am a 41 year old data engineering manager with 23 years' experience in IT. I spent most of the last decade managing data warehousing teams, focused on trying to climb the career ladder and not prioritizing my technical skills. I was always considered technical by my peers and team members and was often told by developers that they appreciate being managed by someone technical. To be honest, this made me a little lazy. I thought I had the technical side covered and could therefore focus on being a manager and advancing my career. My strengths lay in organizing the flow of work into the team and protecting them from too many distractions, and that is where I felt I could add most value in a team.&lt;/p&gt;

&lt;p&gt;In the first half of 2019, three things happened that forced me to assess the best way to move forward in IT.&lt;/p&gt;

&lt;p&gt;1) Restructuring within my employer's organization meant I lost half my team. This made me realize that I had to rethink how I wanted to fulfill my career ambitions as a manager.&lt;br&gt;
2) I attended a course on strengths-based goal setting. The basic premise is that focusing on your strengths will lift your weaknesses. Rather than trying to improve my presentation and networking skills head on, I could focus on learning the material and message I had to present and socialize. The better you know your material, the better you will be at presenting it.&lt;br&gt;
3) I became a mentor with the CoderDojo association. Initially, this was to help my son learn how to code. I partnered with another parent to set up a dojo in our local town to help teach kids Scratch. Seeing how kids learn really ignited a fire in me to restart my own technical training.&lt;/p&gt;

&lt;p&gt;Around the same time, it was becoming obvious that my employer's existing on-premises data platforms needed to be replaced. I felt that now was a good time to start advocating for a project to upgrade our platform. We wanted to go cloud, and this eventually led me to AWS.&lt;/p&gt;

&lt;h1&gt;
  
  
  How
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Certification
&lt;/h2&gt;

&lt;p&gt;I understand that some people are skeptical of certifications but, for me, I wouldn't be where I am without them. My first cert was the AWS Cloud Practitioner. It is the entry-level cert and perfect for anyone starting off. It's the easiest AWS certification exam you will sit and gets you familiar with the whole process of studying, scheduling and sitting exams. I had not sat an exam in over 10 years, so this felt like a big step for me. Also, when I say easiest, I don't mean it is easy, just not as hard as the other exams.&lt;br&gt;
Since passing the Cloud Practitioner exam, I have passed four more and will hopefully continue with others. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Certified Cloud Practitioner&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.credly.com/badges/8fc559e6-b8c3-42f3-b78e-7daa81ba6b86/public_url"&gt;https://www.credly.com/badges/8fc559e6-b8c3-42f3-b78e-7daa81ba6b86/public_url&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Certified Solutions Architect – Associate&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.credly.com/badges/80e7e22d-af7c-4b73-b3f1-1e95136e42b8/public_url"&gt;https://www.credly.com/badges/80e7e22d-af7c-4b73-b3f1-1e95136e42b8/public_url&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Certified Big Data – Specialty&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.credly.com/badges/f208e2bb-a63b-41d0-addd-555db292c637/public_url"&gt;https://www.credly.com/badges/f208e2bb-a63b-41d0-addd-555db292c637/public_url&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Certified Data Analytics – Specialty&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.credly.com/badges/429e9873-1fd4-43df-b3c1-6b2205e01442/public_url"&gt;https://www.credly.com/badges/429e9873-1fd4-43df-b3c1-6b2205e01442/public_url&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Certified Developer – Associate&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://www.credly.com/badges/a0c991e4-b007-421b-a00b-403aa2c85903/public_url"&gt;https://www.credly.com/badges/a0c991e4-b007-421b-a00b-403aa2c85903/public_url&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I am spending more time this year getting hands on experience and blogging than working on certifications. However, I wouldn't be at this point without starting with certifications. They provided a structure and a target for me for getting trained up on AWS and I will continue to study for more certifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Blogging
&lt;/h2&gt;

&lt;p&gt;Publishing my first blog was an even more nerve-wracking experience than sitting my first certification exam. Putting myself out there to the general public pushed me far out of my comfort zone. My article got several thousand views and didn't bring the world crashing down on me, so that was a good start. Since then, I have published 7 other articles. I set a target at the start of 2021 to publish an article every month; so far, I have published 6 in 8 months. When you apply for the AWS Community Builders program, you don't get a breakdown of why you were accepted, but I believe that blogging is what made my application stand out.&lt;br&gt;
I use &lt;a class="mentioned-user" href="https://dev.to/practicaldev"&gt;@practicaldev&lt;/a&gt; to publish my blogs as there is no paywall for the readers. I don't do it for money; I do it to share what I've learned and to help others. Check out my blogs here:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/tom_millner"&gt;https://dev.to/tom_millner&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Social Media
&lt;/h2&gt;

&lt;p&gt;Being active on Twitter and/or LinkedIn has been a game-changer for me. Similar to my experience with the AWS Community Builders Slack channel, being on Twitter exposes you to a community of like-minded individuals. I have learned so much from following experts like Jeremy Daly, Sheen Brisals, Yan Cui, Zack Kanter, Chris Munns, James Beswick and many more. Check out who I follow for inspiration.&lt;/p&gt;

&lt;p&gt;Check out &lt;a href="https://twitter.com/tom_millner/following"&gt;https://twitter.com/tom_millner/following&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even if you don't tweet, being on Twitter is one of the best ways of keeping up to date with the latest technical news and releases.&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;If you are serious about joining the AWS Community Builders program, there is no secret to being accepted. You need to put the time in and learn as much as you can about AWS Cloud. There are different tracks, so you can specialize in one or two areas: Data, Development Tools, Game Tech, Front-End Web and Mobile, Machine Learning and Serverless, among others. These give you the scope to focus on your chosen track. After that, it's down to hard work and spreading the knowledge of what you have learned.&lt;br&gt;
It's definitely been worth it for me, both professionally and personally. I am enjoying my industry now more than ever, and the key has been learning and focusing on my strengths. By going all in on cloud, and AWS specifically, I am learning so much every day and I continue to see great scope to grow my technical skills and career.&lt;/p&gt;

</description>
      <category>aws</category>
    </item>
    <item>
      <title>Migrating to Redshift RA3 nodes</title>
      <dc:creator>Tom Milner</dc:creator>
      <pubDate>Tue, 20 Jul 2021 06:00:52 +0000</pubDate>
      <link>https://dev.to/aws-builders/migrating-to-redshift-ra3-nodes-fn7</link>
      <guid>https://dev.to/aws-builders/migrating-to-redshift-ra3-nodes-fn7</guid>
      <description>&lt;h1&gt;
  
  
  What is RA3?
&lt;/h1&gt;

&lt;p&gt;RA3 is the latest family of Redshift node types, launched at re:Invent 2019. The other node types are Dense Storage (DS2) and Dense Compute (DC2). The primary difference with RA3 is that it has a completely separate storage layer called Redshift Managed Storage (RMS). RMS uses high performance SSDs for your hot data and Amazon S3 for cold data. In addition, it uses high bandwidth networking built on the AWS Nitro System to reduce the time taken for data to be offloaded to and retrieved from Amazon S3.&lt;br&gt;
With the DC2 and DS2 node types, storage is tightly coupled to the compute nodes, with local disks attached to each individual node.&lt;/p&gt;

&lt;h1&gt;
  
  
  Why move to RA3?
&lt;/h1&gt;

&lt;p&gt;There are a number of reasons to consider migrating your existing cluster to the RA3 node types and I have listed what I believe are the main ones.&lt;/p&gt;

&lt;h2&gt;
  
  
  Decoupled Storage
&lt;/h2&gt;

&lt;p&gt;Every organization is different but the main reason for ours to migrate to RA3 was to take advantage of the increased storage to compute ratio. For every ra3.4xl or ra3.16xl node you add to your cluster, you get access to 128TB of storage capacity, and you only pay for what you consume; you are not paying for the full 128TB every month.&lt;br&gt;
By migrating one ds2.8xl node to 2 ra3.4xl nodes, we increased our storage capacity sixteen-fold, from 16TB to 256TB. While storage is still technically coupled to a compute node, there is so much of it that it has effectively become decoupled.&lt;br&gt;
This table lists the difference in technical specifications between 1 ds2.8xl node and 2 ra3.4xl nodes.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node Type&lt;/th&gt;
&lt;th&gt;Nodes&lt;/th&gt;
&lt;th&gt;vCPU&lt;/th&gt;
&lt;th&gt;Memory&lt;/th&gt;
&lt;th&gt;Storage&lt;/th&gt;
&lt;th&gt;I/O&lt;/th&gt;
&lt;th&gt;Slices&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ds2.8xl&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;36&lt;/td&gt;
&lt;td&gt;244 GiB&lt;/td&gt;
&lt;td&gt;16TB HDD&lt;/td&gt;
&lt;td&gt;3.30 GB/s&lt;/td&gt;
&lt;td&gt;160&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ra3.4xl&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;td&gt;192 GiB&lt;/td&gt;
&lt;td&gt;256TB RMS&lt;/td&gt;
&lt;td&gt;4.00 GB/s&lt;/td&gt;
&lt;td&gt;80&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;It may look like you are taking a reduction in vCPU and memory, but that hasn't had any noticeable impact on the performance of our clusters. Our typical queries run in the same time or slightly faster; the combination of newer hardware and SSDs makes up for the apparent reduction.&lt;br&gt;
With the DS2 and DC2 node types, storage is tightly coupled to vCPU and memory: to purchase more storage, you also have to purchase the attached vCPU and memory. This pushes the cost of 1TB of storage on these node types to between $77 and $306 per month, depending on purchase plan. With RA3, storage is priced separately from vCPU and memory, and 1TB of RMS costs $24 per month, regardless of purchase plan. In our case, we had previously expanded a cluster from 5 to 7 ds2.8xl nodes purely for storage requirements. When we migrated to ra3.4xl, we were able to reduce what would have been 14 nodes (using a 1:2 ratio) to 10 nodes.&lt;/p&gt;
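&lt;p&gt;The effect of decoupling on node counts can be sketched with a quick back-of-the-envelope model. The $24/TB RMS price and the 16TB-per-node DS2 figure below come from this article and are illustrative, not current AWS list prices:&lt;/p&gt;

```python
# Rough cost model for storage-driven node scaling. The $24/TB RMS
# price and 16TB-per-node DS2 figure come from this article and are
# illustrative, not current AWS list prices.
DS2_8XL_TB = 16          # storage coupled to each ds2.8xl node
RMS_PER_TB_MONTH = 24.0  # RA3 managed storage, dollars per TB-month

def ds2_nodes_for_storage(tb_needed):
    """Nodes you must buy on DS2 just to hold the data."""
    nodes = tb_needed // DS2_8XL_TB
    if tb_needed % DS2_8XL_TB:
        nodes += 1
    return int(nodes)

# 112TB forces a 7-node DS2 cluster even if 5 nodes of compute
# would do; on RA3 the same data is just a flat storage bill.
print(ds2_nodes_for_storage(112))       # 7
print(112 * RMS_PER_TB_MONTH)           # 2688.0
```

&lt;p&gt;On DS2, the storage requirement alone dictates the node count whether you need the compute or not; on RA3, storage becomes a line item you can scale independently.&lt;/p&gt;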

&lt;h2&gt;
  
  
  Operational Overhead
&lt;/h2&gt;

&lt;p&gt;The increased storage to compute ratio means that you can now scale storage and compute separately. With our previous 10 node ds2.8xl cluster, we were bound to 160TB. Storage capacity on the cluster bounced between 60% and 80% used depending on the time of year. Working with our business customers to keep used storage capacity below 80% at peak times of activity was a full-time job, adding overhead to both IT and the business teams and taking time away from value creation. Not having to chase business users to delete data from the cluster has been a relief for both sides.&lt;br&gt;
If you are purchasing RIs on a 3 or even 1 year term, having different renewal dates on different nodes within the same cluster complicates your cost management. Nodes will come up for renewal at different times of the year, and it also complicates any future plans you may have for your cluster size. Imagine you started with a 5 node cluster purchased on a 3 year term. 1.5 years in, you realize that you need to increase your cluster size to 7 nodes, so you purchase 2 more nodes on a 3 year term. Effectively, you have now extended the lifetime of your cluster to 4.5 years. And when the original 5 nodes come up for renewal, what do you do? Extend for another 1 or 3 years?&lt;br&gt;
There is no RI plan for RMS, so storage drops out of this complexity entirely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Availability
&lt;/h2&gt;

&lt;p&gt;If you are running a two node cluster for availability purposes, you may be able to scale down to a smaller RA3 cluster. In our case, we were able to resize a 2 node ds2.8xl cluster to a 2 node ra3.4xl cluster, giving us the same availability profile with 256TB of storage.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Features
&lt;/h2&gt;

&lt;p&gt;Other reasons to upgrade to RA3 nodes are that a number of features are only available, and will only ever be available, on this node type. The new storage model makes possible a number of new features that couldn't be supported on the DC2 and DS2 node types.&lt;/p&gt;

&lt;p&gt;1) AQUA: As outlined, RMS stores all data on S3 and then moves data to an SSD layer when requested by the compute layer. AQUA pushes more predicates down to RMS and reduces data movement between the storage and compute layers. The first iteration of AQUA supports scan and aggregation operations when they contain at least one predicate with a LIKE or SIMILAR TO expression. This should greatly enhance performance for queries of that shape. We recently turned it on, but most of our queries do not match this profile and we have seen very little benefit. However, we hope that AWS will add support for more expressions in the future, and we are now best placed to take advantage of any future optimizations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-aqua-understanding.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-aqua-understanding.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2) Data sharing: This feature allows you to share live data stored on RMS between different Redshift clusters. For example, you can have one cluster that writes your data to RMS and separate clusters for consuming that data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/big-data/announcing-amazon-redshift-data-sharing-preview/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/big-data/announcing-amazon-redshift-data-sharing-preview/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;3) Cross-AZ Cluster Relocation: Until now, a Redshift cluster was hosted entirely in a single AZ. A failure in the host AZ would mean restoring a snapshot to a new cluster in a separate AZ, generally via a bespoke process. However, RMS is not bound to a single AZ, so in case of a failure you only need to restore your compute nodes in a separate AZ. AWS exposes this as a setting called Cluster relocation in the cluster's backup details. Enabling it allows Redshift to relocate your cluster when certain AZ-level issues arise.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-recovery.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;4) Some existing features may work more effectively with RA3 and RMS specifically. We are currently re-testing concurrency scaling with RA3 as we expect this should work differently with RA3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Storage Tiering
&lt;/h2&gt;

&lt;p&gt;With the restricted storage capacity of the DS2 and DC2 nodes, one storage saving pattern is to offload the older data on the cluster to S3 and expose it in Redshift using the Spectrum service. Storing 1TB of data in the S3 Standard class costs $23.55 per month. Storing this data in RMS costs only slightly more at $24.58 per month. However, this ignores the operational cost of supporting a bespoke solution using Spectrum.&lt;br&gt;
If you choose a different S3 storage class, the Spectrum pattern may be more cost-effective. For example, if you went for Standard-Infrequent Access, the price would be almost halved to $12.80. However, be careful with the Standard-IA storage class: data stored in Standard-IA costs more to access, and your costs could quickly exceed Standard if you access it too frequently. A GET request against Standard-IA costs 2.5 times more than against Standard, which can add up quickly.&lt;br&gt;
I would say that RMS offers an easy way to take advantage of S3 storage pricing without any of the operational overhead of explicitly archiving data to S3.&lt;/p&gt;
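&lt;p&gt;The Standard-IA trade-off is easy to quantify. The sketch below uses the per-TB prices quoted above plus commonly quoted per-request S3 rates (an assumption here; check current S3 pricing) to show how heavy access erodes Standard-IA's storage saving:&lt;/p&gt;

```python
# Monthly cost per TB under each option. Storage prices are the
# figures quoted in this article; the per-request rates are the
# commonly quoted us-east-1 numbers and are assumptions here.
S3_STANDARD_TB = 23.55
S3_IA_TB = 12.80
RMS_TB = 24.58
GET_STANDARD = 0.0004 / 1000   # dollars per GET request
GET_IA = 0.001 / 1000          # 2.5x the Standard rate

def monthly_cost(storage_per_tb, get_price, tb, gets):
    return storage_per_tb * tb + get_price * gets

# 1TB accessed heavily: IA's cheap storage is eaten by request cost.
std = monthly_cost(S3_STANDARD_TB, GET_STANDARD, 1, 20_000_000)
ia = monthly_cost(S3_IA_TB, GET_IA, 1, 20_000_000)
print(round(std, 2), round(ia, 2))   # 31.55 32.8
```

&lt;p&gt;At roughly 20 million GETs per month against that 1TB, Standard-IA already costs more than Standard; below that threshold, it wins.&lt;/p&gt;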

&lt;h1&gt;
  
  
  Pricing
&lt;/h1&gt;

&lt;p&gt;Pricing is a tricky beast here. On-demand and 1 year terms cost almost the same once you include storage. However, if you compare 3 year terms for ds2.8xl versus ra3.4xl nodes, the newer hardware is 44% more expensive.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node Type&lt;/th&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;On Demand&lt;/th&gt;
&lt;th&gt;1yr&lt;/th&gt;
&lt;th&gt;3yr&lt;/th&gt;
&lt;th&gt;Storage (16TB * 0.8)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;ds2.8xl&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;$7.60 per Hour&lt;/td&gt;
&lt;td&gt;$5.00&lt;/td&gt;
&lt;td&gt;$2.27&lt;/td&gt;
&lt;td&gt;$0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ra3.4xl&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;$7.212 per Hour&lt;/td&gt;
&lt;td&gt;$4.84&lt;/td&gt;
&lt;td&gt;$2.84&lt;/td&gt;
&lt;td&gt;$0.44&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost Difference&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;0.65%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;5.6%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;44%&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
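&lt;p&gt;The percentages in the last row can be reproduced directly from the table's prices. The on-demand figure depends on how the hourly storage charge is rounded, so only the reserved terms are shown here:&lt;/p&gt;

```python
# Reproduce the cost-difference row of the table above: effective
# hourly cost of 1 ds2.8xl vs 2 ra3.4xl, prices taken from the table.
ds2 = {"1yr": 5.00, "3yr": 2.27, "storage": 0.00}
ra3 = {"1yr": 4.84, "3yr": 2.84, "storage": 0.44}

def pct_diff(term):
    old = ds2[term] + ds2["storage"]
    new = ra3[term] + ra3["storage"]
    return round(100 * (new - old) / old, 1)

print(pct_diff("1yr"), pct_diff("3yr"))   # 5.6 44.5
```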

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbqi6otuop5kivjr50so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbqi6otuop5kivjr50so.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pricing for backup storage is also different. For DS2 and DC2 clusters the charge is only applied to the amount of manual snapshot storage above the amount of provisioned storage on the cluster. All manual snapshots taken for RA3 clusters are billed as backup storage at standard Amazon S3 rates. However, automated snapshots are priced the same.&lt;/p&gt;

&lt;p&gt;All of these pricing scenarios assume that you upgrade on a 1:2 ratio. In theory, if you replace 1 ds2.8xl with 2 ra3.4xl nodes, it will cost you more over a 3 year term. But that ignores the main benefit of the decoupled compute and storage that RA3 provides: in practice, we were able to reduce the number of nodes we were using, and our monthly Redshift costs dropped by 12%.&lt;br&gt;
Other cost savings from reduced operational overhead are harder to quantify but worth taking into account when assessing a move to RA3.&lt;/p&gt;

&lt;h1&gt;
  
  
  Migration
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;The migration process is straightforward. The only technical decision you need to make is whether to use elastic or classic resize. Classic resize provisions a new cluster and copies the data from the original cluster to it. Elastic resize works by changing or adding nodes on your existing cluster. Elastic resize is the AWS recommended approach, and it has improved a lot recently, especially with the ability to change node types. It is also the fastest way to add and remove nodes; classic resize can take several hours or days depending on the size and configuration of your cluster.&lt;br&gt;
The main reason to use classic over elastic resize is if the new cluster configuration isn't possible with an elastic resize. Elastic resize is an opinionated option and only allows you to add or remove nodes within certain parameters. For example,&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;For dc2.8xlarge, ds2.8xlarge, ra3.4xlarge, or ra3.16xlarge node types, you can change the number of nodes to half the current number or double the current number of nodes. A 4-node cluster can be resized to 2, 3, 5, 6, 7, or 8 nodes.&lt;/p&gt;
&lt;/blockquote&gt;
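&lt;p&gt;That rule is simple enough to express in a few lines. This is a simplified model of the documented behaviour for the 8xl and ra3 node types quoted above, not an official API:&lt;/p&gt;

```python
# Simplified model of the elastic-resize rule quoted above for the
# 8xl and ra3 node types: half to double the current node count,
# excluding the current size itself.
def elastic_resize_targets(current_nodes):
    low = current_nodes // 2
    high = current_nodes * 2
    return [n for n in range(low, high + 1) if n != current_nodes]

print(elastic_resize_targets(4))   # [2, 3, 5, 6, 7, 8]
```

&lt;p&gt;For anything outside that window, you need a classic resize.&lt;/p&gt;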

&lt;p&gt;The easiest way to check which cluster sizes are available via elastic resize is within the console.&lt;br&gt;
Go to Actions --&amp;gt; Resize. Elastic resize (recommended) is chosen by default. Scroll down to the Nodes dropdown to see the node counts you can upgrade to.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pml0h7wi9vn3hk0u236.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pml0h7wi9vn3hk0u236.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For more information, AWS has a good page with the details you'll need on resizing your cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-operations.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Execution
&lt;/h2&gt;

&lt;p&gt;At the time of our upgrade, we were able to choose elastic resize for 2 of our larger clusters. Elastic resize was not available as an option for moving from a 2 node ds2.8xl cluster to a 2 node ra3.4xl cluster, so we went with classic resize for those. However, that option is now available via the console.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Original Node Type&lt;/th&gt;
&lt;th&gt;New Node Type&lt;/th&gt;
&lt;th&gt;Elastic vs Classic&lt;/th&gt;
&lt;th&gt;Data Size&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;10 ds2.8xl&lt;/td&gt;
&lt;td&gt;20 ra3.4xl&lt;/td&gt;
&lt;td&gt;Elastic&lt;/td&gt;
&lt;td&gt;120TB&lt;/td&gt;
&lt;td&gt;30 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7 ds2.8xl&lt;/td&gt;
&lt;td&gt;10 ra3.4xl&lt;/td&gt;
&lt;td&gt;Elastic&lt;/td&gt;
&lt;td&gt;70TB&lt;/td&gt;
&lt;td&gt;32 minutes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2 ds2.8xl&lt;/td&gt;
&lt;td&gt;2 ra3.4xl&lt;/td&gt;
&lt;td&gt;Classic&lt;/td&gt;
&lt;td&gt;2.66TB&lt;/td&gt;
&lt;td&gt;8 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2 ds2.8xl&lt;/td&gt;
&lt;td&gt;2 ra3.4xl&lt;/td&gt;
&lt;td&gt;Classic&lt;/td&gt;
&lt;td&gt;9.86TB&lt;/td&gt;
&lt;td&gt;35 hours&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;As you can see from the table above, there is a huge difference in execution times of elastic versus classic resize. While there may be reasons to use classic, an elastic resize makes the entire upgrade simple and quick.&lt;/p&gt;
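&lt;p&gt;To put that gap in perspective, dividing each data size by its wall-clock time gives the effective throughput of each resize (purely illustrative arithmetic on the table above):&lt;/p&gt;

```python
# Effective throughput of each resize in the table above:
# data size divided by wall-clock time (illustrative only).
resizes = [
    ("elastic", 120, 0.5),       # 120TB in 30 minutes
    ("elastic", 70, 32 / 60),    # 70TB in 32 minutes
    ("classic", 2.66, 8),
    ("classic", 9.86, 35),
]
for kind, tb, hours in resizes:
    print(kind, round(tb / hours, 2), "TB/hour")
```

&lt;p&gt;The elastic resizes moved hundreds of terabytes per hour; the classic resizes managed a fraction of a terabyte per hour.&lt;/p&gt;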

&lt;p&gt;One thing to note: while we were able to run both elastic resizes in parallel, classic resizes cannot run in parallel. If you kick off a second classic resize, it will queue until the first completes, and the waiting cluster will be in read-only mode in the meantime, so you may want to start the second resize only after the first has finished.&lt;/p&gt;

&lt;h1&gt;
  
  
  Post-migration
&lt;/h1&gt;

&lt;p&gt;As stated previously, the migration to RA3 did not bring a massive lift in performance. It is a very stable and easy migration that brings a lot of benefits in reduced operational and management load. That stability was apparent in the Redshift CloudWatch dashboard, where we saw large drops in ReadIOPS and WriteIOPS, a drop in ReadLatency and WriteLatency, and a slight increase in CPU utilization, all expected with the move to RMS, but no major changes overall.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febo469mexp9ndik07mc9.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Febo469mexp9ndik07mc9.PNG" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Summary
&lt;/h1&gt;

&lt;p&gt;Upgrading your Redshift cluster to RA3 node types simplifies the management of your cluster. This is primarily enabled by the massive increase in storage available per node, and it comes without any trade-off in performance. We are now 4 months past the upgrade, and the best word I can use to describe our experience is stability. The upgrades on our largest clusters took 30 minutes using an elastic resize, required no code changes, and had no impact on our customers.&lt;br&gt;
We also see AWS enabling new features on RA3 first. The new storage architecture makes possible features that the older node types could not support, and such features will never be available on the DC2 or DS2 node types.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>analytics</category>
      <category>database</category>
    </item>
  </channel>
</rss>
