<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Natalia Marek</title>
    <description>The latest articles on DEV Community by Natalia Marek (@nataliam).</description>
    <link>https://dev.to/nataliam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F287847%2Fe2ba5d10-1077-4523-86c8-6cc95ee7abce.jpg</url>
      <title>DEV Community: Natalia Marek</title>
      <link>https://dev.to/nataliam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nataliam"/>
    <language>en</language>
    <item>
      <title>AWS Backup and Logically Air Gapped Vault</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Fri, 19 Dec 2025 11:39:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-backup-and-logically-air-gapped-vault-3mio</link>
      <guid>https://dev.to/aws-builders/aws-backup-and-logically-air-gapped-vault-3mio</guid>
      <description>&lt;p&gt;AWS Backup is a fully managed data protection service that enables you to automate and centralise backup of the data from multiple AWS services. AWS Backup supports cross-region and cross-account backups, incremental and continuous backups and lifecycle management for some supported resources. &lt;/p&gt;

&lt;p&gt;AWS Backup can be enabled for a number of AWS resources such as RDS, DynamoDB tables, S3 buckets, EBS volumes, Redshift and CloudFormation stacks - &lt;a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource" rel="noopener noreferrer"&gt;here is the list of supported services&lt;/a&gt;. The advantages of using AWS Backup beyond data protection include reduced operational overhead, centralised monitoring and simplified compliance reporting. &lt;/p&gt;

&lt;h2&gt;
  
  
  Centralised, Cross-account Backup and Monitoring
&lt;/h2&gt;

&lt;p&gt;You can set up AWS Backup in your workload account; however, backing up data cross-account improves resilience and reduces the blast radius if your account is compromised. &lt;/p&gt;

&lt;p&gt;In the presented pattern, the resource being backed up resides in the production workload account, the backup policy is created and managed in the Organisation management account, and the actual backup vaults and backup plan with the backed-up data are created in a dedicated Backup account.&lt;/p&gt;

&lt;p&gt;This way, if any of the workload accounts is compromised, the backup plan continues as usual and all recovery points remain secure in the Backup account. Additionally, you can implement IAM policies and SCPs (Service Control Policies) to prevent unauthorised deletion of backups, even by administrators in the workload accounts.&lt;/p&gt;
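&lt;p&gt;As a sketch of such a guardrail, the policy below denies destructive AWS Backup actions; the action list is illustrative rather than exhaustive, so extend it (and add exemptions) to fit your organisation.&lt;/p&gt;

```python
# Illustrative SCP denying destructive AWS Backup actions in workload
# accounts; attach it to the workload OU via AWS Organizations. The action
# list here is an example, not an exhaustive one.
def build_backup_protection_scp():
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyBackupDeletion",
                "Effect": "Deny",
                "Action": [
                    "backup:DeleteBackupVault",
                    "backup:DeleteRecoveryPoint",
                    "backup:DeleteBackupPlan",
                    "backup:UpdateRecoveryPointLifecycle",
                ],
                "Resource": "*",
            }
        ],
    }

scp = build_backup_protection_scp()
```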

&lt;p&gt;Key benefits of cross-account backup:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Blast radius reduction&lt;/strong&gt; - a compromised workload account cannot affect backups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Separation of duties&lt;/strong&gt; - different teams manage workloads vs backups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Centralised compliance&lt;/strong&gt; - single pane of glass for audit &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost optimisation&lt;/strong&gt; - centralised lifecycle policies and storage tiers (where applicable) -&amp;gt; see &lt;a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/backup-feature-availability.html#features-by-resource" rel="noopener noreferrer"&gt;backup feature availability&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs72qybqdm6mj9rm4qmke.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs72qybqdm6mj9rm4qmke.png" alt="Cross-account Backup Architecture Diagram" width="800" height="490"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Vault Lock and Logically Air Gapped Vault
&lt;/h2&gt;

&lt;p&gt;With the pattern above AWS Backup vault, policies and plan all stay within AWS Organisation and Backup account. In addition to storing backups in a separate account we can add another layer of protection, by creating logically air-gapped vault or enabling vault lock.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vault lock
&lt;/h3&gt;

&lt;p&gt;Vault Lock is a feature available on any vault as additional security. There are two modes to choose from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Governance mode&lt;/strong&gt; - allows users with specific IAM permissions to remove the vault lock and modify retention settings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance mode&lt;/strong&gt; - provides immutable backups: the vault cannot be deleted while any recovery points exist, and the retention period cannot be shortened, even by the root user. Once the grace period expires (usually 72 hours), the lock becomes permanent and cannot be removed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using vault locks enforces the retention period set by the backup policy for the resource, protecting against ransomware and accidental deletion.&lt;/p&gt;
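&lt;p&gt;A minimal sketch of locking a vault in compliance mode via boto3 (the vault name and retention values are examples):&lt;/p&gt;

```python
# Sketch of a compliance-mode vault lock. ChangeableForDays is the grace
# period; once it elapses the lock can no longer be removed. Omitting it
# keeps the lock in governance mode instead. All values are examples.
def build_vault_lock_request(vault_name):
    return {
        "BackupVaultName": vault_name,
        "MinRetentionDays": 35,   # recovery points must be kept at least this long
        "MaxRetentionDays": 365,  # and no longer than this
        "ChangeableForDays": 3,   # 72-hour grace period, then the lock is permanent
    }

lock_request = build_vault_lock_request("central-vault")
# import boto3
# boto3.client("backup").put_backup_vault_lock_configuration(**lock_request)
```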

&lt;h3&gt;
  
  
  Logically Air Gapped (LAG)
&lt;/h3&gt;

&lt;p&gt;A LAG vault offers an additional layer of protection and security compared to a standard AWS Backup vault. With a LAG vault:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS-managed encryption&lt;/strong&gt; - the KMS key used to encrypt backups is owned and managed by AWS, preventing key deletion and modification&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance vault lock by default&lt;/strong&gt; - automatically configured with compliance mode for immutable backups&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Isolated from source account&lt;/strong&gt; - backups are logically isolated, meaning they cannot be accessed or deleted from the source account&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAM sharing for recovery&lt;/strong&gt; - uses AWS Resource Access Manager (RAM) to share backups with specific accounts, so backups can be shared for quick recovery, reducing RTO&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-party approval&lt;/strong&gt; - optional integration requiring multiple authorised users (an approval team of at least three) created in AWS Organizations to approve access to backups; this way access can be obtained even if the Backup or management account is compromised&lt;/li&gt;
&lt;/ul&gt;
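&lt;p&gt;A minimal sketch of creating a LAG vault via boto3; minimum and maximum retention are mandatory here because compliance-mode lock is on by default (the retention values are examples):&lt;/p&gt;

```python
# Sketch of creating a logically air-gapped vault. Unlike a standard vault,
# min and max retention must be supplied up front because the vault is
# compliance-locked from creation; values below are examples.
def build_lag_vault_request(vault_name):
    return {
        "BackupVaultName": vault_name,
        "MinRetentionDays": 35,
        "MaxRetentionDays": 365,
    }

lag_request = build_lag_vault_request("lag-vault")
# import boto3
# boto3.client("backup").create_logically_air_gapped_backup_vault(**lag_request)
```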

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv08plakovmqkambsn1x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdv08plakovmqkambsn1x.png" alt="Multi approval team" width="800" height="463"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Use case for LAG vault vs Standard vault:
&lt;/h4&gt;

&lt;p&gt;Use LAG Vault when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need to meet strict compliance requirements&lt;/li&gt;
&lt;li&gt;Protecting critical production data or financial/sensitive records&lt;/li&gt;
&lt;li&gt;You need protection against sophisticated ransomware attacks&lt;/li&gt;
&lt;li&gt;You require immutable backups with multi-party approval&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use a standard vault when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You are backing up dev or test workloads&lt;/li&gt;
&lt;li&gt;You need flexibility to modify retention policies frequently&lt;/li&gt;
&lt;li&gt;Cost optimisation is a priority, as a LAG vault has a slightly higher cost&lt;/li&gt;
&lt;li&gt;Your compliance requirements don't mandate immutability&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Backup Audit Manager and Notification
&lt;/h2&gt;

&lt;h4&gt;
  
  
  Backup Audit Manager
&lt;/h4&gt;

&lt;p&gt;Backup Audit Manager is a compliance tool that allows you to audit existing backups against chosen controls and requirements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring&lt;/strong&gt; - view compliant and non-compliant resources so you can prioritise remediation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Auditing and Compliance&lt;/strong&gt; - create customised frameworks aligned with your compliance requirements and monitor whether existing backups meet internal policies&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reporting&lt;/strong&gt; - automatically or on demand, generate reports delivered to S3 for audit trails&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in controls&lt;/strong&gt; - includes pre-configured controls for common requirements like backup frequency, retention periods, encryption and cross-region replication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimc1ph4757dkpmk8ud48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fimc1ph4757dkpmk8ud48.png" alt="Framework examples" width="800" height="709"&gt;&lt;/a&gt;&lt;/p&gt;
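&lt;p&gt;As an illustration, a custom framework with two of the built-in controls could be defined like this (the control names follow the AWS Backup API, while the retention value is an example):&lt;/p&gt;

```python
# Sketch of a custom Backup Audit Manager framework combining two built-in
# controls: resources must be covered by a backup plan, and recovery points
# must meet a minimum retention. The 35-day value is an example.
def build_audit_framework():
    return {
        "FrameworkName": "prod_backup_framework",
        "FrameworkControls": [
            {"ControlName": "BACKUP_RESOURCES_PROTECTED_BY_BACKUP_PLAN"},
            {
                "ControlName": "BACKUP_RECOVERY_POINT_MINIMUM_RETENTION_CHECK",
                "ControlInputParameters": [
                    {"ParameterName": "requiredRetentionDays", "ParameterValue": "35"}
                ],
            },
        ],
    }

framework = build_audit_framework()
# import boto3
# boto3.client("backup").create_framework(**framework)
```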

&lt;h4&gt;
  
  
  Backup Notifications:
&lt;/h4&gt;

&lt;p&gt;You can configure AWS Backup notifications via Amazon SNS for backup job events such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup job started, completed or failed&lt;/li&gt;
&lt;li&gt;Restore job started, completed or failed&lt;/li&gt;
&lt;li&gt;Copy job completed (i.e. to another account)&lt;/li&gt;
&lt;li&gt;Recovery point lifecycle transitions&lt;/li&gt;
&lt;/ul&gt;
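&lt;p&gt;A minimal sketch of wiring a vault to an SNS topic for a subset of these events (the topic ARN is a placeholder and the event list is illustrative):&lt;/p&gt;

```python
# Sketch of subscribing a vault to backup job events via SNS. The topic ARN
# is a placeholder; the event names are a subset of the supported
# BackupVaultEvents values.
def build_notification_request(vault_name, topic_arn):
    return {
        "BackupVaultName": vault_name,
        "SNSTopicArn": topic_arn,
        "BackupVaultEvents": [
            "BACKUP_JOB_FAILED",
            "RESTORE_JOB_COMPLETED",
            "COPY_JOB_FAILED",
        ],
    }

notify_request = build_notification_request(
    "central-vault", "arn:aws:sns:eu-west-1:111122223333:backup-alerts"
)
# import boto3
# boto3.client("backup").put_backup_vault_notifications(**notify_request)
```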

&lt;h2&gt;
  
  
  Best Practices for AWS Backup:
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Utilise tags - leverage AWS tags to include/exclude resources that need to be backed up.&lt;/li&gt;
&lt;li&gt;Implement lifecycle policies - where possible (not all resources support cold storage) utilise cold storage to reduce cost.&lt;/li&gt;
&lt;li&gt;Enable cross-account and cross-region copy - to protect against regional failure, copy backups at least periodically&lt;/li&gt;
&lt;li&gt;Use Vault Lock - either in compliance or in governance mode&lt;/li&gt;
&lt;li&gt;Monitor - set up notifications for failed Backups/Copy&lt;/li&gt;
&lt;li&gt;Document recovery procedures - create and maintain runbooks for different disaster recovery scenarios.&lt;/li&gt;
&lt;/ol&gt;
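&lt;p&gt;As an example of the tag-based approach from point 1, the selection below assigns every resource tagged backup-tier=critical to a plan (the tag key, IAM role ARN and names are illustrative):&lt;/p&gt;

```python
# Sketch of a tag-based backup selection: any resource carrying the tag
# backup-tier=critical is included in the plan. Role ARN, tag key and names
# are placeholders for backup.create_backup_selection(...).
def build_tag_selection(role_arn):
    return {
        "SelectionName": "critical-by-tag",
        "IamRoleArn": role_arn,
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup-tier",
                "ConditionValue": "critical",
            }
        ],
    }

selection = build_tag_selection("arn:aws:iam::111122223333:role/BackupRole")
```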

&lt;p&gt;Resources:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/aws-backup/latest/devguide/" rel="noopener noreferrer"&gt;Backup developer guide&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/storage/automate-centralized-backup-at-scale-across-aws-services-using-aws-backup/" rel="noopener noreferrer"&gt;Automate centralised backup at scale across AWS services using AWS Backup&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/storage/introducing-aws-backup-logically-air-gapped-vault/" rel="noopener noreferrer"&gt;Introducing AWS Backup logically air-gapped vault&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>backup</category>
      <category>security</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>Building a Stronger Security Posture with AWS Security Hub</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Sun, 12 Jan 2025 19:43:48 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-stronger-security-posture-with-aws-security-hub-2foi</link>
      <guid>https://dev.to/aws-builders/building-a-stronger-security-posture-with-aws-security-hub-2foi</guid>
      <description>&lt;p&gt;Security Hub is the first security service that we will explore. From a monitoring and compliance point of view, it is one of the most significant services that AWS has to offer. It offers continuous checks and monitoring of your infrastructure compliance aligned with the security standards that you choose or require. This means that to monitor your compliance, you do not need external monitoring. In addition to monitoring, you can also configure automatic remediation, which we will delve into later in this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  Central Management and aggregation
&lt;/h2&gt;

&lt;p&gt;Security Hub allows centralised management of security findings in multi-account organisations. You can now aggregate Security Hub findings into one central management account across multiple regions and organisational units (OUs).&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78x86t11d2tw5z85ulqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F78x86t11d2tw5z85ulqe.png" alt="Setting up central configuration" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This aggregation ensures that security teams can access all findings from a single place, simplifying operations and reducing the risk of oversight and limiting unauthorised access. It provides an overview of security risks across the organisation, enabling quicker response times and more efficient resource allocation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dv8p185mt4zi62vx4vy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dv8p185mt4zi62vx4vy.png" alt="Central configuration" width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This central management account can be any delegated administrator of your choice. The primary recommendation is that the admin account should be the same for all security tools, and it should not be the Organisation management account (although this is possible). In our case, we have designated the SecurityOps account as the delegated administrator.&lt;/p&gt;
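&lt;p&gt;A minimal sketch of designating the delegated administrator from the Organisation management account (the account ID is a placeholder):&lt;/p&gt;

```python
# Sketch of designating a delegated Security Hub administrator. Run from the
# Organisation management account; the account ID is a placeholder for your
# SecurityOps account.
def build_delegated_admin_request(security_ops_account_id):
    return {"AdminAccountId": security_ops_account_id}

admin_request = build_delegated_admin_request("111122223333")
# import boto3
# boto3.client("securityhub").enable_organization_admin_account(**admin_request)
```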

&lt;h2&gt;
  
  
  Security Standards
&lt;/h2&gt;

&lt;p&gt;Security standards are predefined frameworks that outline best practices and compliance guidelines to protect your infrastructure and data. By adhering to these standards, organisations can systematically address security risks and meet regulatory or industry-specific requirements.&lt;/p&gt;

&lt;p&gt;Security standards that are currently available in Security Hub:  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Foundational Security Best Practices v1.0.0&lt;/strong&gt;: Covers basic security controls for IAM, logging, and encryption.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CIS AWS Foundations Benchmark (Center for Internet Security)&lt;/strong&gt;: Offers prescriptive controls for organisations seeking external validation, often used for audits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NIST SP 800-53 Rev. 5 (National Institute of Standards and Technology)&lt;/strong&gt;: Designed for projects requiring adherence to U.S. government security standards.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PCI DSS (Payment Card Industry Data Security Standard)&lt;/strong&gt;: Ensures secure handling of payment card transactions, which could be critical for e-commerce platforms.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Resource Tagging Standard:&lt;/strong&gt; Promotes consistent tagging to improve resource management and cost visibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These standards simplify compliance by offering clear guidelines tailored to various industries and use cases. For example, an online retailer might use PCI DSS to ensure secure payment processing, while a government contractor could apply NIST SP 800-53 to meet federal compliance requirements. Security Hub evolves continuously; just in December 2024 Security Hub added 84 new controls. &lt;/p&gt;

&lt;p&gt;Next we will talk about how we can implement those standards across our AWS organisation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Custom policies
&lt;/h2&gt;

&lt;p&gt;You can create customised configuration policies for your central configuration. The recommended policy when first setting up Security Hub is enabling AWS Foundational Security Best Practices v1.0.0, with all controls across all accounts and any new AWS account automatically enrolled.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnfk5imnl0093s681k38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnfk5imnl0093s681k38.png" alt="Central Configuration" width="800" height="308"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, you can change this and create a customised configuration policy for your organisation, and apply different policies across different accounts and/or Organisational Units in your AWS Organisation. There might be different reasons for utilising custom policies on different accounts; here are some examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Different compliance requirements: For example, enabling PCI DSS only for accounts that process payments, while applying less stringent standards to development and sandbox accounts.&lt;/li&gt;
&lt;li&gt;Custom control parameters: Adjusting specific checks, such as flagging only public S3 buckets storing sensitive data, instead of all public buckets. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can enable this in the Configuration settings, under the Policies section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5qashin6c3s1qz4r9x7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa5qashin6c3s1qz4r9x7.png" alt="Custom Policies in Security Hub" width="800" height="277"&gt;&lt;/a&gt;&lt;br&gt;
When creating a new policy, you first select which standards should be used—in this example, we have chosen three, including PCI DSS.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfs4e4j1w5hcdpskbftx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhfs4e4j1w5hcdpskbftx.png" alt="Custom Policies creation" width="800" height="213"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, you can enable all controls, disable or enable specific controls. Here we have chosen to disable two controls. In addition, you can customise how specific parameters are evaluated.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g5ppav95ylo8dk059t7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0g5ppav95ylo8dk059t7.png" alt="Disabling controls in Security Hub policies" width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;
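&lt;p&gt;The same policy can also be defined through the Security Hub API. The sketch below is illustrative - the standard ARNs and control IDs are example values, so check the identifiers available in your region:&lt;/p&gt;

```python
# Sketch of a Security Hub configuration policy enabling two standards and
# disabling two controls. Standard ARNs and control IDs are format examples,
# not verified identifiers for any particular region.
def build_configuration_policy():
    return {
        "Name": "prod-policy",
        "Description": "Production accounts: FSBP plus PCI DSS, two controls disabled",
        "ConfigurationPolicy": {
            "SecurityHub": {
                "ServiceEnabled": True,
                "EnabledStandardIdentifiers": [
                    "arn:aws:securityhub:eu-west-1::standards/aws-foundational-security-best-practices/v/1.0.0",
                    "arn:aws:securityhub:eu-west-1::standards/pci-dss/v/3.2.1",
                ],
                "SecurityControlsConfiguration": {
                    "DisabledSecurityControlIdentifiers": ["S3.9", "CloudTrail.5"]
                },
            }
        },
    }

config_policy = build_configuration_policy()
# import boto3
# boto3.client("securityhub").create_configuration_policy(**config_policy)
```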

&lt;p&gt;Customisation reduces noise from unnecessary alerts and focuses compliance efforts on high-priority risks. Findings and insights are tailored to your use case without enabling standards that are unnecessary across the whole Organisation, which also reduces cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated Response and Remediation
&lt;/h2&gt;

&lt;p&gt;Security Hub supports automation through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated Rules&lt;/li&gt;
&lt;li&gt;Automated Response and Remediation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Automation Rules, you can automatically update and/or suppress findings based on criteria. This is useful for updating severity levels or suppressing findings that match defined criteria. You can set up an automation rule from a template or create a custom rule.&lt;/p&gt;

&lt;p&gt;Let’s have a closer look at one of the templates — this one relates to elevating findings in specific production accounts. This is just a template, so we can update any values to match our requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgfocn80c4d1m91xge2x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgfocn80c4d1m91xge2x.png" alt="Creating automation rule" width="800" height="212"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the Criteria section, you can choose which findings the rule should apply to. One of those keys is the account ID — in this case, we would like it to apply only to the production account.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pcxxo3uzxka6g8xwohg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pcxxo3uzxka6g8xwohg.png" alt="Criteria Section in Create Automation" width="800" height="288"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, we choose what automated action will take place against the findings that match our criteria. Here, the severity will be updated to "Critical" and a note will be added stating:&lt;br&gt;
"A resource in production accounts is at risk. Please review ASAP." &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn399o4h154fkz4ls55r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frn399o4h154fkz4ls55r.png" alt="Automated action" width="768" height="779"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Any of these actions can be modified. For instance, for sandbox accounts, we can update the severity to a lower level. You can also add a user defined field—here, we could add a key of Environment with a value of Production or Sandbox. This means that any findings originating from the production account will contain that extra field with environment information.&lt;/p&gt;
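&lt;p&gt;The rule above can also be expressed as an API payload. The sketch below uses a placeholder account ID and mirrors the criteria and actions from the walkthrough:&lt;/p&gt;

```python
# Sketch of an automation rule payload: findings from the production account
# (placeholder ID) are raised to CRITICAL and annotated with a note. Pass to
# securityhub.create_automation_rule(...).
def build_automation_rule(prod_account_id):
    return {
        "RuleName": "Elevate production findings",
        "RuleOrder": 1,
        "Description": "Raise severity for findings in production accounts",
        "Criteria": {
            "AwsAccountId": [{"Value": prod_account_id, "Comparison": "EQUALS"}]
        },
        "Actions": [
            {
                "Type": "FINDING_FIELDS_UPDATE",
                "FindingFieldsUpdate": {
                    "Severity": {"Label": "CRITICAL"},
                    "Note": {
                        "Text": "A resource in production accounts is at risk. Please review ASAP.",
                        "UpdatedBy": "sechub-automation",
                    },
                },
            }
        ],
    }

auto_rule = build_automation_rule("111122223333")
```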

&lt;p&gt;Automated Response and Remediation involves triggering remediation actions through integrations such as AWS Lambda, EC2 run command, Step Functions, pushing messages to an SNS topic, or sending findings to third-party tools or chats. These actions are automatically sent to EventBridge in near real-time.&lt;/p&gt;

&lt;p&gt;For example, when a control detects a publicly accessible S3 bucket, a Lambda function can automatically restrict access. Another example is isolating compromised EC2 instances automatically.&lt;/p&gt;
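&lt;p&gt;For instance, a remediation Lambda can be attached as the target of an EventBridge rule with a pattern like the one sketched below (the filter values are illustrative):&lt;/p&gt;

```python
# Sketch of an EventBridge event pattern matching imported Security Hub
# findings with failed compliance checks at high or critical severity; a
# remediation Lambda would be configured as the rule target. Filters are
# illustrative.
def build_event_pattern():
    return {
        "source": ["aws.securityhub"],
        "detail-type": ["Security Hub Findings - Imported"],
        "detail": {
            "findings": {
                "Compliance": {"Status": ["FAILED"]},
                "Severity": {"Label": ["HIGH", "CRITICAL"]},
            }
        },
    }

pattern = build_event_pattern()
```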

&lt;p&gt;AWS offers a set of templates for cross-account automated responses and remediation that can be deployed using CloudFormation. You can explore this solution here: &lt;a href="https://docs.aws.amazon.com/solutions/latest/automated-security-response-on-aws/solution-overview.html" rel="noopener noreferrer"&gt;Automated Security Response on AWS.&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Hub Dashboards, Insights and Integrations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Dashboard
&lt;/h3&gt;

&lt;p&gt;Security Hub dashboard provides a customisable summary of your security posture, highlighting key metrics such as open findings by severity, compliance scores or resource distribution across regions and accounts. This helps security teams quickly identify trends, track remediation progress, and focus on critical issues. Over time, the dashboard reveals whether compliance and security efforts are improving or require additional attention.&lt;/p&gt;

&lt;h3&gt;
  
  
  Insights
&lt;/h3&gt;

&lt;p&gt;Security Hub also offers insights—  predefined or customisable views/filters that help organisations focus on specific security findings. For instance, you can filter findings based on severity,  AWS resources with the most findings, account type, S3 buckets with public write or read permissions etc.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8087otnahwv87snni15.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc8087otnahwv87snni15.png" alt="Insights" width="800" height="375"&gt;&lt;/a&gt;&lt;br&gt;
You can also create your own insight/filter to quickly access findings that are a priority for your Organisation.&lt;/p&gt;
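&lt;p&gt;A custom insight can also be created through the API; the sketch below groups active critical findings by account (the field values are examples):&lt;/p&gt;

```python
# Sketch of a custom Security Hub insight: active CRITICAL findings grouped
# by account. Filter fields follow the Security Hub API; values are examples
# for securityhub.create_insight(...).
def build_insight():
    return {
        "Name": "Critical active findings by account",
        "Filters": {
            "SeverityLabel": [{"Value": "CRITICAL", "Comparison": "EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        "GroupByAttribute": "AwsAccountId",
    }

insight = build_insight()
```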

&lt;h3&gt;
  
  
  Integrations
&lt;/h3&gt;

&lt;p&gt;Security Hub integrates with other security AWS services like GuardDuty, Macie, Inspector or IAM Access Analyser, feeding findings directly into the dashboard. You can also integrate third-party services such as Splunk, PagerDuty, SumoLogic and Palo Alto Networks. For better visibility, you can add widgets like Latest Findings from AWS Integrations to the dashboard, centralising all critical information in one place. &lt;/p&gt;

&lt;p&gt;Additionally, Security Hub supports &lt;a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-custom-providers.html" rel="noopener noreferrer"&gt;custom integrations&lt;/a&gt;, allowing you to integrate other custom security products not listed above. By using the Security Hub custom providers API, you can send findings from other security tools or workflows into Security Hub. This enables a unified and comprehensive view of security across both AWS-native and custom solutions, tailored to your requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Security Hub has come a long way since it was first released, evolving into a powerful tool for managing and automating security in AWS environments. With regular updates like the addition of new controls and enhanced integrations, it’s clear that Security Hub is only getting better. Its ability to centralise findings, customise compliance, and automate responses keeps improving, making it an important consideration for securing your AWS infrastructure.&lt;/p&gt;

&lt;p&gt;Useful resources and training about Security Hub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://catalog.workshops.aws/security" rel="noopener noreferrer"&gt;Security Hub Workshop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-internal-providers.html" rel="noopener noreferrer"&gt;List of AWS service integrations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-partner-providers.html" rel="noopener noreferrer"&gt;List of third-party Security Hub integrations&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html" rel="noopener noreferrer"&gt;Security Hub user guide&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>security</category>
    </item>
    <item>
      <title>Essential AWS Security Services to Safeguard Your AWS Cloud Workloads</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Thu, 07 Nov 2024 11:05:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/essential-aws-security-services-to-safeguard-your-aws-cloud-workloads-1lgc</link>
      <guid>https://dev.to/aws-builders/essential-aws-security-services-to-safeguard-your-aws-cloud-workloads-1lgc</guid>
      <description>&lt;p&gt;With how fast things are changing in the digital world, securing your cloud setup is more important than ever. In my upcoming blog posts, I'll be focusing on security, exploring the AWS services you need to effectively protect your cloud environment.&lt;/p&gt;

&lt;p&gt;I'm aiming to give an overview of each service—talking about why we should consider using them in AWS Organizations, especially with multi-account architectures in mind. We'll delve into the latest features, implementation strategies, and best practices.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Security Hub
&lt;/h2&gt;

&lt;p&gt;AWS Security Hub is a cloud security posture management service that provides a comprehensive view of your security state within AWS. It aggregates security data from across your AWS accounts and services like Amazon GuardDuty, Amazon Inspector, and Amazon Macie, as well as supported third-party products (e.g., Sumo Logic, Snyk). By consolidating this information into a single dashboard, Security Hub enables you to analyse security trends and prioritise high-priority issues effectively.&lt;/p&gt;

&lt;p&gt;The service supports multiple security standards and best practices frameworks, including AWS Foundational Security Best Practices, CIS, PCI DSS, Resource Tagging Standard and NIST. It continuously runs automated security checks against these standards, generating findings to help you assess compliance and calculate security scores.&lt;/p&gt;

&lt;p&gt;Features worth mentioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated Response and Remediation: Security Hub can automate responses to findings by integrating with AWS Systems Manager and AWS Lambda, allowing for quicker mitigation.&lt;/li&gt;
&lt;li&gt;Custom Insights and Dashboards: You can create custom insights to focus on specific types of findings and tailor dashboards to meet your organisation's needs.&lt;/li&gt;
&lt;li&gt;Integration with AWS Organizations: Security Hub can be set up across multiple AWS accounts using AWS Organizations, simplifying centralised security management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Amazon Inspector
&lt;/h2&gt;

&lt;p&gt;Amazon Inspector is an automated security assessment service that identifies vulnerabilities and unintended network exposure in EC2 instances, Lambda functions, and container images in Amazon Elastic Container Registry (Amazon ECR). It can also be integrated into your CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;Similar to Security Hub, it provides a central place to monitor vulnerabilities for all the services mentioned above.&lt;/p&gt;

&lt;p&gt;Features worth mentioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous Scanning: Amazon Inspector offers continuous scanning of your workloads, providing near real-time insights into vulnerabilities.&lt;/li&gt;
&lt;li&gt;Integration with AWS Security Hub: Findings from Amazon Inspector can be automatically forwarded to AWS Security Hub for centralised management.&lt;/li&gt;
&lt;li&gt;Enhanced Scanning for Container Images: It supports enhanced scanning of container images, to identify vulnerabilities before deployment.&lt;/li&gt;
&lt;li&gt;Support for AWS Lambda Layers: Inspector can assess vulnerabilities in Lambda layers.&lt;/li&gt;
&lt;/ul&gt;
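&lt;p&gt;To make this concrete, here is a minimal Terraform sketch (the account ID is a placeholder, and a recent AWS provider is assumed) enabling Amazon Inspector scanning for an account across the supported resource types:&lt;/p&gt;

```hcl
# Enable Amazon Inspector scanning for this account across the supported
# resource types: EC2 instances, ECR container images and Lambda functions.
resource "aws_inspector2_enabler" "this" {
  account_ids    = ["111122223333"] # placeholder account ID
  resource_types = ["EC2", "ECR", "LAMBDA"]
}
```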

&lt;h2&gt;
  
  
  AWS Identity and Access Management (IAM) and IAM Identity Center (SSO)
&lt;/h2&gt;

&lt;p&gt;AWS Identity and Access Management (IAM) is probably one of the most well-known AWS services, but we'll look at it from the perspective of securing organisational environments and ensuring best practices are in place. IAM enables you to securely manage access to AWS resources by defining who can access what. IAM Identity Center (formerly AWS Single Sign-On) simplifies access management across multiple AWS accounts and applications, allowing centralised management of user and group permissions.&lt;/p&gt;

&lt;p&gt;In our overview, we'll look at how to implement best practices for access management, utilising IAM Access Analyzer and Attribute-Based Access Control, and how to implement permission boundaries.&lt;/p&gt;

&lt;p&gt;Features worth mentioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM Access Analyzer: Helps identify resources in your organisation and accounts that are shared with an external entity.&lt;/li&gt;
&lt;li&gt;Permission Boundaries: Allow you to set the maximum permissions that an IAM entity (user or role) can have, providing an extra layer of security.&lt;/li&gt;
&lt;li&gt;Attribute-Based Access Control (ABAC): IAM now supports ABAC, which allows permissions based on tags attached to users and resources, simplifying permission management in large environments.&lt;/li&gt;
&lt;/ul&gt;
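&lt;p&gt;The last two features can be combined. Here is a hedged Terraform sketch (all names are hypothetical) of a permissions boundary that also uses an ABAC-style tag condition, so principals can only act on resources whose &lt;code&gt;team&lt;/code&gt; tag matches their own:&lt;/p&gt;

```hcl
# A permissions boundary capping what any role it is attached to can do,
# with an ABAC condition: act only on resources whose "team" tag matches
# the caller's "team" principal tag.
resource "aws_iam_policy" "boundary" {
  name = "team-permissions-boundary" # hypothetical name

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:*", "dynamodb:*"]
      Resource = "*"
      Condition = {
        # "$${...}" escapes Terraform interpolation, emitting ${...} in the policy
        StringEquals = { "aws:ResourceTag/team" = "$${aws:PrincipalTag/team}" }
      }
    }]
  })
}

# Attach the boundary to a role; its effective permissions are the
# intersection of its identity policies and this boundary.
resource "aws_iam_role" "developer" {
  name                 = "developer-role" # hypothetical name
  permissions_boundary = aws_iam_policy.boundary.arn

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}
```

&lt;p&gt;Note that not every AWS action supports the &lt;code&gt;aws:ResourceTag&lt;/code&gt; condition key, so treat this as a pattern rather than a drop-in policy.&lt;/p&gt;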

&lt;h2&gt;
  
  
  Amazon GuardDuty
&lt;/h2&gt;

&lt;p&gt;Amazon GuardDuty is a near real-time threat detection service that monitors for malicious activity and unauthorised behaviour by analysing AWS CloudTrail logs, DNS logs, and VPC Flow Logs as foundational data sources. It provides insights into potential threats affecting AWS resources. Similar to Security Hub, it allows us to aggregate findings from all accounts.&lt;/p&gt;

&lt;p&gt;In addition to the foundational data source analysis mentioned above, GuardDuty now offers dedicated protection plans for Amazon EKS, Amazon S3, EC2 (malware scanning of attached EBS volumes), RDS, and Lambda functions.&lt;/p&gt;

&lt;p&gt;Features worth mentioning:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 Protection: GuardDuty can monitor and analyse data events from Amazon S3 to detect suspicious activities like unauthorised data access.&lt;/li&gt;
&lt;li&gt;EKS Runtime Monitoring: It provides security monitoring for Amazon EKS clusters, detecting threats at the container and Kubernetes levels.&lt;/li&gt;
&lt;li&gt;Integration with AWS Organizations: Enables centralised threat detection across multiple AWS accounts.&lt;/li&gt;
&lt;/ul&gt;
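&lt;p&gt;As with Security Hub, the Organizations integration can be expressed in Terraform. Here is a minimal sketch (placeholder account ID, recent AWS provider assumed) that enables a detector and auto-enables GuardDuty for all member accounts:&lt;/p&gt;

```hcl
# Enable GuardDuty in the delegated administrator account.
resource "aws_guardduty_detector" "this" {
  enable = true
}

# Designate the delegated GuardDuty administrator
# (run from the Organizations management account).
resource "aws_guardduty_organization_admin_account" "this" {
  admin_account_id = "111122223333" # placeholder account ID
}

# Auto-enable GuardDuty for all existing and new member accounts.
resource "aws_guardduty_organization_configuration" "this" {
  detector_id                      = aws_guardduty_detector.this.id
  auto_enable_organization_members = "ALL"
}
```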

&lt;h2&gt;
  
  
  AWS Shield and AWS WAF (Web Application Firewall)
&lt;/h2&gt;

&lt;p&gt;AWS Shield and AWS WAF (Web Application Firewall) work in tandem to protect web applications from a wide range of threats. AWS Shield provides managed protection against Distributed Denial of Service (DDoS) attacks, with Shield Standard offering basic DDoS protection at no additional cost and Shield Advanced delivering enhanced safeguards for applications running on EC2 instances, Elastic Load Balancers, Amazon Route 53, AWS Global Accelerator, and more.&lt;/p&gt;

&lt;p&gt;In addition, AWS WAF protects your applications from common web exploits that can affect availability, compromise security, or consume excessive resources. It allows you to create custom rules to block common attack patterns such as SQL injection or cross-site scripting (XSS). Together, AWS Shield and AWS WAF can be deployed on services like Amazon CloudFront, Application Load Balancer, Amazon API Gateway, and AWS AppSync, providing a layered defence that secures your applications from both network-level DDoS attacks and application-level vulnerabilities.&lt;/p&gt;
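&lt;p&gt;To illustrate the WAF side, here is a minimal Terraform sketch (names are hypothetical) of a regional web ACL that allows traffic by default and applies the AWS-managed Common Rule Set, which covers broad attack patterns such as XSS:&lt;/p&gt;

```hcl
# A regional web ACL (use scope = "CLOUDFRONT" for CloudFront distributions)
# that applies the AWS-managed Common Rule Set.
resource "aws_wafv2_web_acl" "main" {
  name  = "my-web-acl" # hypothetical name
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  rule {
    name     = "common-rule-set"
    priority = 1

    # "none" keeps the managed rule group's own block/allow actions.
    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesCommonRuleSet"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "common-rule-set"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "my-web-acl"
    sampled_requests_enabled   = true
  }
}
```

&lt;p&gt;The ACL can then be associated with an Application Load Balancer, API Gateway stage, or AppSync API.&lt;/p&gt;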




&lt;p&gt;We have highlighted some essential services that enhance security within your AWS Organization, but it's important to remember that safeguarding your AWS accounts is an ongoing process. Exploring additional features, such as applying Service Control Policies, ensuring data encryption is in place, and utilising AWS Secrets Manager for managing secrets, can further strengthen your security posture; we will also be looking into these. By making the most of AWS's security tools and best practices, you can build a strong and secure cloud environment that's right for your organisation's needs.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>security</category>
    </item>
    <item>
      <title>Let's talk about AWS VPC endpoints</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Wed, 10 Jan 2024 09:53:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/lets-talk-about-aws-vpc-endpoints-2bj</link>
      <guid>https://dev.to/aws-builders/lets-talk-about-aws-vpc-endpoints-2bj</guid>
<description>&lt;p&gt;VPC endpoints enable you to establish private connections between your VPC and supported AWS services, bypassing the need for public internet access, making your infrastructure more secure and improving data protection. In this blog post, we will look into AWS VPC endpoints, focusing on the differences between Gateway and Interface Endpoints, and explore why it is important to consider them in the context of AWS infrastructure. &lt;/p&gt;

&lt;h2&gt;
  
  
  Gateway Endpoints
&lt;/h2&gt;

&lt;p&gt;First, let's talk about &lt;strong&gt;Gateway Endpoints&lt;/strong&gt;, which are available to connect to S3 and DynamoDB. We would use them if we require connectivity from &lt;em&gt;within&lt;/em&gt; the AWS network, without having to create an internet gateway, NAT device, or VPN. Unlike Interface Endpoints, they do not use AWS PrivateLink, are free of charge, and are available only in the Region where you create them.&lt;/p&gt;

&lt;p&gt;In a Gateway Endpoint, traffic is routed from your VPC, via the Gateway Endpoint, to S3 or DynamoDB. It serves as a target for a route in your route table for traffic destined for one of these services.&lt;/p&gt;

&lt;p&gt;When creating a Gateway Endpoint, you first select the route table(s) in which to create routes to the service and add a resource policy. You can select full access to allow all operations by all principals, or create a custom policy for more granular access, for instance restricting which S3 buckets can be accessed via the Gateway Endpoint.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs9cb9sd50abl65dmu9o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhs9cb9sd50abl65dmu9o.png" alt="Gateway endpoint setup in AWS console"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Note that Gateway Endpoints do not allow access from on-premises networks, peered VPCs in other AWS Regions, or through a transit gateway; in those cases you will have to use Interface Endpoints.&lt;/p&gt;

&lt;h2&gt;
  
  
  Interface Endpoints
&lt;/h2&gt;

&lt;p&gt;Interface Endpoints utilise AWS PrivateLink, which securely connects your VPC to supported AWS services such as the EC2 API, ECS, Secrets Manager, and Lambda, as well as services hosted by other AWS customers and AWS Partner Network (APN) partners in their own VPCs, and also allows access from on-premises networks. This is done by creating elastic network interfaces (ENIs) within your VPC that serve as the entry point for traffic destined to these AWS services.&lt;/p&gt;

&lt;p&gt;The key difference between Interface and Gateway Endpoints lies in their underlying architecture, cost and usage. While Gateway Endpoints are limited to S3 and DynamoDB and are free of charge, Interface Endpoints support a broader range of services but incur a cost based on the amount of data processed.&lt;/p&gt;

&lt;p&gt;When we create an Interface Endpoint, it gets associated with the subnets we select in our VPC, creating an ENI in each of them. Each ENI is assigned a private IP address from its subnet's IP range. Your VPC traffic to the AWS service then goes through these private IP addresses, ensuring that the data never traverses the public internet. This setup enhances security and can reduce data transfer costs.&lt;/p&gt;
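&lt;p&gt;As a minimal Terraform sketch (the VPC, subnet, and security group references are placeholders; adjust the Region in the service name), an Interface Endpoint for Secrets Manager looks like this:&lt;/p&gt;

```hcl
# Interface Endpoint for Secrets Manager: creates an ENI in each listed
# subnet; private DNS makes the default endpoint hostname resolve to them.
resource "aws_vpc_endpoint" "secretsmanager" {
  vpc_id              = aws_vpc.main.id  # assumes an existing aws_vpc.main
  service_name        = "com.amazonaws.eu-west-1.secretsmanager" # adjust Region
  vpc_endpoint_type   = "Interface"
  subnet_ids          = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  security_group_ids  = [aws_security_group.endpoint.id] # must allow inbound HTTPS (443)
  private_dns_enabled = true
}
```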

&lt;h3&gt;
  
  
  Use Cases of Interface Endpoints for AWS Services
&lt;/h3&gt;

&lt;p&gt;Interface Endpoints are particularly beneficial for secure and private access to AWS services not supported by Gateway Endpoints. For services like Kinesis or Lambda, and when latency and secure data transfer are crucial, Interface Endpoints can provide a consistent, low-latency connection, vital for real-time data processing and serverless computing scenarios.&lt;/p&gt;

&lt;p&gt;Next, using AWS Secrets Manager integrated with Interface Endpoints allows you to access Secrets Manager within your VPC without exposing the traffic to the public internet. This would be crucial for applications that need to securely access sensitive information, like database credentials or API keys. &lt;/p&gt;

&lt;p&gt;The last example use case I want to discuss is ECS, which can also be used with Interface Endpoints for secure, private communication with other AWS services. For example, an ECS cluster in a VPC can privately pull images from Amazon Elastic Container Registry (ECR) or securely &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch-logs-and-interface-VPC.html" rel="noopener noreferrer"&gt;store logs in CloudWatch, essential for compliance-sensitive applications&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;These are just a few examples of the services accessible via Interface Endpoints. You can find the full list of supported services &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  S3: Gateway vs. Interface Endpoints
&lt;/h2&gt;

&lt;p&gt;It's worth noting that S3 supports both Gateway and Interface endpoints. When deciding between them for Amazon S3, consider the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cost: Gateway Endpoints are free, while Interface Endpoints incur charges per hour and per GB of data transferred. &lt;a href="https://aws.amazon.com/privatelink/pricing/" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can find pricing information.&lt;/li&gt;
&lt;li&gt;Access Pattern: Gateway Endpoints are suitable for accessing S3 within the same VPC. Interface Endpoints can be used for accessing S3 from outside the VPC, including from on-premises or from different regions (Gateway Endpoints are limited to the same region).&lt;/li&gt;
&lt;li&gt;Bandwidth and Architecture: Interface Endpoints offer high throughput and are beneficial in &lt;a href="https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-access-to-vpc-private-endpoints.html" rel="noopener noreferrer"&gt;centralised VPC endpoint architectures&lt;/a&gt;, especially when accessing S3 from multiple VPCs. Gateway Endpoints have no inherent throughput limit and are simpler to manage for in-VPC use.&lt;/li&gt;
&lt;li&gt;DNS Configuration: Interface Endpoints offer options for DNS configurations, allowing for more flexible connectivity setups, particularly for on-premises and cross-region access patterns.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Gateway Endpoints are ideal for simple, in-VPC access to S3 at no cost, while Interface Endpoints offer more flexibility and higher throughput for more complex architectures and external VPC access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can read more about types of vpc endpoints for s3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this blog post, we have looked into both Gateway and Interface Endpoints. Rather than separating the benefits of each endpoint type, let's look at the advantages of using VPC endpoints as a whole:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced Security&lt;/strong&gt;: VPC endpoints enable direct, private connectivity to AWS services, such as S3, DynamoDB, Secrets Manager, EC2 and others, reducing the need to expose traffic to the public internet and keeping your data protected.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cost-effectiveness&lt;/strong&gt;: This is especially (but not only!) the case with Gateway Endpoints: you can avoid internet egress charges, particularly with high-volume data transfers. While Interface Endpoints do incur charges, they can offset these costs by reducing data transfer fees over the public internet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Improved Performance and Reliability&lt;/strong&gt;: VPC endpoints often provide lower latency and higher throughput, ensuring faster and more efficient data transfers. This increase in performance is crucial for applications requiring real-time data processing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wide Range of supported services&lt;/strong&gt;: Interface Endpoints support a large number of AWS services, which enables secure connection with various AWS services, facilitating diverse and complex cloud architectures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Compliance and Security Standards&lt;/strong&gt;: For organisations with compliance and security requirements, VPC endpoints provide a secure communication channel, meeting various regulatory standards by keeping data within the AWS network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Flexible Connectivity Options&lt;/strong&gt;: The ability to customise DNS configurations with Interface Endpoints, as well as the simplicity of Gateway Endpoints, offers diverse connectivity options. This is beneficial for simpler in-VPC scenarios and more complex setups involving cross-region or on-premises access.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Network Management&lt;/strong&gt;: VPC endpoints simplify the network architecture by reducing the need for Internet Gateways, NAT devices, or VPN connections, making network management more straightforward and less prone to security risks.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using VPC endpoints can significantly enhance the security, performance, and efficiency of your cloud infrastructure. It can provide a robust solution for securely accessing AWS services, catering to various architectural needs and use cases.&lt;/p&gt;




&lt;p&gt;References/further reading:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/aws-privatelink/" rel="noopener noreferrer"&gt;Securely Access Services Over AWS PrivateLink - whitepaper&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/25daa7f1-11a5-4c96-8923-9b0e333acc59/en-US" rel="noopener noreferrer"&gt;VPC Endpoint Workshop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/architecture/reduce-cost-and-increase-security-with-amazon-vpc-endpoints/#:~:text=A%20VPC%20endpoint%20allows%20you,or%20AWS%20Direct%20Connect%20connection." rel="noopener noreferrer"&gt;Reduce Cost and Increase Security with Amazon VPC Endpoints&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Fargate cost optimisation - using Fargate Spot with Terraform</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Thu, 31 Aug 2023 15:44:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/fargate-cost-optimisation-using-fargate-spot-with-terraform-b50</link>
      <guid>https://dev.to/aws-builders/fargate-cost-optimisation-using-fargate-spot-with-terraform-b50</guid>
      <description>&lt;p&gt;&lt;a href="https://aws.amazon.com/fargate/"&gt;AWS Fargate&lt;/a&gt; is pay as you go serveless compute for containers. You can use Fargate if you have small, batch, or burst workloads or if you want zero maintenance overhead of your containers, as this is all taken care of by AWS. In this post I will be talking about how to cost optimise your Fargate workloads and utilise Fargate Spot using Terraform. &lt;/p&gt;

&lt;h2&gt;
  
  
  Fargate Cost Optimisation
&lt;/h2&gt;

&lt;p&gt;Let's start with a general overview of how we can make sure that we do not overspend on our Fargate workloads. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Right-sizing tasks&lt;/strong&gt;. Rightsizing is probably one of the first and most important tasks to ensure that you are not over provisioning your tasks, which can lead to unnecessary spending. You can specify value for CPU and Memory in your task definition - however make sure to specify a &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-cpu-memory-error.html"&gt;valid value&lt;/a&gt; for both of them.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Utilise auto scaling.&lt;/strong&gt; Auto-scaling can help you ensure that you are only running the necessary number of tasks, which can help to reduce costs. There are two types of scaling policies, Target Tracking and Step Scaling, and scaling can be based on memory and CPU utilisation. Here is more information about &lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html"&gt;ECS autoscaling&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Utilise AWS Savings Plans&lt;/strong&gt;. To reduce Fargate cost you can use &lt;a href="https://aws.amazon.com/savingsplans/compute-pricing/"&gt;Compute Savings Plans&lt;/a&gt;. By purchasing Savings Plan you commit to using a certain amount of compute resources over a period of time. With Compute Savings you commit to 1 or 3 year plans, and you can save up to 66% on your EC2, Fargate, and Lambda costs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fargate Spot&lt;/strong&gt;. You can use Fargate Spot to further reduce your costs, with discounts of up to 70% compared to Fargate On-Demand pricing. There is no up-front commitment, which is an advantage over Compute Savings Plans for unpredictable or new workloads; however, you can also utilise both at the same time to maximise your savings. It is important to note that Spot tasks can be interrupted at any time, so you need to make sure that your application can handle this.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additionally you can use cost allocation tags to track your AWS costs. You can use tags to group your costs together so that you can see how much you are spending on different parts of your application. &lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing Fargate Spot with Terraform
&lt;/h2&gt;

&lt;p&gt;As mentioned before, when using Fargate Spot you have to be prepared for Spot tasks being terminated at any time, which makes it a good candidate for dev and test environments. However, if you configure it correctly you can also utilise it in production, by setting base and weight on your capacity provider strategy to ensure availability of your application. &lt;/p&gt;

&lt;p&gt;When creating a Fargate cluster via the AWS Console, it will have both Fargate and Fargate Spot capacity providers by default. However, when creating it via Terraform you need to make sure to list them both in the "capacity_providers" attribute. Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"ecs"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my_ecs_cluster"&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_cluster_capacity_providers"&lt;/span&gt; &lt;span class="s2"&gt;"ecs"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cluster_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ecs_cluster&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;

  &lt;span class="nx"&gt;capacity_providers&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"FARGATE_SPOT"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;default_capacity_provider_strategy&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;base&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
    &lt;span class="nx"&gt;weight&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;
    &lt;span class="nx"&gt;capacity_provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are creating an ECS cluster, setting its capacity providers to Fargate Spot and Fargate, and setting the default capacity provider strategy to Fargate only, which we can later override per service by defining capacity_provider_strategy on the "aws_ecs_service" resource.&lt;/p&gt;

&lt;p&gt;Next we will create a service for our ECS cluster with both FARGATE_SPOT and FARGATE set as capacity providers.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_service"&lt;/span&gt; &lt;span class="s2"&gt;"main_service"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"my_service"&lt;/span&gt;
  &lt;span class="nx"&gt;cluster&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ecs_cluster&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs&lt;/span&gt;&lt;span class="err"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;

  &lt;span class="nx"&gt;capacity_provider_strategy&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;capacity_provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;
    &lt;span class="nx"&gt;weight&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;
    &lt;span class="nx"&gt;base&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;capacity_provider_strategy&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;capacity_provider&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FARGATE_SPOT"&lt;/span&gt;
    &lt;span class="nx"&gt;weight&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We are setting both &lt;em&gt;base&lt;/em&gt; and &lt;em&gt;weight&lt;/em&gt; for each capacity provider here. The base of 3 on the FARGATE capacity provider means that the first 3 tasks will always run on FARGATE; this is the setting that matters most when considering FARGATE_SPOT for production environments. Beyond the base, the weights determine the ratio: with a weight of 4 for FARGATE and 2 for FARGATE_SPOT, for every 6 additional tasks, 4 will run on FARGATE and 2 on FARGATE_SPOT. For dev and test environments we can increase the weight of FARGATE_SPOT or remove FARGATE altogether.&lt;/p&gt;

&lt;p&gt;Last but not least when using CodeDeploy with Blue/Green deployments for existing ECS service make sure to add CapacityProviderStrategy in the resource section to the appspec.yml file. You can find more information about it &lt;a href="https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-resources.html#reference-appspec-file-structure-resources-ecs"&gt;over here &lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Here is an example of appspec.yml file&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0.0&lt;/span&gt;
&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;TargetService&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::ECS::Service&lt;/span&gt;
      &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;TaskDefinition&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;TASK_DEFINITION&amp;gt;&lt;/span&gt;
        &lt;span class="na"&gt;LoadBalancerInfo&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;ContainerName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;my_container&lt;/span&gt;
          &lt;span class="na"&gt;ContainerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
        &lt;span class="na"&gt;CapacityProviderStrategy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
            &lt;span class="na"&gt;CapacityProvider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FARGATE_SPOT&lt;/span&gt;
            &lt;span class="na"&gt;Weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
            &lt;span class="na"&gt;CapacityProvider&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;FARGATE&lt;/span&gt;
            &lt;span class="na"&gt;Weight&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;4&lt;/span&gt;
            &lt;span class="na"&gt;Base&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Further reading about Fargate Spot:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/compute/deep-dive-into-fargate-spot-to-run-your-ecs-tasks-for-up-to-70-less/"&gt;Deep dive into Fargate Spot to run your ECS Tasks for up to 70% less&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ec2spotworkshops.com/ecs-spot-capacity-providers/module-2.html"&gt;Using Fargate Spot Capacity providers AWS Workshop&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-capacity-providers.html"&gt;Fargate capacity provider considerations&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>terraform</category>
      <category>ecs</category>
      <category>aws</category>
      <category>costoptimisation</category>
    </item>
    <item>
      <title>Aurora Serverless v1 to Serverless v2 - comparison, migration and blue/green deployment</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Fri, 27 Jan 2023 16:19:46 +0000</pubDate>
      <link>https://dev.to/aws-builders/aurora-serverless-v1-to-serverless-v2-comparison-migration-and-bluegreen-deployment-4aa8</link>
      <guid>https://dev.to/aws-builders/aurora-serverless-v1-to-serverless-v2-comparison-migration-and-bluegreen-deployment-4aa8</guid>
<description>&lt;p&gt;In this blog we will go through the main differences between Aurora Serverless v1 and v2, how to upgrade your Aurora Serverless v1 cluster to v2, and how utilising blue/green deployments could help mitigate risk and downtime during future upgrades.&lt;/p&gt;

&lt;h2&gt;
  
  
  Aurora serverless v1 vs v2
&lt;/h2&gt;

&lt;p&gt;Aurora Serverless v2 became generally available in April 2022, with a promise of "90% cost savings compared to provisioning for peak capacity". Let's have a look at the main differences here:&lt;/p&gt;




&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aurora Serverless v1&lt;/th&gt;
&lt;th&gt;Aurora Serverless v2&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Scales only by increasing or decreasing the capacity of the writer&lt;/td&gt;
&lt;td&gt;Can scale by changing the size of DB instances, by adding more DB instances to the cluster, or by adding more Regions to an Aurora Global Database&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;All the compute for the cluster runs in a single AZ and Region&lt;/td&gt;
&lt;td&gt;Multi-AZ, the same as provisioned Aurora&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scaling cannot happen while SQL statements are running&lt;/td&gt;
&lt;td&gt;Scaling can happen at any time (no requirement for a quiet period)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Scales up and down by doubling and halving ACUs&lt;/td&gt;
&lt;td&gt;Scales up and down in increments of 0.5 ACUs&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reader DB instances aren't available, so readers cannot scale&lt;/td&gt;
&lt;td&gt;Reader DB instances scale up and down independently of the writer&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best-effort failover only, subject to capacity availability&lt;/td&gt;
&lt;td&gt;Failover the same as provisioned Aurora&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;Additional features: multi-AZ, scaling based on memory, Aurora Global Database, exporting snapshots to S3, associating IAM roles, RDS Proxy, Performance Insights&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;To summarise, upgrading to Aurora Serverless v2 offers you more granularity, with the ability to scale in increments of 0.5 ACUs rather than by doubling and halving capacity as in v1. It also offers the same failover as provisioned clusters, multi-AZ capability, and the option to create an Aurora Global Database, none of which was possible with Aurora Serverless v1. Another advantage of the newer version is the ability to use RDS Proxy.&lt;/p&gt;

&lt;p&gt;These are just selected differences, however you can find the full list in &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html#aurora-serverless.comparison-scaling" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt;.&lt;/p&gt;
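&lt;p&gt;For context, a Serverless v2 cluster is defined as a provisioned cluster with a serverless scaling range and instances of the special "db.serverless" class. Here is a minimal Terraform sketch (identifiers and credentials are placeholders; a recent AWS provider is assumed):&lt;/p&gt;

```hcl
# An Aurora Serverless v2 cluster: a provisioned-mode cluster with a
# serverless scaling range plus instances of class "db.serverless".
resource "aws_rds_cluster" "this" {
  cluster_identifier = "my-aurora-cluster" # placeholder
  engine             = "aurora-mysql"
  engine_mode        = "provisioned" # v2 uses the provisioned engine mode
  master_username    = "admin"
  master_password    = "change-me" # placeholder; use Secrets Manager in practice

  serverlessv2_scaling_configuration {
    min_capacity = 0.5 # ACUs; v2 scales in 0.5 ACU increments
    max_capacity = 8
  }
}

resource "aws_rds_cluster_instance" "writer" {
  cluster_identifier = aws_rds_cluster.this.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.this.engine
}
```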

&lt;h2&gt;
  
  
  Upgrading Aurora Serverless v1 to v2
&lt;/h2&gt;

&lt;p&gt;For the purposes of this demo I will be using Aurora Serverless 2.07.01 (MySQL 5.7-compatible), and I will provide an approximate time each step should take.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a DB cluster snapshot of the Aurora Serverless v1 cluster (around 2-4 mins) &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnl4b6cftrcf6tl4lsmh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgnl4b6cftrcf6tl4lsmh.png" alt="Creating cluster snapshots" width="800" height="173"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Restore the snapshot to create a new, &lt;strong&gt;provisioned&lt;/strong&gt; (not Aurora Serverless) DB cluster running Aurora MySQL version 2. Choose the latest minor engine version available for the new cluster, which at the time of writing is 2.11.0 (around 10 minutes)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cy13g7e9kswea8a2x94.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cy13g7e9kswea8a2x94.png" alt="Restoring snapshot" width="800" height="179"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uqnlsgnbgbb2c9a7s3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6uqnlsgnbgbb2c9a7s3h.png" alt="Creating provisioned database cluster" width="800" height="728"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It should take around 10-15 minutes for the new provisioned cluster to become available.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;When the cluster becomes available, upgrade the provisioned Aurora MySQL version 2 cluster to an Aurora MySQL version 3 release that is compatible with Aurora Serverless v2, i.e. 3.02.2. Here you can check &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Concepts.Aurora_Fea_Regions_DB-eng.Feature.ServerlessV2.html" rel="noopener noreferrer"&gt;Aurora Serverless v2 compatible versions&lt;/a&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevdi3h8ztb8j8gy88wqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fevdi3h8ztb8j8gy88wqy.png" alt="Upgrading provisioned cluster to the latest version" width="800" height="683"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(18-20 mins)&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Modify the writer DB instance of the provisioned DB cluster to use the Serverless v2 DB instance class (approx 20 mins).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7oio9h41wt5qern1lcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7oio9h41wt5qern1lcr.png" alt="Modifying writer instance" width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The total approximate time for the upgrade, from taking a snapshot to converting and modifying the DB instance to Serverless v2, is around 60-70 mins.&lt;/p&gt;
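&lt;p&gt;If you prefer the command line, the same four steps can be sketched with the AWS CLI. Treat this as an illustrative outline rather than an exact transcript - the cluster, snapshot and instance identifiers are placeholders, and you should check which engine versions are available in your region before running it:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 1. Snapshot the Aurora Serverless v1 cluster
aws rds create-db-cluster-snapshot \
  --db-cluster-identifier serverless-v1-cluster \
  --db-cluster-snapshot-identifier v1-upgrade-snapshot

# 2. Restore the snapshot as a provisioned Aurora MySQL version 2 cluster
aws rds restore-db-cluster-from-snapshot \
  --db-cluster-identifier provisioned-cluster \
  --snapshot-identifier v1-upgrade-snapshot \
  --engine aurora-mysql \
  --engine-version 5.7.mysql_aurora.2.11.0

# The restored cluster has no instances yet, so add a writer
aws rds create-db-instance \
  --db-cluster-identifier provisioned-cluster \
  --db-instance-identifier provisioned-writer \
  --db-instance-class db.r5.large \
  --engine aurora-mysql

# 3. Upgrade the cluster to an Aurora MySQL version 3 release
#    compatible with Serverless v2, and give it a v2 scaling range
aws rds modify-db-cluster \
  --db-cluster-identifier provisioned-cluster \
  --engine-version 8.0.mysql_aurora.3.02.2 \
  --allow-major-version-upgrade \
  --apply-immediately
aws rds modify-db-cluster \
  --db-cluster-identifier provisioned-cluster \
  --serverless-v2-scaling-configuration MinCapacity=0.5,MaxCapacity=16 \
  --apply-immediately

# 4. Convert the writer to the Serverless v2 instance class
aws rds modify-db-instance \
  --db-instance-identifier provisioned-writer \
  --db-instance-class db.serverless \
  --apply-immediately
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;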

&lt;p&gt;Further documentation:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html#aurora-serverless-v2.upgrade-from-serverless-v1-procedure" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.upgrade.html#aurora-serverless-v2.upgrade-from-serverless-v1-procedure&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Blue/Green deployment to minimize risk and downtime
&lt;/h2&gt;

&lt;p&gt;Using Blue/Green deployments for your managed databases means that a staging copy of your production environment is created, which can later be promoted to production. The staging environment stays in sync with your production database. This makes it possible to significantly reduce downtime when upgrading your database engine or changing schema or parameters. When you have thoroughly tested and applied all the changes you want to your staging (green) copy, you can perform a &lt;em&gt;switchover&lt;/em&gt; and promote it to be the production (blue) environment, which usually takes less than a minute.&lt;/p&gt;

&lt;p&gt;To create a Blue/Green deployment of your database cluster, first you need to make sure that your database cluster is associated with a custom parameter group with binary logging turned on (&lt;code&gt;binlog_format&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh3p8j2x8jqrhrazwzxb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh3p8j2x8jqrhrazwzxb.png" alt="Turning on  raw `binlog_format` endraw  in associated parameter group" width="800" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After associating your DB with a custom parameter group, you can create the Blue/Green deployment and perform any necessary upgrades on your new green (staging) environment.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nb2e3fmcesol8z2zg9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6nb2e3fmcesol8z2zg9c.png" alt="Creating Blue/Green deployment of your RDS database" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Blue/Green deployments aren't supported for Aurora Serverless v1; however, they are available for provisioned Aurora with MySQL compatibility 5.6 or higher, Amazon RDS for MariaDB 10.2 and higher, and Aurora Serverless v2. More information about this can be found &lt;a href="https://aws.amazon.com/blogs/aws/new-fully-managed-blue-green-deployments-in-amazon-aurora-and-amazon-rds/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
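&lt;p&gt;As a rough CLI sketch of the same workflow - the deployment name, source ARN and target version below are placeholders - creating and later switching over a Blue/Green deployment looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Create the Blue/Green deployment from the current (blue) cluster
aws rds create-blue-green-deployment \
  --blue-green-deployment-name my-bg-upgrade \
  --source arn:aws:rds:eu-west-1:123456789012:cluster:my-cluster \
  --target-engine-version 8.0.mysql_aurora.3.02.2

# After testing the green environment, promote it to production
aws rds switchover-blue-green-deployment \
  --blue-green-deployment-identifier bgd-example1234567890 \
  --switchover-timeout 300
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;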

</description>
      <category>gratitude</category>
    </item>
    <item>
      <title>How to migrate to Terraform Cloud and why should you do it?</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Fri, 23 Dec 2022 11:36:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-migrate-to-terraform-cloud-and-why-should-you-do-it-5hn</link>
      <guid>https://dev.to/aws-builders/how-to-migrate-to-terraform-cloud-and-why-should-you-do-it-5hn</guid>
      <description>&lt;p&gt;&lt;a href="https://cloud.hashicorp.com/products/terraform" rel="noopener noreferrer"&gt;Terraform cloud&lt;/a&gt; is a cloud infrastructure management tool that allows users to easily create and remotely manage their cloud infrastructure in a consistent and efficient manner. You can use it to manage cloud infrastructure, including Amazon Web Services, Google Cloud Platform, and Microsoft Azure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Terraform Cloud?
&lt;/h2&gt;

&lt;p&gt;If you are already running your infrastructure on Terraform, here are some reasons why you should consider migrating to Terraform Cloud:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;store your state remotely and provide easy access to shared state, secret data and access controls - you can add users and assign different permissions to them, i.e. owners or developers&lt;/li&gt;
&lt;li&gt;manage infrastructure at scale. This means that users can easily manage large numbers of resources across multiple cloud providers and environments.&lt;/li&gt;
&lt;li&gt;use workspaces to manage your collections of infrastructure, which allows you to manage multiple resources as well as grant individual users and user groups permissions for each workspace. Read more about how you can take advantage of workspace features &lt;a href="https://developer.hashicorp.com/terraform/cloud-docs/workspaces" rel="noopener noreferrer"&gt;over here&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;store sensitive variables securely - you can store them in variable sets and apply them to multiple workspaces&lt;/li&gt;
&lt;li&gt;do it your way -  manage Terraform runs through 3 different workflows:

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;UI/VCS driven workflow&lt;/strong&gt; - here you connect your VCS to Terraform Cloud - easily integrate version control such as GitHub, GitLab, Bitbucket or Azure DevOps and automatically initiate Terraform runs when changes are committed to the specified branch, using out-of-the-box triggers.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4hfg05gx62bjrtojmcus.png" alt="VCS Connection"&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CLI driven workflow&lt;/strong&gt; - you can use your standard Terraform CLI to trigger remote runs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;API driven workflow&lt;/strong&gt; - where you can manage and trigger runs through other tools by triggering calls to Terraform Cloud.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  More about VCS workflows
&lt;/h2&gt;

&lt;p&gt;The version control workflow in Terraform Cloud is something worth spending a bit more time on when configuring your workspace. Here are a few things that I find useful about the VCS config in Terraform Cloud:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;initiating speculative plans every time a PR is created against the default branch (this is set up by default so you don't have to do anything)&lt;/li&gt;
&lt;li&gt;once a PR is merged, this triggers a plan and apply; however, by default the apply requires manual approval.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;you have various triggers to choose from, and this is where you can really customise your deployment triggers: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;path change triggers&lt;/strong&gt; (especially useful for monorepos)

&lt;ul&gt;
&lt;li&gt;pattern based triggers (recommended) - use glob patterns to select which changes should trigger runs and ignore others (i.e. &lt;code&gt;/submodule/**/*.tf&lt;/code&gt; if you only require a run when &lt;code&gt;.tf&lt;/code&gt; files in the submodule directory were changed, or &lt;code&gt;/**/networking/**/*&lt;/code&gt; so that any change to files that have networking in their path will trigger a run).
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwwnoehsb1n2vgzou0h8.png" alt="Path based triggers"&gt;
&lt;/li&gt;
&lt;li&gt;prefix based triggers - where you select which directory path should be tracked and trigger a run. One of the examples here would be to track changes in &lt;code&gt;modules&lt;/code&gt; directory in each workspace.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;git tag based triggers&lt;/strong&gt; - a run will only be triggered when the indicated git tag is published. 
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F23xpbbbzb6sz3noiqde8.png" alt="Git tag triggers"&gt;
&lt;/li&gt;

&lt;li&gt;you also have an option to always trigger runs, whenever changes are made to any file in the repository.&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;When setting up your Terraform Cloud organisation and workspaces it is good to assess and implement triggers that are right for your use case and take advantage of all the features offered by Terraform Cloud. &lt;/p&gt;

&lt;h2&gt;
  
  
  Migrate your existing Terraform infrastructure to Terraform Cloud
&lt;/h2&gt;

&lt;p&gt;First, let's start with the prerequisites:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Make sure you have &lt;a href="https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli" rel="noopener noreferrer"&gt;Terraform CLI 1.1&lt;/a&gt; or higher installed&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://developer.hashicorp.com/terraform/tutorials/cloud-get-started/cloud-sign-up" rel="noopener noreferrer"&gt;&lt;strong&gt;Set up a Terraform Cloud account&lt;/strong&gt;&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Migrating existing state to Terraform Cloud:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add the &lt;code&gt;cloud&lt;/code&gt; block to your Terraform configuration:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;cloud&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;organization&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;ORGANIZATION-NAME&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;workspaces&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;staging&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;and remove your backend config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;&lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;s3&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;&lt;p&gt;Run &lt;code&gt;terraform init&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Log in to Terraform Cloud in the CLI by running:&lt;br&gt;
&lt;code&gt;terraform login&lt;/code&gt; &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You will be taken to the Terraform Cloud website where you will create an API token, which you then copy and paste into your command line.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk557wfa7nlrgccrt6eqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk557wfa7nlrgccrt6eqt.png" alt="Terraform CLI prompt"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;(optional) Set up version control - you can set this either for a workspace or for the whole organisation&lt;/li&gt;
&lt;li&gt;Set up the correct working directory, i.e. &lt;code&gt;terraform/infrastructure&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;After verifying that Terraform migrated your state to Terraform Cloud, remove your local state file.&lt;/li&gt;
&lt;li&gt;(optional) Create variable sets with variables that are shared across the organisation (note that these can still be overwritten in a workspace if necessary)&lt;/li&gt;
&lt;li&gt;(optional) Migrate workspace &lt;code&gt;.tfvars&lt;/code&gt; variables and assign them values in Terraform Cloud.&lt;/li&gt;
&lt;/ol&gt;
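&lt;p&gt;A condensed sketch of the command-line part of the migration - run it from the directory containing your configuration, and only delete the local state files once you have verified the migration:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Authenticate the CLI against Terraform Cloud
terraform login

# Re-initialise; Terraform detects the cloud block and offers to
# migrate the existing local state to the configured workspace
terraform init

# Confirm the state has been migrated before removing the local copy
terraform state list
rm terraform.tfstate terraform.tfstate.backup
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;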

&lt;h2&gt;
  
  
  What about the cost?
&lt;/h2&gt;

&lt;p&gt;There is often a misconception that you have to pay a lot in order to migrate to and use Terraform Cloud; in fact, the opposite is true. If you are working in a small team and do not need access to advanced features such as team management or policy as code (Sentinel), you can use the free plan, which allows up to 5 users - this will let your organisation assess whether it's the right tool without incurring any cost.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncfnvxa8lx4xktb4xt0w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fncfnvxa8lx4xktb4xt0w.png" alt="Terraform Cloud plans"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>terraform</category>
      <category>terraformcloud</category>
      <category>iac</category>
      <category>cicd</category>
    </item>
    <item>
      <title>Monitor and protect sensitive log data with CloudWatch and Terraform</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Fri, 02 Dec 2022 11:12:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/monitor-and-protect-sensitive-log-data-with-cloudwatch-15g9</link>
      <guid>https://dev.to/aws-builders/monitor-and-protect-sensitive-log-data-with-cloudwatch-15g9</guid>
      <description>&lt;p&gt;New feature for CloudWatch Logs has been announced on the first day of this years AWS Re:Invent - you can now set up a policy for each CloudWatch logs group, which will allow you to mask and audit sensitive log data. &lt;/p&gt;

&lt;p&gt;In this post I will cover the details of this functionality and walk through how to implement it in the AWS Console and using Terraform.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does it work?
&lt;/h2&gt;

&lt;p&gt;This new CloudWatch feature will allow you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;mask&lt;/strong&gt; any information that you deem sensitive&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;audit&lt;/strong&gt; it by setting a destination for it - this could be an S3 bucket, Kinesis Data Firehose and/or CloudWatch Logs.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After a data protection policy is attached to a log group, any logs added to that log group will be masked, and only users with &lt;code&gt;logs:Unmask&lt;/code&gt; permissions will be able to view them.&lt;/p&gt;
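&lt;p&gt;For example, with the AWS CLI (the log group and log stream names below are hypothetical), the same events can be fetched masked by default or, given the right IAM permissions, unmasked:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Masked view (default for all callers)
aws logs get-log-events \
  --log-group-name log_group_example \
  --log-stream-name example-stream

# Unmasked view - requires the logs:Unmask IAM permission
aws logs get-log-events \
  --log-group-name log_group_example \
  --log-stream-name example-stream \
  --unmask
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;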

&lt;p&gt;This is how masked data looks in the CloudWatch console:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uo1enq8hxc99eyeo2z6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9uo1enq8hxc99eyeo2z6.png" alt="How does masked data looks like in the CloudWatch console"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It is important to note that any sensitive information logged prior to setting up the data protection policy will not be detected. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What types of data can you identify?
&lt;/h2&gt;

&lt;p&gt;AWS gives you around 100 data types to choose from, such as: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Credentials&lt;/strong&gt;, such as AWS Access Keys or Private keys&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Personally Identifiable Information&lt;/strong&gt;, such as national identification numbers, Social Security numbers, driver's license numbers, passport numbers, addresses, electoral roll numbers, taxpayer numbers, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protected Health Information (PHI)&lt;/strong&gt; such as health insurance numbers, NHS numbers, health insurance information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Financial information&lt;/strong&gt; - bank account numbers, credit card numbers and security codes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device Identifiers&lt;/strong&gt; such as IP addresses&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can find the whole list of the types along with their ARN identifiers &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/protect-sensitive-log-data-types.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to create sensitive data policy in the AWS Console
&lt;/h2&gt;

&lt;p&gt;In your AWS console, go to CloudWatch, select Log groups, choose the log group you would like to create the policy for, then select the &lt;em&gt;Data protection&lt;/em&gt; section and click &lt;em&gt;Create policy&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg91vysxan34zhdg4l56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg91vysxan34zhdg4l56.png" alt="Choose Log group and select Data protection"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now you will be able to select the data identifiers you want to mask and audit - for the purposes of this tutorial we will choose three: Address, BankAccountNumber-GB and BankAccountNumber-US. Next we can select an audit destination; here we have selected an existing &lt;em&gt;sensitive-data-audit-example-bucket&lt;/em&gt;, however this is optional.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdoqvlcqy6f55pr9yke7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdoqvlcqy6f55pr9yke7.png" alt="Creating data protection policy"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also view your policy in JSON format. This is the example policy that we have just created:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"data-protection-policy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2021-06-01"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"audit-policy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"DataIdentifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dataprotection::aws:data-identifier/Address"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-GB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-US"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Operation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Audit"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"FindingsDestination"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"S3"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
              &lt;/span&gt;&lt;span class="nl"&gt;"Bucket"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"sensitive-data-audit-example-bucket"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Sid"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"redact-policy"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"DataIdentifier"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dataprotection::aws:data-identifier/Address"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-GB"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-US"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"Operation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Deidentify"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
          &lt;/span&gt;&lt;span class="nl"&gt;"MaskConfig"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;



&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Next click on &lt;em&gt;Activate data protection&lt;/em&gt;, and your newly created policy will mask and audit any logs added to the log group &lt;strong&gt;after&lt;/strong&gt; the policy was created.&lt;/p&gt;
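&lt;p&gt;The same policy can also be attached from the command line. Assuming the JSON above has been saved locally as &lt;code&gt;data-protection-policy.json&lt;/code&gt;, a minimal AWS CLI sketch looks like this:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Attach the data protection policy from a local JSON file
aws logs put-data-protection-policy \
  --log-group-identifier log_group_example \
  --policy-document file://data-protection-policy.json

# Verify which policy is attached to the log group
aws logs get-data-protection-policy \
  --log-group-identifier log_group_example
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;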

&lt;h2&gt;
  
  
  Create data protection policy for CloudWatch with Terraform
&lt;/h2&gt;

&lt;p&gt;Below is an example of how to add a data protection policy resource in Terraform. &lt;/p&gt;

&lt;p&gt;Here we will set up a policy similar to the one created before - adding a data protection policy to an existing &lt;code&gt;log_group_example&lt;/code&gt; log group, with an S3 bucket as the audit destination and the Address and BankAccountNumber data identifiers.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight jsx"&gt;&lt;code&gt;


&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws_cloudwatch_log_group&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;log_group_example&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;log_group_example&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sensitive_data_audit-example_bucket&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sensitive-data-audit-example-bucket&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;aws_cloudwatch_log_data_protection_policy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;log_group_example&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;log_group_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_cloudwatch_log_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log_group_example&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;

  &lt;span class="nx"&gt;policy_document&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;Name&lt;/span&gt;    &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;data_protection_policy&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;Version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;2021-06-01&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;

    &lt;span class="nx"&gt;Statement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Sid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Audit&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="nx"&gt;DataIdentifier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:dataprotection::aws:data-identifier/Address&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-US&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-GB&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;Operation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;Audit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;FindingsDestination&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
              &lt;span class="nx"&gt;S3&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;Bucket&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;sensitive_data_audit&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;example_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;bucket&lt;/span&gt;
              &lt;span class="p"&gt;}&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;},&lt;/span&gt;
      &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Sid&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Redact&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="nx"&gt;DataIdentifier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:dataprotection::aws:data-identifier/Address&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
          &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-US&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;arn:aws:dataprotection::aws:data-identifier/BankAccountNumber-GB&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="nx"&gt;Operation&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;Deidentify&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;MaskConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
          &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
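&lt;p&gt;If you want to inspect or apply the policy outside of Terraform, the AWS CLI exposes the same operations. A minimal sketch, assuming the log group name from the example above (&lt;code&gt;policy.json&lt;/code&gt; is a hypothetical file holding the same JSON policy document):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Retrieve the data protection policy currently attached to the log group
aws logs get-data-protection-policy --log-group-identifier log_group_example

# Or apply a policy document directly from a local JSON file
aws logs put-data-protection-policy --log-group-identifier log_group_example --policy-document file://policy.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;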

&lt;p&gt;You can read more about the CloudWatch data protection policy in the &lt;a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html" rel="noopener noreferrer"&gt;Amazon CloudWatch Logs documentation&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>dataprotection</category>
      <category>terraform</category>
    </item>
    <item>
      <title>How to set up Session Manager and enable SSH over SSM</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Tue, 11 Jan 2022 18:12:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-set-up-session-manager-and-enable-ssh-over-ssm-43k9</link>
      <guid>https://dev.to/aws-builders/how-to-set-up-session-manager-and-enable-ssh-over-ssm-43k9</guid>
      <description>&lt;p&gt;This is a quick guide on how to set up sessions manager on your EC2 instance and enable SSH connections through SSM.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Session Manager on an EC2 instance
&lt;/h2&gt;

&lt;h4&gt;
  
  
  1. Create an IAM instance profile to allow Session Manager to connect to your instance (this is not enabled by default)
&lt;/h4&gt;

&lt;p&gt;You can do that either by creating a new IAM role with Session Manager permissions or by adding inline policy permissions to an existing role already attached to your instance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;To create/add instance profile:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to IAM and click on Create role&lt;/li&gt;
&lt;li&gt;Select EC2 as trusted entity.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm0bf29z5yh4x03edctyl.png" alt="EC2 truested entity"&gt;
&lt;/li&gt;
&lt;li&gt;Add the &lt;em&gt;AmazonSSMManagedInstanceCore&lt;/em&gt; policy to your role (or &lt;em&gt;AmazonSSMFullAccess&lt;/em&gt; if you need to grant all Systems Manager permissions) and click Next.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9j8d5l58e0k2tcbbtttw.png" alt="Adding AmazonSSMManagedInstanceCore permission to a new role"&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add tags and then click &lt;em&gt;Create role&lt;/em&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To add SSM permissions to an existing role, find the role that is attached to the instance, and then &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/getting-started-add-permissions-to-existing-profile.html" rel="noopener noreferrer"&gt;add SSM permissions as an inline policy&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  2. Next, add the newly created role as your instance profile:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Go to EC2 instances, select the instance you would like to enable SSM on.&lt;/li&gt;
&lt;li&gt;Click on Actions, select Security, and then Modify IAM role&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe1sl0szr2ihpyi2pm74u.png" alt="Modifying IAM role"&gt;
&lt;/li&gt;
&lt;li&gt;Next, select the IAM role created in the previous step&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj7yso548cptfa1p6jgpg.png" alt="Selecting IAM role"&gt;
&lt;/li&gt;
&lt;/ul&gt;
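&lt;p&gt;If you prefer the command line, attaching the role can also be done with the AWS CLI. A minimal sketch - the instance ID and instance profile name below are placeholders for your own values:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Attach the instance profile (created alongside the IAM role) to the instance
aws ec2 associate-iam-instance-profile --instance-id i-02573cafcfEXAMPLE --iam-instance-profile Name=MySSMRole
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;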

&lt;h4&gt;
  
  
  3. You can now connect to your instance through Session Manager.&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqleteddomsmwvuo0u88n.png" alt="Connect to your instance!"&gt;
&lt;/h4&gt;

&lt;p&gt;You can find more information about EC2 instance profiles and IAM roles for SSM &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-instance-profile.html" rel="noopener noreferrer"&gt;over here&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Enabling SSH over SSM from your local machine
&lt;/h2&gt;

&lt;p&gt;First of all, we need to make sure we meet all the prerequisites:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Install the latest &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;aws-cli&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Install the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-install-plugin.html" rel="noopener noreferrer"&gt;Session Manager plugin&lt;/a&gt; on the machine you want to connect to your instance from.&lt;/li&gt;
&lt;li&gt;Make sure your instance has the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-manual-agent-install.html" rel="noopener noreferrer"&gt;latest SSM agent&lt;/a&gt; installed.&lt;/li&gt;
&lt;li&gt;Update local .ssh configuration on your machine.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Windows (usually located at: &lt;code&gt;C:\Users\username\.ssh\config&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# SSH over Session Manager
host i-* mi-*
    ProxyCommand C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Mac (usually located at: &lt;code&gt;~/.ssh/config&lt;/code&gt;):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# SSH over Session Manager
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;ol&gt;
&lt;li&gt;Add permissions to the role/user that you are using to connect to AWS. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can use the policy below to allow SSH connections through Session Manager.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Version"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"Statement"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"ssm:StartSession"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:ec2:region:987654321098:instance/i-02573cafcfEXAMPLE"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:ssm:*:*:document/AWS-StartSSHSession"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;


&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now, after assuming your chosen role/connecting to AWS through the command line, you can start a session on your instance by running:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ssm start-session --target i-02573cafcfEXAMPLE --region your-chosen-region&lt;/code&gt;.&lt;/p&gt;
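&lt;p&gt;With the ProxyCommand entry from your .ssh config in place, a plain ssh command is transparently proxied through SSM as well. A sketch - the user name and key path depend on your AMI and setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# SSH over SSM, using the ProxyCommand from ~/.ssh/config
ssh -i /path/to/your-key.pem ec2-user@i-02573cafcfEXAMPLE
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;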

&lt;p&gt;You can now remove any inbound port 22 access from your instance's security groups - it is no longer needed to connect to your instance.&lt;/p&gt;

&lt;p&gt;For further reading, make sure to check the &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html" rel="noopener noreferrer"&gt;AWS documentation&lt;/a&gt; on this topic.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note that you can make sure that your instance has the latest SSM agent installed, by &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-automatic-updates.html" rel="noopener noreferrer"&gt;Automating Updates to SSM Agent.&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>security</category>
    </item>
    <item>
      <title>Get ready for AWS Developer Associate exam</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Thu, 18 Nov 2021 16:51:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/prepare-for-aws-developer-associate-exam-for-free-3fnh</link>
      <guid>https://dev.to/aws-builders/prepare-for-aws-developer-associate-exam-for-free-3fnh</guid>
      <description>&lt;p&gt;I have decided to write a guide on what resources I have used when preparing for the AWS exam, because I have gone through a variety of them before finding the right ones. Additionally, as AWS Developer Associate was my first AWS exam to take, I will write this post from that perspective. Last but certainly not least, I would also like to make this guide as accessible from economic perspective as possible, so majority of resources mentioned here are completely free to use.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fx79qrtbx7t9m6mlyr6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fx79qrtbx7t9m6mlyr6.png" alt="download"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How long should I prepare?
&lt;/h3&gt;

&lt;p&gt;The answer to that is - of course - it depends! I started using AWS in September and sat the exam in July, which makes it around 10 months, but this can easily be done in around 2-3 months - it all depends on the amount of time you have available for learning, how often you get to use AWS at work, and whether you have taken any AWS exams before. &lt;/p&gt;

&lt;p&gt;Especially with that last point in mind - if this is not your first AWS exam, you will already be used to the way exam questions are phrased, and you will have covered some of the material if you have passed the AWS Solutions Architect Associate exam.&lt;/p&gt;

&lt;h3&gt;
  
  
  Resources available for free
&lt;/h3&gt;

&lt;p&gt;I would like to start with, and mainly focus on, the free resources available, because in my opinion they are absolutely enough to prepare you well for the exam, especially if combined with practical use of AWS tools and technologies in your everyday work.&lt;/p&gt;

&lt;p&gt;*&lt;a href="https://www.youtube.com/watch?v=RrKRN9zRBWs" rel="noopener noreferrer"&gt;AWS Certified Developer - Associate 2020 from FreeCodeCamp and created by Andrew Brown&lt;/a&gt; - this is a very comprehensive 16 hours course, in my opinion better than some paid courses. &lt;a href="https://dev.to/andrewbrown"&gt;Andrew Brown&lt;/a&gt; goes through and covers all the services in detail and at the end of each section there is a cheat sheet/summary available, that I found extremely helpful for the revision just before the exam.&lt;/p&gt;

&lt;p&gt;*&lt;a href="https://cloudacademy.com/webinars/" rel="noopener noreferrer"&gt;Cloud Academy webinars&lt;/a&gt; - this is a free webinar, hosted monthly by Cloud Academy instructors and it goes through each domain and exam questions. Some of the past webinars are available on &lt;a href="https://www.youtube.com/channel/UCeRY0LppLWdxWAymRANTb0g" rel="noopener noreferrer"&gt;Cloud Academy YouTube channel&lt;/a&gt;, so you can access them without having to sign up, but I definitely recommend attending live webinar, as you do get a chance to ask questions. This is a great resource to go through before you sit the exam, it highlights what are the things you should look out for, what do you need to focus on in your revision and what resources are useful to look at.&lt;/p&gt;

&lt;p&gt;*&lt;a href="https://go.aws/3kKYUbs" rel="noopener noreferrer"&gt;Exam Certification Readiness Webinar&lt;/a&gt; from AWS - official and very thorough live 3=4 hour walk through all of the domains and exam questions. Worth mentioning that they are available for most certification exams and take place live in different time zones, so it makes it easy to find one that fits your schedule.&lt;/p&gt;

&lt;p&gt;*&lt;a href="https://d1.awsstatic.com/training-and-certification/ramp-up_guides/Ramp-Up_Guide_Developer.pdf" rel="noopener noreferrer"&gt;AWS Ramp-Up Guide: Developer&lt;/a&gt; - this is by far the best resource, that was recommended during one of the AWS exam prep webinars. Whilst it is not specifically targeted at Developer Associate Exam, it contains links to labs, courses, whitepapers, and many more resources for developers, engineers and DevOps engineers.&lt;br&gt;
*&lt;a href="https://explore.skillbuilder.aws/learn/course/external/view/elearning/9153/aws-certification-official-practice-question-sets-english" rel="noopener noreferrer"&gt;Official Practice Question Set with AWS skills builder &lt;/a&gt;. AWS SkillsBuilder is a new AWS learning center offering free digital courses and learning paths, and this is where you can now find Practice Question Set to get used to the way questions are phrased before you sit the exam.&lt;/p&gt;

&lt;p&gt;*&lt;a href="https://explore.skillbuilder.aws/learn/learning_plan/view/84/developer-learning-plan" rel="noopener noreferrer"&gt;Developer Learning Plan with AWS Skill Builder&lt;/a&gt;- AWS designed this to help anyone, who wants to learn how to develop modern applications on AWS. It will help you learn your ways with serverless solutions, containers and DevOps on AWS. TThis Learning Plan can also help prepare you for the AWS Developer Associate certification eExam.&lt;/p&gt;

&lt;p&gt;*&lt;a href="https://d1.awsstatic.com/training-and-certification/docs-dev-associate/AWS-Certified-Developer-Associate_Exam-Guide.pdf" rel="noopener noreferrer"&gt;AWS Certified Developer – Associate Exam Guide&lt;/a&gt;- this documents covers all you need to know about the preparation for your exam, discussing scope of the exam, content, exam domains, technology and tools that might be covered in the exam and services that are out of scope for the exam.&lt;/p&gt;

&lt;h3&gt;
  
  
  Paid courses and resources
&lt;/h3&gt;

&lt;p&gt;When researching how best to approach preparation for the AWS Developer Associate exam, I came across a vast number of paid courses; however, after reading plenty of reviews, I found these courses/resources to be the most useful and comprehensive: &lt;/p&gt;

&lt;p&gt;*&lt;a href="https://www.udemy.com/course/aws-certified-developer-associate-dva-c01/" rel="noopener noreferrer"&gt;Ultimate AWS Certified Developer Associate 2021&lt;/a&gt; - this is a 32 hours course that will&lt;br&gt;
thoroughly prepare you to sit the exam. in addition to the course itself you also receive slides that were used in the course, which I used to review the material before the exam day. It also comes with 1 full practice exam.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://amzn.to/3x0NGVj" rel="noopener noreferrer"&gt;AWS Certified Developer Associate Practice Tests [2021]: 390 AWS Practice Exam Questions with Answers &amp;amp; detailed Explanations&lt;/a&gt; - this ebook not only offers you more than enough practice tests, but comes with very detailed explanations to both correct and incorrect answers and walking through each one of them in great details. If there is one resource you are willing to pay for to prepare for the exam - in my opinion that is the one!&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  What next?
&lt;/h3&gt;

&lt;p&gt;The first thing you should do when starting your exam preparation is book your exam - you have probably heard that from other people many times, but it really does help with your learning plan, and if anything comes up, you can always reschedule your exam up to two times or cancel it altogether for a full refund, as long as it is more than 24 hours before the exam start date. &lt;a href="https://www.aws.training/Certification" rel="noopener noreferrer"&gt;Book your exam here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Additionally, if English is not your first language you can request an extra 30 minutes to be added to your exam.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>awscertified</category>
    </item>
    <item>
      <title>Deploy static Next.js website with CDK and Amplify</title>
      <dc:creator>Natalia Marek</dc:creator>
      <pubDate>Mon, 17 May 2021 15:38:54 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploy-your-react-and-next-js-app-with-cdk-3807</link>
      <guid>https://dev.to/aws-builders/deploy-your-react-and-next-js-app-with-cdk-3807</guid>
      <description>&lt;p&gt;In this post we will walk through deploying a Next.js starter website using AWS amplify CDK module.&lt;/p&gt;

&lt;p&gt;Let's start with an architectural overview of the application that we will be deploying.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vwimonjw0i27ryraqlo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1vwimonjw0i27ryraqlo.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We will be creating a CodeCommit repository, an Amplify app, an API Gateway and a Lambda function - all of this will be defined and deployed using the AWS CDK. &lt;/p&gt;

&lt;p&gt;The CDK code will be converted to a CloudFormation template, which will be deployed in the target region to create our resources. &lt;br&gt;
We will be adding an API Gateway REST API that will enable communication between our Next.js app and Lambda.&lt;br&gt;
We will also configure the Amplify application to connect to the newly created CodeCommit repository; this will trigger the Amplify continuous deployment pipeline, and at the end Amplify will generate a URL for our Next.js application, making it accessible on the internet.&lt;/p&gt;

&lt;p&gt;Before we start make sure that you have installed &lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt; and &lt;a href="https://nodejs.org/en/" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;.&lt;/p&gt;
&lt;h5&gt;
  
  
  Creating Next.js app
&lt;/h5&gt;

&lt;p&gt;We will start by creating a default Next.js app by running:&lt;br&gt;
&lt;code&gt;npx create-next-app next-cdk-app&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;This will create a basic template for our Next.js application. &lt;br&gt;
Before we start with the CDK, we need to make a few changes in our newly created Next.js app directory.&lt;/p&gt;

&lt;p&gt;We will start by adding an &lt;code&gt;amplify.yml&lt;/code&gt; file to the root of our directory to configure our build settings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="na"&gt;frontend&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
   &lt;span class="na"&gt;phases&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;preBuild&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm ci&lt;/span&gt;
     &lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="na"&gt;commands&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
         &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;npm run build&lt;/span&gt;
   &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;baseDirectory&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;out&lt;/span&gt;
     &lt;span class="na"&gt;files&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;**/*'&lt;/span&gt;
   &lt;span class="na"&gt;cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
     &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
       &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;node_modules/**/*&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we need to update the package.json file and modify the build script by adding &lt;code&gt;next export&lt;/code&gt;, which will enable a fully static build of our Next.js application. Our scripts should now look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="nl"&gt;"scripts"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"dev"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"next dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"build"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"next build &amp;amp;&amp;amp; next export"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"start"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"next start"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Initialising CDK app
&lt;/h5&gt;

&lt;p&gt;We will start by creating a new directory for the CDK infrastructure within our Next.js app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir &lt;/span&gt;cdk-infra
&lt;span class="nb"&gt;cd &lt;/span&gt;cdk-infra
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example project we will be using TypeScript; however, you can also use Python, JavaScript, Java, C# or Go to define your infrastructure using the CDK.&lt;/p&gt;

&lt;p&gt;Before we initialise the CDK app, let's make sure that we have the latest AWS CDK Toolkit installed by running &lt;code&gt;npm install -g aws-cdk&lt;/code&gt;. You can find more information about it over &lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/cli.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Now let's initialise our CDK application by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk init app &lt;span class="nt"&gt;--language&lt;/span&gt; typescript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will create the directories necessary for a TypeScript CDK project. Our project structure should look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo6wkq5jpeha0f2nfgyj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffo6wkq5jpeha0f2nfgyj.png" alt="Screenshot 2021-06-05 at 17.35.41"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;cdk-infra-stack.ts&lt;/strong&gt; file in the &lt;strong&gt;lib&lt;/strong&gt; directory is where we will be defining our stack and adding resources. &lt;/p&gt;

&lt;p&gt;This is what your cdk-infra-stack.ts should look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws-cdk/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CdkInfraStack&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StackProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// The code that defines your stack goes here&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we will start the TypeScript compiler by running &lt;code&gt;npm run watch&lt;/code&gt; in the command line. This will monitor our project directory for any changes and compile our .ts files to .js.&lt;/p&gt;

&lt;h5&gt;
  
  
  Adding resources
&lt;/h5&gt;

&lt;p&gt;In our project we will be creating a Lambda function, an API Gateway, a CodeCommit repository and an Amplify app. In order to do that, we need to install the CDK module for each of them. We can do that by running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; @aws-cdk/aws-codecommit
@aws-cdk/aws-lambda@aws-cdk/aws-apigateway
@aws-cdk/aws-amplify
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;First we will add a CodeCommit repository, which we will later connect to our Amplify app to set up continuous deployment. If you prefer to connect your project to GitHub, you can skip this step; you will still be able to set up continuous deployment with Amplify.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@aws-cdk/core&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="nx"&gt;npm&lt;/span&gt; &lt;span class="nx"&gt;install&lt;/span&gt; &lt;span class="p"&gt;@&lt;/span&gt;&lt;span class="nd"&gt;aws&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="nx"&gt;aws&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="nx"&gt;codecommit&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CdkInfraStack&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Stack&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;constructor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Construct&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;string&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;?:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;StackProps&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;super&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;props&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;


&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;amplifyNextAppRepository&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;codecommit&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Repository&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;AmplifyNextRepo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;repositoryName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;amplifyNextAppRepo&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt; Next.js app repository that will be used as a source repository for amplify-cdk app&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next we will add a simple Lambda function and an API Gateway to establish communication between the front end of our application and the Lambda function. The Lambda function code will live in the lambda directory inside our cdk-infra directory.&lt;/p&gt;

&lt;p&gt;First, let's add those two resources to our stack in the &lt;code&gt;cdk-infra-stack.ts&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;helloCDK&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;HelloCDKHandler&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;NODEJS_12_X&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Code&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromAsset&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lambda&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hellocdk.handler&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;apigw&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;LambdaRestApi&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Endpoint&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;helloCDK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we are defining a Lambda function whose code will be loaded from the hellocdk.js file in the lambda directory, using the exported handler function.&lt;/p&gt;
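&lt;p&gt;As an aside, the handler string convention ("file.export") can be illustrated with a tiny sketch - the &lt;code&gt;resolveHandler&lt;/code&gt; helper below is hypothetical, not part of the CDK API:&lt;/p&gt;

```typescript
// Hypothetical illustration of the Lambda "handler" convention:
// "hellocdk.handler" means the exported function "handler"
// in hellocdk.js inside the asset directory passed to fromAsset().
function resolveHandler(handler: string): { file: string; fn: string } {
  const dot = handler.lastIndexOf(".");
  return {
    file: handler.slice(0, dot) + ".js",
    fn: handler.slice(dot + 1),
  };
}

console.log(resolveHandler("hellocdk.handler"));
// { file: 'hellocdk.js', fn: 'handler' }
```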

&lt;p&gt;Now let's add our Lambda code. First we will create a new lambda directory in the cdk-infra directory, then create a new file called hellocdk.js and add this simple Lambda that returns a "Hello from CDK!" message.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;exports&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;handler&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;headers&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Access-Control-Allow-Origin&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;*&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Access-Control-Allow-Credentials&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
              &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Access-Control-Allow-Headers&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="na"&gt;statusCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Hello from CDK!&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
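&lt;p&gt;Before deploying, you can smoke-test this logic locally by invoking the handler directly in Node - a minimal sketch that mirrors the hellocdk.js handler above:&lt;/p&gt;

```typescript
// Local smoke test: the handler's logic, mirrored from hellocdk.js above,
// invoked directly in Node before any CDK deployment.
const handler = async (event: unknown) => {
  return {
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Credentials": true,
      "Access-Control-Allow-Headers":
        "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token",
    },
    statusCode: 200,
    body: JSON.stringify("Hello from CDK!"),
  };
};

// Invoke with an empty event, the way API Gateway eventually will.
handler({}).then((res) => {
  console.log(res.statusCode, JSON.parse(res.body));
  // 200 Hello from CDK!
});
```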



&lt;p&gt;Next we will add the Amplify app and set the source code provider to be the CodeCommit repository that we created earlier:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;amplifyApp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;amplify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;amplifyNextApp&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;sourceCodeProvider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;amplify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;CodeCommitSourceCodeProvider&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;amplifyNextAppRepo&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="nx"&gt;amplifyApp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;addBranch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;main&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this code we are creating a new Amplify application and connecting it to the CodeCommit repository we defined earlier. We are also adding the main branch - any changes pushed to that branch will trigger a new build and deployment in Amplify.&lt;br&gt;
If you would like to set the source code provider to be a GitHub repository, you can do so by replacing the CodeCommit source code provider with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nx"&gt;sourceCodeProvider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;amplify&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;GitHubSourceCodeProvider&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="na"&gt;owner&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[Repository-Owner]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;repository&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[RepositoryName]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="na"&gt;oauthToken&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;SecretValue&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;secretsManager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[SecretName]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="na"&gt;jsonField&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;[SecretKey]&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;}),&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can read more about the source code provider classes and constructs &lt;a href="https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws-amplify.GitHubSourceCodeProvider.html" rel="noopener noreferrer"&gt;over here&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Deployment
&lt;/h5&gt;

&lt;p&gt;Now that we have defined our infrastructure using CDK, let's synthesize the CloudFormation template by running &lt;code&gt;cdk synth&lt;/code&gt; - this will generate the CloudFormation template and print it in our CLI. &lt;/p&gt;

&lt;p&gt;Next we will deploy our application by running &lt;code&gt;cdk deploy&lt;/code&gt;.&lt;br&gt;
After running this, we will first see a warning message and a list of resources that will be created on our behalf. After approving those changes, the stack will be deployed and we should see output with the API Gateway endpoint URL in the command line, along with the stack ARN. &lt;/p&gt;

&lt;p&gt;Let's copy the endpoint URL and add it to our index.js in the Next.js &lt;code&gt;pages&lt;/code&gt; directory.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;Head&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;next/head&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;useEffect&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;react&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;styles&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;../styles/Home.module.css&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="k"&gt;default&lt;/span&gt; &lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;Home&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;setMessage&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useState&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;Something is not working!&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;process&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;env&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ENDPOINT_URL&lt;/span&gt;
    &lt;span class="nf"&gt;fetch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
      &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nf"&gt;setMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)));&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;container&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;Head&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Create&lt;/span&gt; &lt;span class="nx"&gt;Next&lt;/span&gt; &lt;span class="nx"&gt;App&lt;/span&gt; &lt;span class="kd"&gt;with&lt;/span&gt; &lt;span class="nx"&gt;CDK&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/title&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/Head&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt; &lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;main&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h1&lt;/span&gt; &lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;title&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
          &lt;span class="nx"&gt;Welcome&lt;/span&gt; &lt;span class="nx"&gt;to&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://nextjs.org&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;Next&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;js&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/a&amp;gt;with{" "&lt;/span&gt;&lt;span class="err"&gt;}
&lt;/span&gt;          &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="nx"&gt;href&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://docs.aws.amazon.com/cdk/latest/guide/home.html&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="nx"&gt;CDK&lt;/span&gt;&lt;span class="o"&gt;!&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/a&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h1&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;h1&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/h1&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;        &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;div&lt;/span&gt; &lt;span class="nx"&gt;className&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;styles&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;grid&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/main&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;

      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/footer&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/div&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we have added a useEffect hook that fetches data from the Lambda function after our page is rendered. We will use this to test the functionality of the Lambda function and API Gateway. The message should first say "Something is not working!", and once the fetch completes, the message "Hello from CDK!" should display. It should look like this:&lt;/p&gt;
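&lt;p&gt;A small detail worth noting: because the Lambda body is produced with JSON.stringify, &lt;code&gt;response.json()&lt;/code&gt; yields the plain message, and the component's extra JSON.stringify re-wraps it in quotes for display. A quick sketch of that round trip:&lt;/p&gt;

```typescript
// The body the Lambda returns is a JSON-encoded string...
const body = JSON.stringify("Hello from CDK!");

// ...so response.json() on the client decodes it back to the plain message,
const data = JSON.parse(body);
console.log(data); // Hello from CDK!

// and the component's JSON.stringify(data) re-adds the quotes seen on screen.
console.log(JSON.stringify(data)); // "Hello from CDK!"
```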

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzszl6tiwijvxr146lsq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzszl6tiwijvxr146lsq1.png" alt="Alt Text"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Setting up continuous deployment
&lt;/h5&gt;

&lt;p&gt;The last step is initialising a git repository in the root of our next-cdk-app, connecting it to the CodeCommit repository that we have just created (or the GitHub repository, if you used that instead) and pushing all the changes. This will trigger a build and deployment. &lt;/p&gt;

&lt;p&gt;After pushing the changes to the repository, if we go to Amplify in the AWS console, we should see our app built and deployed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oihxkugx1b7wq94c422.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oihxkugx1b7wq94c422.png" alt="Screenshot 2021-06-06 at 15.18.52"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://main.d341hvijq7meom.amplifyapp.com/" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can find how application should look like (with some handy links to CDK learning resources). You can also find source code in my &lt;a href="https://github.com/ripleycmd/next-with-cdk" rel="noopener noreferrer"&gt;GitHub repository&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;This blog post is based on the &lt;a href="https://www.cdkday.com/" rel="noopener noreferrer"&gt;CDK Day&lt;/a&gt; talk I gave in April 2021, where I presented two ways of deploying React applications using CDK. You can find the talk &lt;a href="https://www.youtube.com/watch?v=rbiGbFvwEjI" rel="noopener noreferrer"&gt;over here&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cdk</category>
      <category>amplify</category>
      <category>nextjs</category>
    </item>
  </channel>
</rss>
