<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Chandrashekar Y M</title>
    <description>The latest articles on DEV Community by Chandrashekar Y M (@shekarym).</description>
    <link>https://dev.to/shekarym</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F416414%2F1649020e-3354-407b-986b-73e789887775.jpg</url>
      <title>DEV Community: Chandrashekar Y M</title>
      <link>https://dev.to/shekarym</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shekarym"/>
    <language>en</language>
    <item>
      <title>Automating Snowflake Resource Deployment using Terraform and GitHub Actions</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Sat, 18 May 2024 12:57:51 +0000</pubDate>
      <link>https://dev.to/aws-builders/automating-snowflake-resource-deployment-using-terraform-and-github-actions-2blj</link>
      <guid>https://dev.to/aws-builders/automating-snowflake-resource-deployment-using-terraform-and-github-actions-2blj</guid>
      <description>&lt;p&gt;Lately at work, I have been using Terraform for our Infrastructure as Code (IaC) requirements for AWS workloads. As part of this learning journey, I also acquired &lt;a href="https://www.linkedin.com/posts/chandrashekar-ym_hashicorp-certified-terraform-associate-activity-7192537859235409920-X15X/" rel="noopener noreferrer"&gt;Terraform Associate certification&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I wanted to explore Terraform for non-AWS use cases. At work, we are building a unified data platform for our data needs using Snowflake, so I thought I would try to automate Snowflake resource deployments using Terraform. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snowflake&lt;/strong&gt; is a cloud-native data platform offered as a SaaS. In the ever-evolving world of data platforms, it has emerged as a leading cloud-based data warehousing solution. &lt;/p&gt;

&lt;p&gt;Manual provisioning of Snowflake resources like Databases, Schemas, Tables, Grants, Warehouses etc. is time-consuming and prone to errors. This is where Infrastructure as Code (IaC) tools like Terraform and CI/CD pipelines using GitHub Actions make life easier. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terraform&lt;/strong&gt; is an open-source, cloud-agnostic IaC tool that allows us to define and provision cloud infrastructure using a high-level configuration language. Terraform supports different platforms through plugins called &lt;strong&gt;providers&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Actions&lt;/strong&gt; enables us to create efficient CI/CD pipelines based on the code in a GitHub repository. &lt;/p&gt;

&lt;p&gt;This blog post provides a step-by-step guide to deploying resources to Snowflake using Terraform and GitHub Actions, leveraging our repository &lt;a href="https://github.com/shekar-ym/cicd-with-terraform-for-snowflake" rel="noopener noreferrer"&gt;cicd-with-terraform-for-snowflake&lt;/a&gt;. We will deploy a Database, a Schema, Grants and a Table onto two different environments (DEV and PROD) on a Snowflake instance, and use release-based deployment pipelines to deploy to the PROD environment. &lt;/p&gt;

&lt;h3&gt;Some prerequisites and assumptions:&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;An AWS account with an S3 bucket and DynamoDB table already provisioned - we will use these for the Terraform remote backend and state locking.&lt;/li&gt;
&lt;li&gt;AWS credentials (Access Key and Secret Access Key) for the above AWS account configured as GitHub repository secrets.&lt;/li&gt;
&lt;li&gt;A Snowflake instance, a user with &lt;code&gt;ACCOUNTADMIN&lt;/code&gt; permissions and the related key-pair authentication set up, with the corresponding private key configured as a GitHub repository secret.&lt;/li&gt;
&lt;li&gt;A Snowflake role &lt;code&gt;TF_READER&lt;/code&gt; pre-created in the Snowflake instance. We will deploy grants for this role using Terraform resources.&lt;/li&gt;
&lt;/ul&gt;
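&lt;p&gt;As a sketch of the first prerequisite, the Terraform remote backend can be wired to the S3 bucket and DynamoDB table with a configuration along these lines (the bucket, key and table names below are placeholders, not the ones used in the repository):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Placeholder names - replace with your own bucket and table.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"
    key            = "snowflake/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;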

&lt;h2&gt;Setting up the repository:&lt;/h2&gt;

&lt;p&gt;Clone the repository to your local machine:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/shekar-ym/cicd-with-terraform-for-snowflake.git
cd cicd-with-terraform-for-snowflake
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Repository Structure:&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;.github/workflows/&lt;/code&gt;: Contains the GitHub Actions workflow files that automate the deployment process.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;dev&lt;/code&gt; and &lt;code&gt;prod&lt;/code&gt; folders contain the Terraform files for the &lt;code&gt;development&lt;/code&gt; and &lt;code&gt;production&lt;/code&gt; environments respectively.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;module&lt;/code&gt; folder contains the Terraform module definition which will be used for provisioning Snowflake resources like Database, Schema and Tables. &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Terraform Modules:&lt;/h2&gt;

&lt;p&gt;Modules are containers for multiple resources that are used together. Modules are used to package and reuse resource configurations with Terraform.&lt;/p&gt;

&lt;p&gt;In our case, we will use modules to define the Snowflake database, schema, grants, table and warehouse resource configurations. This module will be reused to create resources for the development and production environments.&lt;/p&gt;

&lt;p&gt;For example, below is the module resource configuration for database and schema:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "snowflake_database" "tf_database" {
  name                        = var.database
  comment                     = "Database for ${var.env_name}"
  data_retention_time_in_days = var.time_travel_in_days

}

resource "snowflake_schema" "tf_schema" {
  name     = var.schema
  database = snowflake_database.tf_database.name
  comment  = "Schema for ${var.env_name}"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Refer to the GitHub &lt;a href="https://github.com/shekar-ym/cicd-with-terraform-for-snowflake/tree/main/modules/snowflake_resources" rel="noopener noreferrer"&gt;repository&lt;/a&gt; for the other module resource configurations.&lt;/p&gt;
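&lt;p&gt;Each environment folder can then reuse the module by calling it with environment-specific inputs. Below is a minimal sketch for the DEV environment (the input names are illustrative; check the repository for the module's exact interface):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Illustrative module call - input names are assumptions.
module "snowflake_resources" {
  source              = "../modules/snowflake_resources"
  env_name            = "DEV"
  database            = "TASTY_BYTES_DEV"
  schema              = "RAW_POS"
  time_travel_in_days = 1
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;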

&lt;h2&gt;GitHub Actions Workflow:&lt;/h2&gt;

&lt;p&gt;The workflow triggers a deployment to the &lt;code&gt;DEV&lt;/code&gt; environment when you merge code changes to the &lt;code&gt;main&lt;/code&gt; branch via a pull request. &lt;/p&gt;

&lt;p&gt;The workflow also includes an infrastructure code scan step for the Terraform code. It runs the &lt;a href="https://github.com/bridgecrewio/checkov" rel="noopener noreferrer"&gt;Checkov&lt;/a&gt; action, which scans infrastructure-as-code, open-source packages, container images, and CI/CD configurations to identify misconfigurations, vulnerabilities, and license compliance issues.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    security-scan-terraform-code:
        name: Security scan (terraform code)
        runs-on: ubuntu-latest
        steps:
          - name: Checkout repo
            uses: actions/checkout@v4

          - name: Run Checkov action
            id: checkov
            uses: bridgecrewio/checkov-action@master
            with:
              directory: .
              soft_fail: true
              download_external_modules: true
              framework: terraform
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is also a preview step: when you create a pull request, it performs a &lt;code&gt;terraform plan&lt;/code&gt; to give you an overview of which resources will be deployed or changed. &lt;/p&gt;

&lt;p&gt;When you create a &lt;code&gt;release/*&lt;/code&gt; branch from the &lt;code&gt;main&lt;/code&gt; branch, it triggers a deployment to the &lt;code&gt;PROD&lt;/code&gt; environment.&lt;/p&gt;
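&lt;p&gt;Put together, the trigger section of such a workflow might look like the following minimal sketch (not the exact workflow file from the repository):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sketch only: pushes to main deploy to DEV, release/* branches deploy to PROD.
on:
  pull_request:
    branches:
      - main          # PR against main triggers the preview (terraform plan)
  push:
    branches:
      - main          # merge into main triggers a deployment to DEV
      - 'release/**'  # a release branch triggers a deployment to PROD
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;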

&lt;h2&gt;Deploying the resources to DEV:&lt;/h2&gt;

&lt;p&gt;Let us make some changes to the Terraform code, push the changes to the GitHub repo and create a pull request (PR). Below is how the deployment pipeline looks:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuzsoofcxf3dobdb0oyg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuzsoofcxf3dobdb0oyg.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And below are the steps performed as part of preview:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkke6gg3xjlwvp9n8j5pj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkke6gg3xjlwvp9n8j5pj.png" alt=" " width="800" height="540"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us merge our pull request to &lt;code&gt;main&lt;/code&gt; branch.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmw8a32h90g2tovppko8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmw8a32h90g2tovppko8.png" alt=" " width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aeztpdipzas2y13nuyo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2aeztpdipzas2y13nuyo.png" alt=" " width="800" height="556"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the output of &lt;code&gt;terraform apply&lt;/code&gt; step:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc805mvl2jbizdskyxs1z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc805mvl2jbizdskyxs1z.png" alt=" " width="800" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us verify the resources on Snowflake. As you can see, the deployment pipeline created a database (&lt;code&gt;TASTY_BYTES_DEV&lt;/code&gt;), a schema (&lt;code&gt;RAW_POS&lt;/code&gt;) and a table (&lt;code&gt;MENU&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflmjlusbl2xh8uv9koan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fflmjlusbl2xh8uv9koan.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A new warehouse was also provisioned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44a3b91qrqwrmylniq3u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F44a3b91qrqwrmylniq3u.png" alt=" " width="800" height="508"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Deploying the resources to PROD:&lt;/h2&gt;

&lt;p&gt;Let us create a release branch from the &lt;code&gt;main&lt;/code&gt; branch. This triggers a deployment to the &lt;code&gt;PROD&lt;/code&gt; environment. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5963reunajvhswt3qaor.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5963reunajvhswt3qaor.png" alt=" " width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As mentioned earlier, there is a preview step which performs a &lt;code&gt;terraform plan&lt;/code&gt; to give you an overview of which resources will be deployed or changed.&lt;/p&gt;

&lt;p&gt;Since I have configured environment protection rules, the pipeline pauses for a manual approval before deploying to PROD. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihpi450j56fxqs7nyy1b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fihpi450j56fxqs7nyy1b.png" alt=" " width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Approving this triggers the deployment to PROD. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim2mrk71zi9htuyki4j3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fim2mrk71zi9htuyki4j3.png" alt=" " width="800" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the output of the &lt;code&gt;terraform apply&lt;/code&gt; step (for PROD):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mmszejeiml6n5liuaig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5mmszejeiml6n5liuaig.png" alt=" " width="800" height="544"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Completed pipeline:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1vky6c52anz6evraygn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw1vky6c52anz6evraygn.png" alt=" " width="800" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Let us verify the resources on Snowflake for the &lt;code&gt;PROD&lt;/code&gt; environment. As you can see, the deployment pipeline created a database (&lt;code&gt;TASTY_BYTES_PROD&lt;/code&gt;), a schema (&lt;code&gt;RAW_POS&lt;/code&gt;) and a table (&lt;code&gt;MENU&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gtmku54dksjxicswt1k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gtmku54dksjxicswt1k.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A new warehouse for &lt;code&gt;PROD&lt;/code&gt; was also provisioned.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1abupz8zv64zkbaiouq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1abupz8zv64zkbaiouq.png" alt=" " width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;Conclusion:&lt;/h2&gt;

&lt;p&gt;Automating the deployment of Snowflake resources using Terraform and GitHub Actions streamlines the process, reduces the potential for errors, and ensures that infrastructure is managed consistently. This setup not only saves time but also enhances the reliability and reproducibility of deployments. By following the steps outlined in this guide, you can leverage the power of IaC and CI/CD to manage your Snowflake infrastructure efficiently.&lt;/p&gt;

&lt;p&gt;Thanks for reading. Please share your feedback in the comments section.&lt;/p&gt;

</description>
      <category>snowflake</category>
      <category>terraform</category>
      <category>githubactions</category>
      <category>devops</category>
    </item>
    <item>
      <title>Harnessing Managed GitHub Action Runners on AWS CodeBuild for Efficient DevOps Workflows</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Fri, 10 May 2024 12:37:48 +0000</pubDate>
      <link>https://dev.to/aws-builders/harnessing-managed-github-action-runners-on-aws-codebuild-for-efficient-devops-workflows-57ma</link>
      <guid>https://dev.to/aws-builders/harnessing-managed-github-action-runners-on-aws-codebuild-for-efficient-devops-workflows-57ma</guid>
      <description>&lt;p&gt;Few weeks back, AWS announced a &lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/04/aws-codebuild-managed-github-action-runners/" rel="noopener noreferrer"&gt;new&lt;/a&gt; feature involving AWS CodeBuild, that allows you to configure self-hosted GitHub action runners in CodeBuild containers to process GitHub Action workflow jobs. This feature allows CodeBuild projects to receive GitHub Actions workflow job events and run them on CodeBuild ephemeral hosts. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is AWS CodeBuild?&lt;/strong&gt;&lt;br&gt;
AWS CodeBuild is a robust, managed continuous integration service that automates code compilation, testing, and artifact production without requiring the management of underlying servers. &lt;/p&gt;

&lt;p&gt;Traditionally, our approach involved using EC2 Spot Instances with custom AMIs in a scheduled auto-scaling setup to accommodate the fluctuating demands of GitHub Action runners. This method, while effective, often led to bottlenecks due to time-zone variances across our DevOps teams, resulting in delayed CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;The introduction of managed GitHub Action runners by AWS offers a promising alternative, integrating seamlessly with AWS services like IAM, Secrets Manager, CloudTrail, and VPC for enhanced security and operational efficiency.&lt;/p&gt;

&lt;p&gt;Let us explore this feature step by step by connecting one of my GitHub &lt;a href="https://github.com/shekar-ym/codebuild-github-action-runner" rel="noopener noreferrer"&gt;repositories&lt;/a&gt; to a CodeBuild project and seeing how it picks up a queued workflow job and performs the GitHub Actions steps configured in it. The workflow is quite a simple one - it builds a container image from a Dockerfile, tags it and pushes it to an existing Amazon Elastic Container Registry (ECR) repository.&lt;/p&gt;

&lt;p&gt;Step 1: CodeBuild Project&lt;/p&gt;

&lt;p&gt;Let us navigate to the CodeBuild console and create a project named &lt;strong&gt;github-action-runners&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5tyoahc2aycwd8ktqua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj5tyoahc2aycwd8ktqua.png" alt=" " width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the source provider as GitHub and connect using OAuth. We can also use a PAT (personal access token) to connect to GitHub, but to keep things simple, let us stick to OAuth:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bm1mlg5800i6ikovzl9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7bm1mlg5800i6ikovzl9.png" alt=" " width="800" height="361"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Authorize aws-codesuite to access your GitHub repositories:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkclaxsghd4tni5by2wzt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkclaxsghd4tni5by2wzt.png" alt=" " width="634" height="675"&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Once the connection is successful, you should be able to select your repository from the list:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju5h8yzrczukvh8wm72y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fju5h8yzrczukvh8wm72y.png" alt=" " width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Webhook&lt;/strong&gt;, let us select &lt;em&gt;Rebuild every time a code change is pushed to this repository&lt;/em&gt; and set the &lt;strong&gt;Event type&lt;/strong&gt; to &lt;em&gt;WORKFLOW_JOB_QUEUED&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy9pegqnu1y6hnfq2bnb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqy9pegqnu1y6hnfq2bnb.png" alt=" " width="800" height="695"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The following event types are available: PUSH, PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED, PULL_REQUEST_REOPENED, PULL_REQUEST_MERGED, PULL_REQUEST_CLOSED, WORKFLOW_JOB_QUEUED, RELEASED and PRERELEASED.&lt;/p&gt;

&lt;p&gt;You can also add additional conditions for "Start build" and "Don't start build", if needed.&lt;/p&gt;

&lt;p&gt;Next, let us choose the compute environment for the CodeBuild project. CodeBuild offers on-demand and reserved-capacity options. For images, you can choose CodeBuild-managed images or a custom Docker image. With the custom image option, you can pick an image from Amazon ECR (in your account or another account), or an image hosted in an external Docker registry. &lt;/p&gt;

&lt;p&gt;Let us stick to default values for our use case. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieh8iemtrzo1ex7q688v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fieh8iemtrzo1ex7q688v.png" alt=" " width="800" height="791"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note that we can always override these options by using a label in our GitHub Actions workflow file. More on this in later sections.&lt;/p&gt;

&lt;p&gt;CodeBuild can create a service role with the required permissions for you, or you can create and choose your own custom role for the project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpq43uo1s6geg1yvj4wk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdpq43uo1s6geg1yvj4wk.png" alt=" " width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next is the buildspec YAML file. In our case, the buildspec is ignored when CodeBuild runs GitHub Actions workflow jobs; CodeBuild overrides it with the commands that set up the self-hosted runner.&lt;/p&gt;

&lt;p&gt;Let us go ahead and create the CodeBuild project. Once creation is successful, you can see that a webhook has been created on your GitHub repository. Navigate to Settings --&amp;gt; Webhooks to see it: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zd9x4ognlehe4kax8ri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9zd9x4ognlehe4kax8ri.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: GitHub Actions Workflow Configuration &lt;/p&gt;

&lt;p&gt;Below is the workflow file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;name: HelloWorld app
on:
  repository_dispatch:
    types: [webhook_triggered]
  pull_request:
    branches:
      - main
  push:
    branches:
      - main
env:
  AWS_REGION: 'us-east-1'

jobs:
  build:
    name: Build Docker Image
    runs-on: codebuild-github-action-runner-${{ github.run_id }}-${{ github.run_attempt }}-al2-5.0-small
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set outputs
        id: vars
        run: echo "short_sha=$(git rev-parse --short HEAD)" &amp;gt;&amp;gt; $GITHUB_OUTPUT

      - name: Setup AWS ECR Details
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{env.AWS_REGION}} 

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build, tag, and push image to Amazon ECR
        id: build
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: ${{ secrets.AWS_ECR_REPO }}
          IMAGE_TAG: v1.0.0.${{ steps.vars.outputs.short_sha }}
        run: |
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image_tag=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" &amp;gt;&amp;gt; $GITHUB_OUTPUT
          echo "${{ github.event.action }}"
    outputs:
      image_tag: ${{ steps.build.outputs.image_tag }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From the above, you can see that I have the AWS credentials and the ECR private repository name configured as repository secrets and referenced in the workflow. Refer to my repository &lt;a href="https://github.com/shekar-ym/codebuild-github-action-runner" rel="noopener noreferrer"&gt;here&lt;/a&gt; for the Dockerfile and other scripts. &lt;/p&gt;

&lt;p&gt;Note that the &lt;code&gt;runs-on&lt;/code&gt; label has a value in the format&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;codebuild-&amp;lt;project-name&amp;gt;-${{ github.run_id }}-${{ github.run_attempt }}-&amp;lt;image&amp;gt;-&amp;lt;image-version&amp;gt;-&amp;lt;instance-size&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;project-name&lt;/strong&gt; = the name of the CodeBuild project we created in Step 1 above. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;image&lt;/strong&gt;-&lt;strong&gt;image-version&lt;/strong&gt;-&lt;strong&gt;instance-size&lt;/strong&gt; = al2-5.0-small, which indicates I am overriding the values configured in the Environment section of the CodeBuild project. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Refer to the &lt;em&gt;Supported compute images&lt;/em&gt; table in this &lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/action-runner.html#action-runner-questions" rel="noopener noreferrer"&gt;page&lt;/a&gt; for the list of compute images that CodeBuild provides.&lt;/p&gt;
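&lt;p&gt;For example, assuming the same project, requesting larger Amazon Linux 2 compute for the job would only require changing this label (verify the exact image and size values against the supported compute images table):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Hypothetical override: same project, but a large instance size.
runs-on: codebuild-github-action-runner-${{ github.run_id }}-${{ github.run_attempt }}-al2-5.0-large
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;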

&lt;p&gt;When you push any changes or create a PR against the &lt;code&gt;main&lt;/code&gt; branch, the workflow is triggered. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw668vtxxjfywo9x7k2v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyw668vtxxjfywo9x7k2v.png" alt=" " width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The webhook associated with this repository notifies the CodeBuild project, which now acts as our GitHub Actions runner and picks up the job, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccezs0p7zmedh6x4myxr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccezs0p7zmedh6x4myxr.png" alt=" " width="800" height="327"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you navigate to the CodeBuild project in the AWS console, you can see that a CodeBuild run is in progress, executing the GitHub Actions steps:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxx5x7hxnph0bz2xpauy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxx5x7hxnph0bz2xpauy.png" alt=" " width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g9m7qw7a94ds4no3n1e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g9m7qw7a94ds4no3n1e.png" alt=" " width="800" height="513"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, on the GitHub Settings page, you can see that an &lt;strong&gt;ephemeral&lt;/strong&gt; GitHub Actions runner powered by CodeBuild is at work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy98w1wvgwcirfr4s4vc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy98w1wvgwcirfr4s4vc1.png" alt=" " width="800" height="330"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a snapshot of the container image that was built and pushed to an existing ECR repository as part of the GitHub workflow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3w0xvzhjzs7sxokhyo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3w0xvzhjzs7sxokhyo1.png" alt=" " width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Looking Ahead&lt;/strong&gt;&lt;br&gt;
This integration not only simplifies our operations but also potentially reduces costs and improves performance compared to our previous setups. I plan to further explore and optimize this feature, even considering custom runner images to align with our security standards.&lt;/p&gt;

&lt;p&gt;I'm excited to see how this feature evolves and look forward to sharing more insights, including a detailed cost comparison among various runner options.&lt;/p&gt;

&lt;p&gt;Please let me know what you think by adding your comments.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>github</category>
      <category>codebuild</category>
    </item>
    <item>
      <title>Teslamate data logger, AWS and Raspberry PI</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Mon, 18 Dec 2023 01:02:03 +0000</pubDate>
      <link>https://dev.to/aws-builders/tesla-data-logger-aws-and-raspberry-pi-o48</link>
      <guid>https://dev.to/aws-builders/tesla-data-logger-aws-and-raspberry-pi-o48</guid>
      <description>&lt;p&gt;In this blog post, I want to share how I installed and configured &lt;strong&gt;Teslamate&lt;/strong&gt; - A powerful, self-hosted data logger for my Tesla car on a Raspberry PI running Ubuntu. I will also list other AWS Cloud Options I considered to host this data logger and the challenges I faced. &lt;/p&gt;

&lt;p&gt;I own and drive a Tesla Model 3 here in Australia. Tesla provides an API for your car which can be polled to collect logging data and store it in a local database. This data can then be used to build visualization dashboards and perform any kind of data analysis using Grafana.&lt;/p&gt;

&lt;h2&gt;
  
  
  Teslamate
&lt;/h2&gt;

&lt;p&gt;Before going further, a few details on &lt;strong&gt;Teslamate&lt;/strong&gt;. It is open source software developed by Adrian Kumpf and other contributors, and is distributed under the MIT License. More details about this project can be found &lt;a href="https://github.com/teslamate-org/teslamate" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;There are many installation options; I will be using the advanced &lt;a href="https://docs.teslamate.org/docs/guides/traefik" rel="noopener noreferrer"&gt;option&lt;/a&gt; with Traefik, Let's Encrypt &amp;amp; HTTP Basic Auth.&lt;/p&gt;

&lt;p&gt;I have a few Raspberry Pis running on my home network, which I use for learning and experimentation. I had written a blog &lt;a href="https://dev.to/aws-builders/aws-systems-manager-to-manage-raspberry-pi-running-ubuntu-server-3e81"&gt;here&lt;/a&gt; on how I manage them using AWS Systems Manager.&lt;/p&gt;

&lt;p&gt;Before settling on a Raspberry Pi to host Teslamate, with cost being the priority, I explored a few options on AWS:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;AWS EC2 instance: This was the simplest option. I could have used an EC2 instance eligible under the AWS Free Tier plus an Elastic IP address (not free). However, it would start costing me after the free tier expires. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/" rel="noopener noreferrer"&gt;Deploy applications on Amazon ECS using Docker Compose&lt;/a&gt; - I tried this option using ECS Fargate. However, I ran into issues such as ECS Fargate not supporting bind mounts. Also, this solution creates and deploys a CloudFormation stack with Amazon EFS resources, which in turn adds to the cost of hosting a simple personal data logger.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/containers/automated-software-delivery-using-docker-compose-and-amazon-ecs/" rel="noopener noreferrer"&gt;Automated software delivery using Docker Compose and Amazon ECS&lt;/a&gt; - I tried this, an extension of the above solution with continuous delivery using AWS CodePipeline, AWS CodeBuild, and Amazon ECR. It worked fine. However, the stack provisions resources like a load balancer and CI/CD pipelines, which again was overkill for a simple personal data logger. Also, major changes to the Teslamate code base are infrequent. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So, I zeroed in on hosting Teslamate on one of my Raspberry Pis (which I have already paid for) and using my personal domain (hosted on AWS Route 53) to access the Teslamate web app and dashboards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Steps to install and configure Teslamate on Raspberry Pi
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;First, let us update the available list of packages and upgrade any out-of-date packages for Ubuntu.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get update &amp;amp;&amp;amp; sudo apt-get upgrade

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faymwcgfoq6ve8qkjwnn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faymwcgfoq6ve8qkjwnn6.png" alt=" " width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I already have Docker and docker-compose installed on my Raspberry Pi. If not, you can install them with the following command:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install docker.io docker-compose
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the current user to interact with Docker, we must add the user to the “docker” group. You can do this with the command below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo usermod -aG docker $USER
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You need to log out and log back in for the change to take effect.&lt;/p&gt;
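
&lt;p&gt;If you want to avoid logging out, you can instead start a subshell with the new group membership applied (this only affects the current shell session):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;newgrp docker
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;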

&lt;ul&gt;
&lt;li&gt;Next, we create a new directory &lt;strong&gt;&lt;em&gt;teslamate&lt;/em&gt;&lt;/strong&gt; and navigate to it.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir teslamate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd ~/teslamate
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Follow the steps in the Teslamate advanced installation &lt;a href="https://docs.teslamate.org/docs/guides/traefik" rel="noopener noreferrer"&gt;guide&lt;/a&gt; to create three files inside the &lt;em&gt;&lt;strong&gt;teslamate&lt;/strong&gt;&lt;/em&gt; folder you created above.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;docker-compose.yml&lt;/p&gt;

&lt;p&gt;.env&lt;/p&gt;

&lt;p&gt;.htpasswd&lt;/p&gt;
&lt;/blockquote&gt;
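
&lt;p&gt;The &lt;code&gt;.htpasswd&lt;/code&gt; file holds the credentials for HTTP Basic Authentication. One way to generate an entry is with the &lt;code&gt;htpasswd&lt;/code&gt; utility from the &lt;code&gt;apache2-utils&lt;/code&gt; package (the username and password below are placeholders):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt install apache2-utils
# print a bcrypt-hashed entry for user "admin" and write it to the file
htpasswd -nbB admin 'your-secure-password' &amp;gt; .htpasswd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;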

&lt;p&gt;With this advanced installation option, a few things to note:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We use a reverse proxy (Traefik) which terminates HTTPS traffic. Publicly accessible TeslaMate and Grafana sit behind this reverse proxy.&lt;/p&gt;

&lt;p&gt;Let's Encrypt certificate is automatically acquired by Traefik.&lt;/p&gt;

&lt;p&gt;TeslaMate service is protected by HTTP Basic Authentication.&lt;/p&gt;

&lt;p&gt;Grafana is configured to require a login.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;Now, we can start the stack using:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker-compose up -d
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will download the latest version of Teslamate along with Grafana, Postgres and Mosquitto.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdv86gwe2zfkxhws99kw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcdv86gwe2zfkxhws99kw.png" alt=" " width="800" height="120"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As mentioned earlier, I have my own personal domain (chandra-ym.com) registered and managed on Amazon Route 53. I created an A record in the related hosted zone to direct traffic for &lt;a href="https://teslamate.chandra-ym.com/" rel="noopener noreferrer"&gt;teslamate.chandra-ym.com&lt;/a&gt; to the IP address of my Raspberry Pi.&lt;/li&gt;
&lt;/ul&gt;
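
&lt;p&gt;For reference, an A record like this can also be created with the AWS CLI; the hosted zone ID and IP address below are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "teslamate.chandra-ym.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "203.0.113.10"}]
      }
    }]
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;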

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3djq75c3pu5tz2zni8r.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3djq75c3pu5tz2zni8r.jpeg" alt=" " width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;When I first logged into &lt;a href="https://teslamate.chandra-ym.com/" rel="noopener noreferrer"&gt;https://teslamate.chandra-ym.com/&lt;/a&gt;, which is protected by HTTP Basic Authentication, I had to enter the username and password used to generate the .htpasswd file created above. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next, you are presented with the login screen, where you enter the Tesla API credentials: an access token and a refresh token. I use the iOS app &lt;strong&gt;Auth for Tesla&lt;/strong&gt; to generate these tokens based on my actual Tesla account credentials. These are temporary tokens with an expiration time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy93cq2cp0s2x3l8mmubr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy93cq2cp0s2x3l8mmubr.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once you log in with the tokens, you can see the Teslamate web interface with a vehicle summary:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l1khfxbdkfsjra4wim3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5l1khfxbdkfsjra4wim3.png" alt=" " width="800" height="437"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Some of the visualisations built into Teslamate:&lt;/p&gt;

&lt;p&gt;Overview:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs531k94tk3nccrwbywly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs531k94tk3nccrwbywly.png" alt=" " width="800" height="426"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;States:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgheprg5ev53naw5mzmup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgheprg5ev53naw5mzmup.png" alt=" " width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Projected Range:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu41j3jqj70a1m3r97mmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu41j3jqj70a1m3r97mmr.png" alt=" " width="800" height="434"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drive Details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohg9pi45bspkluiwyopy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fohg9pi45bspkluiwyopy.png" alt=" " width="800" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3qdgya1m1blr1v3yns6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb3qdgya1m1blr1v3yns6.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Charging Stats:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08utlat0zt6sn9df7qpn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F08utlat0zt6sn9df7qpn.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yos5e7x9j5hxx1ss9qw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yos5e7x9j5hxx1ss9qw.png" alt=" " width="800" height="394"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drive Stats:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bro3laiofk722cz12w2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5bro3laiofk722cz12w2.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Drives:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g0hdvpviou31r9frsgf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5g0hdvpviou31r9frsgf.png" alt=" " width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Efficiency:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmpk2fwx5lpxud3vc6m0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmpk2fwx5lpxud3vc6m0.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Charge Details:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u2ijxs5ffgoot9v3jn4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7u2ijxs5ffgoot9v3jn4.png" alt=" " width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Charges:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1qbhfj9kimwh1yowtib.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1qbhfj9kimwh1yowtib.png" alt=" " width="800" height="200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;At this point in time, the Raspberry Pi seems to be the cheapest option to host this data logger. I know there are many free and paid apps out there for the same purpose. I will continue to explore cost-effective options on AWS to host Teslamate and share related blog posts in the future. &lt;/p&gt;

&lt;p&gt;Thanks for reading. Please provide any feedback in the comments section. &lt;/p&gt;

</description>
      <category>tesla</category>
      <category>aws</category>
      <category>route</category>
      <category>opensource</category>
    </item>
    <item>
      <title>DynamoDB import from S3</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Wed, 24 Aug 2022 07:49:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/dynamodb-import-from-s3-4oc7</link>
      <guid>https://dev.to/aws-builders/dynamodb-import-from-s3-4oc7</guid>
      <description>&lt;p&gt;I recently &lt;a href="https://www.linkedin.com/posts/chandrashekar-ym_aws-training-database-activity-6958998113994883072-oxCW" rel="noopener noreferrer"&gt;attended&lt;/a&gt; AWS ANZ Database Roadshow 2022 at Sydney AWS office. One of the session that I was looking forward to was about DynamoDB. As expected, this session delivered by one of the Solutions Architect was the highlight of this event. &lt;/p&gt;

&lt;p&gt;Every year, I look forward to Jeff Barr's &lt;a href="https://aws.amazon.com/blogs/aws/amazon-prime-day-2022-aws-for-the-win/" rel="noopener noreferrer"&gt;blog&lt;/a&gt; about Amazon Prime Day and the stats/metrics for the various AWS services that power the Amazon site to support the huge traffic on this day. DynamoDB powers multiple high-traffic Amazon properties and systems including Alexa, the Amazon.com sites, and all Amazon fulfilment centers. Over the course of Prime Day, these sources made trillions of calls to the DynamoDB API. DynamoDB maintained high availability while delivering single-digit millisecond responses and peaking at 105.2 million requests per second.&lt;/p&gt;

&lt;p&gt;So I diligently follow all updates related to DynamoDB and its features. One such recently announced feature is &lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/database/amazon-dynamodb-can-now-import-amazon-s3-data-into-a-new-table/" rel="noopener noreferrer"&gt;DynamoDB import from S3&lt;/a&gt;&lt;/strong&gt;. This is a fully managed feature that doesn’t require writing code or managing infrastructure. I wanted to explore this feature and get some hands-on experience. &lt;/p&gt;

&lt;p&gt;Before this feature was announced, there were very limited options for bulk importing data into DynamoDB. Such pipelines required building and operating custom data loaders on a fleet of virtual instances, along with monitoring and exception handling. &lt;/p&gt;

&lt;p&gt;DynamoDB import from S3 helps you bulk import terabytes of data from Amazon S3 into a new DynamoDB table, with no code or servers required. The data in S3 should be in CSV, DynamoDB JSON or Amazon Ion format, with GZIP or ZSTD compression, or no compression. Each record in the S3 data should have a partition key and, optionally, a sort key to match the key schema of the target table. &lt;/p&gt;

&lt;p&gt;For any errors encountered while parsing the data or during the import, a log entry is created for each error in &lt;strong&gt;Amazon CloudWatch Logs&lt;/strong&gt;. If the number of errors exceeds 10,000, logging stops but the import continues. &lt;/p&gt;

&lt;p&gt;Another important thing to note: the DynamoDB import from S3 feature &lt;strong&gt;&lt;u&gt;does not consume any write capacity units&lt;/u&gt;&lt;/strong&gt;, so you don't need to provision additional capacity when creating the new table.&lt;/p&gt;

&lt;p&gt;To test this feature, I downloaded a dataset from &lt;a href="https://www.kaggle.com/datasets/shivamb/netflix-shows" rel="noopener noreferrer"&gt;Kaggle&lt;/a&gt;. This dataset consists of listings of all the movies and TV shows available on Netflix, along with details such as cast, directors, ratings, release year and duration, and has close to 9,000 records.&lt;/p&gt;

&lt;p&gt;Now let us use this feature to import the above dataset into a DynamoDB table. I already have an S3 bucket called &lt;code&gt;dynamodb-import-s3-demo&lt;/code&gt;, and the dataset CSV file is uploaded under the folder path &lt;code&gt;/netflix-shows-movies&lt;/code&gt; as shown below: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobvucwpynm34dsvfupm9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobvucwpynm34dsvfupm9.png" alt=" " width="800" height="234"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the dataset, I will use the columns &lt;em&gt;title&lt;/em&gt; and &lt;em&gt;show_id&lt;/em&gt; as the partition key and sort key, respectively, for the DynamoDB table. Below is a snapshot of the dataset being used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63761qeihjt5wbgoqdyf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63761qeihjt5wbgoqdyf.png" alt=" " width="800" height="382"&gt;&lt;/a&gt; &lt;/p&gt;
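
&lt;p&gt;For those who prefer the CLI over the console steps that follow, the same import can be started with &lt;code&gt;aws dynamodb import-table&lt;/code&gt;. Below is a sketch using my bucket layout; the table name is illustrative, so check the CLI reference for the full parameter set:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws dynamodb import-table \
  --s3-bucket-source S3Bucket=dynamodb-import-s3-demo,S3KeyPrefix=netflix-shows-movies/ \
  --input-format CSV \
  --table-creation-parameters '{
    "TableName": "netflix_titles",
    "AttributeDefinitions": [
      {"AttributeName": "title", "AttributeType": "S"},
      {"AttributeName": "show_id", "AttributeType": "S"}
    ],
    "KeySchema": [
      {"AttributeName": "title", "KeyType": "HASH"},
      {"AttributeName": "show_id", "KeyType": "RANGE"}
    ],
    "BillingMode": "PAY_PER_REQUEST"
  }'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;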

&lt;p&gt;Step 1: From the AWS console, I choose the &lt;strong&gt;Imports from S3&lt;/strong&gt; option under the DynamoDB service. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh0myrhjhw5wr3mh2pr5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmh0myrhjhw5wr3mh2pr5.png" alt=" " width="283" height="590"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 2: Click the &lt;strong&gt;Import from S3&lt;/strong&gt; button to navigate to &lt;strong&gt;Import options&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the &lt;strong&gt;S3 URL&lt;/strong&gt;, enter the path to the source S3 bucket and the prefix in URI format.&lt;/li&gt;
&lt;li&gt;Select &lt;strong&gt;This AWS account&lt;/strong&gt; as the bucket owner.&lt;/li&gt;
&lt;li&gt;Set the remaining fields as shown in the image below and click &lt;strong&gt;Next&lt;/strong&gt;:&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycg8l704v6avqharuhfh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycg8l704v6avqharuhfh.png" alt=" " width="800" height="772"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;Step 3: On the next screen (Destination table - new table):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Table Name&lt;/strong&gt; - Enter a name for the DynamoDB table.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Partition key&lt;/strong&gt; - as mentioned above, enter &lt;em&gt;title&lt;/em&gt;. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sort key&lt;/strong&gt; - as mentioned above, enter &lt;em&gt;show_id&lt;/em&gt;.
&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Table Settings&lt;/strong&gt;, leave the Default settings selected. The DynamoDB table will be created with default RCUs and WCUs. As mentioned earlier, the import process will not consume any of the table's capacity. &lt;/li&gt;
&lt;li&gt;Choose &lt;strong&gt;Next&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6ijyz4kd5av74fsthp9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6ijyz4kd5av74fsthp9.png" alt=" " width="800" height="451"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcumyrgqm92xv65kp4z0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcumyrgqm92xv65kp4z0f.png" alt=" " width="800" height="722"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Review the details and click &lt;strong&gt;Import&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrvdcdcts7rvzcm82ai0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftrvdcdcts7rvzcm82ai0.png" alt=" " width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wyzpmfxmb3kuvsztesp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7wyzpmfxmb3kuvsztesp.png" alt=" " width="800" height="614"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: An import job is created. It takes some time for the import to complete. Monitor the status of the job until it moves to &lt;strong&gt;Complete&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgcv0i8vk3wlho0wr8ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvgcv0i8vk3wlho0wr8ak.png" alt=" " width="800" height="157"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The dataset has 8,808 records, and 8,807 of those were successfully imported. One record failed to import, and the failure was logged in &lt;strong&gt;CloudWatch Log groups&lt;/strong&gt;, as shown below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetj5bjgab066la8prnrd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetj5bjgab066la8prnrd.png" alt=" " width="800" height="385"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4m9wru4qxhxjqthp9j0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc4m9wru4qxhxjqthp9j0.png" alt=" " width="800" height="256"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below are the records that were imported into the DynamoDB table as items. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foesoikb46n171gw4xaf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foesoikb46n171gw4xaf5.png" alt=" " width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Common errors we might encounter include syntax errors, formatting issues and records that are missing the partition key or sort key. Please refer to the &lt;a href="https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/S3DataImport.Validation.html#S3DataImport.Validation.Errors" rel="noopener noreferrer"&gt;Validation errors&lt;/a&gt; section in the Developer Guide for more details. &lt;/p&gt;
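&lt;p&gt;The same import can also be started from the AWS CLI using the &lt;code&gt;aws dynamodb import-table&lt;/code&gt; command. Below is a minimal sketch; the bucket name, key prefix, table name and key attribute are placeholders, and the key schema must match your own dataset:&lt;/p&gt;

```shell
# Start an S3 -> DynamoDB import. A new table is always created by the import;
# bucket, prefix, table name and key attribute below are placeholders.
aws dynamodb import-table \
  --s3-bucket-source S3Bucket=my-import-bucket,S3KeyPrefix=data/ \
  --input-format CSV \
  --table-creation-parameters '{
      "TableName": "imported_table",
      "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
      "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
      "BillingMode": "PAY_PER_REQUEST"
  }'

# Monitor the import job using the ImportArn returned by the command above.
aws dynamodb describe-import --import-arn "IMPORT_ARN"
```

&lt;p&gt;Failed records show up in the CloudWatch Log groups for the import, as described above.&lt;/p&gt;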

&lt;p&gt;One limitation I see with this feature is that data can only be imported into a new table that is created during the import process; existing DynamoDB tables cannot be used as the import target. &lt;/p&gt;

&lt;p&gt;Cost-wise, the DynamoDB import from S3 feature costs much less than paying standard write costs to load the same data with a custom solution.&lt;/p&gt;

&lt;p&gt;Thanks for reading this blog. Please share your comments and feedback. It helps me to learn and grow. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>cloud</category>
      <category>database</category>
    </item>
    <item>
      <title>AWS Systems Manager to manage Raspberry Pi running Ubuntu server</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Sun, 17 Oct 2021 21:22:17 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-systems-manager-to-manage-raspberry-pi-running-ubuntu-server-3e81</link>
      <guid>https://dev.to/aws-builders/aws-systems-manager-to-manage-raspberry-pi-running-ubuntu-server-3e81</guid>
      <description>&lt;p&gt;As part of this blog, I wanted to share my learnings towards &lt;a href="https://aws.amazon.com/systems-manager/" rel="noopener noreferrer"&gt;AWS Systems Manager&lt;/a&gt;(previously AWS Simple Systems Manager - SSM) and how I configured Systems Manager to manage and perform auto patching on a hybrid environment. The hybrid environment here is a simple &lt;a href="https://www.raspberrypi.org/" rel="noopener noreferrer"&gt;Raspberry Pi&lt;/a&gt; running one my home network, with Ubuntu server on it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Systems Manager?
&lt;/h2&gt;

&lt;p&gt;AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services, and then automate operational tasks across your AWS resources. Systems Manager helps you maintain security and compliance by scanning your managed instances and reporting on (or taking corrective action on) any policy violations it detects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task
&lt;/h2&gt;

&lt;p&gt;When your environment consists of servers / VMs running on the AWS cloud, in on-premises data centers and on computers like a Raspberry Pi, it is difficult to manage them separately across multiple tools and interfaces. Having a single interface to manage both cloud and non-cloud servers reduces admin overhead and streamlines the process. &lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;AWS Systems Manager provides a single interface to manage, administer and get operational insights from servers running in the AWS cloud and in on-premises / hybrid environments. &lt;/p&gt;

&lt;h3&gt;
  
  
  SSM Agent
&lt;/h3&gt;

&lt;p&gt;Systems Manager achieves this through the SSM Agent installed on the servers. The SSM Agent comes pre-installed on instances created from certain AMIs on the AWS cloud. For on-premises servers and VMs in a hybrid environment, the agent needs to be installed and configured manually. &lt;/p&gt;

&lt;p&gt;The solution involves the following steps: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Create a Hybrid Activation on AWS Systems Manager. This activation also creates an IAM role and grants &lt;code&gt;AssumeRole&lt;/code&gt; permission to the SSM service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Install and configure the SSM agent on the Raspberry Pi running Ubuntu Server 20.04 to enable the SSM service to communicate with the server. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;blockquote&gt;
&lt;p&gt;Optional. Setup an Inventory association on AWS Systems Manager to collect information about software and settings for a target set of managed instances. &lt;/p&gt;
&lt;/blockquote&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure Patch Manager on AWS Systems Manager to automate patching of the managed instances on a pre-configured schedule. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
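&lt;p&gt;For reference, the activation in step 1 can also be created from the AWS CLI. A minimal sketch, assuming the IAM role for managed instances already exists; the role name, description and instance name below are placeholders:&lt;/p&gt;

```shell
# Create a hybrid activation; the response contains the ActivationId and
# ActivationCode needed to register the on-premises server.
aws ssm create-activation \
  --description "Raspberry Pi hybrid activation" \
  --default-instance-name "raspberry-pi" \
  --iam-role "SSMServiceRole" \
  --registration-limit 1 \
  --region ap-southeast-2
```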

&lt;h4&gt;
  
  
  Raspberry Pi 4 and Ubuntu Server 20.04
&lt;/h4&gt;

&lt;p&gt;Since I am learning Linux administration, I have a Raspberry Pi set up with Ubuntu Server 20.04 on my home WiFi for this purpose. If you want to set up the same, you can purchase a Raspberry Pi from &lt;a href="https://www.amazon.com.au/Raspberry-Model-Complete-Starter-Pack/dp/B082VQLQDQ/ref=sr_1_1_sspa?crid=3U808LWD6KK9R&amp;amp;dchild=1&amp;amp;keywords=labists+raspberry+pi+4+complete+starter+kit&amp;amp;qid=1634433434&amp;amp;sr=8-1-spons&amp;amp;psc=1&amp;amp;spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEzTExKVUJGTzBJTUU5JmVuY3J5cHRlZElkPUEwNDkzNDMwVUw5N0dKS1hQQ0xZJmVuY3J5cHRlZEFkSWQ9QTI1U1gySExIQjNSTlAmd2lkZ2V0TmFtZT1zcF9hdGYmYWN0aW9uPWNsaWNrUmVkaXJlY3QmZG9Ob3RMb2dDbGljaz10cnVl" rel="noopener noreferrer"&gt;here&lt;/a&gt; and use the step-by-step instructions &lt;a href="https://ubuntu.com/tutorials/how-to-install-ubuntu-on-your-raspberry-pi#1-overview" rel="noopener noreferrer"&gt;here&lt;/a&gt; to install Ubuntu Server 20.04. I also set up the Ubuntu server to connect to my home WiFi. Since my home internet router has DHCP reservation enabled by default, the Ubuntu server always gets the same IP address when it connects to the WiFi network, so I need not worry about setting up a static IP separately.&lt;/p&gt;

&lt;h5&gt;
  
  
  Solution Step 1. Hybrid Activation on AWS Systems Manager
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;Log in to your AWS Console and navigate to AWS Systems Manager. Click on &lt;strong&gt;Hybrid Activations&lt;/strong&gt;, and then &lt;strong&gt;Create an Activation&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxluyhu7c2nuehuqu5cg3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxluyhu7c2nuehuqu5cg3.png" alt="Image1" width="451" height="301"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy4fv9qqrlvyrf2eihjh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foy4fv9qqrlvyrf2eihjh.png" alt="Image2" width="451" height="307"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the activation description and instance limit. The activation also creates an IAM role &lt;code&gt;AmazonEC2RunCommandForManagedInstances&lt;/code&gt;, which uses the IAM policy &lt;code&gt;AmazonSSMManagedInstanceCore&lt;/code&gt; and grants &lt;code&gt;AssumeRole&lt;/code&gt; permission to the SSM service.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc9ovpbrdplwh3bv3s58.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc9ovpbrdplwh3bv3s58.png" alt="Image3" width="451" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Successful creation of the activation provides an &lt;em&gt;Activation Code&lt;/em&gt; and &lt;em&gt;Activation ID&lt;/em&gt;. Please make a note of these two values, as they will be used at a later step to configure the SSM agent on the server.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hsig4h1675r7zn7wsks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hsig4h1675r7zn7wsks.png" alt="Image4" width="800" height="381"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Solution Step 2. Install and configure SSM agent on Ubuntu Server
&lt;/h5&gt;

&lt;ul&gt;
&lt;li&gt;SSH into the Ubuntu server with your credentials and run the following commands to install the SSM agent
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ubuntu:/$ mkdir /tmp/ssm
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ubuntu:/$ curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_arm64/amazon-ssm-agent.deb -o /tmp/ssm/amazon-ssm-agent.deb
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 29.6M  100 29.6M    0     0   675k      0  0:00:44  0:00:44 --:--:-- 1179k
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: Since the Raspberry Pi uses the ARM architecture, you need to download the corresponding (arm64) build of the SSM agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ubuntu:/$ sudo dpkg -i /tmp/ssm/amazon-ssm-agent.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Stop the SSM agent and register it using the &lt;em&gt;Activation Code&lt;/em&gt; and &lt;em&gt;Activation ID&lt;/em&gt; that you noted down in the previous step.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ubuntu:/$ sudo service amazon-ssm-agent stop
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ubuntu:/$ sudo amazon-ssm-agent -register -code "ACTIVATION_CODE" -id "ACTIVATION_ID" -region "ap-southeast-2"
Error occurred fetching the seelog config file path:  open /etc/amazon/ssm/seelog.xml: no such file or directory
Initializing new seelog logger
New Seelog Logger Creation Complete
2021-10-16 22:54:14 WARN Could not read InstanceFingerprint file: InstanceFingerprint does not exist.
2021-10-16 22:54:14 INFO No initial fingerprint detected, generating fingerprint file...
2021-10-16 22:54:15 INFO Successfully registered the instance with AWS SSM using Managed instance-id: mi-001e234567890dd12
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note: You can ignore the warning and the seelog error. Make sure you receive a message confirming that your instance/server has been registered with SSM.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start the SSM agent
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ubuntu@ubuntu:/$ sudo service amazon-ssm-agent start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you can see the registered instance / server under AWS Systems Manager --&amp;gt; Fleet Manager (earlier referred to as Managed Instances).&lt;/p&gt;
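&lt;p&gt;The registration can also be verified from the AWS CLI. A quick sketch, using the same region as this walkthrough:&lt;/p&gt;

```shell
# List managed (hybrid) instances along with their agent ping status.
aws ssm describe-instance-information \
  --filters "Key=ResourceType,Values=ManagedInstance" \
  --query "InstanceInformationList[].[InstanceId,PingStatus,PlatformName]" \
  --output table \
  --region ap-southeast-2
```

&lt;p&gt;A &lt;em&gt;PingStatus&lt;/em&gt; of Online indicates the agent is communicating with the SSM service.&lt;/p&gt;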

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuokedjls3hrov2fvxymu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuokedjls3hrov2fvxymu.png" alt="Image5" width="451" height="306"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiixisvhwbq63zrv6n5o2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiixisvhwbq63zrv6n5o2.png" alt="Image6" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Solution Step 3. [Optional] Setup an Inventory association
&lt;/h5&gt;

&lt;p&gt;An AWS Systems Manager Inventory association enables you to collect information about your instances and the software installed on them, helping you understand your system configurations and installed applications. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the AWS Systems Manager page, navigate to &lt;strong&gt;Inventory&lt;/strong&gt; section and then &lt;strong&gt;Setup Inventory&lt;/strong&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xdlzfhug3vrolduz3ra.png" alt="Image6.1" width="451" height="293"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s5t12n02mnzdy3okdbn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9s5t12n02mnzdy3okdbn.png" alt="Image7" width="451" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7gc4ll0gc5fhdve3ni5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7gc4ll0gc5fhdve3ni5.png" alt="Image8" width="451" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Leave the default settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9puo611td3f7mwhf59b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9puo611td3f7mwhf59b.png" alt="Image9" width="451" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjhvxa9rhqtlxqv60gh1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjhvxa9rhqtlxqv60gh1.png" alt="Image10" width="451" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the Inventory association is set up, it runs every 30 minutes to gather inventory from the managed instances. This makes use of the AWS Systems Manager document &lt;code&gt;AWS-GatherSoftwareInventory&lt;/code&gt;. You can verify this from the &lt;strong&gt;State Manager&lt;/strong&gt; section.&lt;/p&gt;
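&lt;p&gt;For reference, an equivalent association can be created from the AWS CLI. A sketch, using the managed instance ID from the registration output earlier as the target; the parameter selection is illustrative:&lt;/p&gt;

```shell
# Create an inventory association that runs every 30 minutes against the
# registered managed instance.
aws ssm create-association \
  --name "AWS-GatherSoftwareInventory" \
  --targets "Key=InstanceIds,Values=mi-001e234567890dd12" \
  --schedule-expression "rate(30 minutes)" \
  --parameters "applications=Enabled,networkConfig=Enabled"
```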

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwfiygw9u27rbgvk1bpf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwfiygw9u27rbgvk1bpf.png" alt="Image11" width="451" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyqv233f5spww9h6w6l1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyqv233f5spww9h6w6l1.png" alt="Image12" width="451" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ay7cti7qa6hk2q0cwoa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4ay7cti7qa6hk2q0cwoa.png" alt="Image13" width="451" height="153"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the status changes to &lt;em&gt;Success&lt;/em&gt;, you can view more details from the &lt;strong&gt;Resources&lt;/strong&gt; tab in &lt;strong&gt;State Manager&lt;/strong&gt; section. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkdsr44j8ovvcuheo4kr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxkdsr44j8ovvcuheo4kr.png" alt="Image14" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Information collected about the software and settings on the managed Ubuntu server are displayed in &lt;strong&gt;Inventory&lt;/strong&gt; section.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjva9gr8uulx4dzgfe5x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjva9gr8uulx4dzgfe5x.png" alt="Image15" width="451" height="278"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flolv2ujj3ey7566rwk4g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flolv2ujj3ey7566rwk4g.png" alt="Image16" width="451" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Solution Step 4. Configure Patch Manager
&lt;/h4&gt;

&lt;p&gt;AWS Systems Manager Patch Manager helps you select and deploy operating system and software patches automatically across large groups of Amazon EC2 or on-premises instances. &lt;/p&gt;

&lt;p&gt;Using patch baselines, you can auto-approve select categories of patches for installation, such as operating system or high-severity patches. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the AWS Systems Manager page, navigate to &lt;strong&gt;Patch Manager&lt;/strong&gt; section and then &lt;strong&gt;Configure patching&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbguqy9dum670rborgoob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbguqy9dum670rborgoob.png" alt="Image17" width="451" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk25z33p3ft5dtlnaumju.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk25z33p3ft5dtlnaumju.png" alt="Image18" width="451" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaoi3dyd62xqyay9f5md.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaoi3dyd62xqyay9f5md.png" alt="Image19" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Then you can define a maintenance window for patches so that they are only applied during preset times. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rxusj469l3lt2mqzrrx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5rxusj469l3lt2mqzrrx.png" alt="Image20" width="451" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhlrvlqpyxvaj1kdrzur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flhlrvlqpyxvaj1kdrzur.png" alt="Image21" width="451" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83hcj28bjnd04acpuvy2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F83hcj28bjnd04acpuvy2.png" alt="Image22" width="451" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see, AWS uses the &lt;code&gt;AWS-UbuntuDefaultPatchBaseline&lt;/code&gt; for patching Ubuntu servers/instances. This is the default patch baseline for Ubuntu provided by AWS.&lt;/p&gt;
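&lt;p&gt;You can list the AWS-provided patch baselines, including the Ubuntu default, from the AWS CLI. A sketch:&lt;/p&gt;

```shell
# List AWS-owned patch baselines and filter the result down to Ubuntu.
aws ssm describe-patch-baselines \
  --filters "Key=OWNER,Values=AWS" \
  --query "BaselineIdentities[?OperatingSystem=='UBUNTU']"
```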

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dqljdq1hdajb7ns1dir.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6dqljdq1hdajb7ns1dir.png" alt="Image23" width="477" height="254"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Systems Manager's maintenance window acts like glue for all the components in Patch Manager. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhevfamx9xbxiqunrf4p7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhevfamx9xbxiqunrf4p7.png" alt="Image24" width="451" height="118"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9luzjf1626fw20hw43r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa9luzjf1626fw20hw43r.png" alt="Image25" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the background, Systems Manager uses &lt;strong&gt;Run Command&lt;/strong&gt; to perform the patching task.&lt;/p&gt;
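&lt;p&gt;For an on-demand run outside the maintenance window, the same patching document can be invoked directly via Run Command. A sketch, targeting the managed instance ID from the registration output earlier:&lt;/p&gt;

```shell
# Run a compliance scan against the patch baseline; change Operation to
# "Install" to actually apply the approved patches.
aws ssm send-command \
  --document-name "AWS-RunPatchBaseline" \
  --targets "Key=InstanceIds,Values=mi-001e234567890dd12" \
  --parameters "Operation=Scan"
```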

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubbhdqp3madr6seb9tgs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fubbhdqp3madr6seb9tgs.png" alt="Image26" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The patching task is executed automatically at the preset time, and its details can be verified in the &lt;strong&gt;History&lt;/strong&gt; section. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpzl4cifxdz79av9qzhq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcpzl4cifxdz79av9qzhq.png" alt=" " width="800" height="295"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycmow9725rojsn7umjze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fycmow9725rojsn7umjze.png" alt=" " width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqufoxvy8w2o5h7yfant.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsqufoxvy8w2o5h7yfant.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Click on the &lt;strong&gt;View Output&lt;/strong&gt; to see the task execution details. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrtm5jbwaf8b4439c1zm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxrtm5jbwaf8b4439c1zm.png" alt=" " width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoyuzq4h38lgwbkzkbxf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyoyuzq4h38lgwbkzkbxf.png" alt=" " width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Using AWS Systems Manager's Patch Manager feature, I was able to successfully patch the Ubuntu server on my home WiFi and also set up a maintenance window to run the same activity at a preset time.&lt;/p&gt;

&lt;p&gt;Apart from Patch Manager and Inventory, AWS Systems Manager also provides features like Incident Manager, Parameter Store, Automation, Run Command and OpsCenter which I would like to explore in my future blogs. &lt;/p&gt;

&lt;p&gt;Thanks for reading my blog. Please share your comments and feedback.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>raspberrypi</category>
      <category>systemsmanager</category>
    </item>
    <item>
      <title>AWS CloudFormation - Retry Stack Operations</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Sun, 05 Sep 2021 11:34:32 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cloudformtion-retry-stack-operations-15d8</link>
      <guid>https://dev.to/aws-builders/aws-cloudformtion-retry-stack-operations-15d8</guid>
      <description>&lt;p&gt;As I continue my study for AWS Certified Solutions Architect Professional (SAP-C01) certification exam, I was practising the use of AWS CloudFormation service which provides an easy way to to model a collection of AWS and 3rd party resources, provision them quickly and consistently, and manage them through their lifecycles. &lt;/p&gt;

&lt;p&gt;As part of CloudFormation service, we create a &lt;strong&gt;template&lt;/strong&gt; that describes all the AWS resources that we need to create and manage, upload the &lt;strong&gt;template&lt;/strong&gt;, and CloudFormation service takes care of provisioning and configuring the resources and their dependencies as a &lt;strong&gt;stack&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Provisioning a &lt;strong&gt;stack&lt;/strong&gt; can fail for different reasons: errors in the template, typos or invalid values specified for the parameters, or issues outside the template such as IAM permission errors. When such errors occur, CloudFormation rolls back the stack to its previous stable state. If the error happened during &lt;strong&gt;stack&lt;/strong&gt; creation, CloudFormation deletes all the resources it created up to the point of the error. This rollback process can take a lot of time depending on the complexity of the template and the number of resources and dependencies involved. &lt;/p&gt;

&lt;p&gt;On 30-Aug-2021, AWS announced a new CloudFormation feature which allows us to &lt;strong&gt;disable&lt;/strong&gt; the automatic rollback, &lt;strong&gt;keep&lt;/strong&gt; the resources that were successfully created or updated before the error occurred, and &lt;strong&gt;retry&lt;/strong&gt; the stack operation from the point of failure. Details of this new feature can be read &lt;a href="https://aws.amazon.com/blogs/aws/new-for-aws-cloudformation-quickly-retry-stack-operations-from-the-point-of-failure/" rel="noopener noreferrer"&gt;here&lt;/a&gt;. It saves a lot of time by allowing us to fix the errors and retry the creation or update of the &lt;strong&gt;stack&lt;/strong&gt;. &lt;/p&gt;
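&lt;p&gt;As a rough sketch of what this looks like via the API (assuming boto3; the stack name and template body are placeholders), the same behaviour can be requested with the &lt;code&gt;DisableRollback&lt;/code&gt; flag on the create-stack call:&lt;/p&gt;

```python
def build_create_stack_args(stack_name, template_body):
    """Build arguments for CloudFormation's CreateStack call with automatic
    rollback disabled, equivalent to choosing 'Preserve successfully
    provisioned resources' in the console."""
    return {
        "StackName": stack_name,
        "TemplateBody": template_body,
        # Keep resources created before a failure instead of rolling them back
        "DisableRollback": True,
    }

# Placeholder names; requires boto3 and AWS credentials when actually run:
args = build_create_stack_args("wordpress-demo", "...template body...")
# import boto3
# boto3.client("cloudformation").create_stack(**args)
```

&lt;p&gt;The AWS CLI exposes the same option as &lt;code&gt;--disable-rollback&lt;/code&gt; on &lt;code&gt;create-stack&lt;/code&gt;.&lt;/p&gt;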

&lt;p&gt;I wanted to explore this new feature by trying out a sample CloudFormation template that AWS provides in their &lt;a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/sample-templates-applications-us-east-1.html" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;. This blog is about my learning and hands-on experience with it. The sample template installs and deploys a WordPress site onto a single EC2 instance with a local MySQL database for storage, and can be downloaded &lt;a href="https://s3.amazonaws.com/cloudformation-templates-us-east-1/WordPress_Single_Instance.template" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Since I wanted the stack creation to fail in order to test the new feature, I edited the CloudFormation template and set the default value for the EC2 instance type to the invalid value "t22.small".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0l6hxjt0c9cug1jfg98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy0l6hxjt0c9cug1jfg98.png" alt="Alt Text" width="800" height="333"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create a stack from this template, I uploaded the edited template on the CloudFormation console.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7azbe5312xv1ugkd23pf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7azbe5312xv1ugkd23pf.png" alt="Alt Text" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Then, I entered the name of the stack and filled in the parameter values. One of the parameters in the template chooses the web server EC2 instance type. Since I had defaulted this value to "t22.small", the same was set, as shown below. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssm2z7tclqid9sha8ufu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fssm2z7tclqid9sha8ufu.png" alt="image" width="800" height="525"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, on the next screen under &lt;strong&gt;Stack failure options&lt;/strong&gt;, I see a new option, &lt;strong&gt;Preserve successfully provisioned resources&lt;/strong&gt;, which keeps the resources that were already created when an error occurs. Failed resources are always rolled back to the last known stable state. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqc6z5tbebc1cw40tijx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flqc6z5tbebc1cw40tijx.png" alt="image" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I then reviewed the chosen configuration and clicked the &lt;strong&gt;Create stack&lt;/strong&gt; button. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;stack&lt;/strong&gt; creation process started and after a few minutes it failed because of an error: the creation of the WebServer EC2 instance failed, as I had selected an invalid EC2 instance type. The details can be viewed on the &lt;strong&gt;Events&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uh6d1o54tpqcqjp54no.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2uh6d1o54tpqcqjp54no.png" alt="image" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since I chose the option to preserve the provisioned resources, the &lt;strong&gt;WebServerSecurityGroup&lt;/strong&gt; that was created before the error was not rolled back and is still present. On the &lt;strong&gt;Resources&lt;/strong&gt; tab, you can see its status is &lt;code&gt;CREATE_COMPLETE&lt;/code&gt;, while the &lt;strong&gt;WebServer&lt;/strong&gt; is in the &lt;code&gt;CREATE_FAILED&lt;/code&gt; state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu6uv1l0b4u9ufb2nxqw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyu6uv1l0b4u9ufb2nxqw.png" alt="image" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The rollback was paused and I got the following options to proceed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Retry&lt;/strong&gt; – To retry the stack operation without any change. This option is useful if a resource failed to provision due to an issue outside the template. I can fix the issue and then retry from the point of failure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt; – To update the template or the parameters before retrying the stack creation. The stack update starts from where the last operation was interrupted by an error.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Rollback&lt;/strong&gt; – To roll back to the last known stable state. This is similar to the default CloudFormation behaviour.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb1ccnf456rr17ueftvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnb1ccnf456rr17ueftvs.png" alt="image" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h5&gt;
  
  
  Fixing the error
&lt;/h5&gt;

&lt;p&gt;Since I know what caused the error (selecting invalid instance type for parameter &lt;strong&gt;InstanceType&lt;/strong&gt;), I choose &lt;strong&gt;Update&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I don't need to upload a modified template to fix this. Under &lt;strong&gt;Parameters&lt;/strong&gt;, I chose a valid InstanceType to fix the error.&lt;/p&gt;
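&lt;p&gt;For reference, here is a minimal sketch (assuming boto3; the names are placeholders) of the equivalent API call, which reuses the previous template and overrides only the failing parameter:&lt;/p&gt;

```python
def build_update_stack_args(stack_name, instance_type):
    """Arguments for CloudFormation's UpdateStack call that fixes a single
    parameter while reusing the previously uploaded template."""
    return {
        "StackName": stack_name,
        "UsePreviousTemplate": True,  # no need to re-upload the template
        "Parameters": [
            {"ParameterKey": "InstanceType", "ParameterValue": instance_type},
        ],
    }

args = build_update_stack_args("wordpress-demo", "t2.small")
# boto3.client("cloudformation").update_stack(**args)
```

&lt;p&gt;In a real call, each of the remaining template parameters would also be listed with &lt;code&gt;UsePreviousValue&lt;/code&gt; set to true so their values are kept unchanged.&lt;/p&gt;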

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkn6i9rby58dxj4jec4m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbkn6i9rby58dxj4jec4m.png" alt="image" width="800" height="81"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Change set preview&lt;/strong&gt; shows that the update will modify the EC2 instance that is in the &lt;code&gt;CREATE_FAILED&lt;/code&gt; state and try to provision it with the updated instance type. Then, I chose the &lt;strong&gt;Update stack&lt;/strong&gt; option.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzks46v48sdzrc2w3e7lq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzks46v48sdzrc2w3e7lq.png" alt="image" width="800" height="267"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time, the stack creation was successful, with status &lt;strong&gt;UPDATE_COMPLETE&lt;/strong&gt;. Below are the screenshots of the &lt;strong&gt;Events&lt;/strong&gt; and &lt;strong&gt;Resources&lt;/strong&gt; tabs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kha5gitbwkjlfx1gi0v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7kha5gitbwkjlfx1gi0v.png" alt="image" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tlb11e4zekd715ofnzx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6tlb11e4zekd715ofnzx.png" alt="image" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The expected output of the template is a WordPress website, which can be seen on the &lt;strong&gt;Outputs&lt;/strong&gt; tab.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9i3ezfj4sby9lzmlmj1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb9i3ezfj4sby9lzmlmj1.png" alt="image" width="800" height="160"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The template I chose for my learning was quite simple, yet retrying the stack operation from the point of failure still saved me time. With more complex templates, the time savings from this new capability would be even greater. &lt;/p&gt;

&lt;p&gt;Thanks for reading the blog post. I wanted to write down my understanding of this new feature of CloudFormation. I am sure many AWS developers will appreciate it.&lt;/p&gt;

&lt;p&gt;Please point out any mistakes and provide your feedback. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>AWS Lambda Layers</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Sun, 22 Aug 2021 04:36:51 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-lambda-layers-214</link>
      <guid>https://dev.to/aws-builders/aws-lambda-layers-214</guid>
      <description>&lt;p&gt;As part of my preparation towards AWS Certified Solutions Architect Professional (SAP-C01) certification exam, I am currently studying the details of AWS Lambda included in Serverless section of the exam blue print. Exam blue print or exam guide can be found &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-sa-pro/AWS-Certified-Solutions-Architect-Professional_Exam-Guide.pdf" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;I want to share my learnings on Lambda Layers in this blog. As you might be aware, AWS Lambda is a compute service that lets us run our code without provisioning any virtual machines or compute infrastructure; it is a Function-as-a-Service offering from AWS. A Lambda function uses a runtime (for example, Python) and runs in a managed runtime environment when it is invoked. As a service, you are billed for the duration that the function runs.&lt;/p&gt;

&lt;p&gt;When a Lambda function is created, the code is packaged as a .zip deployment package. As you add libraries and other dependencies referenced by the Lambda function, creating and uploading the deployment package can slow down development and deployment. &lt;/p&gt;

&lt;p&gt;In November 2018, AWS introduced Lambda Layers. A Lambda layer is a .zip file archive that contains additional code, data, libraries, a custom runtime, or configuration files. The .zip archive can be loaded into a layer from an S3 bucket or from your local machine. Any layers a function uses are extracted into the &lt;strong&gt;/opt&lt;/strong&gt; folder inside the runtime environment. Lambda Layers promote code sharing and separation of responsibilities, which enables faster development. &lt;/p&gt;
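&lt;p&gt;As an illustrative sketch (the module name and source here are made up), a layer archive for Python runtimes places everything under a &lt;code&gt;python/&lt;/code&gt; prefix so it lands in &lt;code&gt;/opt/python&lt;/code&gt;, which is on the runtime's import path:&lt;/p&gt;

```python
import io
import zipfile

def build_layer_zip(module_name, module_source):
    """Return the bytes of a layer .zip archive containing one Python module
    under the python/ prefix, which Lambda extracts to /opt/python."""
    buffer = io.BytesIO()
    with zipfile.ZipFile(buffer, "w") as archive:
        # The "python/" prefix is what makes the module importable in Lambda
        archive.writestr("python/" + module_name + ".py", module_source)
    return buffer.getvalue()

layer_bytes = build_layer_zip("greeting", "def hello():\n    return 'hi'\n")
with zipfile.ZipFile(io.BytesIO(layer_bytes)) as archive:
    print(archive.namelist())  # prints ['python/greeting.py']
```

&lt;p&gt;The resulting .zip can then be uploaded to the layer directly or via an S3 bucket.&lt;/p&gt;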

&lt;p&gt;Lambda Layers offer the following advantages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Layers allow us to use new runtimes which are not currently supported by default with Lambda.&lt;/li&gt;
&lt;li&gt;Layers also allow libraries to be externalised, that is, contained in a separate package, enforcing separation of concerns between dependencies and the actual function code. &lt;/li&gt;
&lt;li&gt;They enable faster deployments, because the code that needs to be packaged and uploaded is smaller. &lt;/li&gt;
&lt;li&gt;Layers can be reused by other Lambda functions within an AWS account and shared between AWS accounts or shared publicly with developer communities.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I also tested the example provided by AWS to see how layers work in practice. The related AWS blog can be found &lt;a href="https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-programming-language-and-share-common-components/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I created a Lambda function using the following piece of Python code, with Python 3.8 as the runtime.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;numpy&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;scipy.spatial&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ConvexHull&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Using NumPy&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;random matrix_a =&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;matrix_a&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matrix_a&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;random matrix_b =&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;matrix_b&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matrix_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;matrix_a * matrix_b = &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matrix_a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dot&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;matrix_b&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Using SciPy&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;num_points&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_points&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;random points:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;points&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;np&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;num_points&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;point&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;points&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;point&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;hull&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ConvexHull&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;points&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The smallest convex set containing all&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;num_points&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;points has&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hull&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;simplices&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sides,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;connecting points:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;simplex&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;hull&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;simplices&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;simplex&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;-&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;simplex&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Screenshot of the Lambda function:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa7088a9ydlifafqhkzp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwa7088a9ydlifafqhkzp.png" alt="image" width="800" height="613"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After I deployed my Lambda function and tested it by invoking it, I received the following error:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1asj8jktfc3ngk22elps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1asj8jktfc3ngk22elps.png" alt="image" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The error occurred because there was no module named &lt;strong&gt;numpy&lt;/strong&gt; in the runtime environment: the default Python 3.8 runtime we chose does not include the &lt;strong&gt;NumPy&lt;/strong&gt; module. To resolve the error, we either have to create a deployment package that includes this module and upload it to Lambda, or use Lambda Layers.&lt;/p&gt;
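&lt;p&gt;A minimal sketch (assuming boto3; the function name, account ID and layer version below are placeholders) of how a layer can be attached to an existing function via the API rather than the console:&lt;/p&gt;

```python
def build_attach_layer_args(function_name, layer_arns):
    """Arguments for Lambda's UpdateFunctionConfiguration call that attaches
    the given layer version ARNs to a function."""
    return {
        "FunctionName": function_name,
        "Layers": list(layer_arns),
    }

# Placeholder function name, account ID and layer version:
args = build_attach_layer_args(
    "matrix-demo",
    ["arn:aws:lambda:us-east-1:123456789012:layer:AWSLambda-Python38-SciPy1x:1"],
)
# boto3.client("lambda").update_function_configuration(**args)
```

&lt;p&gt;Note that the call replaces the function's whole layer list, so any layers already attached must be included in it.&lt;/p&gt;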

&lt;p&gt;To use layers, I created a layer from the corresponding section of the Lambda function configuration:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkiriunq4aybp3hcxbg65.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkiriunq4aybp3hcxbg65.png" alt="image" width="800" height="90"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS provides the following three options to configure a layer. I chose to use an AWS-provided layer. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqxtn3ndrmxpsxyve6ko.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqxtn3ndrmxpsxyve6ko.png" alt="image" width="800" height="532"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I selected the layer &lt;strong&gt;AWSLambda-Python38-SciPy1x&lt;/strong&gt;, which provides the required &lt;strong&gt;NumPy&lt;/strong&gt; module and also the &lt;strong&gt;SciPy&lt;/strong&gt; module, which can be used for advanced spatial algorithms.&lt;/p&gt;

&lt;p&gt;I then invoked the Lambda function again. This time, it executed successfully, performing the matrix multiplication. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkubc77dbx3a70513w9lj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkubc77dbx3a70513w9lj.png" alt="image" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To conclude, I used a Lambda Layer to provide the required modules for the Lambda function. No modules had to be uploaded manually, since I used the externalised set of libraries provided by AWS. &lt;/p&gt;

&lt;p&gt;Thanks for reading this blog. I wanted to write down my understanding of the Lambda Layers feature.&lt;/p&gt;

&lt;p&gt;Please point out any mistakes and provide your feedback.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Cloud Resume Challenge - My Journey</title>
      <dc:creator>Chandrashekar Y M</dc:creator>
      <pubDate>Mon, 04 Jan 2021 07:49:47 +0000</pubDate>
      <link>https://dev.to/aws-builders/cloud-resume-challenge-my-journey-50db</link>
      <guid>https://dev.to/aws-builders/cloud-resume-challenge-my-journey-50db</guid>
      <description>&lt;h2&gt;
  
  
  My Story
&lt;/h2&gt;

&lt;p&gt;I'm a Systems Administrator and Configuration Management specialist in my day job. I have minimal experience with coding/scripting, but I do know how to read and understand a piece of code (JavaScript, Python) and what it does. &lt;/p&gt;

&lt;p&gt;As part of my journey through learning AWS and getting certified, I came across the &lt;strong&gt;Cloud Resume Challenge&lt;/strong&gt; floated online (LinkedIn/Reddit) by &lt;strong&gt;Forrest Brazeal&lt;/strong&gt; (Cloud Architect, AWS Serverless Hero). Details of this challenge can be found &lt;a href="https://cloudresumechallenge.dev/instructions/" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I knew I was too late to join this challenge (the last date set by Forrest Brazeal for a code review was 31-July-2020) due to my preparation for AWS certifications, but I still wanted to take it up and give it a try. I joined the Discord channel run by Forrest to see how others completed the challenge and to take some help from that wonderful community. I would like to thank Chris Nagy, as I referred to his &lt;a href="https://blog.heyitschris.com/" rel="noopener noreferrer"&gt;blog&lt;/a&gt; and GitHub repos whenever I needed help.&lt;/p&gt;

&lt;h2&gt;
  
  
  Now, the Challenge
&lt;/h2&gt;

&lt;h3&gt;
  
  
  AWS Certification
&lt;/h3&gt;

&lt;p&gt;Being completely new to the AWS world, I achieved the AWS Certified Cloud Practitioner certification in Feb 2019. Then I went on to achieve AWS Certified Solutions Architect Associate (March 2020), AWS Certified SysOps Administrator (July 2020) and AWS Certified Developer Associate (Oct 2020). Currently, I am preparing for the AWS Certified Solutions Architect Professional exam and would like to pass it as soon as possible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Front end- HTML / CSS
&lt;/h3&gt;

&lt;p&gt;I chose a template from &lt;a href="https://colorlib.com/" rel="noopener noreferrer"&gt;Colorlib&lt;/a&gt; and modified it by removing additional pages and links, keeping it as minimalistic as possible. HTML/CSS experience from the early days of my career, plus Google, helped with editing the template. The HTML page also includes a JavaScript snippet that updates and fetches the visitor count from the "back end". &lt;/p&gt;

&lt;h3&gt;
  
  
  Static S3 Website
&lt;/h3&gt;

&lt;p&gt;Based on the knowledge I acquired while preparing for the AWS certifications, hosting a static website on S3 and using CloudFront for content distribution was easy. I purchased a domain name using AWS Route 53 and configured it to use the CloudFront distribution. I also made use of AWS Certificate Manager (ACM) to procure an SSL certificate for the site.&lt;/p&gt;
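&lt;p&gt;As a sketch of the hosting setup (assuming boto3; the bucket and document names are placeholders), enabling static website hosting on the S3 bucket boils down to one configuration call:&lt;/p&gt;

```python
def build_website_config(index_doc="index.html", error_doc="error.html"):
    """Configuration dict for S3's PutBucketWebsite call, naming the
    documents served for the root path and for errors."""
    return {
        "IndexDocument": {"Suffix": index_doc},
        "ErrorDocument": {"Key": error_doc},
    }

config = build_website_config()
# Placeholder bucket name; requires boto3 and AWS credentials when run:
# boto3.client("s3").put_bucket_website(
#     Bucket="my-resume-bucket", WebsiteConfiguration=config)
```

&lt;p&gt;CloudFront then uses the bucket's website endpoint as its origin, and Route 53 points the custom domain at the distribution.&lt;/p&gt;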

&lt;h3&gt;
  
  
  Back-end
&lt;/h3&gt;

&lt;p&gt;Backend infrastructure and logic were needed to update and retrieve the visitor count from a database table. This involved AWS resources like API Gateway, Lambda and DynamoDB.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;DynamoDB: DynamoDB is AWS's fast and flexible NoSQL database service offering for any scale. I created a simple DynamoDB table with one item to store and update the visitor count. The Atomic Counter feature of DynamoDB comes in handy here. An atomic counter is a numeric attribute that is incremented unconditionally, without interfering with other write requests. With an atomic counter, the updates are not idempotent; in other words, the numeric value increments each time you call the UpdateItem operation. This operation is implemented in the Lambda function (more details below).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Lambda: AWS Lambda is a compute service that lets you run code without provisioning or managing servers. I created a Python-based Lambda function, which queries the DynamoDB table and updates the visitor count item. As mentioned earlier, I utilised the &lt;code&gt;update_item&lt;/code&gt; operation on the DynamoDB table to increment the numeric value. Since I didn't have Python scripting experience, I referred to the Lambda function code from the GitHub repos of Chris Nagy and Bansi Mendapara. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;API Gateway: Amazon API Gateway provides the option to create and manage APIs to back-end systems running on Amazon EC2, AWS Lambda, or any publicly addressable web service. In our case, the API Gateway exposes a REST API endpoint, which is called by the JavaScript snippet embedded in the front-end HTML page on every page visit/refresh, to update and fetch the visitor count from the DynamoDB table through the Lambda function. Enabling CORS (Cross-Origin Resource Sharing) on the API Gateway resource is mandatory for the browser to accept the response. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
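&lt;p&gt;The atomic-counter flow described above can be sketched as a minimal Python Lambda handler. This is only an illustration, not the actual project code; the table name, key and attribute names are hypothetical, and the real repos referenced above may differ.&lt;/p&gt;

```python
import json

TABLE_NAME = "visitor-count-table"  # hypothetical table name

def _get_table():
    # Imported lazily; boto3 is available by default in the Lambda runtime.
    import boto3
    return boto3.resource("dynamodb").Table(TABLE_NAME)

def lambda_handler(event, context, table=None):
    table = table or _get_table()
    # ADD increments the attribute unconditionally (atomic counter), so
    # concurrent requests don't interfere -- but the call is not idempotent:
    # every invocation adds 1.
    response = table.update_item(
        Key={"id": "visitors"},                      # hypothetical key
        UpdateExpression="ADD visit_count :inc",
        ExpressionAttributeValues={":inc": 1},
        ReturnValues="UPDATED_NEW",
    )
    count = int(response["Attributes"]["visit_count"])
    return {
        "statusCode": 200,
        # CORS header, so the browser accepts the cross-origin response
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"count": count}),
    }
```

&lt;p&gt;The optional &lt;code&gt;table&lt;/code&gt; parameter is a small design choice that makes the handler unit-testable without AWS access.&lt;/p&gt;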

&lt;h3&gt;
  
  
  Back end - Infrastructure as Code
&lt;/h3&gt;

&lt;p&gt;I initially created the back-end components like DynamoDB, Lambda and API Gateway separately using the AWS Console and configured them to work together to update and provide the visitor count for the front-end HTML. But the requirement was to make use of an AWS SAM (Serverless Application Model) template to define these back-end resources as Infrastructure as Code (IaC) and deploy them using SAM commands. I did have a basic understanding of SAM from preparing for the AWS Certified Developer certification. Again, this blog &lt;a href="https://blog.heyitschris.com/posts/get-your-foot-in-the-door-with-sam/" rel="noopener noreferrer"&gt;post&lt;/a&gt; from Chris Nagy helped me better understand the use of SAM for building and deploying serverless applications. As a next improvement, I want to create and deploy even the front-end resources (S3 bucket, CloudFront distribution, Route 53 record configuration) as a SAM template (IaC).&lt;/p&gt;

&lt;p&gt;Here is the designer view of the CloudFormation template built and deployed by SAM:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fezf5y54oxuo21qhojtjf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fezf5y54oxuo21qhojtjf.png" alt="Alt Text" width="779" height="637"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Front end - CI/CD
&lt;/h3&gt;

&lt;p&gt;Another requirement was to use GitHub repositories to store the front-end / back-end code and make use of GitHub Actions to achieve continuous integration and deployment (CI/CD). I had never used GitHub Actions before, so it was a new learning for me. GitHub Actions simplifies and automates many steps in creating, updating and deploying the resources. For front-end CI/CD, I used GitHub Actions to configure AWS credentials, deploy the changes to the S3 bucket (which stores the HTML/CSS/JS/image content) and then invalidate the CloudFront distribution. GitHub secrets were used to securely store environment variables like AWS access keys. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foxuy6l667tury4xsjt8f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Foxuy6l667tury4xsjt8f.png" alt="Alt Text" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Back end - CI/CD
&lt;/h3&gt;

&lt;p&gt;A separate GitHub repo was created to store the back-end code, which included the Lambda function, the SAM template and Python tests. A corresponding GitHub Actions workflow was created to configure AWS credentials, run the Python tests, and execute the SAM build and deploy commands. Again, GitHub secrets were used to securely store environment variables like AWS access keys. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgux3kspbl4mb0r2wvncp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fi%2Fgux3kspbl4mb0r2wvncp.png" alt="Alt Text" width="800" height="417"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is the final (not really!!) version of my one page resume: &lt;a href="https://www.chandraym.com/" rel="noopener noreferrer"&gt;https://www.chandraym.com/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>challenge</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
