<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: John Walker</title>
    <description>The latest articles on DEV Community by John Walker (@uzusan).</description>
    <link>https://dev.to/uzusan</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F834275%2Fa1416451-b51f-4137-9e45-442647b7fc3c.jpg</url>
      <title>DEV Community: John Walker</title>
      <link>https://dev.to/uzusan</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/uzusan"/>
    <language>en</language>
    <item>
      <title>Implementing Guardrails on AWS Bedrock AgentCore</title>
      <dc:creator>John Walker</dc:creator>
      <pubDate>Sat, 31 Jan 2026 21:56:23 +0000</pubDate>
      <link>https://dev.to/uzusan/implementing-guardrails-on-aws-bedrock-agentcore-acj</link>
      <guid>https://dev.to/uzusan/implementing-guardrails-on-aws-bedrock-agentcore-acj</guid>
      <description></description>
    </item>
    <item>
      <title>Creating a multi-agent deployment with Strands SDK and AWS Bedrock AgentCore</title>
      <dc:creator>John Walker</dc:creator>
      <pubDate>Sat, 31 Jan 2026 21:42:00 +0000</pubDate>
      <link>https://dev.to/uzusan/creating-a-multi-agent-deployment-with-strands-sdk-and-aws-bedrock-agentcore-2mcj</link>
      <guid>https://dev.to/uzusan/creating-a-multi-agent-deployment-with-strands-sdk-and-aws-bedrock-agentcore-2mcj</guid>
      <description></description>
    </item>
    <item>
      <title>Deploying Terraform Code via AWS CodeBuild and AWS CodePipeline</title>
      <dc:creator>John Walker</dc:creator>
      <pubDate>Sun, 02 Feb 2025 23:16:37 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-terraform-code-via-aws-codebuild-and-aws-codepipeline-2l0</link>
      <guid>https://dev.to/aws-builders/deploying-terraform-code-via-aws-codebuild-and-aws-codepipeline-2l0</guid>
      <description>&lt;p&gt;In a previous post i demonstrated how to use Terraform to deploy an Amazon API Gateway backed by Lambda. This was an example of how you can use Terraform as Infrastructure as Code to manage your resources in AWS.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/aws-builders/deploying-amazon-api-gateway-and-lambda-with-terraform-1i2o"&gt;https://dev.to/aws-builders/deploying-amazon-api-gateway-and-lambda-with-terraform-1i2o&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, deploying the Terraform code to AWS meant manually running the terraform plan and apply steps locally each time we wanted to update the API Gateway or our Lambda code. We can improve on this by automating the deployment of the Terraform code using AWS CodePipeline and CodeBuild.&lt;/p&gt;

&lt;p&gt;To do this, we will still need to deploy the CI/CD pipeline itself locally at first (or from a machine in AWS), and re-run that process whenever the pipeline itself changes. Once in place, the pipeline will automatically deploy any code committed to the GitHub repository into AWS, and it should need only minimal maintenance, since the pipeline changes far less often than the API Gateway and Lambda code.&lt;/p&gt;
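&lt;p&gt;That initial local deployment of the pipeline is just the standard Terraform workflow run once against the pipeline configuration itself (shown here as a sketch; the plan file name is arbitrary):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# One-off bootstrap, run locally or from a machine in AWS
terraform init
terraform plan -out=pipeline.tfplan
terraform apply pipeline.tfplan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;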

&lt;p&gt;To ensure we don't just blindly deploy code that hasn't been checked, we'll split this out into stages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;CodePipeline&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download Source Code from API Gateway Repository&lt;/li&gt;
&lt;li&gt;Run a Planning Step in AWS CodeBuild.

&lt;ul&gt;
&lt;li&gt;Download and Install Terraform&lt;/li&gt;
&lt;li&gt;Initialise the Terraform Environment with an S3 Backend&lt;/li&gt;
&lt;li&gt;Run the Terraform Plan, and save the output to an Artifact&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Send an Email via SNS to say the pipeline is awaiting approval

&lt;ul&gt;
&lt;li&gt;Await Manual Approval&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Run an Apply Step in AWS CodeBuild.

&lt;ul&gt;
&lt;li&gt;Download and Install Terraform&lt;/li&gt;
&lt;li&gt;Initialise the Terraform Environment with an S3 Backend&lt;/li&gt;
&lt;li&gt;Run the Terraform Apply using the Artifact from the Planning stage&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This is what we are creating in terms of the flow:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo6a269rievod8wn7hnw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbo6a269rievod8wn7hnw.png" alt="A Flow Chart showing data coming from GitHub into AWS and into a CodePipline with a source stage, AWS CodeBuild for planning, a manual approval, another CodeBuild for applying with an SNS to email linked to the manual approval" width="800" height="510"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform - main.tf
&lt;/h3&gt;

&lt;p&gt;For the main Terraform setup, we need some prerequisites: the backend and provider configuration.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }

  # S3 backend configuration
  backend "s3" {
    bucket = "REPLACE_WITH_YOUR_TERRAFORM_STATE_BUCKET"
    key    = "REPLACE_WITH_FOLDER_NAME/terraform.tfstate"
    region = "eu-west-1" # backend blocks cannot reference variables
    encrypt = true
  }
}

provider "aws" {
  region = var.aws_region
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This configures Terraform to use the AWS provider and stores the state file (which tracks changes to your infrastructure) in S3. You should create an S3 bucket, reference it here, and choose a folder name (such as ci-cd-pipeline) for the state file. Note that the backend block cannot reference Terraform variables, so the bucket, key and region must be literal values (or be supplied at init time via -backend-config).&lt;/p&gt;
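&lt;p&gt;If you don't yet have a state bucket, a minimal sketch of one is below. It should be created from a separate Terraform configuration (with local state), since the backend bucket can't manage its own state; the bucket name is a placeholder:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Deployed separately from the pipeline configuration
resource "aws_s3_bucket" "terraform_state" {
  bucket = "REPLACE_WITH_YOUR_TERRAFORM_STATE_BUCKET"
}

# Versioning protects against accidental state corruption
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Encrypt state at rest, matching encrypt = true in the backend block
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;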

&lt;h3&gt;
  
  
  S3 Artifacts - main.tf
&lt;/h3&gt;

&lt;p&gt;For the CodePipeline, we need to pass artifacts between the different stages (mainly the plan file), so we need somewhere to store them while the pipeline runs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_s3_bucket" "codepipeline_bucket" {
  bucket_prefix = "tf-pipeline-artifacts-"
  force_destroy = true
}

resource "aws_s3_bucket_ownership_controls" "codepipeline_bucket" {
  bucket = aws_s3_bucket.codepipeline_bucket.id
  rule {
    object_ownership = "BucketOwnerPreferred"
  }
}

resource "aws_s3_bucket_acl" "codepipeline_bucket" {
  depends_on = [aws_s3_bucket_ownership_controls.codepipeline_bucket]
  bucket     = aws_s3_bucket.codepipeline_bucket.id
  acl        = "private"
}

resource "aws_s3_bucket_versioning" "codepipeline_bucket" {
  bucket = aws_s3_bucket.codepipeline_bucket.id
  versioning_configuration {
    status = "Enabled"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Notifications - main.tf
&lt;/h3&gt;

&lt;p&gt;When we have the approval step, we'll need to notify someone that an approval is ready, so we'll create an SNS topic to send an email.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_sns_topic" "approval_topic" {
  name = "terraform-approval-topic"
}

resource "aws_sns_topic_subscription" "approval_email" {
  topic_arn = aws_sns_topic.approval_topic.arn
  protocol  = "email"
  endpoint  = var.notification_email
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This sets up a topic and subscribes an email address to it (we'll pass the address in via the variables file later on). Note that SNS email subscriptions must be confirmed via the link in the confirmation email before any notifications are delivered.&lt;/p&gt;

&lt;h3&gt;
  
  
  IAM Roles - main.tf
&lt;/h3&gt;

&lt;p&gt;We need to create permissions for our CodeBuild projects to perform their tasks, as well as for CodePipeline and SNS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# CodeBuild IAM Role
resource "aws_iam_role" "codebuild_role" {
  name = "terraform-codebuild-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "codebuild.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "codebuild_policy" {
  name = "terraform-codebuild-policy"
  role = aws_iam_role.codebuild_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:GetObjectVersion",
          "s3:ListBucket"
        ]
        Resource = [
          aws_s3_bucket.codepipeline_bucket.arn,
          "${aws_s3_bucket.codepipeline_bucket.arn}/*",
          "arn:aws:s3:::REPLACE_WITH_YOUR_TERRAFORM_STATE_BUCKET",
          "arn:aws:s3:::REPLACE_WITH_YOUR_TERRAFORM_STATE_BUCKET/*"
        ]
      }
    ]
  })
}

# CodePipeline IAM Role
resource "aws_iam_role" "codepipeline_role" {
  name = "terraform-codepipeline-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "codepipeline.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy" "codepipeline_policy" {
  name = "terraform-codepipeline-policy"
  role = aws_iam_role.codepipeline_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject",
          "s3:PutObject",
          "s3:GetObjectVersion",
          "s3:GetBucketVersioning"
        ]
        Resource = [
          aws_s3_bucket.codepipeline_bucket.arn,
          "${aws_s3_bucket.codepipeline_bucket.arn}/*"
        ]
      },
      {
        Effect = "Allow"
        Action = [
          "codebuild:BatchGetBuilds",
          "codebuild:StartBuild"
        ]
        Resource = "*"
      },
      {
        Effect = "Allow"
        Action = [
          "sns:Publish"
        ]
        Resource = aws_sns_topic.approval_topic.arn
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The CodeBuild policy allows our CodeBuild projects to write logs and to put and retrieve files in our S3 buckets (both the artifact bucket and the state bucket).&lt;/p&gt;

&lt;p&gt;The CodePipeline role can be assumed by the AWS CodePipeline service; it can fetch and store S3 artifacts in our bucket, start CodeBuild builds, and publish messages to our SNS topic.&lt;/p&gt;

&lt;h3&gt;
  
  
  CodeBuild Projects - main.tf
&lt;/h3&gt;

&lt;p&gt;We need two CodeBuild projects, one for each of our Plan and Apply steps.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Terraform Plan Project
resource "aws_codebuild_project" "terraform_plan" {
  name          = "terraform-plan"
  description   = "Run terraform plan"
  service_role  = aws_iam_role.codebuild_role.arn
  build_timeout = 15

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    type                        = "LINUX_CONTAINER"
    compute_type                = "BUILD_GENERAL1_SMALL"
    image                       = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
    privileged_mode             = false
  }

  logs_config {
    cloudwatch_logs {
      group_name = "terraform-plan-logs"
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = &amp;lt;&amp;lt;EOF
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - wget https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip
      - unzip terraform_1.0.11_linux_amd64.zip
      - mv terraform /usr/local/bin/
  pre_build:
    commands:
      - echo "Starting Terraform plan phase..."
      - terraform --version
  build:
    commands:
      - terraform init -input=false
      - terraform plan -input=false -out=tfplan
  post_build:
    commands:
      - echo "Completed Terraform plan phase"

artifacts:
  files:
    - tfplan
    - .terraform/**/*
    - '**/*'
EOF
  }
}

# Terraform Apply Project
resource "aws_codebuild_project" "terraform_apply" {
  name          = "terraform-apply"
  description   = "Run terraform apply"
  service_role  = aws_iam_role.codebuild_role.arn
  build_timeout = 15

  artifacts {
    type = "CODEPIPELINE"
  }

  environment {
    type                        = "LINUX_CONTAINER"
    compute_type                = "BUILD_GENERAL1_SMALL"
    image                       = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
    privileged_mode             = false
  }

  logs_config {
    cloudwatch_logs {
      group_name = "terraform-apply-logs"
    }
  }

  source {
    type      = "CODEPIPELINE"
    buildspec = &amp;lt;&amp;lt;EOF
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - wget https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip
      - unzip terraform_1.0.11_linux_amd64.zip
      - mv terraform /usr/local/bin/
  pre_build:
    commands:
      - echo "Starting Terraform apply phase..."
      - terraform --version
  build:
    commands:
      - terraform init -input=false
      - terraform apply -input=false tfplan
  post_build:
    commands:
      - echo "Completed Terraform apply phase"

artifacts:
  files:
    - '**/*'
EOF
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first project is our Plan stage. First we create the CodeBuild project and give it a name and service role. Then we apply a 15-minute timeout and configure the build environment (a small Linux container running Amazon Linux 2 is fine for our needs).&lt;/p&gt;

&lt;p&gt;We then set up the logs and the source. The source block is where the main CodeBuild configuration lives.&lt;/p&gt;

&lt;p&gt;CodeBuild is configured with a YAML file that allows commands to be run in phases. This is the buildspec.yml file (here we have it inline, but you could reference it from your repository). The full specification is here: &lt;a href="https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For our purposes, the important parts are the phases. These blocks run in a fixed order (install, pre_build, build, then post_build), executing the commands listed in each. You can also run commands as different users and add more complex error handling here.&lt;/p&gt;

&lt;p&gt;In our case we'll do the following. The two CodeBuild projects are nearly identical, differing only in their log groups and artifact handling, and in one command: &lt;code&gt;terraform plan&lt;/code&gt; in the first versus &lt;code&gt;terraform apply&lt;/code&gt; in the second.&lt;/p&gt;

&lt;p&gt;Install phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;set up python&lt;/li&gt;
&lt;li&gt;set up terraform

&lt;ul&gt;
&lt;li&gt;download terraform&lt;/li&gt;
&lt;li&gt;unzip terraform&lt;/li&gt;
&lt;li&gt;move it to the user executable directory&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Pre build phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;echo a message to screen&lt;/li&gt;
&lt;li&gt;output the terraform version to show terraform is working (this can help catch errors for debugging)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Build phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;initialise Terraform (syncs up with our backend)&lt;/li&gt;
&lt;li&gt;plan or apply the terraform, outputting to a tfplan file in the plan stage, then using this same file in the apply stage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Post build phase:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;output a complete message&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We then use the Artifacts section to pass the tfplan file from the plan stage over to the apply stage.&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Pipeline - main.tf
&lt;/h3&gt;

&lt;p&gt;Now that we have our components, we need to assemble them into a Code Pipeline.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_codepipeline" "terraform_pipeline" {
  name     = "terraform-deployment-pipeline"
  role_arn = aws_iam_role.codepipeline_role.arn

  artifact_store {
    location = aws_s3_bucket.codepipeline_bucket.bucket
    type     = "S3"
  }

  # Source Stage - Pull from GitHub
  stage {
    name = "Source"

    action {
      name             = "Source"
      category         = "Source"
      owner            = "ThirdParty"
      provider         = "GitHub"
      version          = "1"
      output_artifacts = ["source_output"]

      configuration = {
        Owner      = var.github_repo_owner
        Repo       = var.github_repo_name
        Branch     = var.github_branch
        OAuthToken = var.github_token
      }
    }
  }

  # Plan Stage - Run Terraform Plan
  stage {
    name = "Plan"

    action {
      name             = "TerraformPlan"
      category         = "Build"
      owner            = "AWS"
      provider         = "CodeBuild"
      version          = "1"
      input_artifacts  = ["source_output"]
      output_artifacts = ["plan_output"]

      configuration = {
        ProjectName = aws_codebuild_project.terraform_plan.name
      }
    }
  }

  # Approval Stage - Manual approval before apply
  stage {
    name = "Approval"

    action {
      name     = "Approval"
      category = "Approval"
      owner    = "AWS"
      provider = "Manual"
      version  = "1"

      configuration = {
        NotificationArn = aws_sns_topic.approval_topic.arn
        CustomData      = "Please review the terraform plan and approve to apply changes"
      }
    }
  }

  # Apply Stage - Run Terraform Apply
  stage {
    name = "Apply"

    action {
      name            = "TerraformApply"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["plan_output"]

      configuration = {
        ProjectName = aws_codebuild_project.terraform_apply.name
      }
    }
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first section will set up the pipeline and its role and artifact store location.&lt;/p&gt;

&lt;p&gt;The next few sections are stages, we'll go through these one at a time.&lt;/p&gt;

&lt;h4&gt;
  
  
  Source Stage
&lt;/h4&gt;

&lt;p&gt;The source stage links up to GitHub, pulls down the source code, and outputs it as an artifact to be passed through to the next stage. This is currently set up to use OAuth with GitHub, but it can be changed to different providers and types of connections. (The CodeStar connection method is the current recommended way to do this, but I've stuck with OAuth here for simplicity.)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-source" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-action-type.html#integrations-source&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Plan Stage
&lt;/h4&gt;

&lt;p&gt;This stage uses our previously created CodeBuild for planning and links to that CodeBuild project.&lt;/p&gt;

&lt;h4&gt;
  
  
  Approval Stage
&lt;/h4&gt;

&lt;p&gt;This stage sets up a manual approval so you can go into the console and click approve, and sends out a notification to the email address to say this is ready.&lt;/p&gt;

&lt;h4&gt;
  
  
  Apply Stage
&lt;/h4&gt;

&lt;p&gt;Similar to the Plan stage, we link up to our CodeBuild project for the apply step.&lt;/p&gt;

&lt;h3&gt;
  
  
  Outputs - main.tf
&lt;/h3&gt;

&lt;p&gt;If required, you can output useful information such as the CodePipeline URL and the SNS topic ARN:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "codepipeline_url" {
  description = "URL to the CodePipeline console for the created pipeline"
  value       = "https://${var.aws_region}.console.aws.amazon.com/codesuite/codepipeline/pipelines/${aws_codepipeline.terraform_pipeline.name}/view?region=${var.aws_region}"
}

output "approval_topic_arn" {
  description = "ARN of the SNS topic used for pipeline approval notifications"
  value       = aws_sns_topic.approval_topic.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Variables - variables.tf
&lt;/h3&gt;

&lt;p&gt;We've set up some variables in the main file to control deployments, covering the region and the GitHub information; these should go in a variables.tf file so you can pass the required parameters into Terraform.&lt;/p&gt;

&lt;p&gt;`variable "aws_region" {&lt;br&gt;
  description = "The AWS region to deploy resources into"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "eu-west-1"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "github_repo_owner" {&lt;br&gt;
  description = "GitHub repository owner"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "REPO_OWNER_NAME"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "github_repo_name" {&lt;br&gt;
  description = "GitHub repository name"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "REPO_NAME"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "github_branch" {&lt;br&gt;
  description = "GitHub repository branch"&lt;br&gt;
  type        = string&lt;br&gt;
  default     = "main"&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "github_token" {&lt;br&gt;
  description = "GitHub OAuth token"&lt;br&gt;
  type        = string&lt;br&gt;
  sensitive   = true&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;variable "notification_email" {&lt;br&gt;
  description = "Email address to receive approval notifications"&lt;br&gt;
  type        = string&lt;br&gt;
}`&lt;/p&gt;
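&lt;p&gt;One way to supply these values is a terraform.tfvars file (all values below are placeholders; keep the real token out of version control, or pass it via the TF_VAR_github_token environment variable instead):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# terraform.tfvars - do not commit real secrets
aws_region         = "eu-west-1"
github_repo_owner  = "REPO_OWNER_NAME"
github_repo_name   = "REPO_NAME"
github_branch      = "main"
github_token       = "REPLACE_WITH_GITHUB_OAUTH_TOKEN"
notification_email = "you@example.com"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;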

&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This method of planning and applying Terraform via CodeBuild is useful when you want Terraform deployed automatically whenever the repository containing it is updated, while keeping a manual approval step in place and allowing the plan to be reviewed in the logs of the CodeBuild plan step.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>terraform</category>
      <category>codepipeline</category>
    </item>
    <item>
      <title>Deploying Amazon API Gateway and Lambda with Terraform</title>
      <dc:creator>John Walker</dc:creator>
      <pubDate>Sat, 01 Feb 2025 04:35:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-amazon-api-gateway-and-lambda-with-terraform-1i2o</link>
      <guid>https://dev.to/aws-builders/deploying-amazon-api-gateway-and-lambda-with-terraform-1i2o</guid>
      <description>&lt;p&gt;AWS provides multiple ways to deploy infrastructure and code onto their platform, with differing levels of complexity and scalability.&lt;/p&gt;

&lt;p&gt;One of the most effective and safe ways is to use Infrastructure as Code (IaC), to define the required resources in a repeatable template that allows you to deploy your Infrastructure and keep it updated, while tracking any changes to ensure the environment stays the way you need it.&lt;/p&gt;

&lt;p&gt;AWS natively provides CloudFormation for this, however this post will concentrate on Terraform. Each type of IaC has its advantages and disadvantages, as do alternatives like CDK, Serverless Framework and Pulumi.&lt;/p&gt;

&lt;p&gt;I've come to like Terraform for its more programming-like features over CloudFormation, though I do still use CloudFormation in some cases. I also use CDK in some situations; it depends on the nature of what you are deploying.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architecture
&lt;/h3&gt;

&lt;p&gt;In this example, we'll be deploying an Amazon API Gateway with links to a Lambda Function.&lt;/p&gt;

&lt;p&gt;The files for deploying this example are available on my GitHub:&lt;br&gt;
&lt;a href="https://github.com/uzusan/apigateway-tf-blogpost" rel="noopener noreferrer"&gt;https://github.com/uzusan/apigateway-tf-blogpost&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This Lambda will emulate typical CRUD operations (Create, Read, Update, and Delete) as if it was connected to a database, mapping the operations to HTTP requests for POST, GET, PUT and DELETE methods respectively.&lt;/p&gt;

&lt;p&gt;Each request into the API Gateway is routed to the Lambda Function (These could be separate Lambdas for each operation) to perform the specific function. In this example, it will just return a JSON object with a text string showing what would be done in a full integration.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidqh9kx4igfjiktkccpc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fidqh9kx4igfjiktkccpc.png" alt="API Gateway to Lambda with CRUD architecture" width="784" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is the Lambda Function we will be deploying:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
from datetime import datetime

def handler(event, context):
    # Get the HTTP method from the event
    http_method = event['httpMethod']

    # Prepare the response based on the HTTP method
    message_map = {
        'GET': 'This would GET (read) an item from the database',
        'POST': 'This would POST (create) a new item in the database',
        'PUT': 'This would PUT (update) an existing item in the database',
        'DELETE': 'This would DELETE an item from the database'
    }

    message = message_map.get(http_method, 'Unsupported HTTP method')

    # Return the response
    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json'
        },
        'body': json.dumps({
            'message': message,
            'method': http_method,
            'timestamp': datetime.now().isoformat()
        })
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
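&lt;p&gt;Before deploying, you can sanity-check the handler locally by invoking it with a minimal stand-in for an API Gateway proxy event (a real proxy event carries many more fields than just &lt;code&gt;httpMethod&lt;/code&gt;):&lt;/p&gt;

```python
import json
from datetime import datetime

def handler(event, context):
    # Map each HTTP method to the CRUD operation it would perform
    http_method = event['httpMethod']
    message_map = {
        'GET': 'This would GET (read) an item from the database',
        'POST': 'This would POST (create) a new item in the database',
        'PUT': 'This would PUT (update) an existing item in the database',
        'DELETE': 'This would DELETE an item from the database'
    }
    message = message_map.get(http_method, 'Unsupported HTTP method')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({
            'message': message,
            'method': http_method,
            'timestamp': datetime.now().isoformat()
        })
    }

# Simulate what API Gateway would send for a GET /items request
response = handler({'httpMethod': 'GET'}, None)
print(response['statusCode'])                   # 200
print(json.loads(response['body'])['message'])  # the GET message
```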



&lt;h3&gt;
  
  
  API Gateway
&lt;/h3&gt;

&lt;p&gt;The API Gateway has integrations with the Lambda via a proxy.&lt;/p&gt;

&lt;p&gt;API Gateways are made up of three levels: the stage (Production in this case), the resources, and the integrations.&lt;/p&gt;

&lt;p&gt;In this example, we are sending requests to the Production Stage (you could also host dev or test on the same endpoint for example), accessing the items resource (our main endpoint for dealing with our expected database), then we have multiple functions for interactions with those resources, for each request type.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp8630b4uhu891lcx6hv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxp8630b4uhu891lcx6hv.png" alt="API Gateway Detail" width="784" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Terraform
&lt;/h3&gt;

&lt;p&gt;In this example, we'll be deploying the Terraform from our local machine; in a separate blog post, I'll detail how to set up a CI/CD pipeline to deploy this automatically via CodePipeline and CodeBuild.&lt;/p&gt;

&lt;p&gt;To start we'll need the following installed locally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS CLI v2&lt;/li&gt;
&lt;li&gt;Terraform&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this example, we'll also need to pass in an existing S3 bucket to use for the Terraform state file, allowing us to keep the state in a centralised place. This example doesn't cover state locking with DynamoDB or the more recent native S3 state locking; it just uses a single state file, which is sufficient when only one developer works with it at a time.&lt;/p&gt;
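&lt;p&gt;For reference, adding DynamoDB state locking would only require one extra line in the backend block we'll define below (a sketch; the table name is a placeholder, and the table must have a string hash key named LockID):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  backend "s3" {
    bucket         = "BUCKETNAME_TO_BE_REPLACED"
    key            = "api-lambda/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"  # enables state locking
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;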

&lt;p&gt;We'll use the following files from the Github repo &lt;a href="https://github.com/uzusan/apigateway-tf-blogpost" rel="noopener noreferrer"&gt;https://github.com/uzusan/apigateway-tf-blogpost&lt;/a&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;index.py&lt;/li&gt;
&lt;li&gt;lambda_function.zip (a zip file containing the above index.py)&lt;/li&gt;
&lt;li&gt;main.tf&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Terraform main.tf
&lt;/h3&gt;

&lt;p&gt;For this example, I'll keep everything in one place to make it easier to understand, but typically you would have separate variables, outputs and maybe a backend file. For more info on Terraform best practices, HashiCorp have a good guide here: &lt;a href="https://developer.hashicorp.com/terraform/language/modules/develop/structure" rel="noopener noreferrer"&gt;https://developer.hashicorp.com/terraform/language/modules/develop/structure&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Providers and Backend
&lt;/h4&gt;

&lt;p&gt;First we set up the AWS Provider to allow us to create resources on AWS and our S3 backend, so that our state file that tracks the state of resources can be stored on S3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&amp;gt; 4.0"
    }
  }

  # S3 backend configuration
  backend "s3" {
    bucket = "BUCKETNAME_TO_BE_REPLACED"
    key    = "api-lambda/terraform.tfstate"
    region = "eu-west-1"
    encrypt = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;BUCKETNAME_TO_BE_REPLACED&lt;/code&gt; should be replaced with a suitable S3 bucket (this can also be created via a separate Terraform file). Our key is just a unique name for the state file, and in this case I've set the region to eu-west-1, which I'll be using throughout.&lt;/p&gt;

&lt;p&gt;Next we have to set up our IAM permissions, including a role which the Lambda will assume, and a policy attachment to attach the AWS Basic Execution Role policy to our new role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# IAM role for Lambda
resource "aws_iam_role" "lambda_role" {
  name = "crud_lambda_role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# IAM policy for CloudWatch Logs
resource "aws_iam_role_policy_attachment" "lambda_logs" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;aws_iam_role&lt;/code&gt; resource type will create an IAM role for us, and we can create an assume role policy that allows the Lambda Service to assume this role. Next we create a &lt;code&gt;aws_iam_role_policy_attachment&lt;/code&gt; that allows us to attach the AWSLambdaBasicExecutionRole, which will allow the Lambda to run and to write logs to CloudWatch.&lt;/p&gt;

&lt;p&gt;Next we deploy the Lambda using the &lt;code&gt;aws_lambda_function&lt;/code&gt; resource. This method uploads a local file (lambda_function.zip) and uses the source_code_hash attribute to detect changes to the archive, so the function is redeployed whenever the code changes. &lt;/p&gt;

&lt;p&gt;We also refer to the lambda_role we just created via its role ARN (Amazon Resource Name).&lt;/p&gt;

&lt;p&gt;We use the handler attribute to point the Lambda's execution entry point at our handler function: index.handler refers to the &lt;code&gt;def handler(event, context):&lt;/code&gt; function defined in the index.py Python file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lambda function
resource "aws_lambda_function" "crud_lambda" {
  filename          = "lambda_function.zip"
  function_name     = "crud_operations"
  role              = aws_iam_role.lambda_role.arn
  handler           = "index.handler"
  runtime           = "python3.9"

  source_code_hash = filebase64sha256("lambda_function.zip")
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
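&lt;p&gt;Terraform's &lt;code&gt;filebase64sha256&lt;/code&gt; used above is just the Base64 encoding of the SHA-256 digest of the file's raw bytes; when the zip changes, the hash changes, and Terraform knows to redeploy the function. A rough Python equivalent, for illustration only (not part of the deployment code):&lt;/p&gt;

```python
import base64
import hashlib

def b64_sha256(data: bytes) -> str:
    """Base64-encoded SHA-256 digest, the format filebase64sha256 produces."""
    return base64.b64encode(hashlib.sha256(data).digest()).decode("ascii")

def filebase64sha256(path: str) -> str:
    """Python sketch of Terraform's filebase64sha256() applied to a file."""
    with open(path, "rb") as f:
        return b64_sha256(f.read())
```

&lt;p&gt;If the value computed for lambda_function.zip differs from the hash recorded in state, &lt;code&gt;terraform plan&lt;/code&gt; will show the Lambda as needing an update.&lt;/p&gt;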



&lt;p&gt;Next we create our API Gateway, using the &lt;code&gt;aws_api_gateway_rest_api&lt;/code&gt; resource type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# API Gateway
resource "aws_api_gateway_rest_api" "crud_api" {
  name        = "crud-api"
  description = "CRUD API Gateway"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then create the &lt;code&gt;items&lt;/code&gt; resource using the &lt;code&gt;aws_api_gateway_resource&lt;/code&gt; resource type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# API Gateway resource
resource "aws_api_gateway_resource" "items" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  parent_id   = aws_api_gateway_rest_api.crud_api.root_resource_id
  path_part   = "items"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We pass in the REST API we just created, set parent_id to the root resource (we can nest resources by pointing this at another resource's ID instead of root), and give the URL path the segment &lt;code&gt;items&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;We then want to set up our methods for GET, POST, PUT and DELETE, using the &lt;code&gt;aws_api_gateway_method&lt;/code&gt; resource. &lt;/p&gt;

&lt;p&gt;We could also use the http_method ANY here and route all requests to our Lambda through a single method. Below, however, I demonstrate linking each method separately, as this allows different methods to be linked to different Lambdas if required (you could have separate functions handling your read-only and write requests, for example).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# GET method
resource "aws_api_gateway_method" "get" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "GET"
  authorization = "NONE"
}

# POST method
resource "aws_api_gateway_method" "post" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "POST"
  authorization = "NONE"
}

# PUT method
resource "aws_api_gateway_method" "put" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "PUT"
  authorization = "NONE"
}

# DELETE method
resource "aws_api_gateway_method" "delete" {
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  resource_id   = aws_api_gateway_resource.items.id
  http_method   = "DELETE"
  authorization = "NONE"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each we pass in the API Gateway ID, the &lt;code&gt;items&lt;/code&gt; resource ID and the HTTP method to be used (GET, PUT, etc.).&lt;/p&gt;

&lt;p&gt;For each of these methods, we then need to set up a Proxy to route calls to the Lambda (these all go to the same Lambda but could go to different Lambda functions) using the &lt;code&gt;aws_api_gateway_integration&lt;/code&gt; resource type:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Lambda integration for GET
resource "aws_api_gateway_integration" "lambda_get" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.get.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}

# Lambda integration for POST
resource "aws_api_gateway_integration" "lambda_post" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.post.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}

# Lambda integration for PUT
resource "aws_api_gateway_integration" "lambda_put" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.put.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}

# Lambda integration for DELETE
resource "aws_api_gateway_integration" "lambda_delete" {
  rest_api_id = aws_api_gateway_rest_api.crud_api.id
  resource_id = aws_api_gateway_resource.items.id
  http_method = aws_api_gateway_method.delete.http_method

  integration_http_method = "POST"
  type                   = "AWS_PROXY"
  uri                    = aws_lambda_function.crud_lambda.invoke_arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each method, we pass in the REST API ID, the &lt;code&gt;items&lt;/code&gt; resource ID and the HTTP method used (we take this from each of the &lt;code&gt;aws_api_gateway_method&lt;/code&gt; resources we just set up).&lt;/p&gt;

&lt;p&gt;Each integration uses POST as its integration_http_method because API Gateway always invokes Lambda with a POST, regardless of the method the client used. The original method is passed over as part of the event payload (which you can see in index.py, where we extract it with &lt;code&gt;http_method = event['httpMethod']&lt;/code&gt;). We also pass in the integration type (AWS_PROXY) and the Lambda's URI, which we get as the invoke_arn of the function we created earlier.&lt;/p&gt;
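&lt;p&gt;The full index.py isn't reproduced here, but a handler that branches on &lt;code&gt;event['httpMethod']&lt;/code&gt; can be sketched as follows. This is a hypothetical minimal version, with messages mirroring the curl test responses shown later:&lt;/p&gt;

```python
import json
from datetime import datetime, timezone

# Hypothetical per-method messages, mirroring the curl test output below.
ACTIONS = {
    "GET": "This would GET (read) an item from the database",
    "POST": "This would POST (create) a new item in the database",
    "PUT": "This would PUT (update) an existing item in the database",
    "DELETE": "This would DELETE an item from the database",
}

def handler(event, context):
    # With an AWS_PROXY integration, API Gateway puts the original verb here.
    http_method = event["httpMethod"]
    if http_method not in ACTIONS:
        return {"statusCode": 405,
                "body": json.dumps({"error": "method not allowed"})}
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": ACTIONS[http_method],
            "method": http_method,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }),
    }
```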

&lt;p&gt;Next we need to allow the API Gateway and Lambda to talk to each other:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
# Lambda permission for API Gateway
resource "aws_lambda_permission" "api_gw" {
  statement_id  = "AllowAPIGatewayInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.crud_lambda.function_name
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_api_gateway_rest_api.crud_api.execution_arn}/*/*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here we allow API Gateway to invoke the Lambda we've created: we pass in the specific function name and only allow the principal (the user or system permitted to use this permission) to be API Gateway. We also restrict the source to the API we just created; the &lt;code&gt;/*/*&lt;/code&gt; suffix covers all stages and resources, and we could restrict this to, for example, the dev stage with &lt;code&gt;/dev/*&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Finally we create an API Gateway Deployment, then attach the Stage to that deployment using the &lt;code&gt;aws_api_gateway_deployment&lt;/code&gt; and &lt;code&gt;aws_api_gateway_stage&lt;/code&gt; resource types:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# API Gateway deployment
resource "aws_api_gateway_deployment" "crud_deployment" {
  depends_on = [
    aws_api_gateway_integration.lambda_get,
    aws_api_gateway_integration.lambda_post,
    aws_api_gateway_integration.lambda_put,
    aws_api_gateway_integration.lambda_delete
  ]

  rest_api_id = aws_api_gateway_rest_api.crud_api.id
}

# API Gateway stage
resource "aws_api_gateway_stage" "crud_stage" {
  deployment_id = aws_api_gateway_deployment.crud_deployment.id
  rest_api_id   = aws_api_gateway_rest_api.crud_api.id
  stage_name    = "prod"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;code&gt;aws_api_gateway_deployment&lt;/code&gt; we technically only need to pass in the ID of the REST API to be deployed, but here we add a &lt;code&gt;depends_on&lt;/code&gt; block to ensure each of the &lt;code&gt;aws_api_gateway_integration&lt;/code&gt; resources is created before the API is deployed, so we don't deploy with empty integrations.&lt;/p&gt;

&lt;p&gt;We then create a stage with &lt;code&gt;aws_api_gateway_stage&lt;/code&gt; to create our production stage, passing in the rest api and the deployment we just created.&lt;/p&gt;

&lt;p&gt;Finally we output the API Gateway URL so we can do some testing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Output the API Gateway URL
output "api_url" {
  value = "${aws_api_gateway_stage.crud_stage.invoke_url}/items"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deploying the Terraform
&lt;/h3&gt;

&lt;p&gt;Now that we have the main.tf file created, and our index.py file for the Lambda, we can deploy to AWS.&lt;/p&gt;

&lt;p&gt;The first step is to ensure we have the AWS CLI (v2) and Terraform installed. I won't cover installation here, but once you've confirmed you can communicate with AWS via the CLI (use &lt;code&gt;aws configure&lt;/code&gt; to check your credentials are correct if using access keys), run the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This should configure Terraform, and you should see output similar to the below. Make sure you see the S3 backend message; if it's not present, you may not have correctly set up your AWS CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Using previously-installed hashicorp/aws v4.67.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then create a plan for Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform plan
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If this is the first time running, you should see a lot of creation messages such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_api_gateway_deployment.crud_deployment will be created
  + resource "aws_api_gateway_deployment" "crud_deployment" {
      + created_date  = (known after apply)
      + execution_arn = (known after apply)
      + id            = (known after apply)
      + invoke_url    = (known after apply)
      + rest_api_id   = (known after apply)
    }
....
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is one such block for each of the resources we are going to deploy. If there are no errors, we can run apply:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After which you should see creation messages:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws_iam_role.lambda_role: Creating...
aws_api_gateway_rest_api.crud_api: Creating...
aws_api_gateway_rest_api.crud_api: Creation complete after 1s

...

Apply complete! Resources: 16 added, 0 changed, 0 destroyed.

Outputs:

api_url = "URL"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At the end, we will have an API URL output that we can use to test the API.&lt;/p&gt;

&lt;p&gt;Using the API URL, we can export it as an environment variable to save typing it each time:&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export API_URL="https://XXXXX.execute-api.eu-west-1.amazonaws.com/prod/items"&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;p&gt;This allows us to then run each method:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GET&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X GET $API_URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"message": "This would GET (read) an item from the database", "method": "GET", "timestamp": "2025-02-01T04:23:08.867240"}%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;POST&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X POST $API_URL \
  -H "Content-Type: application/json" \
  -d '{"name": "Test Item", "description": "This is a test item"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"message": "This would POST (create) a new item in the database", "method": "POST", "timestamp": "2025-02-01T04:26:15.214840"}%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;PUT&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X PUT $API_URL \
  -H "Content-Type: application/json" \
  -d '{"name": "Updated Item", "description": "This item has been updated"}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"message": "This would PUT (update) an existing item in the database", "method": "PUT", "timestamp": "2025-02-01T04:28:16.492926"}%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;DELETE&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -X DELETE $API_URL
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Which returns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"message": "This would DELETE an item from the database", "method": "DELETE", "timestamp": "2025-02-01T04:28:49.461580"}%
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;This example can be expanded to other types of resources or Lambda functions deployed via Terraform. In the next blog entry we will take this Terraform and deploy it via CodePipeline with a CodeBuild instance, allowing us to automate execution of the Terraform when triggered by a commit to a GitHub repository.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>apigateway</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Controlling Cloud Costs: Strategies for keeping on top of your AWS cloud spend</title>
      <dc:creator>John Walker</dc:creator>
      <pubDate>Wed, 21 Jun 2023 20:27:47 +0000</pubDate>
      <link>https://dev.to/aws-builders/controlling-cloud-costs-strategies-for-keeping-on-top-of-your-aws-cloud-spend-2a8i</link>
      <guid>https://dev.to/aws-builders/controlling-cloud-costs-strategies-for-keeping-on-top-of-your-aws-cloud-spend-2a8i</guid>
      <description>&lt;p&gt;Cloud computing is one of the most cost effective way of running computing and other resources there is, with potential huge savings compared to on-premise datacenters and other methods.&lt;/p&gt;

&lt;p&gt;Moving from an up-front or contract model to a pay-as-you-go model can be very cost-effective, but moving to the cloud, or starting out on it, comes with a lot of complexity, and with how easy it is to spin up resources, you can end up spending more than you realise.&lt;/p&gt;

&lt;p&gt;In this article, I will cover how to manage some of these complexities when it comes to costs, specifically on AWS (Amazon Web Services). I will cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tips and Strategies to get the most out of your Cloud usage&lt;/li&gt;
&lt;li&gt;Common cost drivers and challenges&lt;/li&gt;
&lt;li&gt;Monitoring and alerting for unexpected cost spikes&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Cloud Costs Overview
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F965w62ce6274hqdlbiys.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F965w62ce6274hqdlbiys.png" alt="Cloud Cost Overview" width="800" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Firstly, I want to explain the pay-as-you-go (PAYG) model I mentioned earlier. In AWS you pay for the resources that you use, which can be a bit of a double-edged sword. &lt;/p&gt;

&lt;p&gt;As AWS is PAYG, you are paying for the resources you use, but this is generally split into different sections, so that you pay for exactly what you use in each. As an example, spinning up a virtual machine (an EC2 instance) will see you paying for the instance itself, storage, backups, data transfer, licensing (for Windows), potentially support, and potentially more, depending on the setup. This means you don't end up paying for resources you don't use, but it makes costs complex to keep on top of.&lt;/p&gt;
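&lt;p&gt;To make that splitting concrete, a single instance's monthly bill is the sum of separately priced line items. The figures below are purely illustrative; real prices vary by region, instance type and usage:&lt;/p&gt;

```python
# Hypothetical monthly line items (USD) for one EC2 instance.
line_items = {
    "instance_hours": 730 * 0.0416,  # on-demand instance rate per hour
    "ebs_storage":    50 * 0.08,     # 50 GB of EBS volume storage
    "snapshots":      20 * 0.05,     # 20 GB of backup snapshots
    "data_transfer":  100 * 0.09,    # 100 GB of data out to the internet
}

monthly_total = round(sum(line_items.values()), 2)
```

&lt;p&gt;Each of these lines appears separately on the bill, which is why a "single server" can show up as half a dozen charges.&lt;/p&gt;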

&lt;p&gt;Next I want to touch on Managed vs Unmanaged services. In AWS there are a number of services such as Amazon Relational Database Service (RDS) that can be used to host Databases, such as MySQL, PostgreSQL, Microsoft SQL Server and more, without having to manage the underlying infrastructure. So you would take care of the Database management, but leave the underlying infrastructure, including server management and patching to AWS. Alternatively you can go down the unmanaged route and host your own database installation on an EC2 Instance for example.&lt;/p&gt;

&lt;p&gt;As mentioned above, costs in AWS can be broken down much more than expected, and each aspect of a service may come with its own pricing structure. A good way of finding out how something will break down is the AWS Pricing Calculator: &lt;a href="https://calculator.aws/#/" rel="noopener noreferrer"&gt;https://calculator.aws/#/&lt;/a&gt;. This tool from AWS can help you understand the costs involved with various services.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3bqs0dbs8nreg77b9lg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa3bqs0dbs8nreg77b9lg.png" alt="Pricing Calculator screenshot, showing EC2 cost breakdown" width="800" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above example section, you can see the breakdown for an EC2 instance, including instance pricing, metrics, and storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e9ch89yxf3utklyjnwy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e9ch89yxf3utklyjnwy.png" alt="Pricing Calculator screenshot, showing instance pricing options" width="800" height="241"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above example section, you can see the different pricing options for EC2, including on-demand, reserved instance pricing and spot instances.&lt;/p&gt;

&lt;h2&gt;
  
  
  Management Strategies
&lt;/h2&gt;

&lt;p&gt;Knowing what you are using is key to understanding and controlling your cloud costs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ongoing monitoring and management of costs&lt;/li&gt;
&lt;li&gt;AWS Cost Explorer, Budgets, Billing Tools&lt;/li&gt;
&lt;li&gt;Cost Allocation / Tagging&lt;/li&gt;
&lt;li&gt;Budget Management

&lt;ul&gt;
&lt;li&gt;Per Project&lt;/li&gt;
&lt;li&gt;Per Team&lt;/li&gt;
&lt;li&gt;Per Service&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  FinOps Foundation Principles
&lt;/h2&gt;

&lt;p&gt;The FinOps Foundation (&lt;a href="https://www.finops.org/" rel="noopener noreferrer"&gt;https://www.finops.org/&lt;/a&gt;) is dedicated to advancing the use of FinOps (Financial Operations) in the cloud, with the following main principles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Teams need to collaborate&lt;/li&gt;
&lt;li&gt;Decisions are driven by business value of cloud&lt;/li&gt;
&lt;li&gt;Everyone takes ownership for their cloud usage&lt;/li&gt;
&lt;li&gt;FinOps data should be accessible and timely&lt;/li&gt;
&lt;li&gt;A centralised team drives FinOps&lt;/li&gt;
&lt;li&gt;Take advantage of the variable cost model of the cloud&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://www.finops.org/framework/principles/" rel="noopener noreferrer"&gt;https://www.finops.org/framework/principles/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These principles show the need for organisations to work together on controlling cloud costs, and to ensure that cloud spend is accounted for.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reviews
&lt;/h2&gt;

&lt;p&gt;The pace of cloud innovation is fast; new releases and updates come on a daily basis.&lt;br&gt;
Regular reviews of your costs can help you keep on top of them. Questions to ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Are we still using everything we are being charged for?&lt;/li&gt;
&lt;li&gt;Do we know what everything we are being charged for is and who is using it?&lt;/li&gt;
&lt;li&gt;Is there a better way to achieve the same outcomes? For example, would moving to a managed solution lower manual intervention time spent?&lt;/li&gt;
&lt;li&gt;Can we leverage other payment models, such as upfront reserved instances?&lt;/li&gt;
&lt;li&gt;Monitor for unexpected changes in cost

&lt;ul&gt;
&lt;li&gt;Daily budgeting checks&lt;/li&gt;
&lt;li&gt;Monitor based on expected costs&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Be aware of potential spikes in costs based on usage&lt;/li&gt;

&lt;li&gt;Adding tags to workloads can help break down costs to assist in monitoring&lt;/li&gt;

&lt;li&gt;Third-party tools can help, such as Infracost for changes in costs based on Terraform deployments&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  DevOps and Infrastructure as Code
&lt;/h2&gt;

&lt;p&gt;DevOps provides a method for continually deploying workloads to the cloud, and provides monitoring and feedback loops for improvement.&lt;/p&gt;

&lt;p&gt;Defining all of your workloads in code using Infrastructure as Code lets you know exactly what resources are in use and helps eliminate unexpected costs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hidden / Obscure Costs
&lt;/h2&gt;

&lt;p&gt;With PAYG, you pay for what you use, but it’s not always immediately obvious how it breaks down:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data Transfer – typically out from the cloud&lt;/li&gt;
&lt;li&gt;NAT Gateways especially – for some services you can use private links to reduce traffic over the internet.&lt;/li&gt;
&lt;li&gt;Lambda is billed on two factors: the memory allocated and the duration of the execution&lt;/li&gt;
&lt;li&gt;S3 cost breakdown is dependent on file access frequency, size and retrieval requests&lt;/li&gt;
&lt;li&gt;Small line items that can add up over time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As an example, here is an S3 calculation from the AWS Pricing Calculator, showing a common use:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbdscy1uk8oq45gdd1sg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbdscy1uk8oq45gdd1sg.png" alt="S3 Cost Calculation" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is a more complex example, with multiple tiers for different storage requirements, such as moving less frequently accessed data to the Infrequent Access tier, which can save up to 40% on storage costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedbxkir00hl97mcvfsgm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fedbxkir00hl97mcvfsgm.png" alt="S3 costs breakdown - complex example" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;
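&lt;p&gt;The tiering saving is simple arithmetic over per-GB prices. With illustrative prices (check the current pricing pages for real figures), moving the rarely accessed share of 1 TB to Infrequent Access looks like this:&lt;/p&gt;

```python
# Illustrative per-GB-month storage prices; real prices vary by region.
STANDARD_PER_GB = 0.023
INFREQUENT_ACCESS_PER_GB = 0.0125

def monthly_storage_cost(gb_standard: float, gb_ia: float) -> float:
    """Monthly S3 storage cost across two tiers (storage only, no request fees)."""
    return gb_standard * STANDARD_PER_GB + gb_ia * INFREQUENT_ACCESS_PER_GB

all_standard = monthly_storage_cost(1000, 0)   # 1 TB entirely in Standard
tiered = monthly_storage_cost(300, 700)        # 700 GB moved to Infrequent Access
saving_pct = round((all_standard - tiered) / all_standard * 100, 1)
```

&lt;p&gt;Note that Infrequent Access adds per-GB retrieval fees, so it only pays off for data that really is accessed rarely.&lt;/p&gt;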

&lt;p&gt;For Lambda, a very useful tool to help optimise is the AWS Lambda Power Tuning tool, released by Alex Casalboni, Developer Advocate at AWS: &lt;a href="https://github.com/alexcasalboni/aws-lambda-power-tuning" rel="noopener noreferrer"&gt;https://github.com/alexcasalboni/aws-lambda-power-tuning&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This tool helps you figure out what memory allocation to give your Lambda based on an example execution. Somewhat counterintuitively, you may save money overall by allocating more memory to a Lambda function.&lt;/p&gt;

&lt;p&gt;The reason for this is that you pay for both the memory allocation and the execution time. Having more memory (and virtual CPUs) available to the Lambda can allow it to finish running your function in a shorter period of time. This means that while you pay more for the memory, you pay less for the execution time. &lt;/p&gt;

&lt;p&gt;Alex gives the example in the README of the GitHub project above of a Lambda that takes 35 seconds to run when given 128 MB of memory, but runs in only 3 seconds when given 1.5 GB, saving 14% overall. &lt;/p&gt;
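&lt;p&gt;The tradeoff comes down to GB-second arithmetic. With an illustrative per-GB-second rate (not necessarily the current price) and made-up durations, allocating more memory can come out cheaper if the run time drops enough:&lt;/p&gt;

```python
PRICE_PER_GB_SECOND = 0.0000166667  # illustrative on-demand Lambda rate

def invocation_cost(memory_mb: int, duration_s: float) -> float:
    """Cost of one invocation: billed GB-seconds times the per-GB-second rate."""
    return (memory_mb / 1024) * duration_s * PRICE_PER_GB_SECOND

slow_low_memory = invocation_cost(128, 10.0)   # 1.25 GB-seconds
fast_high_memory = invocation_cost(1024, 1.0)  # 1.00 GB-seconds

# Eight times the memory, yet cheaper per invocation and ten times faster.
```

&lt;p&gt;Power Tuning automates exactly this comparison, running your function at several memory sizes and plotting cost against duration.&lt;/p&gt;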

&lt;h2&gt;
  
  
  Example Savings
&lt;/h2&gt;

&lt;p&gt;Below I've listed a few example savings you can use on AWS, depending on your use case. The goal with cost optimisation on AWS is to always use just what you need and no more. So for example, if you have an EC2 instance that is only using 25% of its memory and CPU, you may be able to go down to a lower tier of EC2 instance with no impact to your application, while lowering your costs.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Right Sizing – Move up or down a size based on the actual usage of the server&lt;/li&gt;
&lt;li&gt;For AWS NAT Gateways, use VPC endpoints such as the S3 or DynamoDB gateway endpoints to reduce traffic going out over the internet&lt;/li&gt;
&lt;li&gt;Elastic IPs not in use are charged, as are remaps:

&lt;ul&gt;
&lt;li&gt;$0.005 per additional IP address associated with a running instance per hour on a pro rata basis&lt;/li&gt;
&lt;li&gt;$0.005 per Carrier IP address not associated with a running instance per hour on a pro rata basis&lt;/li&gt;
&lt;li&gt;$0.00 per Carrier IP address remap for the first 100 remaps per month&lt;/li&gt;
&lt;li&gt;$0.10 per Carrier IP address remap for additional remaps over 100 per month&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Older snapshots can build up over time&lt;/li&gt;

&lt;li&gt;S3 data not in use should be moved down tiers, such as to Infrequent Access or to Glacier&lt;/li&gt;

&lt;/ul&gt;
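&lt;p&gt;These line items are easy to estimate in advance. For example, the hourly charge for forgotten, unassociated Elastic IPs (using the rate quoted above) accumulates as follows:&lt;/p&gt;

```python
HOURS_PER_MONTH = 730       # the monthly-hours convention AWS pricing pages use
IDLE_EIP_PER_HOUR = 0.005   # rate for an IP not associated with a running instance

# Ten Elastic IPs left unassociated for a whole month.
monthly_cost = round(10 * IDLE_EIP_PER_HOUR * HOURS_PER_MONTH, 2)
```

&lt;p&gt;A few dollars per address per month is small on its own, but across many accounts and regions it is exactly the kind of line item that quietly adds up.&lt;/p&gt;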

</description>
      <category>aws</category>
      <category>costs</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Microservices and Eventbridge</title>
      <dc:creator>John Walker</dc:creator>
      <pubDate>Tue, 14 Mar 2023 12:33:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/microservices-and-eventbridge-48hj</link>
      <guid>https://dev.to/aws-builders/microservices-and-eventbridge-48hj</guid>
      <description>&lt;h2&gt;
  
  
  Microservices
&lt;/h2&gt;

&lt;p&gt;Microservices are a software development pattern that breaks down a large application into smaller independent services.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Small subsets of functionality, but self-contained&lt;/li&gt;
&lt;li&gt;Size is variable, but should generally be small enough to fit in a single developer's head&lt;/li&gt;
&lt;li&gt;The functionality of a microservice shouldn't depend on other microservices; a user flow within the application may span multiple microservices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjqdy7wgl1z8ld1rxdop.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdjqdy7wgl1z8ld1rxdop.png" alt="Microservice vs Monolith" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Microservice Types
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ov4c6k63y6n6361rjfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ov4c6k63y6n6361rjfk.png" alt="Some examples from Amazon and Netflix of Microservices and their communication paths." width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Some examples from Amazon and Netflix of Microservices and their communication paths.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Microservices, like other software development methods, have had various ways of working developed over the years and choosing the right method of development that matches your business needs is critical.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Event-Driven Microservices&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Domain-Driven Design&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Communication Methods:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;API Communication&lt;/li&gt;
&lt;li&gt;Decoupled Messaging&lt;/li&gt;
&lt;li&gt;Pub/Sub&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Why Use Microservices?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Microservices have their own design considerations, but they also come with several benefits.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The team size and codebase for each microservice can be small. It should be able to be entirely understood by one person.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each focuses on one specific business outcome or aspect and performs one task&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Development autonomy. Each microservice is separate, so having different programming languages, technologies, and development methods is possible.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Allows alignment with the bounded context of the problem domain. The problem a microservice is trying to solve can align with business outcomes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Loosely coupled architecture, allowing for separate scaling and consideration of each microservice.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;When not to use Microservices?&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Microservices are a good software development model for many use cases, but they are not always the best option.&lt;/p&gt;

&lt;p&gt;It's important to consider the best method for your software development. Sometimes a hybrid of monolithic development and microservices is the right option.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Microservices should be independent single tasks. In some cases this isn’t possible: for an overall task to complete, multiple sub-tasks may need to finish, and they can’t easily be separated.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Separating services adds some latency as each microservice needs to communicate via API or Events to proceed to the next step. This might be a blocker in latency-sensitive applications or even cause data synchronisation issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With distributed systems, debugging and tracing become more complex.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are methods for resolving or mitigating many of these issues, but these are also considerations when deciding on a software development method. Microservices give many benefits, but there are challenges and complexity considerations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event-Driven Services
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuczrhbkhqz5vpo1el0wp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuczrhbkhqz5vpo1el0wp.png" alt="Event Driven Design" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the ways microservices are used is through event-driven design. In this pattern, a producer is either a microservice or some other source of events, ranging from S3 to third-party applications, which emits an event when something happens, providing a stream of events.&lt;/p&gt;

&lt;p&gt;Once these events are produced, they can be put into a queue or an event bus, and other services or microservices interested in dealing with that event can process them.&lt;/p&gt;

&lt;p&gt;This can take various forms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Single queue, where an event is produced and usually consumed by a single target&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Pub / Sub pipes or Event buses, where all events are put into a pipe, and multiple different interested targets can process the events or be notified of the event happening&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Streaming, where all events are passed through a data stream where they are processed as they pass through, usually in chunks of events rather than one at a time&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
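&lt;p&gt;The difference between the single-queue and pub/sub forms can be sketched in a few lines of Python. This is an illustrative in-memory model, not an AWS API; all names in it are made up:&lt;/p&gt;

```python
from collections import defaultdict

# Single queue: one event, consumed exactly once by whichever target takes it.
queue = []
queue.append({"detail-type": "OrderCreated", "detail": {"orderId": "123"}})
event = queue.pop(0)  # consumed once, now gone from the queue

# Pub/sub: every subscriber interested in the event type sees the same event.
subscribers = defaultdict(list)

def subscribe(detail_type, handler):
    subscribers[detail_type].append(handler)

def publish(event):
    for handler in subscribers[event["detail-type"]]:
        handler(event)

received = []
subscribe("OrderCreated", lambda e: received.append(("billing", e)))
subscribe("OrderCreated", lambda e: received.append(("shipping", e)))
publish({"detail-type": "OrderCreated", "detail": {"orderId": "124"}})
# Both the billing and shipping handlers receive the same event.
```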

&lt;h2&gt;
  
  
  Event-Driven Service Examples
&lt;/h2&gt;

&lt;p&gt;Single Queue:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1a972k48md0wa79ay5t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1a972k48md0wa79ay5t.png" alt="Image description" width="720" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Streaming:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnys2s3rsbq9lz90l76qm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnys2s3rsbq9lz90l76qm.png" alt="Image description" width="800" height="334"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Pub/Sub:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh20bxb9cbb8mgcym0u1o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh20bxb9cbb8mgcym0u1o.png" alt="Image description" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  EventBridge
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyrrlhw4yx9ic03kvuvl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyrrlhw4yx9ic03kvuvl.png" alt="Eventbridge Flow" width="463" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EventBridge is AWS’s service that provides an event bus, which can be used to handle events from various sources.&lt;/p&gt;

&lt;p&gt;It can be used to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Send requests to multiple consumers&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create an event lifecycle record&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Integrate different components, with 130 event sources and over 35 targets, as well as third parties.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create cross-account event pipelines by sending events to EventBridge buses in other accounts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide filtering of events to trigger consumers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Provide a schema registry for events to track and version events, allowing for producer-consumer contracts.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
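&lt;p&gt;As a sketch of publishing a custom event to a bus with boto3's PutEvents API: the source, detail fields and bus name below are made up for illustration, and the actual call is commented out since it needs AWS credentials and an existing bus:&lt;/p&gt;

```python
import json

# Shape of one custom event entry for EventBridge's PutEvents API.
# "orders.service" and "order-events" are illustrative names only.
entry = {
    "Source": "orders.service",          # must not start with "aws"
    "DetailType": "OrderCreated",
    "Detail": json.dumps({"orderId": "123", "total": 42.50}),
    "EventBusName": "order-events",
}

# With credentials configured, this would publish the event:
# import boto3
# boto3.client("events").put_events(Entries=[entry])
```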

&lt;h2&gt;
  
  
  Microservice Considerations
&lt;/h2&gt;

&lt;p&gt;Microservice design demands as much care as any other software development method, weighing the relevant requirements of both your business and your application.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Required scalability and use (Are you over-architecting something with 20 microservices used once a year?)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Business structure – can your development and operations team support a move to microservices?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;How closely aligned to your business processes should microservices be? Are those processes well-defined?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Do you want to follow domain-driven design, where your business needs define the outcomes of the application and its boundaries?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Is your application suitable for microservices, or are there parts that should be in a distributed monolith?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Each microservice, whether API-based, event-based or otherwise, should adhere to a contract for each version, so that consumers of its events or requesters of its API know the expected inputs and outputs. A contract should never change within a version: any required change means a new version, with old versions running alongside until consumers have migrated. Breaking changes in microservices have a larger impact than in other development types.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Microservice Anti-patterns
&lt;/h2&gt;

&lt;p&gt;While microservices can provide a whole host of benefits, there are potential pitfalls to watch out for when developing microservices.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Assuming that just because you now use microservices, it solves all your problems&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Making microservices the goal in itself, rather than using them as a tool to solve business problems. It's no use changing everything to microservices at the expense of developing features and fixing customer issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Piecemeal adoption: having just one team work on one aspect without co-ordination isn’t likely to work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Uncoordinated adoption. If teams creating microservices don’t collaborate, different approaches and implementations could result, such as one team expecting streaming and another expecting pub/sub.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Focusing more on the technology than the overall service design and how the services communicate&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not changing organisational processes or structure to follow microservice patterns. For example, moving to microservices won't work if you still do six-monthly deployments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not separating tasks correctly and just building multiple monoliths instead of single-focus microservices&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Event Planning
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhess9qqw959ouk20nuev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhess9qqw959ouk20nuev.png" alt="Event Planning Spreadsheet" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Planning how your microservices communicate is a key step when moving to microservices.&lt;/p&gt;

&lt;p&gt;Above is an example based on architecture by AWS, where the planning method has two main sections:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Planning what the microservices are and how they fit together by looking at the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What input events trigger this microservice&lt;/li&gt;
&lt;li&gt;What the service does&lt;/li&gt;
&lt;li&gt;What actions the service takes&lt;/li&gt;
&lt;li&gt;What output events the microservice produces&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Then you can drill down into each event and list:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What attributes are in each event&lt;/li&gt;
&lt;li&gt;Which service owns/produces which event&lt;/li&gt;
&lt;li&gt;Which action each microservice was part of&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
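&lt;p&gt;The planning exercise above can be captured in a simple structure per service. A hypothetical row, with made-up service and event names, might look like:&lt;/p&gt;

```python
# One row of the planning spreadsheet: what triggers the service,
# what it does, and what it emits. All names here are illustrative.
order_service = {
    "service": "OrderService",
    "input_events": ["CartCheckedOut"],
    "description": "Creates an order from a checked-out cart",
    "actions": ["validate cart", "persist order"],
    "output_events": ["OrderCreated"],
}

# Drilling down into one of its events:
order_created = {
    "name": "OrderCreated",
    "owner": "OrderService",  # which service produces the event
    "attributes": ["orderId", "customerId", "total"],
}
```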

&lt;h2&gt;
  
  
  EventBridge Scheduler
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsdox08z9pb1oq7eipdh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsdox08z9pb1oq7eipdh.png" alt="Eventbridge Scheduler Capabilities" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;EventBridge Scheduler is a newer AWS service that allows you to invoke tasks based on defined parameters, letting you run both in-application tasks and scheduled background tasks.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Integrates with 200+ services and over 6000 AWS API calls&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Schedule tasks such as customer reminders, delayed actions, prompts to continue if the customer hasn’t performed an action&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Perform cron-based tasks, one-time events, fixed-rate events, time-specific events&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add retries, dead letter queues and more&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
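&lt;p&gt;A one-off customer reminder via the boto3 EventBridge Scheduler client might be sketched like this. The ARNs and names are placeholders, and the create_schedule call is commented out since it needs AWS credentials and real resources:&lt;/p&gt;

```python
# Arguments for EventBridge Scheduler's CreateSchedule API.
# All ARNs and names below are placeholders for illustration.
schedule = {
    "Name": "customer-reminder-123",
    "ScheduleExpression": "at(2026-03-01T09:00:00)",  # one-time schedule
    "FlexibleTimeWindow": {"Mode": "OFF"},            # fire exactly on time
    "Target": {
        "Arn": "arn:aws:lambda:eu-west-2:111122223333:function:send-reminder",
        "RoleArn": "arn:aws:iam::111122223333:role/scheduler-invoke-role",
    },
}

# With credentials configured, this would create the schedule:
# import boto3
# boto3.client("scheduler").create_schedule(**schedule)
```

Recurring work would use a rate() or cron() expression in ScheduleExpression instead of at().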

&lt;h2&gt;
  
  
  Service Coupling
&lt;/h2&gt;

&lt;p&gt;Microservices are designed to be loosely coupled in their dependencies on each other, so that each service can act independently without having to consider what is happening in other services.&lt;/p&gt;

&lt;p&gt;However, this isn’t the full story. Microservices remain coupled in certain ways, chiefly through semantic coupling.&lt;/p&gt;

&lt;p&gt;This means that a microservice depends on events coming in being in a specific format so it knows how to deal with them, and similarly, any microservice that consumes events produced by another microservice needs to know the event format won't change.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Service Contract – events produced by a microservice shouldn’t change without warning, so contracts are important, with versioning being key&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Schemas should be available for each event so that other consumers of these events know what the format is and what data is available.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
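&lt;p&gt;One lightweight way to honour the contract points above is to carry a version in every event and have consumers check it before processing. A minimal sketch, with made-up version numbers and event fields:&lt;/p&gt;

```python
# Consumers accept only the contract versions they understand.
SUPPORTED_VERSIONS = {"1.0", "1.1"}

def handle(event):
    """Process an event only if its contract version is supported."""
    if event.get("version") not in SUPPORTED_VERSIONS:
        # Unknown contract: reject (e.g. to a dead-letter path) rather than guess.
        return ("rejected", event.get("version"))
    return ("processed", event["detail"]["orderId"])

ok = handle({"version": "1.0", "detail": {"orderId": "123"}})
bad = handle({"version": "2.0", "detail": {"orderId": "124"}})
```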

&lt;h2&gt;
  
  
  EventBridge Events
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6omsvz5lc2vv0gv1nvq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs6omsvz5lc2vv0gv1nvq.png" alt="Event Example" width="800" height="501"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AWS EventBridge uses a defined format for events put onto the event bus.&lt;/p&gt;

&lt;p&gt;An example event is shown above: a JSON document that must contain certain minimum information to be considered valid:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;detail&lt;/strong&gt; JSON object (can be blank, using “{}”)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;detail-type&lt;/strong&gt; string that identifies the type of event&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A &lt;strong&gt;source&lt;/strong&gt; string that identifies the source. This cannot start with 'aws', but other values are fine.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Adding extra fields is also recommended:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Id – a unique ID per event&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Version – the event version number&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Time – the time the event was created&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Additional detail helps identify the event and gives consumers the information they need about it.&lt;/p&gt;
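&lt;p&gt;Putting the minimum and recommended fields together, a complete event might look like the following sketch (all field values are illustrative):&lt;/p&gt;

```python
import json
import uuid
from datetime import datetime, timezone

# Minimum fields (source, detail-type, detail) plus the recommended
# id / version / time. "orders.service" is a made-up source name.
event = {
    "id": str(uuid.uuid4()),                        # unique per event
    "version": "1.0",                               # event contract version
    "time": datetime.now(timezone.utc).isoformat(), # creation time
    "source": "orders.service",                     # must not start with "aws"
    "detail-type": "OrderCreated",
    "detail": {"orderId": "123"},
}

payload = json.dumps(event)  # events travel as JSON documents on the bus
```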

&lt;h2&gt;
  
  
  Example:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/jbesw/eventbridge-sam-example" rel="noopener noreferrer"&gt;https://github.com/jbesw/eventbridge-sam-example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This GitHub repo from James Beswick, Principal Developer Advocate for AWS Serverless, demonstrates setting up an event bus and multiple microservices that produce and consume events, using the AWS SAM framework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Further Reading:
&lt;/h2&gt;

&lt;p&gt;AWS Samples:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aws-samples" rel="noopener noreferrer"&gt;https://github.com/aws-samples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building Event-Driven architectures on AWS Workshop:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/63320e83-6abc-493d-83d8-f822584fb3cb/en-US" rel="noopener noreferrer"&gt;https://catalog.us-east-1.prod.workshops.aws/workshops/63320e83-6abc-493d-83d8-f822584fb3cb/en-US&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="http://Microservices.io" rel="noopener noreferrer"&gt;Microservices.io&lt;/a&gt;: (Patterns, explanations of terminology and more)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://microservices.io/index.html" rel="noopener noreferrer"&gt;https://microservices.io/index.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Software Architecture: The Hard Parts:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oreilly.com/library/view/software-architecture-the/9781492086888/" rel="noopener noreferrer"&gt;https://www.oreilly.com/library/view/software-architecture-the/9781492086888/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building Event-Driven Microservices:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.oreilly.com/library/view/building-event-driven-microservices/9781492057888/" rel="noopener noreferrer"&gt;https://www.oreilly.com/library/view/building-event-driven-microservices/9781492057888/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eventbridge</category>
      <category>microservices</category>
    </item>
    <item>
      <title>Deploying Docker Containers to Lambda using AWS SAM and CodePipeline (Part 1)</title>
      <dc:creator>John Walker</dc:creator>
      <pubDate>Sat, 01 Oct 2022 01:56:14 +0000</pubDate>
      <link>https://dev.to/aws-builders/deploying-docker-containers-to-lambda-using-aws-sam-and-codepipeline-part-1-4nm5</link>
      <guid>https://dev.to/aws-builders/deploying-docker-containers-to-lambda-using-aws-sam-and-codepipeline-part-1-4nm5</guid>
      <description>&lt;p&gt;Deploying to AWS lambda has many advantages, covering cost and speed of execution. There are many options for deployment, including through the console, AWS CLI, CloudFormation and AWS SAM. However, the standard Lambda package size has a limit of 50mb zipped and 250mb unzipped, which in some cases can cause issues, especially with some machine learning frameworks, where the dependencies can hit this limit before you've even written any code.&lt;/p&gt;

&lt;p&gt;To solve this issue, you can deploy your code inside a Docker container, which has an effective limit of up to 10 GB.&lt;/p&gt;

&lt;p&gt;Part 1 of this solution will go through a number of AWS services to show how to use AWS SAM to deploy a Docker container to AWS Lambda, based on the Hello World template.&lt;/p&gt;

&lt;p&gt;Part 2 will show how to add this to CodeBuild and CodePipeline and deploy automatically via GitHub.&lt;/p&gt;

&lt;p&gt;Part 1 will use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS SAM&lt;/li&gt;
&lt;li&gt;AWS Lambda&lt;/li&gt;
&lt;li&gt;AWS API Gateway&lt;/li&gt;
&lt;li&gt;Docker&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Part 2 will then use:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Github&lt;/li&gt;
&lt;li&gt;AWS Github App&lt;/li&gt;
&lt;li&gt;AWS CodeStar Connection&lt;/li&gt;
&lt;li&gt;AWS Code Pipeline&lt;/li&gt;
&lt;li&gt;AWS CodeBuild&lt;/li&gt;
&lt;li&gt;AWS CloudFormation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is what our deployment will look like after both parts are completed:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcalqlv151f2oho1pnqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcalqlv151f2oho1pnqb.png" alt="Deployment Architecture, using CodePipeline, CodeBuild, SAM CLI and Lambada" width="800" height="575"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step is to set up AWS Access Keys: &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once you have the Access Key and Secret Key, set them up in a terminal: either configure a profile, run 'aws configure' in the AWS CLI, or set environment variables.&lt;/p&gt;

&lt;p&gt;The core of this deployment method is to use the AWS SAM CLI. This deployment can be started with the sample hello world application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You will need the AWS CLI and the AWS SAM CLI:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="noopener noreferrer"&gt;AWS CLI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html" rel="noopener noreferrer"&gt;AWS SAM CLI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also create a blank application; however, some of the template.yaml file from the hello world sample application will be useful, and we'll be using it as the base of this guide.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#Step 1 - Download a sample application
sam init

#Step 2 - Build your application
cd sam-app
sam build

#Step 3 - Deploy your application
sam deploy --guided
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will deploy your Lambda application and create an API Gateway for you. The guided deploy offers some options: make sure you allow role creation and, at this step, allow deployment without authorisation (which you should set up later if you don't want a public API), as sam will not deploy without it.&lt;/p&gt;

&lt;p&gt;You could stop here, but this guide will show you how to turn the Lambda deployment into a Docker container, and then automate the deployment with CodePipeline and CodeBuild linked to GitHub.&lt;/p&gt;

&lt;p&gt;Within the generated sam-app folder, the first file you will need to update is template.yaml. The section that creates the Lambda function is the resource of type AWS::Serverless::Function.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.9
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This deploys the Lambda as a zipped Python 3.9 application with the x86_64 architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  Containerising the Lambda
&lt;/h2&gt;

&lt;p&gt;To change this to use Docker we need to make some changes and define an external Dockerfile. The main changes are the PackageType, ImageConfig and Metadata sections. We also remove CodeUri, Handler and Runtime, as these are defined in the Dockerfile.&lt;/p&gt;

&lt;p&gt;We've also added a HelloWorldAPI section for the API, though one would otherwise be implicitly defined. If you leave it out you will see some warnings about reserved keywords. This section is also where you can add CORS later if needed.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Resources:
  HelloWorldAPI:
    Type: AWS::Serverless::Api
    Properties:
      Name: Hello World API
      StageName: Prod
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      PackageType: Image
      ImageConfig:
        Command: ["app.lambda_handler"]
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            RestApiId: !Ref HelloWorldAPI
            Path: /hello
            Method: get
    Metadata:
      Dockerfile: hello_world_docker
      DockerContext: .
      DockerTag: v1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then create a Dockerfile that builds our container.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM public.ecr.aws/lambda/python:3.9

COPY hello_world/. ./

RUN python3.9 -m pip install -r requirements.txt

CMD ["app.lambda_handler"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This uses the Python 3.9 image from the Elastic Container Registry, copies our code into the container's working directory, installs the dependencies from requirements.txt (which just contains the requests module for now) and then sets lambda_handler as the entry point.&lt;/p&gt;
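&lt;p&gt;For reference, the handler the container runs, app.lambda_handler, is the hello world sample's function, which looks roughly like this (the generated sample also includes some commented-out requests code we don't need here):&lt;/p&gt;

```python
import json

def lambda_handler(event, context):
    """Minimal hello world handler, as in the SAM sample application."""
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "hello world"}),
    }
```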

&lt;p&gt;We then need to build the container with sam:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam build --use-container -t template.yaml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can then deploy the container to Lambda using:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sam deploy --stack-name sam-app --resolve-s3 --capabilities CAPABILITY_IAM --resolve-image-repos
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;(You may need to remove any samconfig.toml left over from a previous sam deploy, as its stored values, particularly the S3 parameters, may override the options above.)&lt;/p&gt;

&lt;p&gt;This will deploy your lambda to a stack named sam-app, work out the s3 bucket to use for deployment, give permission to create IAM roles for deployment, and work out which ECR Repository to use.&lt;/p&gt;

&lt;p&gt;With this deployed, you can go to the API Gateway URL mentioned in the output of the sam deploy command. You should see:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"message": "hello world"}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Part 2 will create these commands inside a CodeBuild environment, allowing a Code Pipeline to take the code from a GitHub repository, then use the SAM CLI inside CodeBuild to build the container and deploy it automatically.&lt;/p&gt;

&lt;p&gt;You can also run the aws cloudformation describe-stacks command inside CodeBuild to get the generated API URL, and use command-line tools like sed to substitute this URL into other files, such as a front end (which could then also be deployed by CodeBuild, or another pipeline).&lt;/p&gt;
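&lt;p&gt;That substitution step can be sketched in Python. The JSON below is a trimmed, hypothetical example of describe-stacks output, and HelloWorldApi is the output key used by the hello world template:&lt;/p&gt;

```python
import json

# Trimmed example of `aws cloudformation describe-stacks` JSON output;
# the URL and account details are placeholders.
describe_stacks_json = """
{"Stacks": [{"StackName": "sam-app",
             "Outputs": [{"OutputKey": "HelloWorldApi",
                          "OutputValue": "https://abc123.execute-api.eu-west-2.amazonaws.com/Prod/hello/"}]}]}
"""

# Pull the API URL out of the stack outputs.
outputs = json.loads(describe_stacks_json)["Stacks"][0]["Outputs"]
api_url = next(o["OutputValue"] for o in outputs if o["OutputKey"] == "HelloWorldApi")

# Substitute the URL into a front-end config, much as sed would.
frontend = "const API_URL = '__API_URL__';"
frontend = frontend.replace("__API_URL__", api_url)
```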

&lt;p&gt;Part 2 coming soon.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>communitybuilder</category>
    </item>
  </channel>
</rss>
