<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Oksana Horlock</title>
    <description>The latest articles on DEV Community by Oksana Horlock (@oksanah).</description>
    <link>https://dev.to/oksanah</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1723%2FYQ1Qc9cT.jpg</url>
      <title>DEV Community: Oksana Horlock</title>
      <link>https://dev.to/oksanah</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/oksanah"/>
    <language>en</language>
    <item>
      <title>Setting up local AWS development environment with Localstack</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Fri, 20 Jan 2023 07:19:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/setting-up-local-aws-development-environment-with-localstack-27oe</link>
      <guid>https://dev.to/aws-builders/setting-up-local-aws-development-environment-with-localstack-27oe</guid>
      <description>&lt;p&gt;When Cloud services are used in an application, it can be tricky to mock them during local development. Some approaches include: 1) doing nothing, letting your application fail when it makes a call to a Cloud service; 2) creating sets of fake data to return from calls to AWS S3, for example; 3) using an account in the Cloud for development purposes. A nice in-between solution is Localstack, a Cloud service emulator. While the number of available services and their functionality can be somewhat limited compared to the real AWS environment, it can still work very well for local development.&lt;/p&gt;

&lt;p&gt;This article will describe how to set Localstack up for local development in Docker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Docker-compose setup:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the services section of our &lt;em&gt;docker-compose.yml&lt;/em&gt; we have the Localstack container definition:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;localstack:
    image: localstack/localstack:latest
    hostname: localstack
    environment:
      - SERVICES=s3,sqs
      - HOSTNAME_EXTERNAL=localstack
      - DATA_DIR=/tmp/localstack/data
      - DEBUG=1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - AWS_DEFAULT_REGION=eu-central-1
    ports:
      - "4566:4566"
    volumes:
      - localstack-data:/tmp/localstack:rw
      - ./create_localstack_resources.sh:/docker-entrypoint-initaws.d/create_localstack_resources.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Although we don’t need to connect to any AWS account, we do need dummy AWS variables (with any value). We specify which services we want to run using Localstack – in this case it’s SQS and S3.&lt;/p&gt;

&lt;p&gt;We also need to set HOSTNAME_EXTERNAL because the SQS API needs the container to be aware of the hostname it can be accessed on.&lt;/p&gt;

&lt;p&gt;Another point is that we cannot use the entrypoint definition, because Localstack has a directory &lt;em&gt;docker-entrypoint-initaws.d&lt;/em&gt; from which shell scripts are run when the container starts up. That’s why we’re mapping the container volume to the folder where those scripts are. In our case &lt;em&gt;create_localstack_resources.sh&lt;/em&gt; will create all the necessary S3 buckets and the SQS queue:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXPECTED_BUCKETS=("bucket1" "bucket2" "bucket3")
EXISTING_BUCKETS=$(aws --endpoint-url=http://localhost:4566 s3 ls --output text)

echo "creating buckets"
for BUCKET in "${EXPECTED_BUCKETS[@]}"
do
  echo $BUCKET
  if [[ $EXISTING_BUCKETS != *"$BUCKET"* ]]; then
    aws --endpoint-url=http://localhost:4566 s3 mb s3://$BUCKET
  fi
done

echo "creating queue"
EXPECTED_QUEUE="my-queue"
EXISTING_QUEUE=$(aws --endpoint-url=http://localhost:4566 sqs list-queues --output text)
if [[ $EXISTING_QUEUE != *"$EXPECTED_QUEUE"* ]]; then
    aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name my-queue\
    --attributes '{
      "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:eu-central-1:000000000000:my-dlq-queue\",\"maxReceiveCount\":\"3\"}",
      "VisibilityTimeout": "120"
    }'
fi
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that the command syntax is the same as the real AWS CLI, except that each command includes the Localstack endpoint flag &lt;em&gt;--endpoint-url=&lt;a href="http://localhost:4566"&gt;http://localhost:4566&lt;/a&gt;&lt;/em&gt; (without it you’d create resources in the account for which you have credentials set up!).&lt;/p&gt;
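&lt;p&gt;The same endpoint switch applies in application code. As a rough sketch (the &lt;em&gt;LOCALSTACK_ENDPOINT&lt;/em&gt; variable name and the helper are hypothetical, not from the project above), the keyword arguments for a boto3 client could be assembled like this:&lt;/p&gt;

```python
import os

# Hypothetical helper: build keyword arguments for a boto3 client so the
# same code path works against Localstack locally and real AWS elsewhere.
# The LOCALSTACK_ENDPOINT variable name is an assumption for illustration.
def client_kwargs(service, region="eu-central-1"):
    kwargs = {"service_name": service, "region_name": region}
    endpoint = os.environ.get("LOCALSTACK_ENDPOINT")
    if endpoint:
        # Localstack accepts any non-empty dummy credentials.
        kwargs.update(
            endpoint_url=endpoint,
            aws_access_key_id="test",
            aws_secret_access_key="test",
        )
    return kwargs

os.environ["LOCALSTACK_ENDPOINT"] = "http://localhost:4566"
print(client_kwargs("sqs"))
```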

&lt;p&gt;&lt;strong&gt;Configuration files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this example I'm using Scala with the Play framework, and therefore have &lt;em&gt;.conf&lt;/em&gt; files. In the &lt;em&gt;local.conf&lt;/em&gt; file we have the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws {
   localstack.endpoint="http://localstack:4566"
   region = "eu-central-1"
   s3.bucket1 = "bucket1"
   s3.bucket2 = "bucket2"
   sqs.my_queue = "my-queue"
   sqs.queue_enabled = true
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The real &lt;em&gt;application.conf&lt;/em&gt; file has resource names injected at the instance startup. They live in an autoscaling group launch template where they are created by Terraform (out of scope of this post).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Initializing SQS client based on the environment&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The example here is for creating an SQS client. Below are snippets most relevant to the topic.&lt;/p&gt;

&lt;p&gt;In order to initialize the SQS Service so that it can be injected into other services we can do this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lazy val awsSqsService: QueueService = createsSqsServiceFromConfig()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;em&gt;createsSqsServiceFromConfig.scala&lt;/em&gt; we check if the configuration has a Localstack endpoint, and if so, we build the LocalStack client:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;protected def createsSqsServiceFromConfig(): QueueService = {
  readSqsClientConfig().map { config =&amp;gt;
  val sqsClient: SqsClient = config.localstackEndpoint match {
    case Some(endpoint) =&amp;gt; new LocalStackSqsClient(endpoint, config.region)
    case None =&amp;gt; new AwsSqsClient(config.region)
  }
  new SqsQueueService(config.queueName, sqsClient)
 }.getOrElse(fakeAwsSqsService)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;readSqsClientConfig.scala&lt;/em&gt; is used to get configuration values from &lt;em&gt;.conf&lt;/em&gt; files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private def readSqsClientConfig = {
   val sqsName = config.get[String]("aws.sqs.my_queue")
   val sqsRegion = config.get[String]("aws.region") 
   val localStackEndpoint = config.getOptional[String]("aws.localstack.endpoint")
   SqsClientConfig(sqsName, sqsRegion, localStackEndpoint)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, LocalStackSqsClient initialization looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;class LocalStackSqsClient(endpoint: String, region:String) extends SqsClient with Logging {
    private val sqsEndpoint = new EndpointConfiguration(endpoint, region)
    private val awsCreds = new BasicAWSCredentials("test", "test")
    private lazy val sqsClientBuilder = AmazonSQSClientBuilder.standard()
      .withEndpointConfiguration(sqsEndpoint)
      .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
    private lazy val client = sqsClientBuilder.build()


override def BuildClient(): AmazonSQS = {
        log.debug("Initializing LocalStack SQS service")
        client
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Real AWS Client for the test/live environment (a snippet):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AmazonSQSClientBuilder.standard()
      .withCredentials(new DefaultAWSCredentialsProviderChain)
      .withRegion(region)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Notice that we need fake &lt;em&gt;BasicAWSCredentials&lt;/em&gt;, which lets us pass in a dummy AWS access key and secret key, and we then wrap them in &lt;em&gt;AWSStaticCredentialsProvider&lt;/em&gt;, an implementation of &lt;em&gt;AWSCredentialsProvider&lt;/em&gt; that simply wraps static &lt;em&gt;AWSCredentials&lt;/em&gt;. When the real AWS environment is used, instead of &lt;em&gt;AWSStaticCredentialsProvider&lt;/em&gt; we use &lt;em&gt;DefaultAWSCredentialsProviderChain&lt;/em&gt;, which falls back to the EC2 instance role if it’s unable to find credentials by any other method.&lt;/p&gt;
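&lt;p&gt;Stripped of the SDK types, the selection rule above reduces to a few lines. Here is an illustrative Python restatement (the names and dict shape are made up for the sketch, not taken from the project):&lt;/p&gt;

```python
# Illustrative restatement of the client-selection rule: a Localstack
# endpoint in the configuration means a client with static dummy
# credentials; no endpoint means the default AWS credential chain.
def choose_sqs_client(config):
    endpoint = config.get("localstack_endpoint")
    if endpoint is not None:
        return {"provider": "static", "endpoint": endpoint}
    return {"provider": "default-chain", "endpoint": None}

print(choose_sqs_client({"localstack_endpoint": "http://localstack:4566"}))
print(choose_sqs_client({}))
```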

&lt;p&gt;And that’s it. Happy coding!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>localstack</category>
      <category>cloud</category>
      <category>scala</category>
    </item>
    <item>
      <title>Merry Christmas from an AWS Community Builder</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Tue, 20 Dec 2022 21:05:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/merry-christmas-from-an-aws-community-builder-il4</link>
      <guid>https://dev.to/aws-builders/merry-christmas-from-an-aws-community-builder-il4</guid>
      <description>&lt;p&gt;It's the most wonderful time of the year...&lt;br&gt;
And a good reason to write another blog post, this time about the AWS Community Builders Program and my experience with it. Make sure you check the posts from fellow Community Builders too.&lt;/p&gt;

&lt;h2&gt;
  
  
  What surprises you most about the community builders program?
&lt;/h2&gt;

&lt;p&gt;I think when I joined I wasn't expecting there to be quite so many rewards: for example, when you create a really cool project, when you create a lot of content, or when your content gets a lot of views, to name but a few.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s your background and your experience with AWS?
&lt;/h2&gt;

&lt;p&gt;I started my career in tech in 2015. Before that I had done a variety of jobs, including waitressing, translation and teaching English. I started off doing web development and then moved to full-stack development. I've been working with AWS for just over 2.5 years now. Before that I had no idea about the Cloud or AWS. In my current job we work with AWS a lot, and when I first started there in 2020 I really wanted to learn what it was and why it's so popular. It was a steep learning curve for me, but the certification courses that I took, as well as various AWS resources in the Training portal, helped me a lot. I liked AWS so much that I decided to take up a Site Reliability Engineer position at my company because I knew it would mean getting involved with AWS to a greater extent. I believe that finding out about AWS has been a life changer for me.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s the biggest benefit you see from the program?
&lt;/h2&gt;

&lt;p&gt;For me, it is the access to people from all over the world, including AWS employees, who are as passionate about AWS as I am, and do their best to help out with any questions I might have on my cloud journey. Even though I've never met most of the program members, it does feel like a friendly, supportive and inclusive community.&lt;/p&gt;

&lt;h2&gt;
  
  
  What’s the next swag item that you would like to get?
&lt;/h2&gt;

&lt;p&gt;I'm always cold, so a pair of gloves, a hat or a scarf would definitely not go amiss :)&lt;/p&gt;

&lt;h2&gt;
  
  
  What are you eating for dinner today? Share the recipe!
&lt;/h2&gt;

&lt;p&gt;I'm going out for my work's Christmas party, and one of the dishes I will be eating is &lt;em&gt;Slow roasted soy marinated pork belly - butterbean and sage cassoulet lemongrass anise and cider broth&lt;/em&gt;. Here is an &lt;a href="https://www.bbcgoodfood.com/recipes/slow-roast-belly-pork"&gt;example recipe&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is there anything else you would like to say about the Community Builders program in 2022?
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;To the organizers/program leaders:&lt;/em&gt; thank you very much for creating a safe place for us to learn and ask questions, for putting in a lot of effort into our lifelong learning and for being there when we need you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;To potential applicants:&lt;/em&gt; If you want to learn about Amazon Web Services and what they can offer, and share your learnings with the wider world, please consider applying for a place in this program. It's a great feeling to be part of this group of people with such varied experience, who can help you solve AWS-related issues and whom you can help in turn. It will undoubtedly be beneficial for your personal and professional development, like it was for me.&lt;/p&gt;

&lt;p&gt;Merry Christmas, everyone!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>community</category>
      <category>cbchristmas2022</category>
    </item>
    <item>
      <title>How to Terraform multiple security groups with varying configuration</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Fri, 23 Sep 2022 15:26:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-terraform-multiple-security-group-with-varying-configuration-1638</link>
      <guid>https://dev.to/aws-builders/how-to-terraform-multiple-security-group-with-varying-configuration-1638</guid>
      <description>&lt;p&gt;Recently I had to work on standardizing security configuration for some servers. These were created manually, as were all security groups associated with them.&lt;/p&gt;

&lt;p&gt;We wanted to ensure that we knew exactly what ports were open for which server, and ported the configuration of the security groups to Terraform with a view to removing the old security groups and applying Terraform changes to create new ones.&lt;/p&gt;

&lt;p&gt;This blog post explains how to create several security groups with varying configuration.&lt;/p&gt;

&lt;p&gt;Firstly, I put all the configuration into the variables.tf file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "config" {
   default = {
    "server1" = {
       ports  = [
        {
          from = 3000
          to = 3000
          source="0.0.0.0/0"
        },
        {
          from = 3000
          to = 3000
          source="::/0"
        },
         {
          from = 25
          to = 25
          source="0.0.0.0/0"
        },
        {
          from = 587
          to = 587
          source ="0.0.0.0/0"
        },
        {
          from = 1433
          to = 1433
          source="sg-1234"
        },
        {       
          from = 0
          to = 65535
          source= "1.2.3.4/32"
        }
      ]
    },
     "server2" = {
      ports = [       
         {
          from = 2001
          to = 2001
          source="0.0.0.0/0"
        },
        {
          from = 2001
          to = 2001
          source="::/0"
        },
         {
          from = 24001
          to = 24001
          source="0.0.0.0/0"
        },
        {
          from = 24001
          to = 24001
          source="::/0"
        },
        {
          from = 1433
          to = 1433
          source="sg-1234"
        }
      ]
    }
     "server3" = {
        ports = null
    }, 
     "server4" = {
        ports = {
          from = 1433
          to = 1433
          source="sg-1234"
        },
     },
     "server5" = {
      ports = null
     }
   } 
 }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This variable contains ports/ranges that are open only on some instances.&lt;br&gt;
The traffic source of port 1433 includes the ID of an already existing security group.&lt;/p&gt;

&lt;p&gt;At the beginning I tried to create two types of elements in the map: one for single ports and another one for port ranges, but then realised it's easier to put everything into one type of element.&lt;/p&gt;

&lt;p&gt;By default, each server has ports 80 and 443 open to all traffic, and port 3389 (Remote Desktop) open from a specific IP.&lt;/p&gt;

&lt;p&gt;After creating the variable with configuration for each server, I defined a security group for each server using Terraform &lt;a href="https://www.terraform.io/language/meta-arguments/for_each"&gt;&lt;em&gt;for_each&lt;/em&gt; meta argument&lt;/a&gt;. The name and tags of each security group created in this way contain the name of the server so that it's easily identifiable:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_security_group" "server_access_sg" {
  for_each = var.config
  name = "${each.key}-sg"
  description = "The security group for ${each.key}"
  vpc_id = data.aws_vpc.default.id

  tags = {
    "Server" = "${each.key}"
    "Provider" = "Terraform"
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Within each of the security groups I also used in-line &lt;em&gt;ingress&lt;/em&gt; block to create security group rules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "http"
    cidr_blocks = ["0.0.0.0/0", "::/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "https"
    cidr_blocks = ["0.0.0.0/0", "::/0"]
  }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, for each of the ports I created a dynamic ingress block, using the &lt;a href="https://www.terraform.io/language/expressions/splat"&gt;splat expression&lt;/a&gt; &lt;em&gt;ports[*]&lt;/em&gt; to normalise the value: it wraps a single object into a one-element list and turns null into an empty list, so the dynamic block can always iterate over a list.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dynamic "ingress" {
  for_each = each.value.ports[*]
  content {
    from_port   =  ingress.value.from
    to_port     =  ingress.value.to
    protocol    = "tcp"
    cidr_blocks = ingress.value.from != 1433 ? [ ingress.value.source] : null 
    ipv6_cidr_block = ingress.value.source=="::/0" ? [ingress.value.source] : null
    security_groups =   ingress.value.from == 1433 ? [ ingress.value.source] : null 
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Please note that this is a nested loop: it iterates over the elements of the &lt;em&gt;each.value&lt;/em&gt; element of the outer loop. For example, for server1 the value is all the configuration inside the curly brackets:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"server1" = {
      ports = [...]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Because it's a nested loop, and &lt;em&gt;"each"&lt;/em&gt; already refers to the elements of the parent loop, the values inside the dynamic block are populated through the &lt;em&gt;"ingress"&lt;/em&gt; iterator instead.&lt;/p&gt;

&lt;p&gt;Another thing worth pointing out is the conditional creation of the &lt;em&gt;cidr_blocks/security_groups&lt;/em&gt; attributes. In this case the security_groups argument needs to be set only when the rule for port 1433 is being defined. Therefore I set whichever attribute is not needed to null.&lt;/p&gt;
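&lt;p&gt;The splat normalisation and the null-based attribute selection can be modelled outside Terraform. Here is a rough Python sketch (illustrative only; it routes an IPv6 source solely to ipv6_cidr_blocks):&lt;/p&gt;

```python
# Illustrative model of what the dynamic "ingress" block does:
# ports[*] turns null into [], a single object into a one-element list,
# and leaves a list as-is; then each rule sets exactly one source attribute.
def splat(ports):
    if ports is None:
        return []
    if isinstance(ports, dict):
        return [ports]
    return list(ports)

def build_rules(ports):
    rules = []
    for p in splat(ports):
        is_sg = p["from"] == 1433
        is_ipv6 = p["source"] == "::/0"
        rules.append({
            "from_port": p["from"],
            "to_port": p["to"],
            "protocol": "tcp",
            "cidr_blocks": None if (is_sg or is_ipv6) else [p["source"]],
            "ipv6_cidr_blocks": [p["source"]] if is_ipv6 else None,
            "security_groups": [p["source"]] if is_sg else None,
        })
    return rules

print(build_rules(None))  # []
print(build_rules({"from": 1433, "to": 1433, "source": "sg-1234"}))
```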

&lt;p&gt;Finally, I created an egress rule to allow all outgoing traffic for each security group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;egress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0", "::/0"]
  } 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Happy learning, and if you have any suggestions on improving the code above, please feel free to leave a comment :) &lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Clean 'em! Getting rid of unused AMIs using Python Lambda and Terraform</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Tue, 11 Jan 2022 22:33:49 +0000</pubDate>
      <link>https://dev.to/aws-builders/clean-em-getting-rid-of-unused-amis-using-python-lambda-and-terraform-4ekg</link>
      <guid>https://dev.to/aws-builders/clean-em-getting-rid-of-unused-amis-using-python-lambda-and-terraform-4ekg</guid>
      <description>&lt;p&gt;We are all aware that in the AWS-cloud world of today, immutable infrastructure and deployments are preferable. It is also a fact that immutable deployments mean we often create multiple Amazon Machine Images (AMIs). To reduce storage costs we might want to delete (or deregister, in AWS speak) these AMIs and the associated storage volumes.&lt;/p&gt;

&lt;p&gt;In this blog post I will describe how to set up an AMI cleaner for unused images.&lt;/p&gt;

&lt;p&gt;The main part is a Lambda function. It checks the images and deletes unused ones along with the accompanying EBS snapshots. The function is written in Python and uses Boto3, the AWS SDK for Python. It also relies on JMESPath, the query language the AWS CLI uses for querying JSON (more on it &lt;a href="https://jmespath.org/"&gt;here&lt;/a&gt;). The function takes the following in the "event" argument:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;regions (list of strings)&lt;/em&gt;: the regions in which you'd like to run the cleaner&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;max_ami_age_to_prevent_deletion (number)&lt;/em&gt;: if an AMI is older than the specified value (in days), it can safely be deleted&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;ami_tags_to_check (a map of strings where each object has a tag key and tag value)&lt;/em&gt;: if an image has the specified tags, it could be a candidate for deletion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Let's have a look at the helper methods that are used in the Lambda:&lt;/p&gt;

&lt;p&gt;1) A method to find AMIs used in autoscaling groups:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def imagesInASGs(region):
  amis = []
  autoscaling = boto3.client('autoscaling', region_name=region)
  print(f'Checking autoscaling groups in region {region}...')
  paginator = autoscaling.get_paginator('describe_auto_scaling_groups')

  page_iterator = paginator.paginate(
    PaginationConfig = {'PageSize': 10}
  )  
  filtered_asgs = page_iterator.search(f"AutoScalingGroups[*].[Instances[?LifecycleState == 'InService'].[InstanceId, LaunchTemplate.LaunchTemplateId,LaunchTemplate.Version]]")

  for key_data in filtered_asgs:
    matches = re.findall(r"'(.+?)'",str(key_data))
    instance_id = matches[0]
    template = matches[1]
    version = matches[2]
    print(f"Template found: {template} version {version}")

    if (template == ""):
      send_alert(f"AMI cleaner failure", f"Failed to find launch template that was used for instance {instance_id}")
      return

    ec2 = boto3.client('ec2', region_name = region)
    launch_template_versions = ec2.describe_launch_template_versions(
      LaunchTemplateId=template, 
      Versions=[version]
    );  
    used_ami_id = launch_template_versions["LaunchTemplateVersions"][0]["LaunchTemplateData"]["ImageId"]
    if not used_ami_id:
      send_alert(f"AMI cleaner failure", f"Failed to find AMI for launch template {template} version {version}")
      return    
    amis.append(used_ami_id)
  return amis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, by using boto3 we paginate through the autoscaling groups in a region. Then we use an equivalent of an AWS CLI query to get the details of the autoscaling groups that are most interesting for us:&lt;br&gt;
&lt;code&gt;filtered_asgs = page_iterator.search(f"AutoScalingGroups[*].[Instances[?LifecycleState == 'InService'].[InstanceId, LaunchTemplate.LaunchTemplateId,LaunchTemplate.Version]]")&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The result we get is a string, and by using this regex: &lt;code&gt;"'(.+?)'"&lt;/code&gt; we break down the string into separate variables.&lt;/p&gt;
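&lt;p&gt;To see what that regex does, here is a standalone snippet with a made-up search result (the string below is illustrative, not real boto3 output):&lt;/p&gt;

```python
import re

# A stringified JMESPath result looks roughly like a nested list of
# quoted values; the regex pulls out every single-quoted token in order.
key_data = "[['i-0abc123', 'lt-0def456', '3']]"
matches = re.findall(r"'(.+?)'", str(key_data))
instance_id, template, version = matches
print(instance_id, template, version)  # i-0abc123 lt-0def456 3
```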

&lt;p&gt;After that we use boto3 ec2 client to extract the AMI Id used in autoscaling groups, and save this value into an array.&lt;/p&gt;

&lt;p&gt;2) The next function will get AMI Ids that are used in running EC2s, including those that were not launched using autoscaling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def imagesUsedInEC2s(region):
  print(f'Checking instances that are not in ASGs in region {region}...')
  amis = []
  ec2_resource = boto3.resource('ec2', region_name = region)
  instances = ec2_resource.instances.filter(
    Filters=
    [
      {
        'Name': 'instance-state-name',
        'Values': [ 'running' ]
      }
    ])
  for instance in list(instances):
      amis.append(instance.image_id)

  return amis
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3) A method that creates AMI filters in the correct format. We pass in values as a &lt;em&gt;map(string)&lt;/em&gt; in Terraform, and we need to convert these values into the boto3 filter format, which is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
   'Name': 'tag:CatName',
   'Values': [ 'Boris' ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The method itself looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def makeAmiFilters(ami_tags):
  filters = [
    {
      'Name': 'state',
      'Values': ['available']
    }
  ]
  for key, value in ami_tags.items():
    filters.append({'Name': f'tag:{key}', 'Values': [f'{value}']})
  return filters
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
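&lt;p&gt;A standalone restatement of the helper with made-up tags shows the filter list it produces:&lt;/p&gt;

```python
# Standalone copy of makeAmiFilters (with the key/value iteration spelled
# out) applied to example tags, to show the boto3-style filter list.
def makeAmiFilters(ami_tags):
    filters = [{'Name': 'state', 'Values': ['available']}]
    for key, value in ami_tags.items():
        filters.append({'Name': f'tag:{key}', 'Values': [f'{value}']})
    return filters

print(makeAmiFilters({'CatName': 'Boris'}))
# [{'Name': 'state', 'Values': ['available']}, {'Name': 'tag:CatName', 'Values': ['Boris']}]
```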



&lt;p&gt;4) A function that sends a message to an SNS topic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def send_alert(subject, message):
  sns.publish(
    TargetArn=os.environ['sns_topic_arn'], 
    Subject=subject, 
    Message=message)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;5) The main function, or the handler:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
  amis_in_use = []
  total_amis_deleted = 0
  total_snapshots_deleted = 0
  try:
    regions = event['regions']
    max_ami_age_to_prevent_deletion = event['max_ami_age_to_prevent_deletion']

    filters = makeAmiFilters(event['ami_tags_to_check'])

    for region in regions:
      amis_in_use = list(set(imagesInASGs(region) + imagesUsedInEC2s(region)))
      ec2 = boto3.client('ec2', region_name = region)
      amis = ec2.describe_images(
        Owners = ['self'],
        Filters = filters
      ).get('Images')
      for ami in amis:
        now = datetime.now()
        ami_id = ami['ImageId']
        img_creation_datetime = datetime.strptime(ami['CreationDate'], '%Y-%m-%dT%H:%M:%S.%fZ')
        days_since_creation = (now - img_creation_datetime).days

        if ami_id not in amis_in_use and days_since_creation &amp;gt; max_ami_age_to_prevent_deletion:
          ec2.deregister_image(ImageId = ami_id)
          total_amis_deleted += 1

          for ebs in ami['BlockDeviceMappings']:
            if 'Ebs' in ebs:
              snapshot_id = ebs['Ebs']['SnapshotId']              
              ec2.delete_snapshot(SnapshotId=snapshot_id)
              total_snapshots_deleted += 1

    print(f"Deleted {total_amis_deleted} AMIs and {total_snapshots_deleted} EBS snapshots")

  except Exception as e:
    send_alert("AMI cleaner failure", str(e))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
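&lt;p&gt;The age check at the core of the handler can be exercised in isolation (the CreationDate value below is made up, but it uses the real AMI timestamp format):&lt;/p&gt;

```python
from datetime import datetime

# AMI CreationDate strings look like '2021-12-01T10:30:00.000Z';
# strptime with the %fZ suffix parses them, and subtracting two
# datetimes gives a timedelta whose .days drives the deletion decision.
creation = datetime.strptime('2021-12-01T10:30:00.000Z', '%Y-%m-%dT%H:%M:%S.%fZ')
now = datetime(2022, 1, 11)
days_since_creation = (now - creation).days
print(days_since_creation)  # 40
```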



&lt;p&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;&lt;br&gt;
A CloudWatch Events rule that fires on a schedule has the above Lambda function as a target. In this example, the function will run on the first day of every month:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_event_rule" "trigger" {
  name = "${var.name_prefix}-ami-cleaner-lambda-trigger"
  description = "Trigger that fires the AMI cleaner Lambda function"
  schedule_expression = "cron(0 0 1 * ? *)"
  tags = var.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The event target specifies an input to pass into the Lambda function, among other parameters (the values here are purely for example purposes):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_event_target" "clean_amis" {
  rule = aws_cloudwatch_event_rule.trigger.name
  arn = aws_lambda_function.ami_cleaner.arn
  input = jsonencode({
    ami_tags_to_check = {
     "Environment"="UAT"
     "Application"="MyApp"
    }
    regions = ["us-east-2", "eu-west-1"]
    max_ami_age_to_prevent_deletion = 7
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you'd like to create a test event for this Lambda function, you'll need to enter the following into the test event field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "regions": ["us-east-2", "eu-west-1"],
  "max_ami_age_to_prevent_deletion": 7,
  "ami_tags_to_check": {
    "Environment": "UAT",
    "Application": "MyApp"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The function itself needs to have the following Terraform resources defined:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_function" "ami_cleaner" {
  filename = "${path.module}/lambda.zip"
  function_name = "ami-cleaner-lambda"
  role = aws_iam_role.iam_for_lambda.arn
  handler = "lambda_function.lambda_handler"
  runtime = "python3.8"
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  tags = var.tags

  environment {
    variables = {
      sns_topic_arn = var.sns_topic_arn
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lambda_permission" "allow_cloudwatch_to_call_ami_cleaner" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.ami_cleaner.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.trigger.arn
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "archive_file" "lambda_zip" {
  type        = "zip"
  source_file = "${path.module}/lambda.py"
  output_path = "${path.module}/lambda.zip"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Using the &lt;em&gt;archive_file&lt;/em&gt; data source in Terraform is convenient because you won't need to create a zip with the function manually when you update it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lambda IAM Policy&lt;/strong&gt;&lt;br&gt;
For the Lambda function to perform the described operations on resources, the following IAM actions need to be allowed in the policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"ec2:DescribeImages", 
"ec2:DescribeInstances",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeLaunchTemplateVersions",

"ec2:DeregisterImage",
"ec2:DeleteSnapshot",
"autoscaling:DescribeAutoScalingGroups",
"sns:Publish"   
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To ensure the function can delete only AMIs and snapshots that carry a specific tag, rather than any at all, we can create the Terraform policy statement dynamically and restrict the policy to allow removal of resources only if they have a certain tag key and value:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "ami_cleaner_policy_doc" {
...
  dynamic "statement" {
    for_each = var.ami_tags_to_check
      content {
        actions = [
        "ec2:DeregisterImage",
        "ec2:DeleteSnapshot"
        ]
        resources = ["*"]
        condition {
          test     = "StringLike"
          variable = "aws:ResourceTag/${statement.key}"
          values = [statement.value]
        }        
        effect = "Allow"      
    }
  }   
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Of course, a lot of the values in Terraform can be set as variables. In this case, we can pass the following values as variables to the AMI cleaner module:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;tags&lt;/li&gt;
&lt;li&gt;regions&lt;/li&gt;
&lt;li&gt;sns_topic_arn&lt;/li&gt;
&lt;li&gt;ami_tags_to_check&lt;/li&gt;
&lt;li&gt;max_ami_age_to_prevent_deletion&lt;/li&gt;
&lt;li&gt;schedule_expression&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;SUMMARY&lt;/strong&gt;&lt;br&gt;
Hopefully, this post shows how to clean up AMIs based on tags across multiple AWS regions. I have learnt a lot from this piece of work, and I hope someone will learn something new about AWS or Terraform too.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>python</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Creating a WordPress blog using AWS Lightsail and Cloudflare</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Sat, 18 Dec 2021 22:10:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/creating-a-wordpress-blog-using-aws-lightsail-and-cloudflare-22mk</link>
      <guid>https://dev.to/aws-builders/creating-a-wordpress-blog-using-aws-lightsail-and-cloudflare-22mk</guid>
      <description>&lt;p&gt;I started my own blog a little more than one year ago. I  had wanted to have my own blog for a while and had a lot of ideas about how I wanted to create it and experiment with different tools and services. However, since having a child, the time I have for learning and exploring has become a really precious commodity. So I knew that the sooner I launched the website, the better. The final little nudge was reading Steve Gordon’s post about blogging which you can find &lt;a href="https://www.stevejgordon.co.uk/become-a-better-developer-through-blogging-part-1" rel="noopener noreferrer"&gt;here&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;This post describes how I set up a simple site for my blog. Time constraints were the main reason I chose WordPress and AWS Lightsail. I was quite surprised by how easy and quick it was to set everything up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prerequisites:&lt;/strong&gt; AWS account&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating a server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I followed this blog post up to part 6 to create a WordPress instance and attach a static IP to it: &lt;a href="https://aws.amazon.com/getting-started/hands-on/launch-a-wordpress-website/" rel="noopener noreferrer"&gt;https://aws.amazon.com/getting-started/hands-on/launch-a-wordpress-website/&lt;/a&gt;. It is very clear and easy to follow. FYI, Bitnami is an application stack that lets you host a WordPress website, so when you select the WordPress blueprint in AWS Lightsail, you are installing all the applications necessary to run WordPress on your server.&lt;/p&gt;

&lt;p&gt;I had bought a template to use, so after logging in to the WordPress admin dashboard, I navigated to Appearance-&amp;gt;Themes-&amp;gt;Add new-&amp;gt;Upload and uploaded my theme. The website was up and could be reached on the Internet by its IP address! Wasn’t that super easy?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Registering a domain name/creating DNS records&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I didn’t have a domain name, so I registered oxiehorlock.com using AWS Route 53, which is a Domain Name System (DNS) service. Before you register a domain, you should check the pricing here: &lt;a href="https://d32ze2gidvkk54.cloudfront.net/Amazon_Route_53_Domain_Registration_Pricing_20140731.pdf" rel="noopener noreferrer"&gt;https://d32ze2gidvkk54.cloudfront.net/Amazon_Route_53_Domain_Registration_Pricing_20140731.pdf&lt;/a&gt;. It varies depending on the top-level domain (the last part of the URL, for example .com or .org.uk). When registering a domain, a public hosted zone is created for you. This hosted zone holds the records that route Internet traffic for your domain and subdomains. Two records will be created by default:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;type NS – a name server record; it tells the Internet where to go to find a domain’s IP address. There are several NS values, so that if one name server is unavailable, queries can go to another one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;type SOA – a Start of Authority record; it contains some DNS information about the hosted zone, such as the name server that created the record, a serial number that you increment when you update the zone, the retry interval and so on (info on the record types AWS supports is &lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html" rel="noopener noreferrer"&gt;here&lt;/a&gt;)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Following that I created an A record in my hosted zone. An A record is the most fundamental record and it routes traffic to a resource such as a web server. I mapped my brand spanking new domain name to the static IP of my WordPress instance.&lt;/p&gt;
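&lt;p&gt;I did this through the console, but the same record can be created programmatically. Here is a minimal sketch with boto3 (the hosted zone ID, domain and IP below are placeholders, not my real values):&lt;/p&gt;

```python
def build_a_record_change(domain, ip_address, ttl=300):
    """Build the ChangeBatch payload that Route 53's
    change_resource_record_sets API expects for an A record UPSERT."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip_address}],
            },
        }]
    }

# With boto3 the call would look roughly like this (placeholders, not run here):
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000EXAMPLE",
#     ChangeBatch=build_a_record_change("oxiehorlock.com.", "203.0.113.10"),
# )
```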

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyxon2x0cucldzh27bh9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnyxon2x0cucldzh27bh9.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After that oxiehorlock.com was navigable on the Internet.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Making the site secure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First of all, I restricted access for port 22 to my own IP address (so that nobody else could SSH into the instance). I also didn’t want anyone to be able to access the empty blog so I restricted HTTP/HTTPS access to my own IP address too. These rules can be changed using the Networking tab of the Lightsail Console:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuyrs6f7lxjej24tgy07.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkuyrs6f7lxjej24tgy07.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I followed these tutorials to create an SSL certificate, firewall rules and securing the site using Cloudflare:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpress#link-the-lets-encrypt-certificate-files-in-the-apache-directory-wordpress" rel="noopener noreferrer"&gt;https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-using-lets-encrypt-certificates-with-wordpress#link-the-lets-encrypt-certificate-files-in-the-apache-directory-wordpress&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dannys.cloud/hardening-a-wordpress-website-on-aws-lightsail" rel="noopener noreferrer"&gt;https://dannys.cloud/hardening-a-wordpress-website-on-aws-lightsail&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;Useful tip:&lt;/em&gt; since the instance runs on Linux, I had to use the CLI. As I didn't have much experience with it at the time I was doing the setup, editing and saving files was slightly tricky. The easiest way for me was to run the &lt;em&gt;sudo nano path/to/file&lt;/em&gt; command, edit the file, press &lt;em&gt;Ctrl+X&lt;/em&gt; to exit, then Y to save the changes or N to discard them; &lt;em&gt;Ctrl+C&lt;/em&gt; cancels.&lt;/p&gt;

&lt;p&gt;I also removed Bitnami banner from the bottom right hand corner of the site pages by following the steps from this guide: &lt;a href="https://docs.bitnami.com/aws/how-to/bitnami-remove-banner/" rel="noopener noreferrer"&gt;https://docs.bitnami.com/aws/how-to/bitnami-remove-banner/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Although there is no need to change your DNS from Route53 to Cloudflare, after some time I did it as an experiment, since I had not had much experience with anything to do with DNS before. &lt;br&gt;
To be able to use Cloudflare for DNS management, the AWS name servers in your hosted zone's NS record need to be changed to Cloudflare's name servers. You would think you could just go to the hosted zone, select the NS record from the list, and edit it, right? However, after waiting a couple of days for the record to update, I started investigating what was wrong. It turned out the name servers needed to be changed from the Registered domains page:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlg3v85n1ejgpzki0az5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlg3v85n1ejgpzki0az5.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the moment you change your DNS provider, you will need to manage all your DNS records there.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Costs&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The pricing for deploying a WordPress site on AWS Lightsail the way I did comprises:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;domain registration cost ($12)&lt;/li&gt;
&lt;li&gt;a monthly instance plan ($3.50)&lt;/li&gt;
&lt;li&gt;half a dollar per hosted zone per month&lt;/li&gt;
&lt;li&gt;DNS queries – less than half a dollar per 1,000,000 queries (these include your own visits to the site, for example while amending the theme).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I sincerely hope my experience will help somebody out there make deploying a WordPress website on AWS Lightsail plain sailing.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscommunity</category>
      <category>lightsail</category>
    </item>
    <item>
      <title>How to set up CodeBuild test reports in CDK Pipelines (C#)</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Wed, 01 Sep 2021 11:52:54 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-set-up-codebuild-test-reports-in-cdk-pipelines-c-465p</link>
      <guid>https://dev.to/aws-builders/how-to-set-up-codebuild-test-reports-in-cdk-pipelines-c-465p</guid>
      <description>&lt;p&gt;I'm so happy to get into writing again - we’ve had a few challenging months: we had to self-isolate several times, the whole family was ill with a stomach bug, and our son is going through the terrible twos. So blogging, talks and working on professional development had to be put on the backburner.&lt;/p&gt;

&lt;p&gt;I finally had some time to finish this blog post about CDK Pipelines, which I had been working on probably since the beginning of the year, trying to figure out how to make CodeBuild test reports work with CDK Pipelines. When I got back to it last week, I saw that the API used in Developer Preview had been updated (more information on it &lt;a href="https://github.com/aws/aws-cdk/blob/master/packages/%40aws-cdk/pipelines/ORIGINAL_API.md" rel="noopener noreferrer"&gt;here&lt;/a&gt;). It now looks easier to plug the reports into this high-level construct. While the old API is still in use, I will focus on the new one.&lt;/p&gt;

&lt;p&gt;The purpose of this blog post is to demonstrate the set-up of CodeBuild test reports in CDK Pipelines for C#.&lt;/p&gt;

&lt;p&gt;I have written a simple .NET Core application which returns the day of the week when you pass in a date in the query string. There are also a couple of XUnit tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class UnitTests
{
    [Fact]
    public void DateInPast_ReturnsCorrectResult()
    {
        var controller = new HomeController();
        var date = new DateTime(1983, 2, 3);
        var expected = $"{String.Format("{0:d}", date)} was Thursday";
        var actual = controller.Get(date) as OkObjectResult;
        Assert.Equal(expected, actual.Value);
    }

    [Fact]
    public void DateInFuture_ReturnsCorrectResult()
    {
        var controller = new HomeController();
        var date = new DateTime(2033, 12, 9);
        var expected = $"{String.Format("{0:d}", date)} will be Friday";            
        var actual = controller.Get(date) as OkObjectResult;
        Assert.Equal(expected, actual.Value);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The file tree looks like this:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn4gnjtmrxmgwoxyuol0.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnn4gnjtmrxmgwoxyuol0.JPG" alt="file tree"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk32fvyz5h6nl5rllm94u.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk32fvyz5h6nl5rllm94u.JPG" alt="file tree"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For the task of creating CodeBuild test reports only without actually deploying the app, we will only work with &lt;em&gt;CdkPipelinesPipelineStack.cs&lt;/em&gt;. In my case this was the file created automatically on &lt;code&gt;cdk init&lt;/code&gt;, and it will contain the main pipeline.&lt;/p&gt;

&lt;p&gt;Firstly, before we build the pipeline, we need to create a connection to our Github repo and get its ARN. I wrote a post about it a while back – &lt;a href="https://oxiehorlock.com/2021/03/15/cdk-pipelines-and-github-fun/" rel="noopener noreferrer"&gt;AWS CDK Adventure Part 2: CDK Pipelines and GitHub fun&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace CdkPipelines
{
    public class CdkPipelinesPipelineStack : Stack
    {
        internal CdkPipelinesPipelineStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
        {
            var connectionArn = "arn:aws:codestar-connections:eu-west-1:01234567890:connection/12ae43b8-923e-4a01-ba4e-274454669859";
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then create a report group:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var reportGroup = new ReportGroup(this, "MyReports", new ReportGroupProps
{
    ReportGroupName = "MyReports"
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, we use the &lt;em&gt;CodePipeline&lt;/em&gt; construct in the &lt;em&gt;Amazon.CDK.Pipelines&lt;/em&gt; namespace to create the pipeline. If we didn’t want to have any CodeBuild reports, we would set up the pipeline like so:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var pipeline = new Amazon.CDK.Pipelines.CodePipeline(this, "WhatDayOfWeekPipeline", new CodePipelineProps
{
    PipelineName = "WhatDayOfWeekPipeline",
    SelfMutation = false,
    Synth = new ShellStep("synth", new ShellStepProps()
    {
        Input = CodePipelineSource.Connection("OksanaH/CDKPipelines", "main", new ConnectionSourceOptions()
        {
            ConnectionArn = connectionArn
        }),
        InstallCommands = new string[] { "npm install -g aws-cdk" },
        Commands = new string[] { 
            "cd App", "dotnet restore WhatDayOfWeekTests/WhatDayOfWeekTests.csproj",
            "dotnet test -c release WhatDayOfWeekTests/WhatDayOfWeekTests.csproj --logger trx --results-directory ./testresults",
            "cd ..", 
            "cdk synth" }
    })
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;One of the &lt;em&gt;CodePipelineProps&lt;/em&gt; is &lt;em&gt;SelfMutation&lt;/em&gt;: setting it to false is quite handy during development work – you can just run &lt;code&gt;cdk deploy&lt;/code&gt; and your local changes to the pipeline will be deployed, bypassing the GitHub repo.&lt;/p&gt;

&lt;p&gt;The &lt;em&gt;Synth&lt;/em&gt; property sets the pipeline up to pull from the GitHub repo and also runs the commands needed to produce the cloud assembly.&lt;/p&gt;

&lt;p&gt;In order to set up the reports, we need to customize the CodeBuild project, which can be done by using the &lt;em&gt;CodeBuildStep&lt;/em&gt; class instead of &lt;em&gt;ShellStep&lt;/em&gt;. The &lt;em&gt;CodeBuildStepProps&lt;/em&gt; class, in turn, has a &lt;em&gt;PartialBuildSpec&lt;/em&gt; property, which we can use to define the reports. The reports part of a &lt;em&gt;buildspec.yml&lt;/em&gt; file usually looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;version: 0.2
phases:
  ...

reports:
  XUnitTestResults:
    file-format: VisualStudioTrx
    files:
      - '**/*'
    base-directory: './testresults'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In CDK for C# the value of &lt;em&gt;PartialBuildSpec&lt;/em&gt; has to be created using &lt;em&gt;Dictionary&lt;/em&gt;, and the reports bit translated to CDK is below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var reports = new Dictionary&amp;lt;string, object&amp;gt;()
{                
    {
        "reports", new Dictionary&amp;lt;string, object&amp;gt;()
        {
            {
                reportGroup.ReportGroupArn, new Dictionary&amp;lt;string,object&amp;gt;()
                {
                    { "file-format", "VisualStudioTrx" },
                    { "files", "**/*" },
                    { "base-directory", "App/testresults" }
                }
            }
        }
    }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Another thing that needs to be created in order to work with CodeBuild test reports is a policy; otherwise you might see an error like this when you try to deploy the stack:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwle1t074486fd5qt1tnh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwle1t074486fd5qt1tnh.png" alt="auth error"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The policy allows several report-related actions on the report group we have created:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var policyProps = new PolicyStatementProps()
{
    Actions = new string[] { 
        "codebuild:CreateReportGroup",
        "codebuild:CreateReport",
        "codebuild:UpdateReport",
        "codebuild:BatchPutTestCases",
        "codebuild:BatchPutCodeCoverages" 
    },
    Effect = Effect.ALLOW,
    Resources = new string[] { reportGroup.ReportGroupArn }
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, we can define necessary &lt;em&gt;CodeBuildStepProps&lt;/em&gt; to set up reports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var step = new CodeBuildStep("Synth", new CodeBuildStepProps
{
    Input = CodePipelineSource.Connection("OksanaH/CDKPipelines", "main", new ConnectionSourceOptions()
    {
        ConnectionArn = connectionArn
    }),
    PrimaryOutputDirectory = "cdk.out",
    InstallCommands = new string[] { "npm install -g aws-cdk" },
    Commands = new string[] { 
        "cd App", 
        "dotnet restore WhatDayOfWeekTests/WhatDayOfWeekTests.csproj",                        
        "dotnet test -c release WhatDayOfWeekTests/WhatDayOfWeekTests.csproj --logger trx --results-directory ./testresults",
        "cd ..",
        "cdk synth"  
    },
    PartialBuildSpec = BuildSpec.FromObject(reports),
    RolePolicyStatements = new PolicyStatement[] { new PolicyStatement(policyProps) }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now all that is left to do is use the &lt;em&gt;CodeBuildStep&lt;/em&gt; as the value of the &lt;em&gt;Synth&lt;/em&gt; property:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var pipeline = new Amazon.CDK.Pipelines.CodePipeline(this, "WhatDayOfWeekPipeline", new CodePipelineProps
{
    PipelineName = "WhatDayOfWeekPipeline",
    Synth = step
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that we can commit the changes, run &lt;code&gt;cdk deploy&lt;/code&gt; and check the CodeBuild test report in the console:&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmx98edcm5tsm3ar4ke6.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmx98edcm5tsm3ar4ke6.JPG" alt="codebuild test reports"&gt;&lt;/a&gt;&lt;br&gt;
Beautiful!&lt;/p&gt;

&lt;p&gt;Useful links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/cdk_pipeline.html" rel="noopener noreferrer"&gt;AWS Documentation on CDK Pipelines&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cdkworkshop.com/40-dotnet/70-advanced-topics/100-pipelines.html" rel="noopener noreferrer"&gt;CDK Workshop&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/aws/aws-cdk/tree/master/packages/%40aws-cdk/pipelines" rel="noopener noreferrer"&gt;CDK Pipelines in GitHub&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>cdk</category>
      <category>devops</category>
    </item>
    <item>
      <title>Efficient copying between DynamoDB tables using Parallel Scans and Batch Write</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Sat, 07 Aug 2021 06:09:50 +0000</pubDate>
      <link>https://dev.to/aws-builders/efficient-copying-between-dynamodb-tables-using-parallel-scans-and-batch-write-1ec2</link>
      <guid>https://dev.to/aws-builders/efficient-copying-between-dynamodb-tables-using-parallel-scans-and-batch-write-1ec2</guid>
      <description>&lt;p&gt;Recently we had a situation where we needed to copy a large amount of data from a DynamoDB table into another one in a different account. Originally we used a Scan function to get items with a ExclusiveStartKey/LastEvaluatedKey  marker to check if more records needed to be obtained. Then we used PutItem API call to insert data into the destination table. Because there were several hundred thousand records in the table, it took several hours to copy them; at that point we decided that we needed to use a faster method to copy the data. We achieved a massive improvement of the copying times by using Parallel Scans and Writing Items in Batch. This post will provide an example of how to use them and compare the results of the methods we used.&lt;/p&gt;

&lt;p&gt;For demo purposes, I’m using the same account, rather than different ones. I have written a method to create a simple DynamoDB table programmatically. It will contain data about temperatures in 2020 in a few cities:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private static async Task&amp;lt;TableDescription&amp;gt; CreateTable(string name)
{
    var request = new CreateTableRequest
    {
        AttributeDefinitions = new List&amp;lt;AttributeDefinition&amp;gt;()
        {
            new AttributeDefinition{
                AttributeName = "City",
                AttributeType = "S"
            },
            new AttributeDefinition{
                AttributeName = "Date",
                AttributeType = "S"
            }
        },
        TableName = name,
        ProvisionedThroughput = new ProvisionedThroughput
        {
            ReadCapacityUnits = 15,
            WriteCapacityUnits = 15
        },
        KeySchema = new List&amp;lt;KeySchemaElement&amp;gt;
        {
            new KeySchemaElement
            {
                AttributeName="City",
                KeyType="HASH"
            },
            new KeySchemaElement
            {
                AttributeName="Date",
                KeyType="RANGE"
            }
        }
    };
    var response = await client.CreateTableAsync(request);
    return response.TableDescription;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I use the method above to create several tables (and also wait to ensure the table I need to populate first has been created):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;await CreateTable(sourceTableName);
await CreateTable(destinationTableNameSlow);
await CreateTable(destinationTableNameFast);

var describeTableRequest = new DescribeTableRequest
{
    TableName = sourceTableName
};

DescribeTableResponse describeTableResponse;
do
{
    System.Threading.Thread.Sleep(1000);
    describeTableResponse = await client.DescribeTableAsync(describeTableRequest);
}
while (describeTableResponse.Table.TableStatus != TableStatus.ACTIVE);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then I populate the source table with some data about temperatures:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for (var i = 0; i &amp;lt;= 100; i++)
{
    var putItemRequest = new PutItemRequest
    {
        TableName = sourceTableName,
        Item = new Dictionary&amp;lt;string, AttributeValue&amp;gt;()
        {
            {"City", new AttributeValue{S=cities[(new Random()).Next(cities.Length)]  } },
            {"Date", new AttributeValue{S=GetRandom2020Date()  } },
            {"Highest", new AttributeValue{N=(new Random()).Next(20,30).ToString()  } },
            {"Lowest", new AttributeValue{N=(new Random()).Next(1,10).ToString()  } }
        }
    };
    await client.PutItemAsync(putItemRequest);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the most interesting part starts. First, I call a method that copies the data slowly, using the Scan and PutItem API calls.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private static async Task CopySlowly()
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    var request = new ScanRequest
    {
        TableName = sourceTableName
    };

    var result = await client.ScanAsync(request, default);
    foreach (var item in result.Items)
    {
        var putItemRequest = new PutItemRequest
        {
            TableName = destinationTableNameSlow,
            Item = item
        };

        await client.PutItemAsync(putItemRequest, default);
    }
    sw.Stop();
    Console.Write($"Copy slow - {sw.ElapsedMilliseconds} milliseconds elapsed");
    Console.ReadLine();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since the demo table only has 100 items, I’m not using ExclusiveStartKey/LastEvaluatedKey with the Scan operation; those are definitely necessary for large tables, as a single Scan call returns a maximum of 1 MB of data.&lt;/p&gt;
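&lt;p&gt;For reference, that pagination loop can be sketched in Python against a boto3-style client (the code we actually ran was C#; this is only an illustration, with the client passed in so the loop can be followed without AWS access):&lt;/p&gt;

```python
def scan_all(client, table_name):
    """Scan a whole table by feeding each response's LastEvaluatedKey back
    in as the next request's ExclusiveStartKey; a single Scan call returns
    at most 1 MB of data."""
    items, start_key = [], None
    while True:
        kwargs = {"TableName": table_name}
        if start_key is not None:
            kwargs["ExclusiveStartKey"] = start_key
        page = client.scan(**kwargs)
        items.extend(page.get("Items", []))
        start_key = page.get("LastEvaluatedKey")
        if start_key is None:
            return items  # no marker means the scan is complete
```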

&lt;p&gt;I then call another method to copy data using Parallel Scans. I specify how many parallel worker threads I want to create by using the totalSegments variable. In this case it was set to 3, but DynamoDB allows up to 1,000,000 segments:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private static void CopyFast()
{
    Stopwatch sw = new Stopwatch();            
    sw.Start();
    Task[] tasks = new Task[totalSegments];
    for (int segment = 0; segment &amp;lt; totalSegments; segment++)
    {
        int tmpSegment = segment;
        tasks[segment] = Task.Run(() =&amp;gt; ScanSegment(tmpSegment));
    }

    Task.WaitAll(tasks);
    sw.Stop();

    Console.WriteLine($"Copy fast - {sw.ElapsedMilliseconds} milliseconds elapsed");
    Console.ReadLine();
}
private static async Task ScanSegment(int tmpSegment)
{
    var request = new ScanRequest
    {
        TableName = sourceTableName,
        Segment = tmpSegment,
        TotalSegments = totalSegments,
    };

    var result = await client.ScanAsync(request);

    for (var i = 0; i &amp;lt; result.Items.Count; i += 25)
    {
        var items = result.Items.Skip(i).Take(25).ToArray();
        var req = new BatchWriteItemRequest
        {
            RequestItems = new Dictionary&amp;lt;string, List&amp;lt;WriteRequest&amp;gt;&amp;gt;
            {
                {
                    destinationTableNameFast,
                    items.Select(item =&amp;gt; new WriteRequest(new PutRequest(item))).ToList()
                }
            }
        };

        await client.BatchWriteItemAsync(req);
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While a simple Scan accesses the table sequentially, Parallel Scans create several worker threads, each of which scans its own segment of the table. The BatchWriteItem operation allows you to submit PutItem requests for up to 25 items at a time.&lt;/p&gt;
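&lt;p&gt;Because of that 25-item limit, each segment's results have to be written in chunks, as in the &lt;em&gt;Skip/Take&lt;/em&gt; loop above. The same chunking, sketched generically in Python for clarity (note that in production you should also retry anything the response reports under &lt;em&gt;UnprocessedItems&lt;/em&gt;):&lt;/p&gt;

```python
def chunks(items, size=25):
    """Yield consecutive fixed-size slices; BatchWriteItem accepts at
    most 25 put/delete requests per call."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```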

&lt;p&gt;As a result, the difference in copying speed is noticeable:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lzu18lffb5vf8qnb8rj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2lzu18lffb5vf8qnb8rj.jpg" alt="Result"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Another point worth mentioning is that this task was helpful preparation for the AWS Developer Associate exam, which I took (and passed!) in February 2021 – there were a couple of questions about Parallel Scans and BatchWriteItem in practice tests, and I was very happy that I had come across this scenario at work and knew the correct answers straightaway!&lt;/p&gt;

&lt;p&gt;Happy learning!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>dynamodb</category>
      <category>awssdk</category>
      <category>csharp</category>
    </item>
    <item>
      <title>Creating S3 Object Lambda with CDK for C#</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Mon, 14 Jun 2021 23:06:52 +0000</pubDate>
      <link>https://dev.to/aws-builders/creating-s3-object-lambda-with-cdk-for-c-24lo</link>
      <guid>https://dev.to/aws-builders/creating-s3-object-lambda-with-cdk-for-c-24lo</guid>
      <description>&lt;p&gt;The moment I learnt that S3 Object Lambda was out, I knew I’d want to experiment with it. Why? For two reasons really – at work we have quite a few scenarios where the same objects in S3 need to be presented in different shapes or forms, with data extracted or content transformed, so I volunteered to speak about it and its use cases. The second reason was to practise using AWS CDK for C# more – I’ve mentioned a few times that there are very few examples in C#, and I thought it’d be a good idea to provide one more and hopefully make someone’s life easier.&lt;/p&gt;

&lt;p&gt;In a nutshell, S3 Object Lambda allows you to amend the data that you usually get by using S3 Get requests. The main characters in the story are:&lt;br&gt;
An S3 Bucket where we drop files we want to transform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var bucket = new Bucket(this, "xmlBucket", new BucketProps
{
    BucketName = "oxies-xml-bucket",
    RemovalPolicy = RemovalPolicy.DESTROY
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A Lambda function which will do the transformation. My Lambda function does a simple XML transformation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.Json.JsonSerializer))]
namespace TransformXML.Lambda
{    public class Handler
    {
        protected async Task&amp;lt;HttpResponseMessage&amp;gt; Transform(JObject request, ILambdaContext context)
        {
            try
            { 
                var s3Client = new AmazonS3Client();

                var input3Url = request["getObjectContext"]["inputS3Url"].ToString();
                var reqRoute = request["getObjectContext"]["outputRoute"].ToString();
                var token = request["getObjectContext"]["outputToken"].ToString();                

                using var httpClient = new HttpClient();
                var original = await httpClient.GetAsync(input3Url);

                var content = await original.Content.ReadAsStringAsync();

                var receivedXml = XDocument.Parse(content);
                var transformedXml = new XElement("article", receivedXml.Root.Element("body").Value);

                var toSend = new WriteGetObjectResponseRequest()
                {
                    Body = ToStream(transformedXml),
                    RequestRoute = reqRoute,
                    RequestToken = token
                };
                var response = await s3Client.WriteGetObjectResponseAsync(toSend);
            }
            catch (Exception ex)
            {
                context.Logger.Log($"ERROR: {ex.Message}; {ex.StackTrace}");              
            }
            return new HttpResponseMessage() { StatusCode = System.Net.HttpStatusCode.OK };            
        }

        private Stream ToStream(XElement onlyBodyXML)
        {
            return new MemoryStream(Encoding.UTF8.GetBytes(onlyBodyXML.ToString()));
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What is worth noting here is that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The event passed to the Lambda contains a &lt;em&gt;getObjectContext&lt;/em&gt; property whose &lt;em&gt;inputS3Url&lt;/em&gt; is a presigned URL you use to fetch the original object&lt;/li&gt;
&lt;li&gt;You need to build a new &lt;em&gt;WriteGetObjectResponseRequest&lt;/em&gt; that carries the transformed content. &lt;em&gt;WriteGetObjectResponseAsync&lt;/em&gt; sends the transformed object back when the Object Lambda Access Point is called.&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;RequestToken&lt;/em&gt; allows the Lambda to connect the response with the caller.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Before we start looking at the main part of the CDK stack, I’m going to use namespace aliases to save some typing:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;using S3ObjectLambdaCfnAccessPoint = Amazon.CDK.AWS.S3ObjectLambda.CfnAccessPoint;
using S3ObjectLambdaCfnAccessPointProps = Amazon.CDK.AWS.S3ObjectLambda.CfnAccessPointProps;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We then define the resources in the stack:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var function = new Function(this, "XMLTransformBody", new FunctionProps
{
    Runtime = Runtime.DOTNET_CORE_3_1,
    Code = Code.FromAsset("./TransformXMLLambda/bin/Release/netcoreapp3.1/publish"),
    Handler = "TransformXML.Lambda::TransformXML.Lambda.Handler::Transform",
    FunctionName = "XMLTransform",
    Timeout = Duration.Minutes(1)
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We also need to give the Lambda execution role the appropriate permissions like so (if you don't, &lt;em&gt;WriteGetObjectResponseAsync&lt;/em&gt; will return ERROR: Forbidden):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var policy = new PolicyStatement(new PolicyStatementProps
{
    Effect = Effect.ALLOW,
    Actions = new[] { "s3-object-lambda:WriteGetObjectResponse" },
    Resources = new[] { "*" }
});

function.AddToRolePolicy(policy);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And finally, the S3 Object Lambda Access Point. This is the access point that should be used in the application when making a &lt;em&gt;GetObject&lt;/em&gt; request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;var objectLambdaAccessPoint = new S3ObjectLambdaCfnAccessPoint(this, "S3ObjectLambdaAccessPoint", new S3ObjectLambdaCfnAccessPointProps
{
    Name = "transformxml",
    ObjectLambdaConfiguration = new S3ObjectLambdaCfnAccessPoint.ObjectLambdaConfigurationProperty()
    {
        CloudWatchMetricsEnabled = true,

        SupportingAccessPoint = supportingAccessPoint,

        TransformationConfigurations = new object[]
        {
            new S3ObjectLambdaCfnAccessPoint.TransformationConfigurationProperty()
            {
                Actions = new string[] { "GetObject" },

                ContentTransformation = new Dictionary&amp;lt;string, object&amp;gt;()
                {
                    { 
                        "AwsLambda", new Dictionary&amp;lt;string, string&amp;gt;()
                        {
                            {"FunctionArn", function.FunctionArn }
                        } 
                    }
                }
            }
        }
    }
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A few things here: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Actions array will always have just one element, "GetObject", since that is the only operation supported with Object Lambda.&lt;/li&gt;
&lt;li&gt;With CDK in C# you need to use a Dictionary wherever TypeScript would accept plain JavaScript arrays or untyped objects. I must say this bit is so much simpler in TypeScript!&lt;/li&gt;
&lt;/ul&gt;
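
&lt;p&gt;Note that the stack references a &lt;em&gt;supportingAccessPoint&lt;/em&gt; variable that isn't shown above. It is the ARN of a standard S3 Access Point on the bucket that the Object Lambda Access Point sits in front of. A minimal sketch of how it might be created (the construct id and name here are illustrative):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative: the standard S3 Access Point backing the Object Lambda one.
var supportingAp = new Amazon.CDK.AWS.S3.CfnAccessPoint(this, "SupportingAccessPoint",
    new Amazon.CDK.AWS.S3.CfnAccessPointProps
    {
        Bucket = bucket.BucketName,
        Name = "xml-supporting-ap"
    });

var supportingAccessPoint = supportingAp.AttrArn;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;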

&lt;p&gt;Here are some other findings:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;This error occurred when I changed the name of the Access Point to contain some capital letters or hyphens. So I just left it in lowercase.
&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffayl2w4dmofc6q39j2u7.JPG" alt="error"&gt;
&lt;/li&gt;
&lt;li&gt;In the &lt;em&gt;ContentTransformation&lt;/em&gt; container you can also send &lt;em&gt;FunctionPayload&lt;/em&gt;, and customize the behaviour of the function based on that payload.&lt;/li&gt;
&lt;li&gt;To use the Object Lambda Access Point all you need to do is to replace the &lt;em&gt;BucketName&lt;/em&gt; value of &lt;em&gt;GetObjectRequest&lt;/em&gt; with the ARN of the Object Lambda Access Point:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GetObjectRequest request = new GetObjectRequest
{
    BucketName = "arn:aws:s3-object-lambda:us-east-1:&amp;lt;account-id&amp;gt;:accesspoint/transformxml",
    Key = "example.xml"
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
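
&lt;p&gt;Reading the transformed object back is then an ordinary &lt;em&gt;GetObject&lt;/em&gt; call. A quick illustrative snippet:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Illustrative: the response body contains the transformed XML.
using var response = await s3Client.GetObjectAsync(request);
using var reader = new StreamReader(response.ResponseStream);
var transformedContent = await reader.ReadToEndAsync();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;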



&lt;p&gt;The full example is in my &lt;a href="https://github.com/OksanaH/s3ObjectLambda" rel="noopener noreferrer"&gt;GitHub repo&lt;/a&gt;.&lt;br&gt;
It took me a while to build the stack since I’ve not worked with L1 Constructs much before. A huge thanks to &lt;a href="https://twitter.com/petrabarus" rel="noopener noreferrer"&gt;Petra Novandi&lt;/a&gt; and the CDK team for giving me a hand, helping me to learn how S3 Object Lambda works, improving my knowledge of the CDK and enabling me to share my learnings with the world.&lt;/p&gt;

&lt;p&gt;Useful resources:&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/transforming-objects.html" rel="noopener noreferrer"&gt;Transforming objects with S3 Object Lambda&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.youtube.com/watch?v=GgLQWG0ifeI" rel="noopener noreferrer"&gt;Demo - S3 Object Lambda | AWS Events&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/cdk/latest/guide/work-with-cdk-csharp.html" rel="noopener noreferrer"&gt;Working with CDK in C#&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>lambda</category>
      <category>awscdk</category>
    </item>
    <item>
      <title>Useful tips to prepare for the AWS Certified Developer Associate Exam</title>
      <dc:creator>Oksana Horlock</dc:creator>
      <pubDate>Sat, 29 May 2021 21:04:46 +0000</pubDate>
      <link>https://dev.to/aws-builders/useful-tips-to-prepare-for-the-aws-certified-developer-associate-exam-4b0f</link>
      <guid>https://dev.to/aws-builders/useful-tips-to-prepare-for-the-aws-certified-developer-associate-exam-4b0f</guid>
      <description>&lt;p&gt;Ever since I started working with Amazon Web Services (1 year and 2 months ago), I've known I'd want to take an exam. The exam preparation process works very well for me and gives me a structured learning programme. Besides, given the multitude of AWS offerings, I wanted to limit the scope in some way; even though the exams I took still cover quite a wide range of services and products, the exam guide told me exactly what to focus on.&lt;/p&gt;

&lt;p&gt;Since I've been working as a developer for 5 years, I thought it'd be logical to take the AWS Certified Developer Associate Exam. I must admit that I started preparing for the exam way too early - just a few months after my first hands-on AWS experience - and after looking at a few courses and exam questions, I realised I was missing foundational knowledge about the Cloud. That's why I decided to start with the AWS Certified Cloud Practitioner exam. It was very helpful for me because during the preparation process I gained indispensable knowledge of AWS Cloud concepts, security and the core services which we use at work.&lt;/p&gt;

&lt;p&gt;After that I started preparing for the AWS Certified Developer Associate Exam, which is quite technical, and requires some knowledge of AWS CLI and API methods. Below is the advice I'd give to anyone who would like to take this exam:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The most important thing is &lt;strong&gt;hands-on experience&lt;/strong&gt; using the AWS CLI, the console or the SDK (all of these can come up in the exam). In my experience, when you're not familiar with a technology, you tend not to pick up the tasks related to it, worrying that if you do, your colleagues will have to spend too much time explaining things to you. I had to change my way of thinking and pick up AWS-related work even though at the beginning I had little idea of what that work would involve. Luckily, my teammates were very supportive, and little by little I was able to do tasks on AWS with more independence. Some of the useful tasks I did that could come up in the exam were using Parallel Scans in DynamoDB and defining IAM permissions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use &lt;strong&gt;AWS learning resources&lt;/strong&gt; to the full. It's quite amazing how many excellent resources of different types there are to help you prepare for the exams. My favourite ones are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://aws.amazon.com/training/self-paced-labs/"&gt;self-paced labs&lt;/a&gt; because they have different formats such as Hands-On Labs or Quest, and different levels. Some resources are not free here but you can pay for them with AWS credits. One way to get them is to attend AWS events.&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://www.aws.training/LearningLibrary?query=&amp;amp;filters=Language%3A1&amp;amp;from=0&amp;amp;size=15&amp;amp;sort=_score"&gt;AWS Learning library&lt;/a&gt;. This is a part of the AWS Training and Certification portal, so you need to register there.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Attend &lt;a href="https://aws.amazon.com/events/"&gt;&lt;strong&gt;AWS events&lt;/strong&gt;&lt;/a&gt;. There are specific Training and Certification events, Dev Days and Online Tech Talks, as well as &lt;a href="https://aws.amazon.com/developer/community/usergroups/"&gt;AWS User Groups&lt;/a&gt; around the world, and they are free! Another good thing is that if you register for an event, you can later get access to the presentation slides and/or recordings. AWS re:Invent (and some other events) include Jams and Jam Lounges - sets of realistic AWS challenges of varying levels. You can do them on your own or in a team in an auto-provisioned AWS environment, and you can also compete with other teams if you like. An example of a task that came up in a Jam was installing a CloudWatch Agent on a server. Finally, my favourite AWS event has been &lt;a href="https://aws.amazon.com/gameday/?nc1=h_ls"&gt;AWS GameDay&lt;/a&gt; - an event where you have to solve a real-world problem using AWS solutions in an account provisioned for you. &lt;a href="https://aws.amazon.com/events/summits/online/emea/"&gt;AWS Summit Online&lt;/a&gt; will include some hands-on labs, so be sure to register soon.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Blogging&lt;/strong&gt; turned out to be of great help during the exam preparation to reinforce the knowledge in written form. Even though the posts weren't very long, I found that when I was writing up about an AWS service or an aspect of it, I wanted to investigate things a bit more to ensure that what I wrote was correct. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are a lot of learning platforms that provide AWS exam preparation courses and practice tests. While some of them are excellent, others might not be that great. I think everyone who is preparing to sit an AWS exam should make use of &lt;a href="https://d1.awsstatic.com/training-and-certification/docs-dev-associate/AWS-Certified-Developer-Associate_Sample-Questions.pdf"&gt;&lt;strong&gt;AWS-provided sample exam questions&lt;/strong&gt;&lt;/a&gt; - the explanation of why each option is correct or incorrect is really handy and can itself serve as a source of learning. There is also a practice test that you can purchase in the training portal. It costs $20, and although you can take it as many times as you like, you have to pay every time. Because it's not free, I think it's best saved till the end of your preparation period.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It's very important to be committed to learning, but we all have busy lives, families and other matters to attend to, so passive learning - listening to AWS podcasts or to recordings of the events described earlier - can still help you get ready for AWS certification. I found this to be of great assistance when I wanted a general understanding of a service I hadn't come across yet, because some podcasts and events, albeit technical, stay high level and let you focus on the bigger picture while you're on the go. Attention Spanish speakers - an AWSome podcast is &lt;a href="https://aws-espanol.buzzsprout.com/"&gt;Charlas Técnicas&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I passed both the AWS Certified Cloud Practitioner and Developer Associate exams. The skills I obtained during exam preparation are invaluable in my job (and it's nice to have your knowledge validated), and I feel that I have made a big step from where I was last year (a total AWS beginner with no knowledge of the Cloud whatsoever) to where I am now - able to develop, deploy and debug AWS-based applications, and to understand and use the most important Amazon Web Services and best practices. I'd encourage Software Developers who work with AWS, or are only starting to, to challenge themselves and take the AWS Certified Developer Associate exam - I'm sure you'll deepen your knowledge of the AWS Cloud.&lt;/p&gt;




</description>
      <category>aws</category>
      <category>exams</category>
    </item>
  </channel>
</rss>
