<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Joris Conijn</title>
    <description>The latest articles on DEV Community by Joris Conijn (@nr18).</description>
    <link>https://dev.to/nr18</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F986511%2F2a6b38b5-bdd4-4528-9d45-404657392ae2.jpeg</url>
      <title>DEV Community: Joris Conijn</title>
      <link>https://dev.to/nr18</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nr18"/>
    <language>en</language>
    <item>
      <title>Fixing oversized artifacts AWS CDK Pipelines</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Thu, 29 May 2025 17:12:13 +0000</pubDate>
      <link>https://dev.to/aws-builders/fixing-oversized-artifacts-aws-cdk-pipelines-33nc</link>
      <guid>https://dev.to/aws-builders/fixing-oversized-artifacts-aws-cdk-pipelines-33nc</guid>
      <description>&lt;p&gt;I built a workload using AWS CDK, and the CodePipeline stopped working at the worst moment. It was right at the end of the sprint; we had done multiple deployments before. But at this moment, the moment you might recognize. When this last PR is released, we will have met our deadline! Kaboom, the pipeline stops working, and the reason for the failure is not related to the changes you made.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F05%2Ffixing-oversized-artifacts-in-aws-cdk-pipelines-codepipeline.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F05%2Ffixing-oversized-artifacts-in-aws-cdk-pipelines-codepipeline.png" title="Fixing oversized artifacts AWS CDK Pipelines 2" alt="CodePipeline Failure Max Artifact Size" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Some background
&lt;/h2&gt;

&lt;p&gt;When you use CDK, you can write infrastructure as code using your favorite programming language. CDK comes with certain constructs that make your life easier. For example, you can easily create a CodePipeline that will fetch the code from your repository. It will then synthesize the CDK code into CloudFormation. The CodePipeline will then upload all assets to an S3 bucket, and the CloudFormation templates will be deployed to the target accounts. This CloudFormation template will contain references to the assets on S3. When the template is deployed, the assets are fetched from the S3 Bucket.&lt;/p&gt;

&lt;h2&gt;
  
  
  What has happened?
&lt;/h2&gt;

&lt;p&gt;I ran into the problem that CodePipeline will pass an artifact from step to step. This artifact is a single zipped file, which makes it easy to pass it along the pipeline.&lt;/p&gt;

&lt;p&gt;When CodePipeline deploys the CloudFormation templates, it uploads the zipped file and knows where inside it each template is located. However, all other assets are also in the zip file; this could be the code that you want to run in Lambda functions or even Lambda Layers. You can imagine that as your workload matures, more and more assets end up in this zip file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is that a problem?
&lt;/h2&gt;

&lt;p&gt;Well, the artifact has exceeded the maximum size! With CDK projects, this is something you will run into sooner or later: you usually only add things to a project and rarely remove them. And, as you already learned, it always happens at a moment you don’t expect. In my case, at the worst possible time, right before my deadline.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can we fix this?
&lt;/h2&gt;

&lt;p&gt;The answer is quite simple. As I already explained, the problem is the size of the artifact being passed to the CloudFormation deploy action. We just have to shrink the artifact after all the assets it contains have been uploaded to S3.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;private stripAssets(pipeline: pipelines.CodePipeline) {
 const policies = [
   new iam.PolicyStatement({
     effect: iam.Effect.ALLOW,
     resources: [`arn:aws:s3:::${this.artifactBucket.bucketName}/*`],
     actions: ['s3:PutObject'],
   }),
 ];

 if (this.artifactBucket.encryptionKey) {
   policies.push(
     new iam.PolicyStatement({
       effect: iam.Effect.ALLOW,
       resources: [this.artifactBucket.encryptionKey.keyArn],
       actions: ['kms:GenerateDataKey'],
     }),
   );
 }
 pipeline.addWave('BeforeStageDeploy', {
   pre: [
     new pipelines.CodeBuildStep('StripAssetsFromAssembly', {
       input: pipeline.cloudAssemblyFileSet,
       commands: [
         'S3_PATH=${CODEBUILD_SOURCE_VERSION#"arn:aws:s3:::"}',
         'ZIP_ARCHIVE=$(basename $S3_PATH)',
         'echo $S3_PATH',
         'echo $ZIP_ARCHIVE',
         'ls',
         'rm -rfv asset.*',
         'zip -r -q -A $ZIP_ARCHIVE *',
         'ls',
         'aws s3 cp $ZIP_ARCHIVE s3://$S3_PATH',
       ],
       rolePolicyStatements: policies,
     }),
   ],
 });
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What happens in the code sample?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;We make sure that we have &lt;code&gt;s3:PutObject&lt;/code&gt; rights on the artifact bucket. (We need to overwrite the existing artifact.)&lt;/li&gt;
&lt;li&gt;We need &lt;code&gt;kms:GenerateDataKey&lt;/code&gt; rights. (Optional; I am using KMS encryption on the bucket.)&lt;/li&gt;
&lt;li&gt;We add a &lt;code&gt;CodeBuildStep&lt;/code&gt; that consumes the artifact.&lt;/li&gt;
&lt;li&gt;The artifact is downloaded and unzipped by CodeBuild.&lt;/li&gt;
&lt;li&gt;We derive the artifact's S3 location from &lt;code&gt;CODEBUILD_SOURCE_VERSION&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;We remove all &lt;code&gt;asset.*&lt;/code&gt; files.&lt;/li&gt;
&lt;li&gt;We zip the remaining files into a new artifact.&lt;/li&gt;
&lt;li&gt;We upload that artifact over the original.&lt;/li&gt;
&lt;/ul&gt;
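
&lt;p&gt;The shell commands derive the artifact's S3 location from the &lt;code&gt;CODEBUILD_SOURCE_VERSION&lt;/code&gt; variable, which for CodePipeline-triggered builds holds the artifact's S3 ARN. As a hypothetical illustration (not part of the pipeline itself), the same parsing in TypeScript:&lt;/p&gt;

```typescript
// Hypothetical TypeScript equivalent of the shell parsing in the build step.
// Input: the CODEBUILD_SOURCE_VERSION value, an S3 ARN of the artifact.
function parseArtifactLocation(sourceVersion: string): { s3Path: string; zipArchive: string } {
  // Strip the "arn:aws:s3:::" prefix, like ${CODEBUILD_SOURCE_VERSION#"arn:aws:s3:::"}
  const s3Path = sourceVersion.replace("arn:aws:s3:::", "");
  // basename: the last path segment of the object key
  const parts = s3Path.split("/");
  const zipArchive = parts[parts.length - 1];
  return { s3Path, zipArchive };
}
```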

&lt;p&gt;Because we placed the &lt;code&gt;CodeBuildStep&lt;/code&gt; in the &lt;code&gt;pre&lt;/code&gt; section of the wave, it runs right before the CloudFormation deploy actions of the stages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Hitting CodePipeline’s artifact limit can stall your sprint at the worst moment. By inserting a pre-deploy CodeBuildStep that strips the bulky assets after they’re safely uploaded to S3, you shrink the artifact without changing your workflow. This quick fix keeps your CDK pipeline flowing, your CloudFormation stacks deploying, and your release deadlines intact.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/grayscale-photo-of-road-closed-3907990/" rel="noopener noreferrer"&gt;Ellie Burgin&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/fixing-oversized-artifacts-in-aws-cdk-pipelines/" rel="noopener noreferrer"&gt;Fixing oversized artifacts AWS CDK Pipelines&lt;/a&gt; appeared first on &lt;a href="https://xebia.com" rel="noopener noreferrer"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
    <item>
      <title>AWS CDK and the Hidden Risks to Least Privilege</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Mon, 28 Apr 2025 09:28:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cdk-and-the-hidden-risks-to-least-privilege-aa9</link>
      <guid>https://dev.to/aws-builders/aws-cdk-and-the-hidden-risks-to-least-privilege-aa9</guid>
      <description>&lt;p&gt;Have we given up on the least privileged principle? Personally, I am a big fan of it. But let’s be honest, it can also be tough to follow the principle strictly. With the rise of CDK, it became even harder.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why does CDK make it harder?
&lt;/h2&gt;

&lt;p&gt;CDK is a great tool for developing your infrastructure. You can easily build resources using the level 1 and level 2 constructs. So far, so good. The problem lies within the level 2 constructs: they are somewhat opinionated about how you should use them. For example, when you want to store secrets in the cloud, you create a secret in Secrets Manager. This secret needs to be encrypted, so we will use KMS. This can easily be achieved through CDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;secret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;secretsmanager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Secret&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Secret&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
 &lt;span class="na"&gt;secretName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`MySecret`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;My super Secret&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="na"&gt;encryptionKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;kmsKey&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="na"&gt;generateSecretString&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="na"&gt;secretStringTemplate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;stringify&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;myUser&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt; &lt;span class="p"&gt;}),&lt;/span&gt;
   &lt;span class="na"&gt;generateStringKey&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;password&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
 &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see in the code example, we are using a customer-managed KMS key. We do this because we want to control who can use the key for decryption. Because you pass the key into the Secret construct, CDK tries to help you and adds a statement to your KMS key policy. The statement that gets added is the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::000000000000:root"&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="s2"&gt;"kms:CreateGrant"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="s2"&gt;"kms:Decrypt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="s2"&gt;"kms:DescribeKey"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="s2"&gt;"kms:Encrypt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="s2"&gt;"kms:GenerateDataKey*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="s2"&gt;"kms:ReEncrypt*"&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
     &lt;/span&gt;&lt;span class="nl"&gt;"kms:ViaService"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"secretsmanager.eu-west-1.amazonaws.com"&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
 &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Those familiar with KMS key policies will notice that any principal in the account can now use the key, as long as it is used through the Secrets Manager service. At first, you might think it is not that bad, or even convenient. But the problem is that any principal with an allow statement on the &lt;code&gt;secretsmanager:GetSecretValue&lt;/code&gt; action will be able to read your secret.&lt;/p&gt;

&lt;h2&gt;
  
  
  It gets worse
&lt;/h2&gt;

&lt;p&gt;You are storing the secret for a reason. You probably need to read it somewhere in your workload. To do that, you need to create a role and grant the role the rights to read the secret. This is easily done with CDK.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;grantRead&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;role&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But this simple statement again changes the KMS key policy. It adds the following statement:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Effect"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Allow"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Principal"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"AWS"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::000000000000:role/MyRole"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Action"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"kms:Decrypt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"kms:Encrypt"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"kms:GenerateDataKey*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="s2"&gt;"kms:ReEncrypt*"&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Resource"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"Condition"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"StringEquals"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"kms:ViaService"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"secretsmanager.eu-west-1.amazonaws.com"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At least it’s specific to the role, and this is what you want. But remember the previous policy? It already allows all principals to use this key.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to work around this?
&lt;/h2&gt;

&lt;p&gt;Sadly, the only way around this is to use the level 1 constructs. These constructs are not opinionated and map one-to-one to the CloudFormation resources. With a level 1 construct, you can specify the KMS key without CDK changing the key policy for you. The flip side is that you then need to allow every principal that reads the secret in your KMS key policy yourself.&lt;/p&gt;
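
&lt;p&gt;As a sketch, the level 1 &lt;code&gt;CfnSecret&lt;/code&gt; construct maps one-to-one to the &lt;code&gt;AWS::SecretsManager::Secret&lt;/code&gt; resource, roughly like this in CloudFormation (logical ids are hypothetical):&lt;/p&gt;

```yaml
# Hypothetical CloudFormation equivalent of the earlier level 2 example;
# referencing the key here does NOT modify the key's policy.
MySecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: MySecret
    Description: My super Secret
    KmsKeyId: !Ref MyKmsKey
    GenerateSecretString:
      SecretStringTemplate: '{"username": "myUser"}'
      GenerateStringKey: password
```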

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, while CDK greatly simplifies infrastructure development, it can unintentionally weaken strict least privilege practices, especially around KMS key policies. You may need to step down to level 1 constructs to maintain tighter control, trading some convenience for the precision and security your workloads deserve.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/black-handled-key-on-key-hole-101808/" rel="noopener noreferrer"&gt;AS Photography&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/aws-cdk-and-the-hidden-risks-to-least-privilege/" rel="noopener noreferrer"&gt;AWS CDK and the Hidden Risks to Least Privilege&lt;/a&gt; appeared first on &lt;a href="https://xebia.com" rel="noopener noreferrer"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
    <item>
      <title>Optimizing OpenSearch Ingestion: Ensuring Reliability, Efficiency, and Cost Savings</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Tue, 11 Mar 2025 12:15:38 +0000</pubDate>
      <link>https://dev.to/aws-builders/optimizing-opensearch-ingestion-ensuring-reliability-efficiency-and-cost-savings-1o98</link>
      <guid>https://dev.to/aws-builders/optimizing-opensearch-ingestion-ensuring-reliability-efficiency-and-cost-savings-1o98</guid>
      <description>&lt;p&gt;Ingesting data into an OpenSearch cluster looks easy if you read the documentation. The truth is it is easy, but it all depends on how much you care about the data you are ingesting. Let me go one step back. Why do we even use OpenSearch? With the rise of AI, you also need a knowledge base. These knowledge bases can be hosted in OpenSearch. However, to use the OpenSearch database, you must also fill it out with data.&lt;/p&gt;

&lt;p&gt;I was recently challenged to create such a database for GenAI purposes. We had a dataset that needed to be loaded into OpenSearch, and &lt;a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ingestion.html" rel="noopener noreferrer"&gt;Amazon OpenSearch Ingestion&lt;/a&gt; sounds like a service that can help you with that.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F03%2Foptimizing-opensearch-ingestion-ensuring-reliability-efficiency-and-cost-savings-Ingestion.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F03%2Foptimizing-opensearch-ingestion-ensuring-reliability-efficiency-and-cost-savings-Ingestion.jpg" title="Optimizing OpenSearch Ingestion: Ensuring Reliability, Efficiency, and Cost Savings" alt="Ingestion Example from AWS" width="800" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Preparing the data
&lt;/h2&gt;

&lt;p&gt;First of all, the data is never in the format you want it to be in, so we need to transform it into something we can ingest. For this use case, I used Step Functions (but &lt;a href="https://xebia.com/blog/avoid-costly-loops-in-aws-step-functions/" rel="noopener noreferrer"&gt;avoid costly loops&lt;/a&gt;): each incoming document triggers an execution. In my case, I wanted to be able to search the documents by semantic meaning. To do this efficiently, we need to chunk the documents into smaller pieces so that we can generate embeddings for each chunk, which can then be used for semantic search. You create a single file with all chunks; each chunk carries the same metadata, and only the chunk text and chunk identification differ. Once we have this file, we place it on S3, S3 sends an event notification to an SQS queue, and the OpenSearch Ingestion pipeline ingests it into the OpenSearch database.&lt;/p&gt;
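
&lt;p&gt;For illustration, a naive fixed-size chunker could look like the following sketch; the real chunking strategy, chunk sizes, and helper names are up to you:&lt;/p&gt;

```typescript
// Naive fixed-size chunker, for illustration only; real chunking would
// respect sentence or paragraph boundaries and add overlap between chunks.
function chunkText(text: string, maxLen: number): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length !== 0) {
    chunks.push(rest.slice(0, maxLen));
    rest = rest.slice(maxLen);
  }
  return chunks;
}
```

&lt;p&gt;Each chunk would then be written to the single ingestion file together with the shared document metadata and its own chunk identification.&lt;/p&gt;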

&lt;h2&gt;
  
  
  So far so good
&lt;/h2&gt;

&lt;p&gt;There is nothing wrong with the approach I just described. This is a textbook example coming from the AWS documentation pages. But an ingestion pipeline will cost you money even when you don’t ingest anything. For this reason, I developed a Lambda function that would stop the pipeline when no messages are in the queue. It will also start the pipeline when there are messages in the queue. This was a nice optimization from a cost perspective.&lt;/p&gt;

&lt;p&gt;After some testing, we expected 10k documents in the system, but we only had around 5k. Where did the other documents go? The dead-letter queues were empty, and we did not have any log traces of any failures whatsoever… It turned out that when you stop the ingestion pipeline, all messages in the buffer are lost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Acknowledgments to the rescue
&lt;/h2&gt;

&lt;p&gt;You can argue a lot about the fact that when you stop a pipeline, the documents in the buffer are lost. But that is the reality that I needed to deal with. You can enable persistent buffering, which would at least help with the data loss part. But you still need to start the pipeline to continue the job. I looked more in the direction of SQS. The whole purpose of SQS is to queue messages and ensure they are processed. After some investigation, I noticed two options I wanted to share.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;version&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2'&lt;/span&gt;
&lt;span class="na"&gt;s3_pipeline&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;workers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
      &lt;span class="na"&gt;notification_type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;sqs&lt;/span&gt;
      &lt;span class="na"&gt;notification_source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;s3&lt;/span&gt;
      &lt;span class="na"&gt;codec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;json&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;{}&lt;/span&gt;
      &lt;span class="na"&gt;compression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;none&lt;/span&gt;
      &lt;span class="na"&gt;acknowledgments&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;records_to_accumulate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;100&lt;/span&gt;
      &lt;span class="na"&gt;on_error&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;retain_messages&lt;/span&gt;
      &lt;span class="na"&gt;sqs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;queue_url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://sqs.eu-west-1.amazonaws.com/000000000000/MyQueue&lt;/span&gt;
        &lt;span class="na"&gt;maximum_messages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt;
        &lt;span class="na"&gt;visibility_timeout&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;60s&lt;/span&gt;
        &lt;span class="na"&gt;poll_delay&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;0s&lt;/span&gt;
        &lt;span class="na"&gt;visibility_duplication_protection&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;aws&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eu-west-1&lt;/span&gt;
        &lt;span class="na"&gt;sts_role_arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::000000000000:role/MyRole&lt;/span&gt;
  &lt;span class="na"&gt;sink&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;opensearch&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;serverless&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
        &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;https://vpcxxxxxx.eu-west-1.es.amazonaws.com&lt;/span&gt;
        &lt;span class="na"&gt;index&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;documents-${/index}&lt;/span&gt;
        &lt;span class="na"&gt;bulk_size&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;5&lt;/span&gt;
        &lt;span class="na"&gt;max_retries&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
        &lt;span class="na"&gt;dlq&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;s3&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;bucket&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MyBucketName&lt;/span&gt;
            &lt;span class="na"&gt;key_path_prefix&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dlq/&lt;/span&gt;
            &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eu-west-1&lt;/span&gt;
            &lt;span class="na"&gt;sts_role_arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::000000000000:role/MyRole&lt;/span&gt;
        &lt;span class="na"&gt;aws&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;region&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;eu-west-1&lt;/span&gt;
          &lt;span class="na"&gt;sts_role_arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;arn:aws:iam::000000000000:role/MyRole&lt;/span&gt;
        &lt;span class="na"&gt;document_id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${/chunk_id}&lt;/span&gt;
        &lt;span class="na"&gt;actions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;delete&lt;/span&gt;
            &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/operation == "delete"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;update&lt;/span&gt;
            &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/operation == "update"&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;index&lt;/span&gt;
            &lt;span class="na"&gt;when&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;/operation == "index"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The two relevant configuration options are &lt;code&gt;acknowledgments&lt;/code&gt; and &lt;code&gt;visibility_duplication_protection&lt;/code&gt;. The former keeps the message in the queue until it has been ingested into the sink, and the latter ensures that the message stays in flight as long as it is in the buffer. Together they make the SQS queue behave as intended: when a failure occurs, the message reappears in the queue and is retried, and after the configured number of attempts it goes to the dead-letter queue. These settings ensured that all 10k documents in the batch were ingested as expected.&lt;/p&gt;

&lt;p&gt;The pipeline could still be stopped while messages were in flight, because the queue looks empty once all messages are invisible. To prevent this, the stop logic now checks the &lt;code&gt;ApproximateNumberOfMessagesNotVisible&lt;/code&gt; metric in addition to the &lt;code&gt;ApproximateNumberOfMessages&lt;/code&gt; metric.&lt;/p&gt;
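
&lt;p&gt;In other words, the stop decision should only fire when both metrics are zero. A minimal sketch of that check (the function name is hypothetical; the real Lambda reads the metric values from CloudWatch):&lt;/p&gt;

```typescript
// Only stop the ingestion pipeline when the queue holds no visible
// messages and no in-flight (invisible) messages.
function shouldStopPipeline(approximateNumberOfMessages: number, approximateNumberOfMessagesNotVisible: number): boolean {
  // Both metrics must be zero, i.e. their sum is zero.
  return approximateNumberOfMessages + approximateNumberOfMessagesNotVisible === 0;
}
```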

&lt;h2&gt;
  
  
  Dealing with updates and deletes
&lt;/h2&gt;

&lt;p&gt;I also wanted to share how we deal with updates and deletions. Ingestion of a new document is easy, but updates and deletions are different. When you have ingested a document with 15 chunks, and the updated version only has 12, you need to delete 3 and potentially update some chunks. For example, this might occur when you delete a paragraph in the middle of a text.&lt;/p&gt;

&lt;p&gt;First, you need to know if the document has already been ingested. A search action on the OpenSearch database can do this. This search action returns all the chunks already present in the database. You can use logic to determine what chunk needs to be updated and which needs to be deleted. We add three additional metadata fields in each chunk: &lt;code&gt;index&lt;/code&gt;, &lt;code&gt;chunk_id&lt;/code&gt;, and &lt;code&gt;operation&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;index&lt;/code&gt; field determines the index to which the chunk is ingested. The &lt;code&gt;chunk_id&lt;/code&gt; becomes the internal document id, and the &lt;code&gt;operation&lt;/code&gt; field determines whether the chunk needs to be indexed, updated, or deleted.&lt;/p&gt;
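&lt;p&gt;The logic that assigns these fields can be sketched as follows. This is an illustration rather than our exact implementation; the helper name is hypothetical, but the metadata fields match the ones above:&lt;/p&gt;

```python
# Illustrative sketch: compare the chunk ids already in OpenSearch with the
# chunk ids of the re-processed document, and tag every chunk with the
# operation the sink should perform. The helper name is hypothetical; the
# metadata fields match the ones described above.

def plan_operations(existing_ids: set, new_ids: set, index: str = "documents") -> list:
    plans = []
    for chunk_id in sorted(new_ids):
        op = "update" if chunk_id in existing_ids else "index"
        plans.append({"index": index, "chunk_id": chunk_id, "operation": op})
    for chunk_id in sorted(existing_ids - new_ids):  # chunks that disappeared
        plans.append({"index": index, "chunk_id": chunk_id, "operation": "delete"})
    return plans

# Document shrank from 3 chunks to 2: two updates and one delete.
plans = plan_operations({"doc1-0", "doc1-1", "doc1-2"}, {"doc1-0", "doc1-1"})
```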

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Ingesting data into OpenSearch sounds straightforward, but the real challenge lies in ensuring reliability, efficiency, and cost-effectiveness. While the textbook approach works, real-world scenarios often demand optimizations—like dynamically stopping the ingestion pipeline to save costs or leveraging SQS acknowledgments to guarantee message processing.&lt;/p&gt;

&lt;p&gt;By implementing these adjustments, we achieved a more resilient ingestion process that ensures all documents make it into OpenSearch as expected. The combination of Step Functions for document transformation, SQS for reliable queuing, and metadata tagging for updates and deletions provided a scalable and maintainable solution. At the end of the day, ingestion isn’t just about getting data into the system—it’s about making sure the right data gets there, stays there, and can be efficiently queried when needed.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/man-in-brown-leather-jacket-using-binoculars-3811807/" rel="noopener noreferrer"&gt;Andrea Piacquadio&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/optimizing-opensearch-ingestion-ensuring-reliability-efficiency-and-cost-savings/" rel="noopener noreferrer"&gt;Optimizing OpenSearch Ingestion: Ensuring Reliability, Efficiency, and Cost Savings&lt;/a&gt; appeared first on &lt;a href="https://xebia.com" rel="noopener noreferrer"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
    <item>
      <title>Cross-Stack RDS User Provisioning and Schema Migrations with AWS Lambda</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Tue, 04 Mar 2025 13:59:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/cross-stack-rds-user-provisioning-and-schema-migrations-with-aws-lambda-3p73</link>
      <guid>https://dev.to/aws-builders/cross-stack-rds-user-provisioning-and-schema-migrations-with-aws-lambda-3p73</guid>
      <description>&lt;p&gt;Have you ever been in a situation where you want to provision or configure things cross-stack? Splitting these into logical stacks is always good when dealing with more complex environments. I already shared this in one of my &lt;a href="https://xebia.com/blog/streamlining-workflows-with-feature-branches-and-logical-stacks/" rel="noopener noreferrer"&gt;previous blogs&lt;/a&gt;. But this also introduces a different problem!&lt;/p&gt;

&lt;h2&gt;
  
  
  Granting access to your database
&lt;/h2&gt;

&lt;p&gt;When you use DynamoDB, you have to use IAM to grant access. However, other databases like MySQL also have an internal authentication method. You have a few options when you create the database in one stack and a different stack wants to access the database.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use the credentials that you created at deployment time.&lt;/li&gt;
&lt;li&gt;Use identity and access management (AWS IAM).&lt;/li&gt;
&lt;li&gt;Create a local user per use-case.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, all options will work in the end, but they have their pros and cons. I am a big fan of the principle of least privilege. So, for me, using the credentials you created at deployment time grants too much. With these credentials, you can create databases and tables, which also means you can drop tables and delete databases. You can compare these credentials with the root credentials of a Linux system or the root user of your AWS account. The root account is there for a reason, and that reason is not day-to-day work. You should use those credentials for administrative tasks, not operational tasks.&lt;/p&gt;

&lt;p&gt;You could use AWS IAM, which lets us apply least privilege more strictly. You still need to create a user in the database, and that user receives grants on what it can do on the databases and tables. You can create a read-only user if you only need to read from the database; if you also need to write, you simply add that permission. Regardless of whether you go for option 2 or 3, you still need to provision the user and its permissions in the database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Provisioning RDS users
&lt;/h2&gt;

&lt;p&gt;After you create an RDS instance, you configure the database’s “root” credentials. &lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt; should generate these credentials. Nobody should use them for day-to-day work; they exist just in case. Since we don’t want to use the root credentials, our application needs its own user to access the database. For this, we can use a provisioner Lambda function that creates the local users in the database. You can simply invoke the Lambda function as a custom resource from the same template as the RDS instance.&lt;/p&gt;

&lt;p&gt;The Lambda function can retrieve the “root” credentials from Secrets Manager. It can use those credentials to connect to the database, create it, and load the initial schema. Next, it will create the user and the grants that the user needs. You don’t need to assign a password if you plan to use IAM. If you don’t want to use IAM, you should create the credentials in Secrets Manager and pass them dynamically into the custom resource. Afterward, your user is ready to use your application.&lt;/p&gt;
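&lt;p&gt;To make this concrete, here is a sketch of the statements such a provisioner could run for a MySQL-compatible database. The helper name, user, and grants are illustrative; the real function would execute these over a connection opened with the “root” credentials from Secrets Manager:&lt;/p&gt;

```python
# Sketch of the statements a provisioner Lambda could run against a
# MySQL-compatible database. It only builds the SQL here; the real function
# would execute it over a connection opened with the "root" credentials
# retrieved from Secrets Manager. Names and grants are illustrative.

def provision_user_sql(user: str, database: str, grants: list, use_iam: bool) -> list:
    if use_iam:
        # IAM database authentication: no password, the token is validated by RDS.
        create = (f"CREATE USER IF NOT EXISTS '{user}'@'%' "
                  "IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS'")
    else:
        # Placeholder bind parameter; the real value comes from Secrets Manager.
        create = f"CREATE USER IF NOT EXISTS '{user}'@'%' IDENTIFIED BY :password"
    grant = f"GRANT {', '.join(grants)} ON {database}.* TO '{user}'@'%'"
    return [create, grant, "FLUSH PRIVILEGES"]

statements = provision_user_sql("app_rw", "orders", ["SELECT", "INSERT", "UPDATE"], use_iam=True)
```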

&lt;h2&gt;
  
  
  Provisioning RDS users cross-stack
&lt;/h2&gt;

&lt;p&gt;But what if you have multiple stacks and want to use the RDS instance created in a different stack? The same principle applies. You still need to create the provisioner with the RDS instance. But instead of creating the custom resource in the template of the RDS instance, you create it in the other template. For this, you need the ARN of the Lambda function. You can use a naming schema or store the ARN in an SSM Parameter.&lt;/p&gt;

&lt;p&gt;The advantage of this is that you can invoke the creation of database users in the template where they are used. Plus, if you make the provisioner smart enough, you can pass on the grants the user will need. This way, you encapsulate the user, the rights, and the application into the same template. This will reduce the maintenance load on your application and its infrastructure.&lt;/p&gt;
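&lt;p&gt;A small sketch of the SSM-based lookup convention; the parameter naming scheme below is an assumption, not a standard:&lt;/p&gt;

```python
# Hypothetical convention: the RDS stack stores the provisioner Lambda ARN
# in an SSM parameter, and consuming stacks derive the parameter name from
# the environment instead of hard-coding the ARN.

def provisioner_parameter_name(environment: str) -> str:
    return f"/{environment}/rds/provisioner-arn"

# A consuming stack resolves this with ssm.get_parameter(...) or a
# CloudFormation dynamic reference like {{resolve:ssm:/prod/rds/provisioner-arn}}.
name = provisioner_parameter_name("prod")
```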

&lt;h2&gt;
  
  
  Schema migrations
&lt;/h2&gt;

&lt;p&gt;One of the other advantages of having a provisioner is that you can also manage schema migrations. A schema mutation can be invoked with the same ease as user creation, which lets you perform these migrations during the CloudFormation deployments. You do need to make sure that the application is backward compatible with the schema change.&lt;/p&gt;
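&lt;p&gt;One way to keep such migrations repeatable is to apply them in order and record which ones already ran. A minimal sketch, with illustrative migration names:&lt;/p&gt;

```python
# Sketch: apply schema migrations in order and skip the ones that already
# ran. The applied set would normally come from a schema_migrations table;
# migration names and SQL are illustrative.

MIGRATIONS = [
    ("001_create_orders", "CREATE TABLE orders (id INT PRIMARY KEY)"),
    ("002_add_status", "ALTER TABLE orders ADD COLUMN status VARCHAR(32)"),
]

def pending_migrations(applied: set) -> list:
    return [(name, sql) for name, sql in MIGRATIONS if name not in applied]

# Only migration 002 is pending when 001 ran in an earlier deployment.
todo = pending_migrations({"001_create_orders"})
```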

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Splitting infrastructure into multiple stacks keeps things organized, but it also introduces challenges like managing database access and schema changes. Using a Lambda function as a provisioner, you can automate user creation and schema migrations as part of your CloudFormation deployments. This approach keeps everything tightly scoped, ensures the least privilege principle, and reduces operational overhead. Whether you use IAM authentication or local users, making the provisioner smart enough to handle both ensures flexibility. With schema migrations baked into your deployments, you get a reliable and repeatable way to evolve your database without manual intervention.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/text-on-computer-monitor-10725897/" rel="noopener noreferrer"&gt;Muhammed Ensar&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/cross-stack-rds-user-provisioning-and-schema-migrations-with-aws-lambda/" rel="noopener noreferrer"&gt;Cross-Stack RDS User Provisioning and Schema Migrations with AWS Lambda&lt;/a&gt; appeared first on &lt;a href="https://xebia.com" rel="noopener noreferrer"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
    <item>
      <title>Securing S3 Downloads with ALB and Cognito Authentication</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Fri, 28 Feb 2025 08:09:16 +0000</pubDate>
      <link>https://dev.to/aws-builders/securing-s3-downloads-with-alb-and-cognito-authentication-10i1</link>
      <guid>https://dev.to/aws-builders/securing-s3-downloads-with-alb-and-cognito-authentication-10i1</guid>
      <description>&lt;p&gt;Securing an endpoint used to be hard. Nowadays, with the cloud, it’s quite easy. You only need to know how! Assume you have files on S3 that you like to share. You could make the object publicly available. This would allow your users to download the file using their browsers simply. If you need to scale it, you can add CloudFront. This would cache the content closer to your users, making sure that your users have the best performance. But what if you want to control who can download the file? For this, you will need authentication and authorization.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication vs Authorization
&lt;/h2&gt;

&lt;p&gt;Authentication is all about identifying who you are. First, we need to make sure that we know who the user is. Once we establish who the user is, we can see if the user can access the content. The latter is authorization. AWS has a service called Cognito that allows you to manage a pool of users. These users can originate from other sources like Google, Facebook, and your own identity provider. Or, if you don’t want to use identity providers, you can also create users directly in the user pool.&lt;/p&gt;

&lt;p&gt;You can also create groups, and based on these groups, you can manage the authorization. For example, you could make a group called developers. All users within this group should be allowed to fetch the build report hosted on S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Building the endpoint
&lt;/h2&gt;

&lt;p&gt;With the Cognito User Pool in place, we need a way to validate the user during the request. I am using an Application Load Balancer to invoke a Lambda function. This function can then check if the user can access the report. The logic is quite simple: if the user is part of the developers group, the user can read the report.&lt;/p&gt;

&lt;p&gt;In this case, we can use the native Cognito integration of the application load balancer. What this will do is the following:&lt;/p&gt;

&lt;p&gt;If the user is unauthenticated, the load balancer redirects to the Cognito-hosted UI. If you use your own identity provider, you are redirected to your company’s login page; if you host the users in the user pool, a login form is shown. After the user has logged in, a redirect follows and the user is authenticated. The load balancer then invokes the target group with the request.&lt;/p&gt;

&lt;p&gt;We also want to check if the user is in the developers group. We will use a Lambda function for this. The following code does the trick:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import base64
import json
from typing import List

def decode(data: str) -&amp;gt; dict:
    return json.loads(base64.b64decode(data.split('.')[1]).decode('utf-8'))

def resolve_groups(groups: str) -&amp;gt; List[str]:
    return list(map(lambda group: group.strip(), groups[1:-1].split(',')))

def handler(event, context) -&amp;gt; dict:
    code = 403
    description = "403 Access Denied"
    body = "Access Denied"
    user = decode(event["headers"]["x-amzn-oidc-data"])
    groups = resolve_groups(user.get('custom:groups', '[]'))

    if 'developers' in groups:
        code = 200
        description = "200 OK"
        body = f"Hi {user.get('name')}, you should be able to download the report"

    return {
        "statusCode": code,
        "statusDescription": description,
        "isBase64Encoded": False,
        "headers": {"Content-Type": "json; charset=utf-8"},
        "body": body,
    }

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The listener on the application load balancer and the user pool client can be configured as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Listener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref LoadBalancer
      Port: 443
      Protocol: HTTPS
      Certificates:
        - CertificateArn: arn:aws:acm:eu-west-1:111122223333:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      DefaultActions:
        - AuthenticateCognitoConfig:
            OnUnauthenticatedRequest: authenticate
            Scope: openid
            UserPoolArn: arn:aws:cognito-idp:eu-west-1:111122223333:userpool/eu-west-1_xXXXxxxx
            UserPoolDomain: my-user-pool-domain # Required property; use your Cognito domain prefix
            UserPoolClientId: !Ref UserPoolClient
          Order: 1
          Type: authenticate-cognito
        - Order: 2
          TargetGroupArn:
            Ref: LambdaTarget
          Type: forward

  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      AllowedOAuthFlows:
        - code
      AllowedOAuthFlowsUserPoolClient: true
      AllowedOAuthScopes:
        - profile
        - phone
        - email
        - openid
        - aws.cognito.signin.user.admin
      CallbackURLs:
        - https://&amp;lt;MyDomainName&amp;gt;/oauth2/idpresponse
      ClientName: MyClient
      GenerateSecret: true
      LogoutURLs:
        - https://&amp;lt;MyDomainName&amp;gt;/logout
      SupportedIdentityProviders:
        - COGNITO
      UserPoolId: eu-west-1_xXXXxxxx

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You also need to set up the Lambda function as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  LambdaSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      SecurityGroupEgress:
        - CidrIp: 0.0.0.0/0
          Description: Allow all outbound traffic by default
          IpProtocol: "-1"
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          Description: Allow HTTPS access
          FromPort: 443
          IpProtocol: tcp
          ToPort: 443
      VpcId: !Ref VPCAsParameter

  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Action: sts:AssumeRole
            Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
      ManagedPolicyArns:
        - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
        - !Sub arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole

  LambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        S3Bucket: my-bucket
        S3Key: path/to/code.zip
      Handler: index.handler
      Role: !GetAtt LambdaRole.Arn
      Runtime: python3.12
      VpcConfig:
        SecurityGroupIds:
          - !GetAtt LambdaSecurityGroup.GroupId
        SubnetIds:
          - !Ref Subnet1
          - !Ref Subnet2
          - !Ref Subnet3

  LambdaPermissions:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt LambdaFunction.Arn
      Principal: elasticloadbalancing.amazonaws.com

  LambdaTarget:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    DependsOn:
      - LambdaPermissions
    Properties:
      TargetType: lambda
      Targets:
        - Id: !GetAtt LambdaFunction.Arn

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will be prompted to log in when you navigate to the load balancer. Afterward, you can see that the Lambda function was invoked. This is a very simple example, but the idea is that you can extend the Lambda function with the logic you need. For example, you could create a pre-signed URL for the report on S3 and redirect the user directly to that URL, which would download the report automatically.&lt;/p&gt;
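&lt;p&gt;As a sketch of that extension, the handler could return a redirect to a pre-signed URL instead of a body. The bucket, key, and &lt;code&gt;presign&lt;/code&gt; helper below are hypothetical; in practice &lt;code&gt;presign&lt;/code&gt; would wrap boto3’s &lt;code&gt;generate_presigned_url&lt;/code&gt; for the &lt;code&gt;get_object&lt;/code&gt; operation:&lt;/p&gt;

```python
# Sketch: instead of returning a body, redirect the authorized user to a
# pre-signed S3 URL. The bucket, key, and `presign` helper are hypothetical;
# in practice presign would wrap boto3's generate_presigned_url for the
# get_object operation.

def redirect_to_report(user: dict, presign) -> dict:
    url = presign("my-report-bucket", f"reports/{user['name']}.pdf")
    return {
        "statusCode": 302,
        "statusDescription": "302 Found",
        "isBase64Encoded": False,
        "headers": {"Location": url},
        "body": "",
    }

# Stand-in for the real pre-signing call, to show the resulting response shape.
fake_presign = lambda bucket, key: f"https://{bucket}.s3.amazonaws.com/{key}?X-Amz-Signature=sig"
response = redirect_to_report({"name": "joris"}, fake_presign)
```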

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Securing access to your S3 files doesn’t have to be complicated. By leveraging AWS Cognito, an Application Load Balancer, and a simple Lambda function, you can control exactly who gets access to your files—without exposing them to the public. With this setup, authentication is handled seamlessly, and authorization is as simple as checking group memberships. From here, you can expand the functionality further, such as generating pre-signed URLs for downloads or adding more granular permissions. The cloud makes it easy—you just need to know how!&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/security-logo-60504/" rel="noopener noreferrer"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/securing-s3-downloads-with-alb-and-cognito-authentication/" rel="noopener noreferrer"&gt;Securing S3 Downloads with ALB and Cognito Authentication&lt;/a&gt; appeared first on &lt;a href="https://xebia.com" rel="noopener noreferrer"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
    <item>
      <title>Become a documentation ninja</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Wed, 19 Feb 2025 07:34:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/become-a-documentation-ninja-4c2n</link>
      <guid>https://dev.to/aws-builders/become-a-documentation-ninja-4c2n</guid>
      <description>&lt;p&gt;Writing documentation sucks! But there are ways to make it easier and maybe even fun! In one of my &lt;a href="https://xebia.com/blog/make-writing-documentation-part-of-your-pull-request/" rel="noopener noreferrer"&gt;previous blogs&lt;/a&gt;, I explained how you can embed your documentation in your pull requests and why you should consider doing it, too. In this blog, I want to go a bit deeper into the syntax you can use to improve your documentation pages on Confluence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Syntax? You mean Markdown, right?
&lt;/h2&gt;

&lt;p&gt;Yes, Markdown is the documentation syntax that mark uses. However, mark supports some extensions that are still valid Markdown and allow you to use the macros from Confluence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linking to other pages
&lt;/h2&gt;

&lt;p&gt;When you write documentation, you probably have different pages explaining different things. You can logically split areas, making it easier to navigate and read. But sometimes, you want to link from one page to another. The problem is that you don’t know the URL yet since that is a Confluence page. You could publish the page and look up the link, which kills the motivation to write documentation. Assume we have a page like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# My Confluence Page&lt;/span&gt;

My description of the page
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And we want to link to this page. You can simply do this with the following syntax:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;Read this on &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;My Confluence Page&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;ac:&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; or &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;read&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sx"&gt;ac:My&lt;/span&gt; Confluence Page) the page.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, you can link to the page by its title. Or you can use a custom word, like “read”, and link to the page. When mark uploads the page to Confluence, the links are resolved automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Linking to Jira issues
&lt;/h2&gt;

&lt;p&gt;Next to linking to pages, you will probably also want to refer to Jira issues. You could link to a Jira issue using native Markdown, but the Jira reference macro also shows the type, title, and status of the issue. This gives the reader immediate feedback on the status. What you need to do is define a macro at the top of your markdown file and mention the issue as plain text. Here is an example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&amp;lt;!-- Macro: MYJIRA-&lt;span class="se"&gt;\d&lt;/span&gt;+
     Template: ac:jira:ticket
     Ticket: ${0} --&amp;gt;

&lt;span class="gh"&gt;# My page that contains a Jira ticket reference&lt;/span&gt;

See task MYJIRA-123.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;MYJIRA&lt;/code&gt; is the Jira project code. The macro definition matches everything that starts with &lt;code&gt;MYJIRA-&lt;/code&gt; and is followed by a set of digits. You can then simply mention the issues in your markdown file, and they will be converted into Jira macros automatically.&lt;/p&gt;

&lt;h2&gt;
  
  
  Hide code snippets in collapsible macros
&lt;/h2&gt;

&lt;p&gt;Especially when you write documentation, you may want to hide parts of the code snippets. One example I use is OS-dependent instructions. Assume you want to provide instructions on using your product on the command line; in that case, you could give separate instructions for Windows, Linux, and MacOS. To use this, you first need to define the macro. Then, use the macro to build the collapsible snippet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F02%2Fbecome-a-documentation-ninja-expand.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F02%2Fbecome-a-documentation-ninja-expand.png" title="Become a documentation ninja 3" alt="Example of an collapsable snippet" width="800" height="252"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&amp;lt;!-- Macro: :expand-start:(?s)(.&lt;span class="ge"&gt;*?):\n(.*&lt;/span&gt;?)&lt;span class="se"&gt;\n&lt;/span&gt;:expand-end:
     Template: ac:expand
     Title: ${1}
     Body: ${2} --&amp;gt;
&lt;span class="gh"&gt;# My page with command line instructions&lt;/span&gt;

Before you start, you must ensure you set the environment variable:

:expand-start:Unix:

Unix instructions go here

:expand-end:

:expand-start:Windows:

Windows instructions go here

:expand-end:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Page properties and status badges
&lt;/h2&gt;

&lt;p&gt;This is my favorite! It is such a powerful feature of Confluence. For those of you who are not familiar with this functionality: you define a table that contains a set of keys and values. For example, you can specify an owner and a date in this table. You also need to set a label on the page; the report page requires it, because the label is used to collect the list of pages that carry it. The cool thing is that the report can also display the values of the page properties.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F02%2Fbecome-a-documentation-ninja-projects.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fxebia.com%2Fwp-content%2Fuploads%2F2025%2F02%2Fbecome-a-documentation-ninja-projects.png" title="Become a documentation ninja 4" alt="Example of page properties report" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As you can see in the screenshot above, we have three pages. The parent page holds the report, and the child pages are Project 1 and Project 2. However, it also shows the date, owner, and status badge. This is a great way to give a nice overview of projects.&lt;/p&gt;

&lt;p&gt;To create the page properties on the child page, you need the following syntax.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="c"&gt;&amp;lt;!-- Parent: Projects --&amp;gt;&lt;/span&gt;
&lt;span class="c"&gt;&amp;lt;!-- Label: project --&amp;gt;&lt;/span&gt;
&amp;lt;!-- Macro: :in-progress:
     Template: ac:status
     Title: In Progress
     Color: Blue --&amp;gt;

&lt;span class="gh"&gt;# Project 1&lt;/span&gt;

&lt;span class="nt"&gt;&amp;lt;ac:structured-macro&lt;/span&gt; &lt;span class="na"&gt;ac:name=&lt;/span&gt;&lt;span class="s"&gt;"details"&lt;/span&gt; &lt;span class="na"&gt;ac:schema-version=&lt;/span&gt;&lt;span class="s"&gt;"1"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;ac:rich-text-body&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;table&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;tbody&amp;gt;&lt;/span&gt;
                &lt;span class="nt"&gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;&amp;lt;strong&amp;gt;&lt;/span&gt;Status&lt;span class="nt"&gt;&amp;lt;/strong&amp;gt;&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;&lt;/span&gt;:in-progress:&lt;span class="nt"&gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;/span&gt;
                &lt;span class="nt"&gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;&amp;lt;strong&amp;gt;&lt;/span&gt;Owner&lt;span class="nt"&gt;&amp;lt;/strong&amp;gt;&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;&lt;/span&gt;Joris Conijn&lt;span class="nt"&gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;/span&gt;
                &lt;span class="nt"&gt;&amp;lt;tr&amp;gt;&amp;lt;th&amp;gt;&amp;lt;strong&amp;gt;&lt;/span&gt;Date&lt;span class="nt"&gt;&amp;lt;/strong&amp;gt;&amp;lt;/th&amp;gt;&amp;lt;td&amp;gt;&lt;/span&gt;2025-02-15&lt;span class="nt"&gt;&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;/span&gt;
            &lt;span class="nt"&gt;&amp;lt;/tbody&amp;gt;&lt;/span&gt;
        &lt;span class="nt"&gt;&amp;lt;/table&amp;gt;&lt;/span&gt;
    &lt;span class="nt"&gt;&amp;lt;/ac:rich-text-body&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;&amp;lt;/ac:structured-macro&amp;gt;&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Project 2 markdown file is similar to the one above. However, the parent page has a different definition.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Projects&lt;/span&gt;

&amp;lt;!-- Include: ac:detailssummary
     SortBy: Title
     CQL: 'space = "Projects" and label = "project"' --&amp;gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, my Confluence space is called Projects. The Confluence Query Language (CQL) query retrieves all pages in the Projects space that have the label project. A third requirement is that the page has page properties defined. We did this on Projects 1 and 2, so the two projects are rendered in the table.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Writing documentation doesn’t have to be a chore. By leveraging Confluence macros within Markdown, you can enhance your documentation with dynamic links, Jira issue references, collapsible content, and structured page properties. These features make your documentation more interactive, easier to maintain, and more useful for your team.&lt;/p&gt;

&lt;p&gt;By embedding these enhancements directly in your markdown files, you streamline the process of writing and updating documentation, reducing friction and improving adoption. So, next time you document a feature or process, try incorporating these techniques. You might find writing documentation efficient and even enjoyable!&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/a-man-practicing-japanese-martial-arts-7792259/" rel="noopener noreferrer"&gt;cottonbro studio&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/become-a-documentation-ninja/" rel="noopener noreferrer"&gt;Become a documentation ninja&lt;/a&gt; appeared first on &lt;a href="https://xebia.com" rel="noopener noreferrer"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
    <item>
      <title>ECS Fargate Persistent Storage: EFS Access Points vs. Lambda Workarounds</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Tue, 04 Feb 2025 21:33:12 +0000</pubDate>
      <link>https://dev.to/aws-builders/ecs-fargate-persistent-storage-efs-access-points-vs-lambda-workarounds-2g1e</link>
      <guid>https://dev.to/aws-builders/ecs-fargate-persistent-storage-efs-access-points-vs-lambda-workarounds-2g1e</guid>
      <description>&lt;p&gt;When running a Docker container on ECS Fargate, persistent storage is often a necessity. I initially attempted to solve this by manually creating the required directory on EFS using a Lambda-backed custom resource. While this worked, it introduced unnecessary complexity. Through experimentation, I discovered a more elegant solution—using EFS access points. In this post, I'll walk through my journey, the challenges I faced, and how I ultimately simplified the setup with fewer resources and less maintenance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up EFS
&lt;/h2&gt;

&lt;p&gt;When you start a container on ECS Fargate, you must define a TaskDefinition. This definition is used to start the container. Within this definition, you can define Volumes. These volumes can then be mounted in the container by specifying a mount point per container.&lt;/p&gt;
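&lt;p&gt;The relevant parts of such a task definition look roughly like this (ids and paths are illustrative):&lt;/p&gt;

```python
# Minimal sketch of the relevant parts of an ECS task definition:
# a volume backed by EFS and a mount point that maps it into the container.
# Ids and paths are illustrative.

task_definition = {
    "family": "my-task",
    "volumes": [{
        "name": "data",
        "efsVolumeConfiguration": {
            "fileSystemId": "fs-12345678",
            "rootDirectory": "/my-task/data",  # this path must already exist on EFS
            "transitEncryption": "ENABLED",
        },
    }],
    "containerDefinitions": [{
        "name": "app",
        "mountPoints": [{"sourceVolume": "data", "containerPath": "/data"}],
    }],
}
```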

&lt;p&gt;So far, so good! But I quickly ran into an issue: the task could not be started because the path on EFS did not yet exist. I tried to mount /my-task/data on /data in the container, but the /my-task/data path did not exist on the EFS drive. The easy workaround would be to attach the file system to an EC2 instance and create the folder from there. That appears to solve the problem, but it does not address the underlying one. As you may have read in my &lt;a href="https://xebia.com/blog/streamlining-workflows-with-feature-branches-and-logical-stacks/" rel="noopener noreferrer"&gt;previous blog&lt;/a&gt;, I like to be able to spin up multiple versions of a stack, and with this manual step in between, you can’t.&lt;/p&gt;

&lt;h2&gt;
  
  
  How about a custom resource?
&lt;/h2&gt;

&lt;p&gt;We know we need to create a folder on the EFS drive. A Lambda function could do this, so I started implementing a custom resource. The idea is very simple: create a Lambda function that mounts the root of the EFS, and let the Lambda code make the folder. We mark this custom resource as a dependency of the ECS Fargate service to ensure that the folder exists before the container starts, and we are done.&lt;/p&gt;

&lt;p&gt;I fully implemented this flow, and it worked perfectly. When you deploy your stack, the custom resource does its thing, and the container can be launched successfully.&lt;/p&gt;

&lt;h2&gt;
  
  
  Is there a better way?
&lt;/h2&gt;

&lt;p&gt;When implementing the custom resource, I needed to create an access point, because a Lambda function can only mount an EFS drive through one. An access point lets you define a path on EFS, plus the user and group ID used to access that path. If you supply /my-lambda/ as the path, the function can only see the content of the my-lambda folder, which is great for protecting other paths from being accessed. From the Lambda function’s perspective, you don’t see the my-lambda folder itself; you only see its content.&lt;/p&gt;

&lt;p&gt;Why is this so important, you might ask? Well, this path is created on the EFS drive for you. Looking at the ECS documentation, I found that you can also use an access point for container volumes. By using an access point to define /my-task/data and setting the uid and gid to the IDs of the user running inside the container, I could run the container without the need for a custom resource, so I deleted the function from my template.&lt;/p&gt;
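&lt;p&gt;In CloudFormation, such an access point could look like this (logical IDs, the path, and the IDs are illustrative); the &lt;code&gt;CreationInfo&lt;/code&gt; block is what makes EFS create the directory for you:&lt;/p&gt;

```json
{
  "MyAccessPoint": {
    "Type": "AWS::EFS::AccessPoint",
    "Properties": {
      "FileSystemId": { "Ref": "MyFileSystem" },
      "PosixUser": { "Uid": "1000", "Gid": "1000" },
      "RootDirectory": {
        "Path": "/my-task/data",
        "CreationInfo": {
          "OwnerUid": "1000",
          "OwnerGid": "1000",
          "Permissions": "0755"
        }
      }
    }
  }
}
```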

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Experimentation often leads to better solutions. My initial approach—using a Lambda function to create the necessary EFS directory—worked but added unnecessary complexity. By leveraging EFS access points, I achieved the same result with a cleaner, more maintainable setup. This experience reinforced an important lesson: the simplest solution is often the best. When working with cloud infrastructure, it's always worth exploring built-in features before resorting to custom implementations.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/a-close-up-shot-of-letter-dice-6120219/" rel="noopener noreferrer"&gt;Nataliya Vaitkevich&lt;/a&gt;&lt;/p&gt;

</description>
      <category>fargate</category>
      <category>lambda</category>
      <category>efs</category>
    </item>
    <item>
      <title>Streamlining Workflows with Feature Branches and Logical Stacks</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Sun, 26 Jan 2025 14:10:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/streamlining-workflows-with-feature-branches-and-logical-stacks-3g05</link>
      <guid>https://dev.to/aws-builders/streamlining-workflows-with-feature-branches-and-logical-stacks-3g05</guid>
      <description>&lt;p&gt;Efficient collaboration and streamlined deployment processes are crucial in modern development workflows, especially for teams working on complex projects. Feature branches and stack-based development approaches offer powerful ways to isolate changes, test effectively, and ensure seamless integration. However, proper strategies can make managing resources, dependencies, and environments challenging. This blog explores how to optimize feature branch workflows, maintain encapsulated logical stacks, and apply best practices like resource naming to improve clarity, scalability, and cost-effectiveness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with feature branches
&lt;/h2&gt;

&lt;p&gt;I have a strong preference for encapsulating things into logical stacks. These stacks should have a minimal number of dependencies. And you should be able to deploy multiple instances. Let me first explain why you want to deploy multiple instances of the same stack. The answer is quite simple! Feature branches! If you work in a team with multiple people, you might work on different stories. Therefore, you don’t want to update the same environment with your changes. My change might break your change and vice versa. Detecting why something failed becomes more challenging in this case. By deploying your own environment, your changes will only impact your environment. When you are done, you can thoroughly test your changes before merging them into the main branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  Working with multiple stacks
&lt;/h2&gt;

&lt;p&gt;You can use multiple stacks, each with its own area of responsibility, to keep things small and simple. However, this also introduces the danger of breaking encapsulation by creating dependencies. Take an RDS database: you need to create it in a base stack and pass it to other stacks, because the components in those stacks need the endpoint to connect to it, creating a dependency.&lt;/p&gt;

&lt;p&gt;This example applies to the more traditional lift-and-shift approaches. However, we always advise modernizing to reap all the benefits of the cloud. Why? Simple: in the example, we needed an RDS instance. If you create an RDS instance per stack, you will have multiple instances and pay per instance, driving up the cost. By switching to serverless, you pay for usage, which gives you the option to move the database into your stacks, removing the dependency. I explicitly say option because, with relational databases, it’s not always possible to split one database into two.&lt;/p&gt;

&lt;h2&gt;
  
  
  Use proper naming for your resources
&lt;/h2&gt;

&lt;p&gt;CloudFormation can handle naming resources for you. The naming schema follows this pattern: StackName + LogicalResourceId + RandomizedString. This ensures that you don’t end up with duplicate resource names. Some resources allow duplicate names, while others do not, so a guaranteed unique name is perfect when working with feature branches. From a management perspective, however, this is bad: most names are truncated after a number of characters, and the generated long names are not very descriptive.&lt;/p&gt;

&lt;p&gt;By introducing a prefix, you can have a clear name and still deploy your stacks multiple times. You simply pass the prefix as a parameter and prefix each resource name. So, for an IAM Role, that could be &lt;code&gt;Joris-CheckoutProcess&lt;/code&gt;. The CheckoutProcess name describes what it is: a role used by, for example, a Lambda function that processes the checkout. The prefix &lt;code&gt;Joris&lt;/code&gt; makes it unique and provides information about who owns and created the resource. Using this pattern, you avoid creating a &lt;a href="https://xebia.com/blog/stop-organizing-scavenger-hunts-in-your-cloud-infrastructure/" rel="noopener noreferrer"&gt;scavenger hunt&lt;/a&gt;.&lt;/p&gt;
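&lt;p&gt;The pattern itself is trivial; a sketch in Python (the prefix and logical name are examples):&lt;/p&gt;

```python
def resource_name(prefix: str, logical_name: str) -> str:
    """Build a resource name that is unique per deployment yet descriptive.

    prefix identifies the deployment (a feature branch or its owner);
    logical_name describes what the resource does.
    """
    return f"{prefix}-{logical_name}"


# An IAM role for the checkout Lambda in Joris' feature environment:
role_name = resource_name("Joris", "CheckoutProcess")
```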

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Adopting feature branches and logical stacks is an excellent way to foster efficient, error-free collaboration in team environments. You can enhance your development workflow by deploying isolated environments for individual changes, embracing serverless solutions to reduce dependencies, and implementing descriptive naming conventions. These practices streamline debugging and testing and ensure resource management remains clear and cost-efficient. Whether you’re modernizing applications or maintaining legacy systems, these strategies will help you harness the full potential of cloud-based development.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/green-leafed-tree-low-angle-photography-1313807/" rel="noopener noreferrer"&gt;Min An&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>clouddevelopment</category>
      <category>featurebranches</category>
    </item>
    <item>
      <title>Stop organizing scavenger hunts in your cloud infrastructure</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Sun, 19 Jan 2025 09:59:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/stop-organizing-scavenger-hunts-in-your-cloud-infrastructure-1m9c</link>
      <guid>https://dev.to/aws-builders/stop-organizing-scavenger-hunts-in-your-cloud-infrastructure-1m9c</guid>
      <description>&lt;p&gt;A CloudWatch alarm is triggered. Now what? I am not the first person to tell you that observability is essential to your cloud infrastructure. You are not done when you have set up CloudWatch alarms!&lt;/p&gt;

&lt;h2&gt;
  
  
  Who will act on those alarms?
&lt;/h2&gt;

&lt;p&gt;Observing metrics and raising an alarm when a certain threshold is breached is just the start! You must also consider who will act when the alarm fires. In some cases, you might be able to automate your response; think of a high CPU load on your application servers. The CloudWatch alarm is triggered, and you can scale up your cluster.&lt;/p&gt;

&lt;p&gt;These auto-remediation actions work very well for known failure modes: you can detect them and trigger remediation based on the corresponding CloudWatch alarms.&lt;/p&gt;

&lt;p&gt;But in some cases, you just need a human to jump online and figure out what happened. When this happens, you don’t want to start a scavenger hunt in your AWS account to find the problem. You want a clear starting point and, from there, guidance in the right direction.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWatch Dashboards
&lt;/h2&gt;

&lt;p&gt;CloudWatch dashboards can guide you in the right direction during production issues. For example, assume we have a SQS Queue that contains messages. A lambda function is processing these messages. If the process fails, it will retry, and after three attempts, it will deliver the message to the dead-letter queue.&lt;/p&gt;
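&lt;p&gt;The queue wiring from this example can be expressed in CloudFormation like this (logical IDs are illustrative); &lt;code&gt;maxReceiveCount&lt;/code&gt; is what gives you the three attempts:&lt;/p&gt;

```json
{
  "MyQueue": {
    "Type": "AWS::SQS::Queue",
    "Properties": {
      "RedrivePolicy": {
        "deadLetterTargetArn": { "Fn::GetAtt": ["MyDeadLetterQueue", "Arn"] },
        "maxReceiveCount": 3
      }
    }
  },
  "MyDeadLetterQueue": {
    "Type": "AWS::SQS::Queue"
  }
}
```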

&lt;p&gt;When one or more messages are in the dead-letter queue, a CloudWatch alarm is triggered, and we know something went wrong in our application.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu37fmc27rj3ah154ow69.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu37fmc27rj3ah154ow69.png" alt="Example infrastructure of a SQS Queue with a dead-letter queue" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This alarm can trigger an SNS topic, and from there, you can reach an engineer who can investigate this issue. A dashboard that shows these components makes it easy to navigate to the issue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk469er5bxwm44jisv7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmk469er5bxwm44jisv7s.png" alt="CloudWatch Dashboard example where the alarm is triggered" width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, you can see the dead-letter queues from our application. MyLambda is in an alarm state, and on the right, you see a link that brings you directly to that function's LogGroup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dnq8kflaeq9stwloujd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8dnq8kflaeq9stwloujd.png" alt="Screenshot of the log group of the lambda function" width="800" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It turns out that someone pushed code with an exception. We fixed the code and started re-driving the messages in the dead-letter queue back to the original queue. The lambda function is now processing the messages as it should, and the alarm will go back to an OK state.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5coek3vvroe6xrnjatj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5coek3vvroe6xrnjatj.png" alt="CloudWatch Dashboard example where the alarm is in a OK state" width="800" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When you build cloud infrastructure, you also need to think about what can go wrong. Design for failure, build auto-remediation for known failure modes, and loop in a person when unknown issues appear. But be kind to your fellow engineers and don’t send them on a scavenger hunt: guide them to the problem and give context about the failures in a CloudWatch dashboard.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/falcon-bird-flying-on-sky-6585227/" rel="noopener noreferrer"&gt;Gill Heward&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>cloudwatch</category>
      <category>observability</category>
      <category>scavengerhunt</category>
      <category>aws</category>
    </item>
    <item>
      <title>Boost Your Productivity with awscurl: Simplifying IAM-Secured API Testing in AWS</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Sat, 21 Dec 2024 13:43:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/boost-your-productivity-with-awscurl-simplifying-iam-secured-api-testing-in-aws-5e62</link>
      <guid>https://dev.to/aws-builders/boost-your-productivity-with-awscurl-simplifying-iam-secured-api-testing-in-aws-5e62</guid>
      <description>&lt;p&gt;With modern clouds, you can build awesome things. You can bring your ideas to life within the hour, enabling experimentation and driving innovation. One of the significant advantages of the cloud is that you get a lot of security controls out of the box. But these security controls can also block you from being productive!&lt;/p&gt;

&lt;h2&gt;
  
  
  How are these security controls blocking me?
&lt;/h2&gt;

&lt;p&gt;When you use AWS, you can interact with it through the console, SDK, or CLI. These all have one thing in common: they use the same set of APIs to perform the actions requested by the user. These APIs are protected, and authentication and authorization are handled by the &lt;a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html" rel="noopener noreferrer"&gt;IAM&lt;/a&gt; service. When you develop a workload or work on a &lt;a href="https://en.wikipedia.org/wiki/Proof_of_concept" rel="noopener noreferrer"&gt;PoC&lt;/a&gt;, you will also use the IAM service.&lt;/p&gt;

&lt;p&gt;Assume you want to build an internal API and use the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html" rel="noopener noreferrer"&gt;API Gateway&lt;/a&gt; service. Enabling IAM authentication on the methods you define is easy. You can then give the appropriate roles access to your API. The consumer of this API only needs to add the &lt;a href="https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html" rel="noopener noreferrer"&gt;AWSSigv4&lt;/a&gt; header, and as long as the role policy allows the invocation of the API, it will work.&lt;/p&gt;

&lt;p&gt;Various programming languages have libraries that can add the signature. But what if you want to test the API from your local machine or from the cloud shell in the console? In those cases, you only have a shell, and generating the signature becomes much harder. This will impact your productivity.&lt;/p&gt;
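&lt;p&gt;To see why, here is a sketch of just the signing-key derivation, the HMAC chain described in the AWS SigV4 documentation, using only the Python standard library. And this is only one step: a full signature also requires building a canonical request and a string to sign.&lt;/p&gt;

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One link in the SigV4 HMAC chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key from the secret key and request scope."""
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)  # date as YYYYMMDD
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")
```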

&lt;h2&gt;
  
  
  How can we unblock ourselves?
&lt;/h2&gt;

&lt;p&gt;As explained in the previous section, the challenge is to generate the signature based on the current payload. In the past, I used a simple Python script to perform these API calls, but that always took time and energy to build, and I felt demotivated whenever I ran into such situations.&lt;/p&gt;

&lt;p&gt;The problem is not the script that you need to write. The problem is that you need to do it repeatedly because each situation is different. This goes against one of my principles, which is to automate everything! There should be a solution to this problem, and the good news is there is one!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/okigan" rel="noopener noreferrer"&gt;Igor Okulist&lt;/a&gt; created a small tool called &lt;a href="https://github.com/okigan/awscurl" rel="noopener noreferrer"&gt;awscurl&lt;/a&gt;. This tool allows you to perform a curl command that automatically signs your API call. For example, when we use the scenario described in the previous section, you can simply do the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;AWS_PROFILE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;my-profile
awscurl &lt;span class="nt"&gt;--region&lt;/span&gt; eu-west-1 &lt;span class="nt"&gt;--service&lt;/span&gt; execute-api &lt;span class="nt"&gt;-X&lt;/span&gt; GET https://my-url-to-my-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The example assumes that you have a valid profile configured called my-profile. You then run the same command as you would with curl, only adding a region and a service option. For comparison, the normal curl command would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; GET https://my-url-to-my-api
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But this command will result in the following error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"message"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Missing Authentication Token"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The awscurl command will inject the Authorization header that looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Authorization: AWS4-HMAC-SHA256 Credential=XXX, SignedHeaders=XXX, Signature=XXX
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you try this out yourself, you can add the &lt;code&gt;-v&lt;/code&gt; option, which will print out the headers sent and received. You will see that the API now accepts the signature and responds with the actual results.&lt;/p&gt;

&lt;p&gt;You can use it to perform any API call that supports SigV4, but for the majority of services, the AWS CLI is the best tool for the job. For your own APIs or calls to your &lt;a href="https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html" rel="noopener noreferrer"&gt;OpenSearch&lt;/a&gt; cluster, however, this tool is definitely a timesaver.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;When you develop an internal API or have an OpenSearch cluster that uses IAM for authentication and authorization, testing calls towards these endpoints can be challenging as you will need to create a signature. The awscurl cli tool helps you create this signature, allowing you to focus on your business value instead of building signatures for tests.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/launching-of-white-space-shuttle-34521/" rel="noopener noreferrer"&gt;Pixabay&lt;/a&gt;&lt;/p&gt;

</description>
      <category>sigv4</category>
      <category>devops</category>
      <category>automation</category>
      <category>aws</category>
    </item>
    <item>
      <title>Relative Python imports in a Dockerized lambda function</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Sat, 14 Dec 2024 14:08:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/relative-python-imports-in-a-dockerized-lambda-function-24o6</link>
      <guid>https://dev.to/aws-builders/relative-python-imports-in-a-dockerized-lambda-function-24o6</guid>
      <description>&lt;p&gt;Relative Python imports can be tricky for lambda functions. I wrote &lt;a href="https://dev.to/blog/2021/10/python-and-relative-imports-in-aws-lambda-functions/"&gt;a blog&lt;/a&gt; on this 3 years ago. But recently, I ran into the same issue with Dockerized lambda functions. So, I figured it was time for a new blog!&lt;/p&gt;

&lt;p&gt;You can follow along with the steps or look at the result directly &lt;a href="https://github.com/conijnio/aws-cdk-docker-function" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project setup
&lt;/h2&gt;

&lt;p&gt;Make sure you have installed the AWS CDK CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;aws-cdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Initialize the project:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;cdk init app &lt;span class="nt"&gt;--language&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;typescript
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Lambda setup
&lt;/h2&gt;

&lt;p&gt;First we will need to create the file and folder structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;mkdir&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; lib/functions/hello-world/hello_world
&lt;span class="nb"&gt;touch &lt;/span&gt;lib/functions/hello-world/hello_world/__init__.py
&lt;span class="nb"&gt;touch &lt;/span&gt;lib/functions/hello-world/requirements.txt
&lt;span class="nb"&gt;touch &lt;/span&gt;lib/functions/hello-world/Dockerfile
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now you will need to fill the Dockerfile, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; public.ecr.aws/lambda/python:3.12&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; hello_world ${LAMBDA_TASK_ROOT}/hello_world&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["hello_world.handler"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We use the AWS Lambda base image for Python 3.12. Next, we copy in the &lt;code&gt;requirements.txt&lt;/code&gt; file and the source code, install all dependencies listed in &lt;code&gt;requirements.txt&lt;/code&gt;, and set the &lt;code&gt;handler&lt;/code&gt; method as the &lt;code&gt;CMD&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Next, we will need to fill our Python files with some code. In the &lt;code&gt;__init__.py&lt;/code&gt; file, you can place the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;
&lt;span class="c1"&gt;# Example of the relative import
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;.business_logic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;business_logic&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="c1"&gt;# Use the method that is imported based on a relative path
&lt;/span&gt;    &lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;business_logic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hello &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;!&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="n"&gt;__all__&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;handler&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterward, we will need to fill the &lt;code&gt;business_logic.py&lt;/code&gt; file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;business_logic&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Dict&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Any&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;World&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;NOTE: The code used here could use multiple relative imports. This is possible because it is in a separate package. This example only shows one example in the &lt;code&gt;__init__.py&lt;/code&gt; file. However, you can use multiple files here to improve the maintainability of your project.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;For this example, I don't need any dependencies, so we can keep the &lt;code&gt;requirements.txt&lt;/code&gt; file empty. I included it in this example to illustrate how you can include dependencies as well.&lt;/p&gt;

&lt;h2&gt;
  
  
  Create the Lambda function using IaC
&lt;/h2&gt;

&lt;p&gt;Our folders and files are in place, so it is time to add the Lambda function to the CDK construct. You can simply add it like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;    &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Function&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;this&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Function&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;functionName&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;hello-world&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;code&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Code&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;fromAssetImage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;lib/functions/hello-world&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="na"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ecr_assets&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;LINUX_ARM64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="p"&gt;}),&lt;/span&gt;
      &lt;span class="na"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Runtime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FROM_IMAGE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Handler&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;FROM_IMAGE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;architecture&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Architecture&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ARM_64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="na"&gt;timeout&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;cdk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;seconds&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="na"&gt;memorySize&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For this to work, you also need the following imports:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;lambda&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-cdk-lib/aws-lambda&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nx"&gt;ecr_assets&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;aws-cdk-lib/aws-ecr-assets&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that we make sure the code directory points to the directory containing the &lt;code&gt;Dockerfile&lt;/code&gt;, and that we select the ARM platform for both the image asset and the function itself.&lt;/p&gt;
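For reference, a minimal &lt;code&gt;Dockerfile&lt;/code&gt; for such a function could look like the sketch below. The Python base image tag and the &lt;code&gt;app.handler&lt;/code&gt; module name are assumptions; use whatever matches your project:

```dockerfile
# Sketch: minimal Python Lambda container image (works for ARM64 builds).
# The base image tag and the handler name (app.handler) are assumptions.
FROM public.ecr.aws/lambda/python:3.12

# Copy the function package into the image.
COPY . ${LAMBDA_TASK_ROOT}

# Point the Lambda runtime at the handler function.
CMD ["app.handler"]
```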

&lt;h2&gt;
  
  
  Testing the Lambda function locally
&lt;/h2&gt;

&lt;p&gt;Fast feedback is important, so there might be cases where you need to run the container locally. For this, you first need to build the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker build &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-t&lt;/span&gt; hello-world:latest &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-f&lt;/span&gt; ./lib/functions/hello-world/Dockerfile &lt;span class="se"&gt;\&lt;/span&gt;
  ./lib/functions/hello-world
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that this command can be executed from the project's root. Next, we need to make sure it's running before we can invoke it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;--platform&lt;/span&gt; linux/arm64 &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:8080 hello-world:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Afterwards, you can invoke the function as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl http://localhost:9000/2015-03-31/functions/function/invocations &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"name": "Joris"}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Relative imports can be tricky! Place your code in a package so that relative imports resolve within it. This enables cleaner code: you can split responsibilities across multiple files, making them easier to manage and maintain.&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/man-jumping-on-intermodal-container-379964/" rel="noopener noreferrer"&gt;Kaique Rocha&lt;/a&gt;&lt;/p&gt;

</description>
      <category>serverless</category>
      <category>python</category>
      <category>cdk</category>
    </item>
    <item>
      <title>Avoid Costly Loops in AWS Step Functions</title>
      <dc:creator>Joris Conijn</dc:creator>
      <pubDate>Sat, 12 Oct 2024 12:16:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/avoid-costly-loops-in-aws-step-functions-3eon</link>
      <guid>https://dev.to/aws-builders/avoid-costly-loops-in-aws-step-functions-3eon</guid>
<description>&lt;p&gt;We have all played around with AWS Step Functions at some point in our careers. It is a fantastic orchestration tool! But there are scenarios where your cloud bill can explode in your face. In this blog, I will walk you through how you can end up in that situation and, more importantly, how to avoid it!&lt;/p&gt;

&lt;h2&gt;
  
  
  Looping
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1gb4uprle2tjshsuawf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz1gb4uprle2tjshsuawf.png" alt="StepFunction loop example" width="342" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When you look at this pattern, you might think: okay, we wait for 5 seconds and use a Lambda function to query something. If the response is true, we are done; if not, we wait again and check the status once more. The problem with this pattern is that the response might never be true, causing an endless loop that keeps generating cost.&lt;/p&gt;
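In Amazon States Language, the loop sketched in the image looks roughly like this (state names and the result field are illustrative, and the Lambda invocation parameters are omitted):

```json
{
  "StartAt": "Wait",
  "States": {
    "Wait": { "Type": "Wait", "Seconds": 5, "Next": "CheckStatus" },
    "CheckStatus": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Next": "IsDone"
    },
    "IsDone": {
      "Type": "Choice",
      "Choices": [
        { "Variable": "$.Payload.done", "BooleanEquals": true, "Next": "Done" }
      ],
      "Default": "Wait"
    },
    "Done": { "Type": "Succeed" }
  }
}
```

Note that the `Default` branch of the Choice state jumps straight back to the Wait state: nothing in the state machine itself bounds how often this happens.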

&lt;h2&gt;
  
  
  There are no endless loops in StepFunctions
&lt;/h2&gt;

&lt;p&gt;If you find yourself in such an “endless” loop, AWS will stop the execution once it exceeds 25,000 events in its history, so the loop is eventually halted for you. But each loop iteration costs 3 state transitions, and you pay for every transition. Individually they are pretty cheap, but the problem lies within the loop itself, especially if this pattern is part of a distributed map that allows up to 10,000 parallel executions.&lt;/p&gt;

&lt;p&gt;Assume your StepFunction is invoked by an S3 event trigger and we upload 100 files to our S3 bucket. This triggers 100 executions, each looping every 5 seconds until it exceeds the 25,000-event limit, in the order of 3,000,000 state transitions in total.&lt;/p&gt;

&lt;p&gt;You can already see that this will ramp up your AWS bill.&lt;/p&gt;
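To put a rough number on it, here is a back-of-the-envelope calculation. The per-transition price is an assumption; check the current Step Functions pricing page for your region:

```typescript
// Rough cost of the runaway polling loop described above.
// Pricing is an assumption (standard workflows, USD); verify it yourself.
const totalTransitions = 3_000_000; // estimate from the text
const pricePerThousandTransitions = 0.025;

const cost = (totalTransitions / 1000) * pricePerThousandTransitions;
console.log(`~$${cost} for a single batch of 100 uploads`);
```

And that is for one batch of uploads; the loop repeats this cost every time files arrive.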

&lt;h2&gt;
  
  
  The callback pattern
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkinszopsde84fc42p43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkinszopsde84fc42p43.png" alt="StepFunction Callback pattern" width="264" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can solve this by implementing a callback pattern. It’s quite simple: you place a message on an SQS queue, and any consumer can then perform the task at hand. When it’s done, the consumer notifies the Step Function execution and passes the result back.&lt;/p&gt;
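With the Step Functions SQS service integration, this becomes a single task state that pauses until the consumer reports back via the task token. A sketch (queue URL, timeouts, and payload shape are placeholders):

```json
{
  "Type": "Task",
  "Resource": "arn:aws:states:::sqs:sendMessage.waitForTaskToken",
  "Parameters": {
    "QueueUrl": "https://sqs.eu-west-1.amazonaws.com/111111111111/task-queue",
    "MessageBody": {
      "TaskToken.$": "$$.Task.Token",
      "Input.$": "$"
    }
  },
  "HeartbeatSeconds": 300,
  "TimeoutSeconds": 3600,
  "Next": "AfterCallback"
}
```

The execution makes no further transitions while it waits, which is exactly what makes this cheaper than polling.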

&lt;p&gt;This is a known pattern, but we design our systems for failure! The cool thing about an SQS queue is its retry mechanism.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaan7uwtqor6ddz3bd6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiaan7uwtqor6ddz3bd6k.png" alt="Callback pattern with a dead letter queue" width="615" height="520"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now, we can design it as follows:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The StepFunction will place the task in the SQS Queue.&lt;/li&gt;
&lt;li&gt;The Processor will receive a batch of messages.&lt;/li&gt;
&lt;li&gt;The function will process each message, and for each, it will:

&lt;ol&gt;
&lt;li&gt;Send a heartbeat to the StepFunction execution. This prevents the StepFunction from timing out.&lt;/li&gt;
&lt;li&gt;If the message is invalid, we can fail the StepFunction execution directly.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;Depending on the outcome, the function will:

&lt;ol&gt;
&lt;li&gt;On failure, leave the message in the queue. This triggers the retry mechanism based on the visibility settings of the queue.&lt;/li&gt;
&lt;li&gt;On success, send a success signal to the StepFunction execution and remove the message from the queue.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;li&gt;When the message has reached the maximum number of retries, it will be delivered to the dead letter queue.&lt;/li&gt;

&lt;li&gt;The Dead Letter Queue Processor will receive the message.&lt;/li&gt;

&lt;li&gt;It will fail the StepFunction execution with some additional context.&lt;/li&gt;

&lt;li&gt;Remove the message from the dead letter queue.&lt;/li&gt;

&lt;/ol&gt;
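The consumer's per-message decision flow from the steps above can be sketched in a few lines. This only models which Step Functions call to make; the real consumer would invoke the API (SendTaskHeartbeat, SendTaskSuccess, SendTaskFailure) through an AWS SDK, and the names here are illustrative:

```typescript
// Sketch of the consumer's decision logic from the steps above.
// In a real consumer each branch would call the Step Functions API
// (SendTaskHeartbeat / SendTaskSuccess / SendTaskFailure) via an AWS SDK.
type Action =
  | "send-task-failure"            // invalid message: fail the execution
  | "send-task-success-and-delete" // processed: report success, delete message
  | "leave-in-queue-for-retry";    // failed: visibility timeout triggers retry

function decide(messageIsValid: boolean, processingSucceeded: boolean): Action {
  if (!messageIsValid) return "send-task-failure";
  return processingSucceeded
    ? "send-task-success-and-delete"
    : "leave-in-queue-for-retry";
}
```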

&lt;p&gt;Why is this a good design? It accounts for failure! We left the actual processing job out of scope here, but the processing Lambda can fail for many reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It can reach AWS Service limits.&lt;/li&gt;
&lt;li&gt;It might depend on another external system that is unavailable.&lt;/li&gt;
&lt;li&gt;You’re scaling so fast that the serverless components can’t handle the load yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With the SQS queue and its retry mechanism, you have a better chance of succeeding. If the retries are exhausted, the dead letter queue processor marks the failure in the Step Function execution itself, keeping all the execution information in a single place for you to analyze.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Avoid creating loops in your StepFunctions. Use the callback pattern instead! Design your flows to be robust and expect failures to happen!&lt;/p&gt;

&lt;p&gt;Photo by &lt;a href="https://www.pexels.com/photo/intricate-abstract-3d-render-sculpture-28844046/" rel="noopener noreferrer"&gt;Steve Johnson&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The post &lt;a href="https://xebia.com/blog/avoid-costly-loops-in-aws-step-functions/" rel="noopener noreferrer"&gt;Avoid Costly Loops in AWS Step Functions&lt;/a&gt; appeared first on &lt;a href="https://xebia.com" rel="noopener noreferrer"&gt;Xebia&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>technology</category>
    </item>
  </channel>
</rss>
