<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: desawsume</title>
    <description>The latest articles on DEV Community by desawsume (@desawsume).</description>
    <link>https://dev.to/desawsume</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F825230%2Fe055e4c3-f7b5-4b5d-baa5-bcfa881561e1.png</url>
      <title>DEV Community: desawsume</title>
      <link>https://dev.to/desawsume</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/desawsume"/>
    <language>en</language>
    <item>
      <title>Create AWS Nested Stack</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Thu, 24 Nov 2022 06:38:03 +0000</pubDate>
      <link>https://dev.to/desawsume/create-aws-nested-stack-5bfi</link>
      <guid>https://dev.to/desawsume/create-aws-nested-stack-5bfi</guid>
      <description></description>
      <category>aws</category>
      <category>cdk</category>
      <category>python</category>
      <category>programming</category>
    </item>
    <item>
      <title>Alternative way of working Lambda layer in CDK</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Wed, 03 Aug 2022 02:09:00 +0000</pubDate>
      <link>https://dev.to/desawsume/alternative-way-of-working-lambda-layer-in-cdk-2jl9</link>
      <guid>https://dev.to/desawsume/alternative-way-of-working-lambda-layer-in-cdk-2jl9</guid>
      <description>&lt;p&gt;What if your Lambda functions doesn't have the module or packages in AWS, and you wish to run this Lambda in the Cloud&lt;/p&gt;

&lt;p&gt;Simple answer is using Lambda Layer. &lt;/p&gt;

&lt;p&gt;CDK has the native way of handling it like below&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;layer = aws_lambda.LayerVersion(self, 'lambda-layer',
                code = code,
                layer_version_name='layer-bundle',
                compatible_runtimes=[
                    aws_lambda.Runtime.PYTHON_3_7,
                    aws_lambda.Runtime.PYTHON_3_8,
                    aws_lambda.Runtime.PYTHON_3_9
                ]
            )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The other way to do this is:&lt;/p&gt;

&lt;p&gt;Deploy Python Lambda functions with .zip file archives.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- cd lambda

- python3 -m venv .venv-zip

- source .venv-zip/bin/activate

- pip install -r requirements.txt

- deactivate

- cd .venv-zip/lib/python3.9/site-packages

- zip -r ../../../../lambda_handler.zip .

- cd ../../../../

- zip -g lambda_handler.zip *.py requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
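If you prefer to script the bundling, the manual zip steps above can be sketched with Python's standard-library zipfile module. This is an illustrative sketch, not part of the original post: the bundle() helper name, the directory layout, and the file filters are all assumptions based on the steps above.

```python
import os
import zipfile

def bundle(site_packages, handler_dir, out_zip):
    # Hypothetical helper mirroring the manual zip commands above.
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        # Dependencies go at the archive root, like running
        # `zip -r ../../../../lambda_handler.zip .` from inside site-packages.
        for root, _dirs, files in os.walk(site_packages):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, site_packages))
        # Handler sources and requirements.txt at the root, like
        # `zip -g lambda_handler.zip *.py requirements.txt`.
        for name in os.listdir(handler_dir):
            if name.endswith(".py") or name == "requirements.txt":
                zf.write(os.path.join(handler_dir, name), name)
```

Either way you end up with one zip whose root holds both the dependencies and the handler, which is the layout Lambda expects.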



&lt;p&gt;Then use the from_asset() method to point to your zip file location.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Get the python version and clean up the Mac residue python installed version</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Fri, 29 Jul 2022 08:51:00 +0000</pubDate>
      <link>https://dev.to/desawsume/get-the-python-version-and-clean-up-the-mac-residue-python-installed-version-5df2</link>
      <guid>https://dev.to/desawsume/get-the-python-version-and-clean-up-the-mac-residue-python-installed-version-5df2</guid>
      <description>&lt;p&gt;we might have situation when the python install in below path&lt;br&gt;
&lt;code&gt;Library/Frameworks/Python.framework&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;We will go through the steps to clean it up, remove the path, and install the Homebrew version.&lt;/p&gt;

&lt;p&gt;First, let's uninstall previous Python versions:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rm -rf /Library/Frameworks/Python.framework
rm -rf /usr/local/bin/python3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Second, remove the previous frameworks from the &lt;strong&gt;$PATH&lt;/strong&gt; variable:&lt;/p&gt;

&lt;p&gt;It can be in either&lt;br&gt;
&lt;code&gt;~/.bash_profile&lt;/code&gt; or&lt;br&gt;
&lt;code&gt;~/.zshrc&lt;/code&gt; &lt;/p&gt;
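To spot which lines to remove, a small sketch can filter PATH entries that still point at the framework install. The framework_entries() helper below is illustrative, not part of the original post:

```python
import os

def framework_entries(path):
    # Return PATH entries that still point at the old
    # /Library/Frameworks install (illustrative helper).
    return [p for p in path.split(os.pathsep) if "Python.framework" in p]

# Inspect your current shell PATH:
print(framework_entries(os.environ.get("PATH", "")))
```

Any entry it prints corresponds to an export line you can delete from your shell profile.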

&lt;p&gt;Finally, install the Homebrew version: &lt;code&gt;brew install python3&lt;/code&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Pyenv failed to install specific version on Mac M1</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Thu, 21 Jul 2022 23:43:47 +0000</pubDate>
      <link>https://dev.to/desawsume/pyenv-failed-to-install-specific-version-on-mac-m1-1b0d</link>
      <guid>https://dev.to/desawsume/pyenv-failed-to-install-specific-version-on-mac-m1-1b0d</guid>
      <description></description>
    </item>
    <item>
      <title>Cross-account S3 Origin Setup</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Tue, 12 Jul 2022 03:45:58 +0000</pubDate>
      <link>https://dev.to/desawsume/cross-account-s3-origin-setup-1cpi</link>
      <guid>https://dev.to/desawsume/cross-account-s3-origin-setup-1cpi</guid>
      <description>&lt;p&gt;It is possible to setup cross account S3 Bucket for CloudFront&lt;/p&gt;

&lt;p&gt;Let's take a look of the solution first. &lt;/p&gt;

&lt;p&gt;Master account - This is where you created your CDN.&lt;/p&gt;

&lt;p&gt;Sub-account - This is where you have the S3 bucket. &lt;/p&gt;

&lt;h2&gt;
  
  
  Sub-account
&lt;/h2&gt;

&lt;p&gt;You will need to set up an S3 bucket policy to allow OAI access from the Master account.&lt;/p&gt;

&lt;p&gt;The S3 bucket policy looks like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity &amp;lt;OAI ID&amp;gt;"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::&amp;lt;bucket-name-of-the-sub-account&amp;gt;/*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Master Account
&lt;/h2&gt;

&lt;p&gt;Create an S3 origin using the S3 endpoint. &lt;/p&gt;

&lt;p&gt;Format:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;bucket-name-from-the-sub-account&amp;gt;.s3.&amp;lt;aws-region&amp;gt;.amazonaws.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
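As a tiny illustration of the format above, the endpoint string can be assembled like this (s3_origin_domain is a hypothetical helper name, not an AWS API):

```python
def s3_origin_domain(bucket, region):
    # Build the regional S3 endpoint that CloudFront expects
    # as the origin domain name.
    return "{0}.s3.{1}.amazonaws.com".format(bucket, region)

print(s3_origin_domain("abc", "ap-southeast-2"))  # abc.s3.ap-southeast-2.amazonaws.com
```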



&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjay1lp39168r0omosqnj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjay1lp39168r0omosqnj.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here, abc is the bucket name from the sub-account.&lt;/p&gt;

&lt;p&gt;Select the Origin access identity from the Master account. &lt;/p&gt;

&lt;p&gt;Last but not least, create a path pattern that suits your S3 origin behaviour if you have multiple origins.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Identifying HTTP request</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Mon, 04 Jul 2022 22:29:07 +0000</pubDate>
      <link>https://dev.to/desawsume/identifying-http-request-1g9n</link>
      <guid>https://dev.to/desawsume/identifying-http-request-1g9n</guid>
      <description>&lt;p&gt;&lt;a href="http://www.domain.com:12345/page101?t=win&amp;amp;s=chess#para5"&gt;http://www.domain.com:12345/page101?t=win&amp;amp;s=chess#para5&lt;/a&gt; these would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Scheme: http&lt;/li&gt;
&lt;li&gt;Authority: &lt;a href="http://www.domain.com:12345"&gt;www.domain.com:12345&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;User Information: not present&lt;/li&gt;
&lt;li&gt;Host: &lt;a href="http://www.domain.com"&gt;www.domain.com&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Port: 12345&lt;/li&gt;
&lt;li&gt;Path: /page101&lt;/li&gt;
&lt;li&gt;Query String: t=win&amp;amp;s=chess&lt;/li&gt;
&lt;li&gt;Fragment: para5&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The query string is also called URL parameters or URL query parameters.&lt;/p&gt;

&lt;p&gt;It refers to the portion of the URL that comes after a question mark (?) and consists of keys and values separated by an equals sign (=); multiple parameters are then separated by an ampersand (&amp;amp;).&lt;/p&gt;

&lt;p&gt;? : query string begins&lt;br&gt;
= : value separator&lt;br&gt;
&amp;amp; : parameter separator&lt;/p&gt;

&lt;p&gt;for example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;https//www.domain.com/page?key1=value1&amp;amp;key2=value2 

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
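The same breakdown can be reproduced with Python's standard urllib.parse module. A quick sketch (the URL below uses a single query parameter to keep it simple):

```python
from urllib.parse import urlsplit, parse_qs

# Split a URL into the components listed above.
parts = urlsplit("http://www.domain.com:12345/page101?t=win#para5")
print(parts.scheme)    # http
print(parts.hostname)  # www.domain.com
print(parts.port)      # 12345
print(parts.path)      # /page101
print(parts.query)     # t=win
print(parts.fragment)  # para5
print(parse_qs(parts.query))  # {'t': ['win']}
```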



</description>
    </item>
    <item>
      <title>Making use of Lambda@Edge outside of us-east-1 in CDK</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Mon, 04 Jul 2022 04:58:06 +0000</pubDate>
      <link>https://dev.to/desawsume/making-use-of-lambdaedge-outside-of-us-east-1-in-cdk-c6a</link>
      <guid>https://dev.to/desawsume/making-use-of-lambdaedge-outside-of-us-east-1-in-cdk-c6a</guid>
      <description>&lt;p&gt;If you are deploying Lambda@Edge functions and CloudFront at us-east-1 it is straightforward to get all the resources stood up properly&lt;/p&gt;

&lt;h2&gt;
  
  
  Background
&lt;/h2&gt;

&lt;p&gt;ap-southeast-2 is my home region;&lt;br&gt;
us-east-1 is where I deploy the CDN and other global resources. &lt;/p&gt;

&lt;p&gt;CloudFront gets deployed from a stack in ap-southeast-2, as my OAI and S3 need to be in my home region (even though my CDN comes from the ap-southeast-2 stack, it will stay in the N. Virginia region without a problem).&lt;/p&gt;

&lt;p&gt;Note: Lambda@Edge functions must be created in the us-east-1 region, regardless of the region of the CloudFront distribution and stack&lt;/p&gt;

&lt;p&gt;When I looked closer: if you deploy the edge function in us-east-1, you can take advantage of the normal Lambda construct,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func = lambda_.Function(self, "EdgeFunction",
    runtime=lambda_.Runtime.NODEJS_12_X,
    handler="index.handler",
    code=lambda_.Code.from_asset(&amp;lt;path_of_your_lambda@edge function&amp;gt;)
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you have a stack structure like mine, you would need to upload the ARN or a relevant reference to SSM and let the resources pass from region to region.&lt;/p&gt;

&lt;p&gt;Thumbs up to the method below.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;EdgeFunction&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;edge_lambda_function = cloudfront.experimental.EdgeFunction(self, "edge_lambda_function",
    runtime=lambda_.Runtime.NODEJS_12_X,
    handler="index.handler",
    code=lambda_.Code.from_asset(&amp;lt;path_of_your_lambda@edge function&amp;gt;),
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So now, on the same stack, you can reference it in the main method.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdn_distribution = cloudfront.Distribution(
self, 
"cdn_distribution",
default_behavior=cloudfront.BehaviorOptions(origin=my_origin),
    additional_behaviors={
        "images/*": cloudfront.BehaviorOptions(
            origin=my_origin,
            edge_lambdas=[cloudfront.EdgeLambda(
                function_version=edge_lambda_function.current_version,
                event_type=cloudfront.LambdaEdgeEventType.VIEWER_REQUEST,
                include_body=True
            )
            ]
        )
    }
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Problem
&lt;/h2&gt;

&lt;p&gt;When using the CloudFront EdgeFunction with an explicitly specified IAM role attached to the function,&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;edge_lambda_role = iam.Role(
               self,
                "edge-lambda-iam-role",
             assumed_by=iam.CompositePrincipal(
                    iam.ServicePrincipal('lambda.amazonaws.com'),
                    iam.ServicePrincipal('edgelambda.amazonaws.com')
                ),
                managed_policies=[
                    iam.ManagedPolicy.from_aws_managed_policy_name('service-role/AWSLambdaBasicExecutionRole')
                ]
            )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Error -&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Adding this dependency ("edge-lambda-stack-xxxxxxxxxxxxxxxxxxxxxxxxx/edge-fucntion/Resource" depends on "cloudfront/edge-iam-role/Resource") would create a cyclic reference.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is the trickiest part, and it took me a while to understand what was going on.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;After some research, someone mentioned:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The created us-east-1 stack must have a dependency on the "main" stack, as the main stack can't be created first; doing so would mean the function ARN isn't available to read and that the Distribution would not deploy correctly. Creating the role in the main stack certainly leads to a circular dependency.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So I had to remove it and let the Lambda@Edge construct create the role on the fly (I think it does better than the IAM role we defined; I will explain that in a bit).&lt;/p&gt;

&lt;p&gt;Under the hood, the EdgeFunction method does this:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;It creates the IAM role, which has enough permission to do all the magic but is not overly permissive.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Z6PT9a5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nzesgil5ag9mtdjj1k72.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Z6PT9a5h--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nzesgil5ag9mtdjj1k72.png" alt="Image description" width="880" height="654"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;A custom resource also uploads the version ARN to Parameter Store (CloudFront wants the version ARN instead of the function ARN).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--kfLEHh86--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bk5k8ehq2rd0snpc7rmj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--kfLEHh86--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bk5k8ehq2rd0snpc7rmj.png" alt="Image description" width="880" height="506"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It took me 2 hours to get this all going. I enjoyed finding the solution, and I hope you find this helpful as well.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to shrink the EBS Volume</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Sat, 02 Jul 2022 22:54:54 +0000</pubDate>
      <link>https://dev.to/desawsume/how-to-shrink-the-ebs-volume-4hc6</link>
      <guid>https://dev.to/desawsume/how-to-shrink-the-ebs-volume-4hc6</guid>
      <description>&lt;p&gt;&lt;strong&gt;## TL;DR&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Stop the target EC2 instance (the target machine is Windows Server 2019).&lt;/li&gt;
&lt;li&gt;Snapshot the root EBS volume.&lt;/li&gt;
&lt;li&gt;Create a brand new, smaller EBS volume and make sure it is in the same AZ. &lt;/li&gt;
&lt;li&gt;Detach the source volume from the EC2 instance and attach it as a non-root volume to the worker EC2 below.&lt;/li&gt;
&lt;li&gt;Launch a new EC2 worker instance in the same availability zone as the target instance. Here I launched a small Amazon Linux 2 box to do this demo.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Main steps&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Fetch the EPEL repository to install ntfsprogs:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;amazon-linux-extras install epel -y&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yum install ntfsprogs&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;List the attached volumes:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;fdisk -l&lt;/code&gt;&lt;br&gt;
Note down the target volume device name.&lt;/p&gt;

&lt;p&gt;source volume - /dev/xvdf (The original volume)&lt;/p&gt;

&lt;p&gt;target volume - /dev/xvdg (The smaller size of the EBS volume)&lt;/p&gt;

&lt;p&gt;Dump current source disk partition info to a file&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sfdisk -d /dev/xvdf &amp;gt; xvdf.info&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Find out the minimum size the filesystem can be reduced to&lt;/p&gt;

&lt;p&gt;&lt;code&gt;ntfsresize --info /dev/xvdf&lt;br&gt;
&lt;/code&gt;&lt;br&gt;
&lt;code&gt;ntfsresize --info /dev/xvdf1&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Resize the filesystem to the suggested number. Enter ‘y’ when prompted to proceed.&lt;/p&gt;

&lt;p&gt;In this example it is 30 GB:&lt;br&gt;
&lt;code&gt;ntfsresize -s 30000M /dev/xvdf&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Install pv, which is a tool for monitoring the progress of data through a pipeline.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;yum install pv&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Find out the total MB we need to copy from the source disk to the new smaller disk. Here we need to cover everything from the reserved sectors (2048 sectors, or 1 MB) at the front of the disk to the end of the filesystem.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;echo $((30000*2048))&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Always add 1 to the MB total of your drive to cover those reserved sectors.&lt;/p&gt;
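As a sanity check on the arithmetic (512-byte sectors, so 2048 sectors per MB):

```python
# 1 MB = 1024 * 1024 bytes; with 512-byte sectors that is 2048 sectors per MB.
size_mb = 30000
sectors = size_mb * 2048
print(sectors)  # 61440000

# dd copies in 1 MB blocks; add 1 block for the reserved sectors at the front.
dd_count = size_mb + 1
print(dd_count)  # 30001
```

That dd_count of 30001 is exactly the count used in the dd command below.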

&lt;p&gt;Block copy from xvdf to xvdg, with 1MB block size and total count&lt;br&gt;
&lt;code&gt;dd bs=1M if=/dev/xvdf count=30001 | pv -s 30001m | dd of=/dev/xvdg bs=1M&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Dump the new target disk xvdg partition info to a file. Edit the file by updating the xvdg1 size to the new size in sectors.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;sfdisk -d /dev/xvdg &amp;gt; sfdisk-d.xvdg&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Detach xvdg and attach it as the root volume back on the original instance.&lt;/p&gt;

&lt;p&gt;Wait a few minutes and check! &lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to upgrade AWS CDK stacks from CDKv1 to v2</title>
      <dc:creator>desawsume</dc:creator>
      <pubDate>Sat, 02 Jul 2022 10:09:20 +0000</pubDate>
      <link>https://dev.to/desawsume/upgrading-aws-cdk-stacks-from-cdkv1-to-v2-gjl</link>
      <guid>https://dev.to/desawsume/upgrading-aws-cdk-stacks-from-cdkv1-to-v2-gjl</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Background&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;On June 1, 2023, AWS CDK version 1 will reach end of support. &lt;a href="https://github.com/aws/aws-cdk-rfcs/blob/master/text/0079-cdk-2.0.md#aws-cdk-maintenance-policy"&gt;AWS CDK Maintenance Policy&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Old Way&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;With CDKv1, we defined our CDK app packages through a requirements.txt file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws-cdk.core=={cdk_version1}",
aws-cdk.aws_codebuild=={cdk_version1}",
aws-cdk.aws_cloudtrail=={cdk_version1}",
aws-cdk.aws_codepipeline=={cdk_version1}",
aws-cdk.aws_codepipeline_actions=={cdk_version1}",
aws-cdk.aws_events=={cdk_version1}",
aws-cdk.aws_events_targets=={cdk_version1}",
aws-cdk.aws_iam=={cdk_version1}",
aws-cdk.aws_lambda_event_sources=={cdk_version1}",
aws-cdk.aws_lambda=={cdk_version1}",
aws-cdk.aws_s3=={cdk_version1}",
aws-cdk.aws_kms=={cdk_version1}
aws-cdk.aws-ec2==cdk_version1
aws-cdk.aws-elasticloadbalancingv2==cdk_version1
aws-cdk.aws-lambda==cdk_version1

boto3
pytest
-e .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It could potentially be blocked by the package manager, and we might get overwhelmed by all the aws-cdk package names. &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;CDK version 2&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Now, CDK and all stable constructs are combined into one package/module. Experimental modules still need to be installed one by one.&lt;/p&gt;

&lt;p&gt;It makes things much easier with aws-cdk-lib. &lt;/p&gt;
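To illustrate the difference, here is a small sketch that collapses a v1 requirements list into the v2 form. The to_v2_requirements() helper is hypothetical, the pinned version is just an example, and `constructs~=10.0` is the compatible-release spelling of the 10.x constraint:

```python
def to_v2_requirements(v1_lines, cdk2_version="2.15.0"):
    # Drop every per-service aws-cdk.* pin and replace it with the
    # single aws-cdk-lib dependency plus the constructs library.
    kept = [line for line in v1_lines if not line.strip().startswith("aws-cdk")]
    return ["aws-cdk-lib==" + cdk2_version, "constructs~=10.0"] + kept

print(to_v2_requirements(["aws-cdk.core==1.100.0", "aws-cdk.aws_s3==1.100.0", "boto3", "pytest"]))
# ['aws-cdk-lib==2.15.0', 'constructs~=10.0', 'boto3', 'pytest']
```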

&lt;h2&gt;
  
  
  &lt;strong&gt;Preparation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;To make the changes easier, I create a virtual environment to isolate it from my CDKv1 setup, because most of my CDK apps are written in Python. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;python3 -m venv .cdkv2&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Then activate your new virtual environment and run pip install to install the CDK v2 lib:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;source .cdkv2/bin/activate&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Note: Move all the old aws-cdk v1 packages to another txt file, for example requirements-dev.txt.&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cPEXSBYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c1caznbgtadzem5fk26g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cPEXSBYu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c1caznbgtadzem5fk26g.png" alt="Image description" width="528" height="182"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The requirements.txt looks similar to below, depending on your CDK version; use cdk --version to find out.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws-cdk-lib==2.15.0
constructs&amp;gt;=10.0.0,&amp;lt;11.0.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The cdk.json needs to be adjusted as well; some of the options in CDK v1 are now deprecated, which also simplifies the cdk.json file a lot.&lt;/p&gt;

&lt;p&gt;Overall, it looks something like below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "app": "python app.py",
  "context": {
    "@aws-cdk/aws-apigateway:usagePlanKeyOrderInsensitiveId": false,
    "@aws-cdk/aws-cloudfront:defaultSecurityPolicyTLSv1.2_2021": false,
    "@aws-cdk/aws-rds:lowercaseDbIdentifier": false,
    "@aws-cdk/core:stackRelativeExports": false
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Change the imports&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;All the mature and stable releases are in aws-cdk-lib. Most CDK apps only need to change the import statements.&lt;/p&gt;

&lt;p&gt;An example that I have:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_cdk import core
from aws_cdk.core import Tags
import aws_cdk.aws_iam as iam
import aws_cdk.aws_lambda as _lambda 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Change to&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from constructs import Construct

from aws_cdk import (
    aws_iam as iam,
    aws_lambda as _lambda,
    Stack,
    CfnOutput,
    Tags
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now re-bootstrap the environment to the account by simply running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk bootrap
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Luckily I didn't hit any errors; I saw other people have issues with cdk bootstrap version conflicts. &lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
