<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rimpal Johal</title>
    <description>The latest articles on DEV Community by Rimpal Johal (@rimpaljohal).</description>
    <link>https://dev.to/rimpaljohal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F495365%2F5d3f0dcb-755e-433a-967b-9f1010f68716.png</url>
      <title>DEV Community: Rimpal Johal</title>
      <link>https://dev.to/rimpaljohal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rimpaljohal"/>
    <language>en</language>
    <item>
      <title>Understanding AWS Instance Metadata Service: A Closer Look</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Sun, 21 Jan 2024 22:05:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/understanding-aws-instance-metadata-service-a-closer-look-69h</link>
      <guid>https://dev.to/aws-builders/understanding-aws-instance-metadata-service-a-closer-look-69h</guid>
      <description>&lt;p&gt;In the realm of cloud computing, particularly within the Amazon Web Services (AWS) ecosystem, there exists a service that, while invisible to many, plays a crucial role in the efficient management of virtual machines. This is the AWS Instance Metadata Service (IMDS). Below, we unravel what IMDS is, its significance in AWS, where you can find it within the AWS Management Console, and why it needs to be secured.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is the AWS Instance Metadata Service?
&lt;/h2&gt;

&lt;p&gt;AWS Instance Metadata Service is a specialized service that allows AWS Elastic Compute Cloud (EC2) instances to access instance-specific data, which is pivotal for the configuration or management of the running instance. The service is reachable at a special URL that is only accessible from the EC2 instance itself: &lt;a href="http://169.254.169.254/latest/meta-data/"&gt;http://169.254.169.254/latest/meta-data/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This metadata includes details such as the instance's IP address, the IAM role it is using, information about the AMI (Amazon Machine Image) used to launch the instance, and much more.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Does AWS Use IMDS?
&lt;/h2&gt;

&lt;p&gt;IMDS serves as a bridge between the EC2 instance and the AWS environment. It allows instances to bootstrap themselves with the necessary configuration without the need to hard-code sensitive data into the instance or AMI. This can include dynamically setting environment variables, configuring services at runtime, or even obtaining temporary credentials to access other AWS services securely.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Implications of IMDS
&lt;/h2&gt;

&lt;p&gt;While IMDS is undeniably useful, it can also pose a security risk if not properly secured. Here are a few potential threats to highlight:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If an attacker gains access to your EC2 instance, they could potentially retrieve IAM credentials and other sensitive data from the IMDS.&lt;/li&gt;
&lt;li&gt;AWS introduced IMDSv2, which requires session-oriented requests, to mitigate the risk of unauthorized retrieval of metadata. IMDSv1, on the other hand, allows any process that can access &lt;a href="http://169.254.169.254/latest/meta-data/"&gt;http://169.254.169.254/latest/meta-data/&lt;/a&gt; to retrieve instance metadata.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Is IMDSv1 and How Does It Differ from IMDSv2?
&lt;/h2&gt;

&lt;p&gt;IMDSv1 stands for "Instance Metadata Service version 1". It is the original version of the service Amazon EC2 provides so that instances can retrieve instance-specific data, such as the instance ID, public IP address, user data, instance type, and security groups.&lt;br&gt;
Applications or scripts running on the EC2 instance can use this data to make decisions or to configure themselves accordingly.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Endpoint: IMDS is available at a specific IP address (169.254.169.254) that can be accessed from within the instance. This is a link-local address, which means it's only accessible from the instance itself and not from outside. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;IMDSv1 vs IMDSv2: AWS introduced IMDSv2 as an enhanced version of IMDSv1. The primary difference is in the security model:&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;IMDSv1: Allows direct retrieval of metadata with a simple HTTP GET request to the aforementioned IP.&lt;/p&gt;

&lt;p&gt;IMDSv2: Introduces session-based requests. Before fetching metadata, you need to create a session by making a PUT request. This session returns a token, which must be provided in subsequent GET requests for metadata.&lt;/p&gt;
&lt;/blockquote&gt;
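&lt;p&gt;As a quick sketch of that IMDSv2 session flow (these commands only work from within the instance itself; the 21600-second TTL is just an example value):&lt;/p&gt;

```shell
# Step 1: request a session token with a PUT request (valid here for 6 hours)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Step 2: present the token on subsequent GET requests for metadata
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
```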

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Security Considerations: IMDSv1 can be vulnerable to certain types of attacks, especially if applications on the instance do not properly validate external input. For example, a server-side request forgery (SSRF) attack could trick the server into retrieving instance metadata. IMDSv2 was introduced to help mitigate this type of risk by requiring session tokens. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Usage: To use IMDSv1, one would typically use a tool like curl to fetch data from the endpoint. For example: &lt;br&gt;
curl &lt;a href="http://169.254.169.254/latest/meta-data/instance-id"&gt;http://169.254.169.254/latest/meta-data/instance-id&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  AWS recommendation and more details about IMDS
&lt;/h2&gt;

&lt;p&gt;AWS recommends using IMDSv2 due to its enhanced security features. If you're using EC2 instances and rely on instance metadata, it's a good idea to evaluate and potentially migrate to IMDSv2.&lt;br&gt;
In summary, IMDSv1 is the original version of the Instance Metadata Service provided by AWS EC2, allowing instances to access metadata about themselves. However, due to security improvements, AWS introduced IMDSv2, which is now the recommended version for accessing instance metadata.&lt;/p&gt;

&lt;p&gt;Both IMDSv1 and IMDSv2 are enabled by default on EC2 instances, so you can make requests using either version without any additional configuration.&lt;br&gt;
You can modify the IMDS settings for an EC2 instance: for example, to enforce the use of IMDSv2 for security reasons, you can disable IMDSv1 through the EC2 instance metadata options when launching a new instance or modifying an existing one.&lt;br&gt;
One caveat: very old EC2 instances that have been running since before the introduction of IMDSv2 (late 2019) might only have IMDSv1 available, although this would be rare.&lt;/p&gt;

&lt;p&gt;If you're trying to determine which versions of IMDS are enabled on a running EC2 instance, you'd typically check the instance's metadata options using the AWS Management Console, the AWS CLI, or the SDKs. AWS provides multiple ways to determine the IMDS version, as detailed below:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Management Console&lt;/strong&gt;: &lt;strong&gt;Checking for IMDSv1-Enabled Instances&lt;/strong&gt;: In the AWS Management Console, it's possible to determine which instances have IMDSv1 enabled by using the IMDSv2 attribute column on the Amazon EC2 page.
To view the IMDSv2 attribute column, follow these steps:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to the Amazon EC2 console and select 'Instances'.&lt;/li&gt;
&lt;li&gt;Click on the settings gear icon located in the upper-right corner.&lt;/li&gt;
&lt;li&gt;Find 'IMDSv2' in the list and activate it by moving the slider to the on position.&lt;/li&gt;
&lt;li&gt;Click 'Confirm' to apply this setting.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This action will display the IMDS status for your instances. If the status is marked as 'optional', IMDSv1 is enabled for that instance; a status of 'required' indicates that IMDSv1 is disabled.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identifying IMDSv1-Enabled Instances via AWS CLI&lt;/strong&gt;&lt;br&gt;
To check whether your instances have IMDSv1 enabled, use the AWS Command Line Interface (AWS CLI): run the aws ec2 describe-instances command and examine the HttpTokens value, which determines the enabled IMDS version. A value of optional means both IMDSv1 and IMDSv2 are enabled, whereas required means only IMDSv2 is in use, i.e. IMDSv1 is disabled.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Identifying IMDSv1-Enabled Instances Using Security Hub&lt;/strong&gt;&lt;br&gt;
Within Security Hub, there's a specific control for Amazon EC2 that leverages the AWS Config rule, named ec2-imdsv2-check, to verify whether the instance metadata version is set to IMDSv2. Should the HttpTokens setting be configured as optional, the rule will flag as NON_COMPLIANT, indicating that the EC2 instance is operating with IMDSv1 enabled.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
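&lt;p&gt;For example, the CLI check described above can be expressed as a single command (the --query projection shown here is one possible way to format the output):&lt;/p&gt;

```shell
# List each instance with its HttpTokens setting
# ("optional" = IMDSv1 still enabled, "required" = IMDSv2 only)
aws ec2 describe-instances \
  --query 'Reservations[].Instances[].[InstanceId,MetadataOptions.HttpTokens]' \
  --output table
```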

&lt;p&gt;Below is a Python code snippet using Boto3, the AWS SDK for Python, that will list all EC2 instances (both running and stopped) in an AWS account and determine which versions of IMDS are enabled on each:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3

# Initialize the EC2 client
ec2_client = boto3.client('ec2')

def check_imds_versions():
    # Paginate through all EC2 instances
    paginator = ec2_client.get_paginator('describe_instances')
    for page in paginator.paginate():
        for reservation in page['Reservations']:
            for instance in reservation['Instances']:
                instance_id = instance['InstanceId']
                state = instance['State']['Name']

                # Get IMDS version
                imds_version = "Unknown"
                if 'MetadataOptions' in instance:
                    if instance['MetadataOptions']['HttpTokens'] == 'required':
                        imds_version = "IMDSv2"
                    else:
                        imds_version = "IMDSv1 &amp;amp; IMDSv2"

                print(f"Instance ID: {instance_id}, State: {state}, IMDS Version: {imds_version}")

if __name__ == "__main__":
    check_imds_versions()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AWS IMDSv1 service uses a simple request/response access method, while IMDSv2 uses a session-oriented method. Both are reachable only from the instance itself, but v2 provides additional layers of protection against four types of vulnerabilities that could be used to try to access the IMDS.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementing IMDSv2 on EC2 Instances
&lt;/h2&gt;

&lt;p&gt;Upon identifying instances operating with the default IMDSv1, we can proceed to adopt recommended practices provided by AWS to transition these instances from IMDSv1 to the more secure IMDSv2 protocol.&lt;br&gt;
&lt;strong&gt;For Launching New Instances:&lt;/strong&gt;&lt;br&gt;
To ensure new instances utilize IMDSv2, disable IMDSv1 and enable IMDSv2 through the metadata-options parameter in the run-instances CLI command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 run-instances \
    --image-id &amp;lt;ami-xxxxxxxxxxx&amp;gt; \
    --instance-type m4.large \
    --metadata-options "HttpEndpoint=enabled,HttpTokens=required"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;To Update Running Instances:&lt;/strong&gt;&lt;br&gt;
Modify the metadata options of an already running instance to enforce the use of IMDSv2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 modify-instance-metadata-options \
    --instance-id &amp;lt;instance-xxxxxxx&amp;gt; \
    --http-tokens required \
    --http-endpoint enabled
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;For Configuring New AMIs:&lt;/strong&gt;&lt;br&gt;
When registering a new Amazon Machine Image (AMI), set it to support IMDSv2 by default:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws ec2 register-image \
    --name &amp;lt;private-image&amp;gt; \
    --root-device-name /dev/xvda \
    --block-device-mappings DeviceName=/dev/xvda,Ebs={SnapshotId=&amp;lt;snap-xxxxxxx&amp;gt;} \
    --imds-support v2.0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Launching Instances via the AWS Console with IMDSv2:&lt;/strong&gt;&lt;br&gt;
When initiating the launch of instances through the AWS Console, select 'Launch Instance'. In the instance configuration stages, navigate to the 'Advanced details' tab, scroll to the 'Metadata version' section, and select 'V2 only (token required)' from the available options. This setting ensures that the instance will exclusively use IMDSv2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementing IMDSv2 with EC2 Launch Templates:&lt;/strong&gt;&lt;br&gt;
EC2 launch templates serve as configurable blueprints that Amazon Auto Scaling groups leverage for the deployment of EC2 instances. When crafting these launch templates through the AWS Console, it's possible to designate the Metadata version. To enhance security, select 'V2 only (token required)' in the Metadata version settings. This choice ensures that any EC2 instances launched using this template will automatically use the more secure IMDSv2.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuring IMDSv2 in EC2 Launch Templates via AWS CloudFormation:&lt;/strong&gt;&lt;br&gt;
When utilizing AWS CloudFormation to create an EC2 launch template, it's crucial to set up the MetadataOptions property to enforce the use of IMDSv2. This is achieved by specifying HttpTokens as required. With this configuration, the process of fetching AWS Identity and Access Management (IAM) role credentials will only yield IMDSv2 credentials, effectively making IMDSv1 credentials inaccessible.&lt;br&gt;
To implement this, include the following properties within your CloudFormation template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "HttpEndpoint": "&amp;lt;String&amp;gt;",
  "HttpProtocolIpv6": "&amp;lt;String&amp;gt;",
  "HttpPutResponseHopLimit": &amp;lt;Integer&amp;gt;,
  "HttpTokens": "required",
  "InstanceMetadataTags": "&amp;lt;String&amp;gt;"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This setup ensures that any EC2 instances launched using the CloudFormation template are configured with the enhanced security standards of IMDSv2.&lt;/p&gt;
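&lt;p&gt;For context, a minimal (hypothetical) launch template resource carrying these options might look like the following; the resource and template names are placeholders:&lt;/p&gt;

```json
{
  "Resources": {
    "SecureLaunchTemplate": {
      "Type": "AWS::EC2::LaunchTemplate",
      "Properties": {
        "LaunchTemplateName": "imdsv2-only",
        "LaunchTemplateData": {
          "MetadataOptions": {
            "HttpEndpoint": "enabled",
            "HttpTokens": "required"
          }
        }
      }
    }
  }
}
```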

&lt;p&gt;&lt;strong&gt;Leveraging IMDSv2 in EC2 Instances with AWS Cloud Development Kit (CDK):&lt;/strong&gt;&lt;br&gt;
For those employing the AWS Cloud Development Kit (AWS CDK) to orchestrate and deploy EC2 instances, the requireImdsv2 property presents a streamlined way to enhance instance security. By setting requireImdsv2 to true, you effectively disable IMDSv1 and enable IMDSv2, ensuring a higher level of security compliance. In your AWS CDK script, configure the EC2 instance as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;new ec2.Instance(this, 'Instance', {
    // ...include other necessary parameters
    requireImdsv2: true,
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;AWS IMDSv2 provides additional protection against the following types of vulnerabilities compared to IMDSv1:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Server-Side Request Forgery (SSRF)&lt;/strong&gt;: SSRF vulnerabilities occur when a malicious actor can cause a server to make a request to an unintended location, such as the instance metadata service. IMDSv2 mitigates this risk by requiring a session token that is obtained via a PUT request. This token must be included in subsequent requests to access the metadata, which is something an SSRF vulnerability typically cannot perform since it can often only induce the server to make simple GET requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open Firewalls&lt;/strong&gt;: In some configurations, the firewall might be set to allow all traffic to reach the instance metadata service. This can be risky, especially if the services running on the instance have vulnerabilities or are misconfigured. IMDSv2 requires the use of a secret token, which is only known to the instance and AWS, thus reducing the risk from an open firewall.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Insecure Instance Roles&lt;/strong&gt;: Instance roles are used to grant permissions to EC2 instances to access other AWS services. If these roles are overly permissive, they can be exploited. IMDSv2 reduces the risk by making it more difficult for a potential attacker to retrieve the credentials associated with the role since they would need to obtain a valid session token first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Misconfigured HTTP Proxies&lt;/strong&gt;: HTTP proxies within an EC2 instance can inadvertently allow access to the IMDS if not properly secured. Since IMDSv1 only requires a simple HTTP GET request, any process that can send such requests through the proxy can potentially access the metadata service. IMDSv2's use of a session-oriented approach requires a PUT request to get a token, which is typically not allowed through an HTTP proxy by default, thus mitigating this risk.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While IMDSv1 is considered secure under many circumstances, IMDSv2 adds these layers of security to protect against specific attack vectors that have been identified as potential risks. AWS recommends using IMDSv2 whenever possible to leverage these additional security benefits.&lt;/p&gt;

</description>
      <category>security</category>
      <category>aws</category>
      <category>cloudsecurity</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS Lambda Permissions: Resource-Based Policies vs. IAM Roles</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Sat, 04 Nov 2023 18:08:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-lambda-permissions-resource-based-policies-vs-iam-roles-521n</link>
      <guid>https://dev.to/aws-builders/aws-lambda-permissions-resource-based-policies-vs-iam-roles-521n</guid>
      <description>&lt;p&gt;When it comes to managing access permissions for your AWS Lambda functions, AWS provides two primary methods: &lt;strong&gt;resource-based policies&lt;/strong&gt; and &lt;strong&gt;IAM roles&lt;/strong&gt;. Each method has its own set of use cases, benefits, and limitations. In this article, I'll delve into the details of Lambda resource-based policies and IAM roles, compare them, provide examples of each, and offer guidance on selecting the best approach in line with AWS best practices and the AWS Well-Architected Framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is a Lambda Resource-Based Policy?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Lambda resource-based policy is an access policy attached directly to a Lambda function. This policy defines which entities (principally AWS accounts and other AWS services) can invoke your Lambda function and what specific actions they can perform. It's a straightforward way to grant access to your Lambda function without having to manage separate IAM (Identity and Access Management) policies or roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example of a Lambda Resource-Based Policy:&lt;/strong&gt;&lt;br&gt;
Imagine you want to allow another AWS account to invoke your Lambda function. Here's a snippet of what that resource-based policy might look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-west-2:987654321098:function:my-function"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Note that this policy can be viewed and edited directly in the AWS Lambda console under the 'Permissions' tab of your Lambda function.&lt;/p&gt;
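&lt;p&gt;Equivalently, such a statement can be added from the command line with add-permission; the function name, statement ID, and account ID below are placeholders:&lt;/p&gt;

```shell
# Grant account 123456789012 permission to invoke the function
aws lambda add-permission \
  --function-name my-function \
  --statement-id cross-account-invoke \
  --action lambda:InvokeFunction \
  --principal 123456789012
```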

&lt;p&gt;&lt;strong&gt;What is an IAM Role?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An IAM role is an entity within AWS that defines a set of permissions that dictate what the identity can and cannot do in AWS. IAM roles can be assumed by trusted entities, such as IAM users, applications, or AWS services, including AWS Lambda. Roles are a secure way to grant permissions that applications can assume when they need to perform actions on your behalf.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example of an IAM Role Policy for Lambda:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Consider an IAM role that grants permission to invoke any Lambda function in your account:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "*"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can create and manage IAM roles in the IAM console, under the 'Roles' section.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Resource-Based Policy vs. IAM Roles: The Differences&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The key differences between the two are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Attachment&lt;/strong&gt;: Resource-based policies are attached directly to a Lambda function, while IAM roles are standalone entities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Management&lt;/strong&gt;: Resource-based policies are managed in the Lambda console, whereas IAM roles are managed in the IAM console.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalability&lt;/strong&gt;: Resource-based policies are less scalable due to their size limit. IAM roles are more scalable as they don't have this limitation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: IAM roles are more flexible, as they can be assumed by many different accounts and services without modifying the policy on the resource itself.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practices and the AWS Well-Architected Framework&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;According to AWS best practices and the Well-Architected Framework, the choice between using a Lambda resource-based policy and IAM roles largely depends on the scale of your operation and the need for flexibility.&lt;/p&gt;

&lt;p&gt;For simple scenarios with a few resources, a resource-based policy is adequate and easier to manage. However, as your environment grows, IAM roles are recommended. They offer greater scalability and flexibility, which align with the AWS Well-Architected Framework's principle of granting least privilege—providing only the necessary permissions to perform a task.&lt;/p&gt;

&lt;p&gt;IAM roles are particularly beneficial when dealing with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-account access.&lt;/li&gt;
&lt;li&gt;Applications that run on Amazon EC2 or other AWS services that need to perform actions on your Lambda functions.&lt;/li&gt;
&lt;li&gt;Environments where functions are invoked by many different principals.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cross-Account Access via IAM Roles&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To enable cross-account access, you must create an IAM role with the necessary permissions and then allow other AWS accounts to assume that role. Here's how you can achieve this:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Create an IAM Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First, you create an IAM role in the account that owns the Lambda function. You define a trust policy that allows entities from the other account to assume the role.&lt;/p&gt;

&lt;p&gt;Trust Policy for IAM Role (Trust Relationship):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::OTHER_ACCOUNT_ID:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2: Attach Permissions to the IAM Role&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Next, you attach a permissions policy to the IAM role that grants the lambda:InvokeFunction permission.&lt;/p&gt;

&lt;p&gt;Permissions Policy for IAM Role:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:FUNCTION_NAME"
    }
  ]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace REGION, ACCOUNT_ID, and FUNCTION_NAME with your specific details.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Assume the IAM Role from the Other Account&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The entity in the other account must then use the AWS Security Token Service (STS) to assume the IAM role before invoking the function. Here's how you could do that programmatically using the AWS SDK for Python (Boto3).&lt;/p&gt;

&lt;p&gt;To conclude: while resource-based policies offer a quick and easy way to grant access to your Lambda functions, they come with limitations, especially concerning policy size and scalability. IAM roles, on the other hand, provide a more robust and flexible solution that can grow with your AWS environment.&lt;/p&gt;

&lt;p&gt;When designing your cloud architecture, consider using IAM roles for their scalability and adherence to the principle of least privilege. By doing so, you'll ensure your AWS infrastructure remains secure, efficient, and well-aligned with AWS best practices.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>serverless</category>
      <category>lambda</category>
      <category>security</category>
    </item>
    <item>
      <title>Managing Amazon S3 Objects with Python</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Tue, 28 Mar 2023 05:48:05 +0000</pubDate>
      <link>https://dev.to/aws-builders/managing-amazon-s3-objects-with-python-489p</link>
      <guid>https://dev.to/aws-builders/managing-amazon-s3-objects-with-python-489p</guid>
      <description>&lt;p&gt;Amazon Simple Storage Service (S3) is a popular cloud storage service that provides scalable and secure object storage for various types of data. Managing S3 objects can be a daunting task, especially when dealing with large datasets. In this blog post, we'll explore how to use Python and the boto3 library to manage S3 objects and list the objects created after a specific date and time.&lt;/p&gt;

&lt;p&gt;To get started, we'll need to install the boto3 library, which is the Amazon Web Services (AWS) SDK for Python. Once installed, we can create an S3 client and set the region name and bucket name that we want to work with. We'll also set the timezone to Melbourne, Australia, using the pytz library.&lt;/p&gt;

&lt;p&gt;Next, we'll set the start date for listing the S3 objects. In this example, we'll set the start date to March 5, 2023, at 12:00 AM Melbourne time, using the datetime module together with the pytz timezone.&lt;/p&gt;

&lt;p&gt;We'll then use a paginator to iterate over all objects in the S3 bucket and keep only those created after the specified start date. The paginator helps us handle large result sets by fetching them across multiple API calls.&lt;/p&gt;

&lt;p&gt;We'll convert the UTC LastModified time of each object to Melbourne timezone using the astimezone() method and format it as a string that includes the timezone name and offset. We'll also convert the size of each object from bytes to megabytes and store the filtered objects in a list.&lt;/p&gt;

&lt;p&gt;Finally, we'll print and write the list of objects to a file named 's3_objects.txt' using the pprint and open() methods. The output file will include the object key, LastModified time, and size in megabytes.&lt;/p&gt;

&lt;p&gt;Here's the complete Python script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import datetime
import pprint
import pytz

# Create an S3 client
s3 = boto3.client('s3', region_name='ap-southeast-2')

# Set the S3 bucket name
bucket_name = 'myuats3bucket'
tz = pytz.timezone('Australia/Melbourne')
# Set the start date for listing objects (midnight, 5 March 2023, Melbourne time).
# Note: pytz timezones must be attached with localize(), not via tzinfo=.
start_date = tz.localize(datetime.datetime(2023, 3, 5))

# Set the prefix for filtering objects
prefix = ''

# Initialize the list of objects
object_list = []

# Use a paginator to iterate over all objects in the bucket
paginator = s3.get_paginator('list_objects_v2')
try:
    for page in paginator.paginate(Bucket=bucket_name, Prefix=prefix):
        # Skip pages that contain no objects
        if 'Contents' not in page:
            continue

        # Iterate over each object in the current page of results
        for obj in page['Contents']:
            # Add the object to the list if it was created after the start date
            if obj['LastModified'] &amp;gt;= start_date:
                # Convert UTC LastModified time to Melbourne timezone
                melbourne_time = obj['LastModified'].astimezone(tz)
                obj['LastModified'] = melbourne_time.strftime('%Y-%m-%d %H:%M:%S %Z%z')
                # Convert size to MB
                obj['Size'] = round(obj['Size'] / (1024 * 1024), 2)
                object_list.append(obj)
except Exception as e:
    print("Error:", e)

# Print the list of objects
pprint.pprint(object_list)

# Open the file for writing
with open('s3_objects.txt', 'w') as f:
    # Write the formatted output to the file
    for obj in object_list:
        f.write(f"Key: {obj['Key']}, Last Modified: {obj['LastModified']}, Size: {obj['Size']} MB\n")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In summary, using Python and the boto3 library to manage Amazon S3 objects is a powerful way to automate and streamline your data management workflows. With the ability to filter and format S3 objects based on specific criteria, you can save time and reduce errors in your data processing pipeline. Whether you're dealing with small or large datasets, this approach can help you manage your S3 objects more efficiently.&lt;/p&gt;

&lt;p&gt;Thank you for reading this blog post, and we hope that you found it helpful. If you have any questions or feedback, please feel free to leave a comment below.&lt;/p&gt;

</description>
      <category>s3</category>
      <category>storage</category>
      <category>python</category>
      <category>boto3</category>
    </item>
    <item>
      <title>Restore Postgresql Database in a Docker Container</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Mon, 27 Mar 2023 07:09:11 +0000</pubDate>
      <link>https://dev.to/aws-builders/restore-postgresql-database-in-a-docker-container-3h85</link>
      <guid>https://dev.to/aws-builders/restore-postgresql-database-in-a-docker-container-3h85</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we will discuss a common challenge faced by many developers when setting up and configuring a PostgreSQL database in a Docker container. The problem we encountered was the need to create a PostgreSQL database with the PostGIS extension enabled and to import data from an existing SQL backup file. This process involves a series of steps, such as initializing the database, starting the PostgreSQL server, creating the database, enabling the PostGIS extension, and importing the data from the backup file.&lt;/p&gt;

&lt;p&gt;To address this challenge, we'll use Docker, a platform that allows us to easily build, ship, and run applications in containers. Containers provide an isolated environment for running applications, which helps streamline the development and deployment process. In our case, we'll create a Docker container with PostgreSQL, a powerful and widely-used open-source relational database management system. Additionally, we'll be using the PostGIS extension, which adds support for geographic objects, allowing location queries to be run in SQL.&lt;/p&gt;

&lt;p&gt;In this blog post, we will walk you through the entire process of creating a Docker container with a PostgreSQL database and PostGIS extension enabled. We'll also demonstrate how to import data from an existing SQL backup file into the newly created database. This tutorial will serve as a comprehensive guide for developers who need to set up a PostgreSQL database with the PostGIS extension in a Docker container and import data from a backup file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Setting up the PostgreSQL Docker container&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this section, we'll guide you through the process of setting up a PostgreSQL Docker container with the PostGIS extension enabled. We'll start by creating a custom Dockerfile, which will define the necessary instructions and configurations for our container.&lt;/p&gt;

&lt;p&gt;Here's the Dockerfile we used to create our PostgreSQL container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Base image with PostgreSQL and PostGIS
FROM postgis/postgis

# Set up environment variables
ENV PGDATA=/var/pgdata
ENV POSTGRES_USER=postgres
ENV POSTGRES_PASSWORD=mysecretpassword
ENV POSTGRES_DB=mypostgresDB

# Create a directory to store PostgreSQL data and logs
RUN mkdir -p ${PGDATA} /tmp /var/log/postgresql &amp;amp;&amp;amp; chown -R postgres:postgres ${PGDATA} /tmp /var/log/postgresql

WORKDIR /data

# Expose the PostgreSQL port
EXPOSE 5432

# Set the user to run the container
USER postgres

# Copy the entrypoint script to the container
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# Run the entrypoint script
CMD ["/entrypoint.sh"]

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this Dockerfile, we start with the postgis/postgis base image, which already includes PostgreSQL and the PostGIS extension. We then set up environment variables for the PostgreSQL data directory, the default PostgreSQL user, password, and database name. These variables will be used later in our entrypoint script.&lt;/p&gt;

&lt;p&gt;We create directories to store PostgreSQL data and logs, ensuring that the postgres user has the necessary permissions to access these directories. The PostgreSQL port (5432) is exposed to allow connections from outside the container.&lt;/p&gt;

&lt;p&gt;Finally, we copy our custom entrypoint.sh script into the container and set it as the command to be executed when the container starts. The entrypoint script is responsible for initializing the PostgreSQL data directory, starting the PostgreSQL server, creating the database with the PostGIS extension, and importing data from the SQL backup file.&lt;/p&gt;

&lt;p&gt;By using this Dockerfile, we're able to create a customized PostgreSQL container with the PostGIS extension enabled, tailored to our specific requirements. This setup ensures that our database is configured and ready to use, with the necessary data imported, as soon as the container starts running.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creating the entrypoint.sh script&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The entrypoint.sh script is an essential part of our PostgreSQL Docker container setup, as it handles the initialization and configuration of the database. In this section, we'll walk you through the contents of the entrypoint.sh script, explaining each step in detail.&lt;/p&gt;

&lt;p&gt;Here's the content of the entrypoint.sh script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Initialize the PostgreSQL data directory
initdb -D ${PGDATA}

# Start PostgreSQL in the background
pg_ctl -D ${PGDATA} -l /var/log/postgresql/logfile start

# Wait for PostgreSQL to start
wait_postgresql() {
  while ! pg_isready -q; do
    echo "Waiting for PostgreSQL to start..."
    sleep 1
  done
}
wait_postgresql

# Create the Postgres database named "mypostgresDB" with the "template_postgis" template
createdb mypostgresDB --template=template_postgis

# Extract the schema and data from the backup file
pg_restore -f /tmp/schema_new.sql -s /data/mypostgresDB-backup.sql 2&amp;gt;&amp;amp;1 | tee /var/log/postgresql/schema_extract.log
pg_restore -f /tmp/new_data.sql -a /data/mypostgresDB-backup.sql 2&amp;gt;&amp;amp;1 | tee /var/log/postgresql/data_extract.log

# Import the schema and data into the new database
psql --dbname=mypostgresDB -f /tmp/schema_new.sql 2&amp;gt;&amp;amp;1 | tee /var/log/postgresql/schema_import.log
psql --dbname=mypostgresDB -f /tmp/new_data.sql 2&amp;gt;&amp;amp;1 | tee /var/log/postgresql/data_import.log

# Keep PostgreSQL running
tail -f /var/log/postgresql/logfile

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Initialize the PostgreSQL data directory&lt;/strong&gt;: The initdb command initializes the data directory for the PostgreSQL server, setting up the necessary files and folders.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start PostgreSQL in the background&lt;/strong&gt;: The pg_ctl command starts the PostgreSQL server in the background, with logs being written to the specified logfile.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Wait for PostgreSQL to start&lt;/strong&gt;: The wait_postgresql function checks if the PostgreSQL server is ready to accept connections. It uses the pg_isready command to query the server status and waits until the server is ready.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Create the PostgreSQL database&lt;/strong&gt;: The createdb command creates a new PostgreSQL database named mypostgresDB using the template_postgis template, which includes the PostGIS extension.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Extract schema and data from the backup file&lt;/strong&gt;: The pg_restore command is used to extract the schema and data from the SQL backup file. We create two separate files: schema_new.sql for the schema and new_data.sql for the data. The extraction process logs are written to /var/log/postgresql/.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Import schema and data into the new database&lt;/strong&gt;: The psql command imports the schema and data from the extracted files into the newly created mypostgresDB database. The import process logs are written to /var/log/postgresql/.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Keep PostgreSQL running&lt;/strong&gt;: The tail -f command ensures that the PostgreSQL server keeps running by continuously monitoring the logfile.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;By using this entrypoint.sh script, we ensure that our PostgreSQL container is fully configured and ready to use when it starts, with the necessary data imported from the backup file.&lt;/p&gt;
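&lt;p&gt;The wait_postgresql function in step 3 is an instance of a general poll-until-ready pattern. Here is a minimal Python sketch of the same idea; the helper name wait_until and the fake readiness probe are my own, for illustration only:&lt;/p&gt;

```python
import time

def wait_until(check, attempts=30, delay=1.0):
    """Poll check() until it returns True, giving up after the given number of attempts."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(delay)
    return False

# Demonstration: a fake readiness probe that fails twice before succeeding
remaining = {"failures": 2}

def fake_pg_isready():
    if remaining["failures"]:
        remaining["failures"] -= 1
        return False
    return True

print(wait_until(fake_pg_isready, attempts=5, delay=0))  # prints True
```

&lt;p&gt;Bounding the number of attempts, as above, is worth adding to the shell loop too, so a container whose database never starts fails fast instead of hanging.&lt;/p&gt;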

&lt;p&gt;&lt;strong&gt;Running the Docker container and mounting the backup file&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this section, we'll cover the steps to run the Docker container and mount the backup file during the process. This allows the entrypoint.sh script to access the backup file and perform the required operations to restore the database schema and data.&lt;/p&gt;

&lt;p&gt;First, build the Docker image using the Dockerfile created earlier. Navigate to the directory containing the Dockerfile and run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t postgres-postgis .

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This command builds a new Docker image with the tag postgres-postgis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Download the SQL backup file&lt;/strong&gt;: Before running the Docker container, ensure that you have downloaded the SQL backup file from the S3 bucket to your local system. This file will be mounted as a volume in the Docker container to restore the database.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run the Docker container&lt;/strong&gt;: To run the Docker container and mount the backup file, use the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d --name my-postgres -p 5432:5432 -v /path/to/your/local/backupfile.sql:/data/mypostgresDB-backup.sql postgres-postgis

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace /path/to/your/local/backupfile.sql with the path to the downloaded SQL backup file on your local system. This command does the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs the Docker container in detached mode (-d) with the name my-postgres.&lt;/li&gt;
&lt;li&gt;Maps the local port 5432 to the container's port 5432 (-p 5432:5432), allowing you to connect to the PostgreSQL server running inside the container.&lt;/li&gt;
&lt;li&gt;Mounts the backup file as a volume in the container at /data/mypostgresDB-backup.sql. This allows the entrypoint.sh script to access the backup file during container startup.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once the container is up and running, the entrypoint.sh script will initialize the PostgreSQL server, create the database, extract the schema and data from the backup file, and import them into the new database. You can then connect to the PostgreSQL server using your preferred client and verify that the database has been restored correctly.&lt;/p&gt;
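&lt;p&gt;Before connecting with a SQL client, a quick TCP reachability check of the mapped port can save debugging time. This is a minimal sketch; it only confirms that something is listening on the port, not that the restore itself succeeded.&lt;/p&gt;

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the PostgreSQL port published by the container
print(port_open("localhost", 5432))
```

&lt;p&gt;If this returns False, check the container logs before investigating client-side configuration.&lt;/p&gt;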

&lt;p&gt;By following these steps, you have successfully created a Docker container for a PostgreSQL server with the PostGIS extension and restored a database using an SQL backup file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this blog post, we have demonstrated how to set up a PostgreSQL Docker container with the PostGIS extension and restore a database using an SQL backup file. We walked through the process of creating a Dockerfile to define the container, implementing an entrypoint.sh script to manage the database restoration process, and running the Docker container while mounting the backup file.&lt;/p&gt;

&lt;p&gt;By leveraging Docker and PostgreSQL, we have created an easily deployable and scalable solution for managing and restoring geospatial databases. This approach not only simplifies the database restoration process but also ensures a consistent environment for your database server across different stages of development, testing, and production.&lt;/p&gt;

&lt;p&gt;With this knowledge, you can now confidently use Docker and PostgreSQL to manage your geospatial databases, and restore them from backup files as needed.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>docker</category>
      <category>containers</category>
      <category>containerapps</category>
    </item>
    <item>
      <title>AWS Multi-AZ FSX</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Sat, 22 Oct 2022 01:31:50 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-multi-az-fsx-3715</link>
      <guid>https://dev.to/aws-builders/aws-multi-az-fsx-3715</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Amazon FSx for Windows File Server is a fully managed file system that provides the same feature set as NTFS. The file system can be deployed across multiple Availability Zones in an AWS Region, which helps ensure high availability and durability of your file system in the event of a catastrophic failure or disaster.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Multi-AZ?
&lt;/h2&gt;

&lt;p&gt;AWS Multi-AZ is a deployment option that provides high availability and durability for your file system in the event of a catastrophic failure or disaster. For example, if you have an on-premises Windows File Server (WFS) and you want to move it to AWS, a Multi-AZ deployment can provide high availability and durability for your file system by replicating data between multiple Availability Zones.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is FSx for Windows File Server
&lt;/h2&gt;

&lt;p&gt;Amazon FSx for Windows File Server provides fully managed Microsoft Windows file servers, backed by a fully native Windows file system. FSx for Windows File Server has the features, performance, and compatibility to easily lift and shift enterprise applications to the AWS Cloud. You can use it to store your data and share it with other application services running on the same instance or with applications running in other AWS Regions. Amazon FSx for Windows File Server offers benefits such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;High availability and durability&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Resilience to common failures through synchronous replication across Availability Zones, which ensures a standby file server in a second Availability Zone holds a replica of your file system data. If one Availability Zone fails, for example because of an electrical problem or hardware failure, the file system fails over to the standby and continues serving files without interruption.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What is AWS Multi-AZ FSx?
&lt;/h2&gt;

&lt;p&gt;AWS Multi-AZ FSx is a file system that can be deployed across multiple Availability Zones in the AWS Region. This will help ensure high availability and durability of your file system in the event of a catastrophic failure or disaster.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does AWS Multi-AZ FSx benefit me?
&lt;/h2&gt;

&lt;p&gt;You can't afford to have your business data stored in a single location, so Multi-AZ FSx is a natural fit. It keeps your data safe from catastrophic events in any one zone, whether an earthquake or a cyber attack.&lt;br&gt;
You can also expect better availability with Multi-AZ FSx, since there are two copies of the data in different Availability Zones. This helps ensure that no matter what happens in one zone, your applications still have access to all of the information needed for transactions and other operations to complete successfully.&lt;/p&gt;

&lt;p&gt;Also, as Multi-AZ FSx is hybrid-enabled, it helps you migrate and synchronize large data sets from your data centre to AWS and makes that data immediately available to a broad set of integrated AWS services.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do I create an AWS Multi-AZ deployment for FSx?
&lt;/h2&gt;

&lt;p&gt;You can deploy Multi-AZ FSx either from the AWS Console or from a CloudFormation template. I have used a CloudFormation template to create FSx file systems across multiple Availability Zones.&lt;/p&gt;

&lt;p&gt;Below is a CloudFormation snippet for an AWS Multi-AZ FSx deployment, for reference. Please note that it needs to be adapted to your business requirements. In the snippet, I create three Multi-AZ FSx file systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;DRBSQLDataFileSystemMAZ Size 40 GB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DRBSQLLogsFileSystemMAZ Size 32 GB&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DRBSQLDBFileSystemMAZ   Size 32 GB&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

Resources:
  DRBSQLDataFileSystemMAZ:
    Type: 'AWS::FSx::FileSystem'
    Properties:
      FileSystemType: WINDOWS
      StorageCapacity: 40
      StorageType: SSD
      SubnetIds:
        - !Ref DRBWorkloadSubnetID1
        - !Ref DRBWorkloadSubnetID2
      SecurityGroupIds:
        - !FindInMap
          - !Ref Environment
          - DRBSg
          - sgSQLFsx
      Tags:
        - Key: Name
          Value: !Sub DRB-SQL-Data-Fsx-Maz
        - Key: OS
          Value: WINDOWS
        - Key: !Ref Environment
          Value: Prod
        - Key: AppName
          Value: SQLWindowsFileShareMaz
      WindowsConfiguration:
        ThroughputCapacity: 32
        AutomaticBackupRetentionDays: 30
        WeeklyMaintenanceStartTime: '6:17:30'
        DailyAutomaticBackupStartTime: '17:45'
        CopyTagsToBackups: false
        DeploymentType: MULTI_AZ_1
        PreferredSubnetId: !Ref DRBWorkloadSubnetID1
        SelfManagedActiveDirectoryConfiguration:
          DnsIps:
            - !Select
              - 0
              - !Split
                - ','
                - !Ref DomainControllerIps
          DomainName: 'example.com'
          UserName: !Ref DRBSQLDBServiceAccountUName
          Password: !Ref DRBSQLDBServiceAccountPwd

  DRBSQLLogsFileSystemMAZ:
    Type: 'AWS::FSx::FileSystem'
    Properties:
      FileSystemType: WINDOWS
      StorageCapacity: 32
      StorageType: SSD
      SubnetIds:
        - !Ref DRBWorkloadSubnetID1
        - !Ref DRBWorkloadSubnetID2
      SecurityGroupIds:
        - !FindInMap
          - !Ref Environment
          - DRBSg
          - sgSQLFsx
      Tags:
        - Key: Name
          Value: !Sub DRB-SQL-Logs-Fsx-Maz
        - Key: OS
          Value: WINDOWS
        - Key: !Ref Environment
          Value: Prod
        - Key: AppName
          Value: SQLWindowsFileShareMaz
      WindowsConfiguration:
        ThroughputCapacity: 32
        AutomaticBackupRetentionDays: 30
        WeeklyMaintenanceStartTime: '6:17:00'
        DailyAutomaticBackupStartTime: '17:15'
        CopyTagsToBackups: false
        DeploymentType: MULTI_AZ_1
        PreferredSubnetId: !Ref DRBWorkloadSubnetID1
        SelfManagedActiveDirectoryConfiguration:
          DnsIps:
            - !Select
              - 0
              - !Split
                - ','
                - !Ref DomainControllerIps
          DomainName: 'example.com'
          UserName: !Ref DRBSQLDBServiceAccountUName
          Password: !Ref DRBSQLDBServiceAccountPwd

  DRBSQLDBFileSystemMAZ:
    Type: 'AWS::FSx::FileSystem'
    Properties:
      FileSystemType: WINDOWS
      StorageCapacity: 32
      StorageType: SSD
      SubnetIds:
        - !Ref DRBWorkloadSubnetID1
        - !Ref DRBWorkloadSubnetID2
      SecurityGroupIds:
        - !FindInMap
          - !Ref Environment
          - DRBSg
          - sgSQLFsx
      Tags:
        - Key: Name
          Value: !Sub DRB-SQL-TempDB-Fsx-Maz
        - Key: OS
          Value: WINDOWS
        - Key: !Ref Environment
          Value: Prod
        - Key: AppName
          Value: SQLWindowsFileShareMaz
      WindowsConfiguration:
        ThroughputCapacity: 32
        AutomaticBackupRetentionDays: 30
        WeeklyMaintenanceStartTime: '6:16:30'
        DailyAutomaticBackupStartTime: '16:45'
        CopyTagsToBackups: false
        DeploymentType: MULTI_AZ_1
        PreferredSubnetId: !Ref DRBWorkloadSubnetID1
        SelfManagedActiveDirectoryConfiguration:
          DnsIps:
            - !Select
              - 0
              - !Split
                - ','
                - !Ref DomainControllerIps
          DomainName: 'example.com'
          UserName: !Ref DRBSQLDBServiceAccountUName
          Password: !Ref DRBSQLDBServiceAccountPwd

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;
  
  
  Security and Permission for Multi-AZ FSx
&lt;/h2&gt;

&lt;p&gt;AWS recommends opening the following Windows file server ports as a mandatory requirement for FSx to work in a new deployment. We defined these ports in a security group and attached that security group to the Multi-AZ FSx. &lt;em&gt;However, one port detail which is missing from the AWS documentation is port 464, which is required as an inbound rule for both TCP and UDP traffic on domain controller instances&lt;/em&gt;. We added port 464 (TCP/UDP) to the security group attached to the domain controller instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth5yxgbaotu29qxm1xm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fth5yxgbaotu29qxm1xm0.png" alt="FSx for Windows file server port requirement"&gt;&lt;/a&gt; &lt;/p&gt;
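&lt;p&gt;For reference, the extra inbound rules for port 464 (used for Kerberos password changes) can be expressed as CloudFormation ingress entries like the following. This is a sketch only: the security group reference DomainControllerSg and the CIDR parameter FsxSubnetCidr are placeholders to adapt to your environment.&lt;/p&gt;

```yaml
  DomainControllerKerberosTcp:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref DomainControllerSg   # placeholder: your domain controller security group
      IpProtocol: tcp
      FromPort: 464
      ToPort: 464
      CidrIp: !Ref FsxSubnetCidr         # placeholder: CIDR covering the FSx subnets
  DomainControllerKerberosUdp:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId: !Ref DomainControllerSg
      IpProtocol: udp
      FromPort: 464
      ToPort: 464
      CidrIp: !Ref FsxSubnetCidr
```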

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AWS Multi-AZ FSx provides added protection for file systems. The concept of this deployment is similar to a database cluster: it keeps copies of your data in multiple locations so that if one location fails, a copy is still available. In addition, AWS Multi-AZ FSx provides a way to create file shares across multiple Availability Zones that can be managed by the same administrator or group of administrators through a single console.&lt;/p&gt;

</description>
      <category>storage</category>
      <category>s3</category>
      <category>fsx</category>
      <category>devops</category>
    </item>
    <item>
      <title>Migration Evaluator install and Configuration</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Sun, 14 Aug 2022 06:41:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/migration-evaluator-install-and-configuration-32in</link>
      <guid>https://dev.to/aws-builders/migration-evaluator-install-and-configuration-32in</guid>
      <description>&lt;p&gt;Migration Evaluator which is formerly known as TSO Logic is the AWS provided tool which can facilitate access to insights on data center assets(mainly the workload earmarked for migration)and accelerate business decision-making for migration to AWS at no cost. Following data collection from data center(on-premise) you will quickly receive an assessment including a projected cost estimate and savings of running your on-premises workloads in the AWS Cloud. &lt;/p&gt;

&lt;p&gt;To gather the required information, we need to deploy the agentless Migration Evaluator data collector to monitor VMware, Hyper-V, and bare-metal infrastructure. AWS Migration Evaluator leverages TSO Logic technology as part of the assessment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TSO- Preinstall checklist&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Has an account been created on &lt;a href="https://console.tsologic.com" rel="noopener noreferrer"&gt;https://console.tsologic.com&lt;/a&gt;? Please contact your Migration Evaluator specialist if you have not received an invitation request.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You need a dedicated Windows server on which to install the Migration Evaluator (TSO) software, with the following requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjdgqzxtgn3lxx626lxn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyjdgqzxtgn3lxx626lxn.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How to Install TSO on Windows Server
&lt;/h2&gt;

&lt;p&gt;To install the TSO software, log in to &lt;a href="https://console.tsologic.com" rel="noopener noreferrer"&gt;https://console.tsologic.com&lt;/a&gt; and download the following tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Download and save the TSOBootstapper.exe from &lt;a href="https://console.tsologic.com/discover/tools" rel="noopener noreferrer"&gt;https://console.tsologic.com/discover/tools&lt;/a&gt;&lt;br&gt;
onto the new designated server for TSO installation&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download and save the Migration Evaluator Collector software MSI from&lt;br&gt;
&lt;a href="https://console.tsologic.com/discover/tools" rel="noopener noreferrer"&gt;https://console.tsologic.com/discover/tools&lt;/a&gt; onto the new designated server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Download and save the collector specific encryption certificate from&lt;br&gt;
&lt;a href="https://console.tsologic.com/discover/collectors" rel="noopener noreferrer"&gt;https://console.tsologic.com/discover/collectors&lt;/a&gt; onto the new designated server&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure you are logged in as a local Administrator and meet the server hardware requirements&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwztc2rsepl4luvoy6ftx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwztc2rsepl4luvoy6ftx.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Select Install and wait while the packages are installed. Once done, select the Close button to complete the process.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the bootstrapper is installed, run the TSOCollector_.msi and select the certificate file (-.crt) previously downloaded. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select to run the collector under a local system account or the service account created prior to install.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select the Grant rights automatically checkbox and click OK to proceed. Click Test Credentials to ensure you have the required permissions to run the software.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select HTTPS for communication with the IIS application. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Yes to automatically start the collection service once the install sequence finishes &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once installation has completed, your next step will be to create your local account for managing the collector. This is not the account used on &lt;a href="https://console.tsologic.com" rel="noopener noreferrer"&gt;https://console.tsologic.com&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Access the Migration Evaluator Collector software by clicking on the newly created desktop shortcut, or by opening your browser at: &lt;a href="https://localhost" rel="noopener noreferrer"&gt;https://localhost&lt;/a&gt;. Enter your desired credentials, and click Create Account&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Take note of your recovery key. This will allow you access to the Migration Evaluator Collector if you forget your password. &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Once open, you will see the following screen:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dyxeepd7u729ep88ces.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7dyxeepd7u729ep88ces.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How to configure OS collection VMware Servers
&lt;/h2&gt;

&lt;p&gt;If the on-premises servers are identified as virtual machines in the data center, configure the collection with OS credentials via the WMI protocol.&lt;/p&gt;

&lt;p&gt;Before configuring the collection, test whether the WMI protocol is reachable from the TSO server. Use the command below to check one of the data center servers. If the command succeeds, it returns the machine name; otherwise you will get an "Access Denied" error, which means the WMI port is not open for that server. In that case, contact your organisation's system administrator to gain WMI access to the server.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

wmic /user:&amp;lt;"user name"&amp;gt; /node:"10.28.x.x" CSPRODUCT GET NAME


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Once WMI connectivity is confirmed, proceed to the configuration.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Configure OS Credentials&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As per the official Migration Evaluator documentation, if we have servers identified as virtual machines, then we need to configure operating system credentials. Go to the Configuration tab -&amp;gt; Global -&amp;gt; OS credentials tab -&amp;gt; select the WMI protocol -&amp;gt; and click on “Next”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapkwkza6c754dbadq1hf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fapkwkza6c754dbadq1hf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above screen, select the "WMI" protocol, provide the name "VMware Virtual Platform", set the domain to your organisation's domain, enter the username and password of the service account provided by your administrator, and then click "Save".&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Prepare the CSV file&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To add all the servers’ details in the Data providers section, prepare a CSV file with the Name and IP of every server earmarked for operating system metrics collection by Migration Evaluator (TSO), and save the file to your local system.&lt;/p&gt;
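&lt;p&gt;A small script can generate that CSV. The following is an illustrative sketch: the file name servers.csv and the sample inventory are placeholders for your own server list.&lt;/p&gt;

```python
import csv

# Placeholder inventory: replace with your own server names and IPs
servers = [
    ("app-server-01", "10.28.0.10"),
    ("db-server-01", "10.28.0.11"),
]

with open("servers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "IP"])  # the two columns described above
    writer.writerows(servers)
```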

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Upload the CSV to Migration Evaluator&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In Migration Evaluator, go to Global settings --&amp;gt; Data Providers --&amp;gt; select "AWS Migration Evaluator" and click "Next".&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ovxyzs1blc0umltxxo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ovxyzs1blc0umltxxo1.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On the next screen, enter the name "VMware" and browse to the location of the CSV file on your computer.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pjk2chqv4gpln8k8nds.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2pjk2chqv4gpln8k8nds.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Check the collection for all Servers&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Go to Device Settings --&amp;gt; Bare_metal list --&amp;gt; Operating Systems to see all the servers configured for collection.&lt;/p&gt;

&lt;p&gt;This is the stage where you need to run "Test Collection".&lt;/p&gt;

&lt;p&gt;Select any one of the servers visible from your data center and click "Test Collection". If it returns the status "Success", your data center server is ready for collection.&lt;/p&gt;

&lt;p&gt;Some servers may have difficulty connecting. Troubleshoot these faulty servers before you proceed to the next step of OS data collection.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;OS Collection&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final step is to initiate the data collection using the configured Migration Evaluator tool.&lt;br&gt;
You may need to add a few servers manually if they have credential issues.&lt;/p&gt;

&lt;p&gt;To add servers individually, go to Configuration --&amp;gt; Global --&amp;gt; OS Credentials and add each server with the appropriate credentials.&lt;/p&gt;

&lt;p&gt;Once all the servers with credential issues are added, go to the &#8220;OS Collection&#8221; tab:&lt;br&gt;
 &lt;br&gt;
Global Settings -&amp;gt; OS Collection -&amp;gt; click &#8220;Scan all Adhoc Devices for OS Utilization&#8221;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e0aavvnvmwvg04v0zug.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5e0aavvnvmwvg04v0zug.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Migration Evaluator takes a minimum of two weeks to generate a report with a complete assessment of the client infrastructure, including OS-level utilisation and cost estimates.&lt;/p&gt;

</description>
      <category>migration</category>
      <category>aws</category>
      <category>cloud</category>
      <category>vm</category>
    </item>
    <item>
      <title>VMC on AWS Overview</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Sun, 22 Aug 2021 12:13:07 +0000</pubDate>
      <link>https://dev.to/aws-builders/vmc-on-aws-overview-j5g</link>
      <guid>https://dev.to/aws-builders/vmc-on-aws-overview-j5g</guid>
      <description>&lt;p&gt;VMware Cloud on AWS is a cloud service jointly developed by VMware and AWS. This offering brings the same technologies enterprise customers use on premises (e.g., VMware vSphere® ESXi, vSAN, NSX) into the cloud.&lt;/p&gt;

&lt;p&gt;VMware Cloud on AWS (in short VMC) helps customers deploy and accelerate migration of VMware vSphere-based workloads to the cloud while providing robust capabilities and hybrid cloud solutions. VMware Cloud on AWS allows organizations to take immediate advantage of the scalability, availability, security, and global reach of the AWS infrastructure. &lt;/p&gt;

&lt;p&gt;It also gives customers the ability to access native AWS services from the VMware Cloud on AWS SDDC, without incurring any ingress or egress charges.&lt;/p&gt;

&lt;p&gt;A key advantage of VMware Cloud on AWS is that it runs ESXi hosts on bare-metal EC2 instances on the AWS infrastructure. This lets end users establish a VMware Cloud on AWS&lt;br&gt;
SDDC and quickly begin deploying workloads to the cloud.&lt;/p&gt;

&lt;p&gt;Use cases for VMC on AWS are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Cloud Migration: This helps customers migrate to the cloud without converting or re-architecting their existing application stacks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data Center Extension: VMC is the most viable solution for customers looking to expand data center capacity in a cost-effective way.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Disaster Recovery: VMC can serve as a DR solution for customers who want to combine VMware disaster recovery with the AWS Cloud.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Application Modernization: Customers can leverage private access to feature-rich AWS native services to modernize their applications.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The first step with VMC on AWS is to deploy an SDDC. Below are the steps to deploy one.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Open &lt;a href="https://console.cloud.vmware.com/"&gt;https://console.cloud.vmware.com/&lt;/a&gt; - login with your credentials.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Under My Services, click on the VMware Cloud on AWS tile.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcpb4e4a5l73cjgg4b3a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzcpb4e4a5l73cjgg4b3a.png" alt="image" width="356" height="193"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click on Create SDDC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujvz2iheyue74ziw231y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujvz2iheyue74ziw231y.png" alt="image" width="397" height="103"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The first step when creating a new SDDC is to define properties relating to the AWS region you wish to deploy your SDDC to, the deployment and host types for the SDDC, and to give your SDDC a name. Enter the following parameters and click Next:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Property&lt;/strong&gt;&lt;br&gt;
  Cloud                          &amp;lt;&amp;gt;&lt;br&gt;
  AWS Region                     &amp;lt;&amp;lt; Your AWS Region &amp;gt;&amp;gt;&lt;br&gt;
  Deployment                     &amp;lt;&amp;lt; Multi-Host &amp;gt;&amp;gt;&lt;br&gt;
  Host Type                      &amp;lt;&amp;lt; I3 (local SSD) &amp;gt;&amp;gt;&lt;br&gt;
  SDDC Name                      &amp;lt;&amp;lt; Custom Name &amp;gt;&amp;gt;&lt;br&gt;
  Number of Hosts                &amp;lt;&amp;lt; 2 &amp;gt;&amp;gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In this step, connect to the AWS account (note: this is the customer AWS account, referred to as the sidecar account). Select the AWS account and click Next.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For VPC and subnet, select the VPC and choose the subnet.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For the management subnet, define the CIDR or use the default.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Review, acknowledge, and click Deploy SDDC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the SDDC is deployed, you can see your new SDDC in the list of SDDCs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That completes the first section of deploying the SDDC.&lt;/p&gt;

</description>
      <category>vmc</category>
      <category>aws</category>
      <category>vmware</category>
      <category>migration</category>
    </item>
    <item>
      <title>Virtual Private Gateways association to Direct Connect Gateways in AWS</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Tue, 17 Aug 2021 12:37:22 +0000</pubDate>
      <link>https://dev.to/aws-builders/associate-virtual-private-gateways-to-direct-connect-gateways-in-aws-8n2</link>
      <guid>https://dev.to/aws-builders/associate-virtual-private-gateways-to-direct-connect-gateways-in-aws-8n2</guid>
      <description>&lt;p&gt;In this blog, I am explaining how to associate multiple virtual private gateways to single direct connect gateway in an AWS account.&lt;/p&gt;

&lt;p&gt;A direct connect gateway gives you the option of associating multiple Virtual Private Gateways (VGWs) in an account with one direct connect gateway. When the direct connection is established, it needs to be consumed in the AWS console either by a VGW or by a direct connect gateway. If you have multiple VPCs in the account, each with an associated virtual gateway, that need to be served by one direct connect connection, then a direct connect gateway is the best way to manage this. Below are the steps to associate multiple VGWs with a single direct connect gateway.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a Virtual Private Gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4x03qx92jlxo0z6xh62.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4x03qx92jlxo0z6xh62.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjuojqnw2hopmruuh8a85.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjuojqnw2hopmruuh8a85.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4uvbrfjpdbwv5ydsbuy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp4uvbrfjpdbwv5ydsbuy.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After you create the Virtual Private Gateway, it will be in the detached state; attach it to (associate it with) your VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx089uccekvbnna8pqwt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flx089uccekvbnna8pqwt.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbyc0pckufw7gv1j7s85o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbyc0pckufw7gv1j7s85o.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt0si46zkxbdoccwsky1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwt0si46zkxbdoccwsky1.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukf1990sskueefbcvpd7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fukf1990sskueefbcvpd7.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the Virtual Gateway is attached to the VPC, create the direct connect gateway. Navigate to Direct Connect &amp;gt; Direct connect gateways and click on Create direct connect gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgscj193bq6rs3vtch3gb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgscj193bq6rs3vtch3gb.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Provide a name for the direct connect gateway and an Amazon-side ASN within the allowed private range of 64512&#8211;65534.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajj9yuhottmk9ily3bg9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajj9yuhottmk9ily3bg9.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z67rki9mdcfh4c92f3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2z67rki9mdcfh4c92f3h.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After the direct connect gateway is created in your AWS account, it will show in the available state. Click on the gateway ID, then click the second tab, Gateway associations, to associate your virtual gateway with the direct connect gateway.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63pp3l49bmyryv4k1l0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63pp3l49bmyryv4k1l0n.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsleepf2ejc77xcw19ns.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzsleepf2ejc77xcw19ns.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Click Associate gateway and attach the virtual private gateway you created earlier and associated with the VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas1ibwabswmp7hi8f6by.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fas1ibwabswmp7hi8f6by.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ggwsom3hi2lnvyb5fiz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ggwsom3hi2lnvyb5fiz.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Initially the page will show the status as associating; after 3&#8211;4 minutes, the state will change to associated. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u615k2ntbqhdeiyi19x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1u615k2ntbqhdeiyi19x.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
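The console steps above can also be scripted. Here is a hedged sketch using boto3's Direct Connect client; the helper names are mine, and the client is passed in as a parameter so the functions stay easy to test. In practice you would create it with boto3.client("directconnect", region_name=...).

```python
# Sketch of the console steps with the boto3 "directconnect" client.
def create_dx_gateway(dx_client, name, amazon_side_asn=64512):
    """Create a Direct Connect gateway; the ASN must be in 64512-65534."""
    resp = dx_client.create_direct_connect_gateway(
        directConnectGatewayName=name,
        amazonSideAsn=amazon_side_asn,
    )
    return resp["directConnectGateway"]["directConnectGatewayId"]

def associate_vgw(dx_client, dx_gateway_id, vgw_id):
    """Associate one virtual private gateway with the gateway."""
    resp = dx_client.create_direct_connect_gateway_association(
        directConnectGatewayId=dx_gateway_id,
        virtualGatewayId=vgw_id,
    )
    # Starts as "associating"; flips to "associated" after a few minutes
    return resp["directConnectGatewayAssociation"]["associationState"]

# Typical usage (placeholder VGW IDs), repeating the association for each
# VGW in the account to attach them all to the one gateway:
#   import boto3
#   dx = boto3.client("directconnect", region_name="us-east-1")
#   gw_id = create_dx_gateway(dx, "my-dx-gateway")
#   for vgw in ("vgw-1111", "vgw-2222"):
#       print(associate_vgw(dx, gw_id, vgw))
```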

&lt;p&gt;As shown in the steps above, you can associate multiple virtual private gateways with a single direct connect gateway in an account. Please note this association can happen only within the account, not across accounts. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>awsnetworking</category>
      <category>directconnect</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>CloudFormation stack creation using Python</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Thu, 12 Aug 2021 04:25:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/cloudformation-stack-creation-using-python-4f65</link>
      <guid>https://dev.to/aws-builders/cloudformation-stack-creation-using-python-4f65</guid>
      <description>&lt;p&gt;CloudFormation stack can be created from AWS Console, AWS CLI or using many other ways. We can also automate the creation of the CloudFormation stack using AWS CLI, CodePipeline etc. In this section of my blog, I am going to introduce how to use an AWS SDK for Python to automate the CloudFormation Stack creation. Here I am assuming that you know the basic Python and have understanding of the AWS SDK for Python. I would suggest to test the below scripts with small stack first and further you can customise the Python script for your own requirements.&lt;/p&gt;

&lt;p&gt;Follow the steps below to automate the CloudFormation stack creation.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Make sure you have the latest Python installed on the box where you intend to run the script. Macs come with Python pre-installed; I have upgraded mine to the latest version.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rimpaljohal@MacBook-Pro Blog-Contents % Python --version
Python 3.9.6

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Get hold of the AWS SDK for Python and install it on the box where you are going to execute the script.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/sdk-for-python/"&gt;AWS SDK for Python (Boto3)&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You need two files to automate creation of the CloudFormation stack: &lt;br&gt;
 &lt;em&gt;CFNStackCreation.py&lt;/em&gt; --&amp;gt; Your Python script&lt;br&gt;
 &lt;em&gt;Parameter.json&lt;/em&gt;      --&amp;gt; Your parameter file&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Keep these two files in the same directory from which you are going to execute the Python script. For example:&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; rimpaljohal@MacBook-Pro Rimpal % ls CFNStackCreation.py Parameter.json 
CFNStackCreation.py Parameter.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Python Script below for reference&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#-- Import modules
import sys
import os.path
import json
import time
import boto3

 #-- Functions
def check_status( cfn_client_ss, cfn_stack_name_ss ):
stacks_ss = cfn_client_ss.describe_stacks(StackName=cfn_stack_name_ss)["Stacks"]
stack_ss_val = stacks_ss[0]
status_cur_ss = stack_ss_val["StackStatus"]
print ("Current status of stack " + stack_ss_val["StackName"] + ": " + status_cur_ss)
for ln_loop in range(1, 9999):
    if "IN_PROGRESS" in status_cur_ss:
        print ("\rWaiting for status update(" + str(ln_loop) + ")...",)
        time.sleep(5) # pause 5 seconds

        try:
            stacks_ss = cfn_client_ss.describe_stacks(StackName=cfn_stack_name_ss)["Stacks"]
        except:
            print (" ")
            print ("Stack " + stack_ss_val["StackName"] + " no longer exists")
            status_cur_ss = "STACK_DELETED"
            break

        stack_ss_val = stacks_ss[0]

        if stack_ss_val["StackStatus"] != status_cur_ss:
            status_cur_ss = stack_ss_val["StackStatus"]
            print (" ")
            print ("Updated status of stack " + stack_ss_val["StackName"] + ": " + status_cur_ss)
    else:
        break

return status_cur_ss
#-- End Functions

 #-- Main program
def main(access_key_ss, secret_key_ss, param_file_ss):

#-- Confirm parameters file exists
if os.path.isfile(param_file_ss):
    json_data_ss=open(param_file_ss).read()
else:
    print ("Parameters file: " + param_file_ss + " is invalid!")
    print (" ")
    sys.exit(3)

print ("Parameters file: " + param_file_ss)
parameters_data_ss = json.loads(json_data_ss)
region_ss = parameters_data_ss["RegionId"]

#-- Connect to AWS region specified in parameters file
print ("Connecting to region: " + region_ss)
cfn_client_ss = boto3.client('cloudformation', region_ss, aws_access_key_id=access_key_ss, aws_secret_access_key=secret_key_ss)

#-- Store parameters from file into local variables
cfn_stack_name_ss = parameters_data_ss["StackName"]
print ("You are deploying stack: " + cfn_stack_name_ss)
#-- Check if this stack name already exists
stack_list_ss = cfn_client_ss.describe_stacks()["Stacks"]
stack_exists_ss = False
for stack_ss_cf in stack_list_ss:
    if cfn_stack_name_ss == stack_ss_cf["StackName"]:
        print ("Stack " + cfn_stack_name_ss + " already exists.")
        stack_exists_ss = True

#-- If the stack already exists then delete it first
if stack_exists_ss:
    user_response = input ("Do you want to delete or update the stack")
    print ("Calling Delete Stack API for " + cfn_stack_name_ss)
    cfn_client_ss.delete_stack(StackName=cfn_stack_name_ss)

    #-- Check the status of the stack deletion
    check_status(cfn_client_ss, cfn_stack_name_ss)

print (" ")
print ("Loading parameters from parameters file:")
fetch_stack_parameters_ss = []
for key_ss in parameters_data_ss.keys():
    if key_ss == "TemplateUrl":
        template_url_ss = parameters_data_ss["TemplateUrl"]
    elif key_ss == "StackName" or key_ss == "RegionId":
        # -- Do not send as parameters
        print (key_ss + " - "+ parameters_data_ss[key_ss] + " (not sent as parameter)")
    else:
        print (key_ss + " - "+ parameters_data_ss[key_ss])
        fetch_stack_parameters_ss.append({"ParameterKey": key_ss, "ParameterValue": parameters_data_ss[key_ss]})

#-- Call CloudFormation API to create the stack   TemplateBody='', 
print (" ")
print ("Calling CREATE_STACK method to create: " + cfn_stack_name_ss)

status_cur_ss = ""

result_ss = cfn_client_ss.create_stack(StackName=cfn_stack_name_ss, DisableRollback=True, TemplateURL=template_url_ss, Parameters=fetch_stack_parameters_ss, Capabilities=["CAPABILITY_NAMED_IAM"])
print ("Output from API call: ")
print (result_ss)
print (" ")

#-- Check the status of the stack creation
status_cur_ss = check_status( cfn_client_ss, cfn_stack_name_ss )

if status_cur_ss == "CREATE_COMPLETE":
    print ("Stack " + cfn_stack_name_ss + " created successfully.")
else:
    print ("Failed to create stack " + cfn_stack_name_ss)
    sys.exit(1)

#-- Call Main program
if __name__ == "__main__":
if len(sys.argv) &amp;lt; 4:
    print ("%s:  Error: %s\n" % (sys.argv[0], "Not enough command options given"))
    print ("Argument 1 (required): AWS Access Key ")
    print ("Argument 2 (required): AWS Secret Access Key ")
    print ("Argument 3 (required): Stack Parameters JSON file ")
    print (" ")
    sys.exit(3)
else:
    access_key_ss = sys.argv[1]
    secret_key_ss = sys.argv[2]
    param_file_ss = sys.argv[3]

main(access_key_ss, secret_key_ss, param_file_ss)

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Parameter JSON file below for reference. This is a template parameter file that you can customise to suit your CloudFormation build. Here I am calling a master stack through this parameter file; the master stack has nested stacks that create the VPC, subnets (public, private, and database), application EC2 instances with auto scaling, and a load balancer.&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; {
"RegionId": "us-east-1",
"TemplateUrl": "https://blogbucket.s3.amazonaws.com/masterstackv1.yml",
"StackName": "nonprodstack",
"EnvironmentType": "dev",
"VpcCIDR": "10.192.0.0/16",
"PublicSubnet1CIDR": "10.192.10.0/24",
"PublicSubnet2CIDR": "10.192.11.0/24",
"PublicSubnet3CIDR": "10.192.12.0/24",
"PrivateSubnet1CIDR": "10.192.20.0/24",
"PrivateSubnet2CIDR": "10.192.22.0/24",
"PrivateSubnet3CIDR": "10.192.24.0/24",
"DatabaseSubnet1CIDR": "10.192.21.0/24",
"DatabaseSubnet2CIDR": "10.192.23.0/24",
"DatabaseSubnet3CIDR": "10.192.25.0/24",
"LatestAmiID": "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2",
"ALBAccessCIDR": "0.0.0.0/0",
"ServerInstanceType": "t2.micro",
"WebAsgMax": "4",
"WebAsgMin": "3"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once the two required files (the Python script and the parameter file) are saved in one directory, open the terminal and navigate to that directory.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Execute the Python script as shown below&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;   rimpaljohal@MacBook-Pro Rimpal % python CFNStackCreation.py AU************* SL*********************** Parameter.json
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Here you need to supply the AWS Access Key and AWS Secret Access Key of your AWS account.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As the script runs, you can follow its progress in the terminal.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
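As a lighter-weight alternative to the hand-rolled polling loop in the script above, boto3 ships built-in CloudFormation waiters. A minimal sketch (the function name is mine; the client is passed in so the helper stays easy to test, and in practice you would create it with boto3.client("cloudformation", ...)):

```python
# Sketch: replace manual describe_stacks polling with a built-in waiter.
def wait_for_create(cfn_client, stack_name):
    """Block until the stack reaches CREATE_COMPLETE; raises WaiterError on failure."""
    waiter = cfn_client.get_waiter("stack_create_complete")
    waiter.wait(StackName=stack_name)  # polls describe_stacks until done
```

The waiter raises a WaiterError if the stack lands in a failed state, which replaces the manual status comparison and the sleep loop.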

&lt;p&gt;The steps above will help you deploy a CloudFormation stack using a Python script.&lt;/p&gt;

&lt;p&gt;Happy Coding!!&lt;/p&gt;

</description>
      <category>python</category>
      <category>aws</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>Install Python AWS SDK Boto3</title>
      <dc:creator>Rimpal Johal</dc:creator>
      <pubDate>Sun, 18 Jul 2021 05:48:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/install-python-aws-sdk-boto3-1dkf</link>
      <guid>https://dev.to/aws-builders/install-python-aws-sdk-boto3-1dkf</guid>
      <description>&lt;p&gt;This post is about installing AWS SDK boto3 on Mac. Python comes pre-installed on Mac OS so it is easy to start using it. However, to take advantage of the latest versions of Python, you will need to download and install newer versions alongside the system ones. Note that Python installers are available for the latest Python 3 and Python 2 releases that will work on all Macs that run Mac OS X 10.5 and later.&lt;/p&gt;

&lt;p&gt;Follow the steps below to install Python AWS SDK boto3 on MacBook.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Check the version of Python on your Mac machine first&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  @MacBook-Pro ~ % python --version
  Python 2.7.16
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To install the AWS SDK boto3, it is recommended to upgrade Python to the latest version&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Since this Python is the system installation, we need to install the latest version of Python alongside it rather than upgrade it. Note that any attempt to upgrade it will give the error "Error: Python not installed".&lt;/p&gt;

&lt;p&gt;The command to use is&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; brew install python
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once the latest version is installed on your Mac, check the installed Python with the command below&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@MacBook-Pro ~ % ls -ltra /usr/local/bin/python*   
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;You should see output like the following&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lrwxr-xr-x  1 rimpaljohal  admin  38 16 Jul 09:27 /usr/local/bin/python3 -&amp;gt; ../Cellar/python@3.9/3.9.6/bin/python3
lrwxr-xr-x  1 rimpaljohal  admin  45 16 Jul 09:27 /usr/local/bin/python3-config -&amp;gt; ../Cellar/python@3.9/3.9.6/bin/python3-config
lrwxr-xr-x  1 rimpaljohal  admin  40 16 Jul 09:27 /usr/local/bin/python3.9 -&amp;gt; ../Cellar/python@3.9/3.9.6/bin/python3.9
lrwxr-xr-x  1 rimpaljohal  admin  47 16 Jul 09:27 /usr/local/bin/python3.9-config -&amp;gt; ../Cellar/python@3.9/3.9.6/bin/python3.9-config
lrwxr-xr-x  1 rimpaljohal  admin  24 16 Jul 10:03 /usr/local/bin/python -&amp;gt; /usr/local/bin/python3.9
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;After confirming the Python version, change the default python symlink to point to the version you want to use from the list above.&lt;/p&gt;

&lt;p&gt;The command below creates a symbolic link pointing the default Python path to the latest installed version&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  ln -s -f /usr/local/bin/python3.9 /usr/local/bin/python

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Close the terminal and open a new session, then check the Python version again&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  Python --version

  @MacBook-Pro ~ % Python --version
  Python 3.9.6

&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;Here, I have updated my Mac to the latest Python version. Now I will move on to installing boto3. &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;First, install pip&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo easy_install pip

Searching for pip
Best match: pip 21.1.3
Processing pip-21.1.3-py2.7.egg
pip 21.1.3 is already the active version in easy-  install.pth
Installing pip script to /usr/local/bin
Installing pip2.7 script to /usr/local/bin
Installing pip2 script to /usr/local/bin

&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Once pip is installed, install boto3&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@MacBook-Pro ~ % pip3 install boto3
Successfully installed boto3-1.18.0 botocore-1.21.0    jmespath-0.10.0 python-dateutil-2.8.2 s3transfer-0.5.0 six-1.16.0 urllib3-1.26.6
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;After boto3 is installed, install the awscli&lt;br&gt;
&lt;/p&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@MacBook-Pro ~ % pip3 install awscli

Successfully installed PyYAML-5.4.1 awscli-1.20.0 colorama-0.4.3 docutils-0.15.2 pyasn1-0.4.8 rsa-4.7.2
&lt;/code&gt;&lt;/pre&gt;

&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the final step, configure the awscli with your AWS credentials and you are all ready to use the Python AWS SDK boto3 on your Mac super machine!!&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
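Once your credentials are configured, a quick sanity check is listing the S3 buckets your account can see. A small sketch (the helper name is mine; the client is passed in as a parameter so the function is easy to test):

```python
# Sketch: verify boto3 plus credentials by listing S3 bucket names.
def list_bucket_names(s3_client):
    """Return the names of all S3 buckets visible to the caller."""
    return [b["Name"] for b in s3_client.list_buckets()["Buckets"]]

# Typical usage once `aws configure` has stored your credentials:
#   import boto3
#   print(list_bucket_names(boto3.client("s3")))
```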

&lt;p&gt;&lt;strong&gt;Some quick commands&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@MacBook-Pro ~ % which pip
/usr/local/bin/pip
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;@MacBook-Pro ~ % pip3 --version
pip 21.1.3 from /usr/local/lib/python3.9/site-packages/pip (python 3.9)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Happy Coding!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>boto3</category>
      <category>python</category>
      <category>awscli</category>
    </item>
  </channel>
</rss>
