<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hammad Khan</title>
    <description>The latest articles on DEV Community by Hammad Khan (@hammadk94).</description>
    <link>https://dev.to/hammadk94</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F928434%2F81022946-8373-4dc2-8466-35ec11fba5de.png</url>
      <title>DEV Community: Hammad Khan</title>
      <link>https://dev.to/hammadk94</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hammadk94"/>
    <language>en</language>
    <item>
      <title>Delete your Cloud Infrastructure with a single command - cloud-nuke</title>
      <dc:creator>Hammad Khan</dc:creator>
      <pubDate>Fri, 16 Feb 2024 15:43:20 +0000</pubDate>
      <link>https://dev.to/hammadk94/delete-your-cloud-infrastructure-with-a-single-command-cloud-nuke-6cm</link>
      <guid>https://dev.to/hammadk94/delete-your-cloud-infrastructure-with-a-single-command-cloud-nuke-6cm</guid>
      <description>&lt;p&gt;Recently, I had the task of deleting all the resources in my AWS account, as they were not being used and were generating unwanted bills. With close to 50 different resources spread across various regions, the manual deletion process would have taken hours.&lt;/p&gt;

&lt;p&gt;To streamline this, I discovered a helpful open-source tool called “Cloud-Nuke” that allows you to delete all resources in one go without manual intervention.&lt;/p&gt;

&lt;p&gt;Installation&lt;br&gt;
Manual install (download the binary):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Download the latest binary for your OS from the releases page.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Move the binary to a folder on your PATH, e.g.: mv cloud-nuke_darwin_amd64 /usr/local/bin/cloud-nuke&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add execute permissions to the binary, e.g.: chmod u+x /usr/local/bin/cloud-nuke&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Linux/macOS (package manager): brew install cloud-nuke&lt;/p&gt;

&lt;p&gt;To test the installation, run: cloud-nuke --help&lt;/p&gt;

&lt;p&gt;Setting Credentials&lt;br&gt;
Set the credentials of a user with “Admin” privileges, since broad permissions are needed to destroy the resources. Generate the user credentials from the IAM console and export them in the terminal:&lt;/p&gt;

&lt;p&gt;export AWS_ACCESS_KEY_ID="ASIAXZXA2NEZM"&lt;br&gt;
export AWS_SECRET_ACCESS_KEY="5bPGiXSbDkdQSmeRDcVgEV/dMlMbL"&lt;br&gt;
export AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjECmFwLXNvduScbHJr8cK"&lt;/p&gt;
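&lt;p&gt;Before running anything destructive, it helps to confirm the three variables are actually exported. A minimal sketch of my own (the placeholder values and the check itself are not part of cloud-nuke):&lt;/p&gt;

```shell
# Placeholder values -- substitute the credentials generated from the IAM console.
export AWS_ACCESS_KEY_ID="EXAMPLE"
export AWS_SECRET_ACCESS_KEY="EXAMPLE"
export AWS_SESSION_TOKEN="EXAMPLE"

# Fail fast if any of the three variables is missing or empty.
missing=0
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; do
  eval "val=\$$v"
  if [ -z "$val" ]; then
    echo "missing: $v"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "credentials set"
```

&lt;p&gt;Running aws sts get-caller-identity afterwards is a quick way to confirm the credentials are actually valid.&lt;/p&gt;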

&lt;p&gt;Implementation&lt;br&gt;
With the setup ready, let’s explore some commands for deleting AWS infrastructure.&lt;br&gt;
Caution⚠️ : Be absolutely sure before running Cloud-Nuke, as it will irreversibly delete all resources.&lt;/p&gt;

&lt;p&gt;Display available commands and get additional help&lt;br&gt;
cloud-nuke --help&lt;/p&gt;

&lt;p&gt;This will show the available commands and additional help to use cloud-nuke.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Destroy all resources (with confirmation prompt)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;cloud-nuke aws&lt;/p&gt;

&lt;p&gt;This command checks all the resources in the account and destroys everything; it asks for confirmation before nuking.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Check resources without deletion&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;cloud-nuke aws --dry-run&lt;/p&gt;

&lt;p&gt;This command will only check the resources in your account and list them for you on the terminal.&lt;/p&gt;
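&lt;p&gt;Because the destructive and non-destructive commands differ only by a flag, one way to stay safe is a tiny wrapper that defaults to --dry-run. This is a hypothetical sketch of my own, not a cloud-nuke feature:&lt;/p&gt;

```shell
# Hypothetical wrapper: print the cloud-nuke command, defaulting to a dry run.
nuke() {
  if [ "$1" = "--for-real" ]; then
    echo "cloud-nuke aws"           # destructive; cloud-nuke still asks for confirmation
  else
    echo "cloud-nuke aws --dry-run" # safe default: only list what would be deleted
  fi
}

nuke             # prints: cloud-nuke aws --dry-run
nuke --for-real  # prints: cloud-nuke aws
```

&lt;p&gt;The sketch only echoes the command so it is harmless to try; swap the echo for the real invocation once you are comfortable with the output.&lt;/p&gt;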

&lt;ol start="3"&gt;
&lt;li&gt;Delete resources in a specific region (e.g., ap-south-1)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;cloud-nuke aws --region ap-south-1&lt;/p&gt;

&lt;p&gt;The previous command searches for and deletes resources in all regions; if you want to delete resources only in a particular region, use this command instead.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;List resource types that will be checked and deleted&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;cloud-nuke aws --list-resource-types&lt;/p&gt;

&lt;p&gt;If you want to know which resources will be checked and deleted by cloud-nuke, this is the command to go with.&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Exclude specific regions from deletion (e.g., us-east-1, us-east-2)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;cloud-nuke aws --exclude-region us-east-1 --exclude-region us-east-2&lt;/p&gt;

&lt;p&gt;If you want to exclude a particular region from nuking, use the --exclude-region flag.&lt;/p&gt;
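&lt;p&gt;cloud-nuke also accepts a --config flag pointing at a YAML file that narrows what gets nuked per resource type. Per the project README (worth double-checking there), the format is along these lines; the regexes below are illustrative:&lt;/p&gt;

```yaml
# cloud-nuke aws --config nuke-config.yaml
s3:
  include:
    names_regex:
      - "^dev-.*"       # only delete buckets whose names start with dev-
ec2:
  exclude:
    names_regex:
      - "^prod-.*"      # never touch instances named prod-*
```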

&lt;p&gt;Note: Be cautious while deleting VPC resources due to potential dependencies; ensure all dependencies are removed before applying Cloud-Nuke. Additionally, be aware of any Service Control Policies (SCPs) applied to the account.&lt;/p&gt;

&lt;p&gt;For more details, you can check out this GitHub repo: &lt;a href="https://github.com/gruntwork-io/cloud-nuke"&gt;https://github.com/gruntwork-io/cloud-nuke&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Happy Destruction!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
    </item>
    <item>
      <title>AWS CloudFormation introduces Git management of stacks</title>
      <dc:creator>Hammad Khan</dc:creator>
      <pubDate>Mon, 27 Nov 2023 20:27:38 +0000</pubDate>
      <link>https://dev.to/hammadk94/aws-cloudformation-introduces-git-management-of-stacks-g5i</link>
      <guid>https://dev.to/hammadk94/aws-cloudformation-introduces-git-management-of-stacks-g5i</guid>
      <description>&lt;p&gt;AWS CloudFormation now supports Git sync, enabling customers to synchronize their stacks from a CloudFormation template stored in a remote Git repository. A CloudFormation template describes your desired resources and their dependencies so you can launch and configure them together as a stack. &lt;/p&gt;

&lt;p&gt;This feature enables developers to speed up the development cycle by integrating CloudFormation deployments directly into their Git workflow and reducing time lost to context switching. You can enable CloudFormation Git sync through the AWS Console, CLI, and SDKs. Dynamic values such as stack parameters and tags can now be specified via a YAML deployment file, enabling customers to track historical changes to these values in a Git file stored in your remote repository. After setup, AWS will automatically sync the deployment file and CloudFormation template, updating your stack after every commit. To test CloudFormation changes before pushing them to production, teams can configure one stack to sync from a Git staging branch and another from a production branch. Git sync works with GitHub, GitHub Enterprise, GitLab, and Bitbucket. &lt;/p&gt;
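&lt;p&gt;As a rough illustration of the YAML deployment file mentioned above (the key names follow the launch announcement; the paths and values are placeholders of mine):&lt;/p&gt;

```yaml
# Stored in the Git repository alongside the template; synced on every commit.
template-file-path: ./templates/app-stack.yaml
parameters:
  EnvType: staging
  InstanceCount: "2"
tags:
  Team: platform
  ManagedBy: git-sync
```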

&lt;p&gt;This feature is available in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Paris), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), and South America (São Paulo).&lt;/p&gt;

</description>
      <category>aws</category>
      <category>git</category>
      <category>cloudformation</category>
    </item>
    <item>
      <title>10 study tips for the AWS Certified Database – Specialty Certification</title>
      <dc:creator>Hammad Khan</dc:creator>
      <pubDate>Mon, 09 Jan 2023 21:22:54 +0000</pubDate>
      <link>https://dev.to/aws-builders/10-study-tips-for-the-aws-certified-database-specialty-certification-3amj</link>
      <guid>https://dev.to/aws-builders/10-study-tips-for-the-aws-certified-database-specialty-certification-3amj</guid>
      <description>&lt;p&gt;I have completed five AWS Certifications and am currently preparing for the AWS Certified Database – Specialty certification.&lt;br&gt;
The AWS Certified Database - Specialty certification attests to a candidate's proficiency with the full range of AWS database services. There are more than 15 purpose-built database engines available on the AWS platform, including relational, key-value, document, in-memory, graph, time series, and ledger databases. With so many possibilities, it's crucial that you choose the best tool for the task at hand. Candidates have the chance to demonstrate their aptitude for requirement analysis and the creation of suitable database solutions by taking the AWS Certified Database - Specialty exam. This exam tests candidates' ability to design, recommend, and maintain the best AWS database solution for a given use case. I recently studied for this test and passed it. I previously worked as a database administrator, so I found the exam-study process especially interesting.&lt;/p&gt;

&lt;p&gt;In order to help you get ready for the certification, AWS Training and Certification offers a combination of free, on-demand digital courses and virtual/in-person instructor-led classroom training. It also offers a specific database study path. I urge you to make use of both the instruction and these ten suggestions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Understand which workloads are best suited to each of the purpose-built database services on AWS&lt;br&gt;
When incorporating a new database solution into an architecture, it’s important to consider the nature of the data, such as its storage, usage, volume, and velocity. The database services on AWS can vary widely in terms of performance, scalability, and availability characteristics. Understanding the strengths of each service and being able to match workload characteristics to the different services is an important skill that’s heavily tested on the exam. Workload-specific database design currently comprises 26% of the exam composition.&lt;br&gt;
Resources to consider:&lt;br&gt;
• AWS Public Sector Summit presentation: Building with Purpose-Built Databases: Match Your Workload to the Right Database&lt;br&gt;
• Blog post: Build a Modern Application with Purpose-Built AWS Databases&lt;/li&gt;
&lt;li&gt;Understand strategies for disaster recovery and high availability&lt;br&gt;
When deploying databases on AWS, it’s critical to understand how to configure database architectures to achieve recovery-point and recovery-time objectives. This topic is on the exam and covers high-availability and disaster-recovery configurations for a variety of the available database services. This subject area aligns closely to the Reliability Pillar of the AWS Well-Architected Framework.&lt;br&gt;
Resources to consider:&lt;br&gt;
• Webpage: Amazon Relational Database Service (Amazon RDS) high-availability architecture concepts&lt;br&gt;
• Blog post: Cross-Region disaster recovery of Amazon RDS for SQL Server&lt;br&gt;
• Documentation: Disaster-recovery strategies for Amazon Aurora databases&lt;/li&gt;
&lt;li&gt;Understand how database solution deployments can be automated&lt;br&gt;
A best practice for deploying AWS resources is to use a configuration system that treats your infrastructure as code. AWS CloudFormation is one way you can do this on AWS. Infrastructure as code is a key enabler of DevOps practices and brings developers and operations together to collaborate on automating application delivery at scale. Because databases are typically stateful components in your architecture, it’s important to understand how you can use CloudFormation to provision new resources and manage them throughout their lifecycle.&lt;br&gt;
Resources to consider:&lt;br&gt;
• Blog post: Using CloudFormation to configure auto scaling for Amazon DynamoDB&lt;br&gt;
• Documentation: CloudFormation User Guide&lt;/li&gt;
&lt;li&gt;Determine data preparation and migration strategies&lt;br&gt;
The exam tests your knowledge of data-migration methods both into and within AWS. To address questions on this topic, it’s important to understand capabilities such as snapshots, database restores, and data-replication options. Ensure you understand which tools and services are most appropriate to maximize efficiency. Additionally, know how to prepare your data sources and targets and choose schema-conversion methods using tools such as the AWS Schema Conversion Tool.&lt;br&gt;
Resource to consider:&lt;br&gt;
• Blog post: Migrating a commercial database to open source with AWS SCT and AWS DMS&lt;/li&gt;
&lt;li&gt;Determine backup and restore strategies&lt;br&gt;
To ensure business data is protected, you need to determine appropriate backup and restoration strategies. Depending on the AWS database services being used, backup and recovery strategies will vary. Data-protection strategies may include the ability to take manual snapshots and leverage automated backups or continuous backups. Resource impact resulting from backup and restoration activities can also vary. Backup solutions for AWS database services, such as Aurora and DynamoDB, are designed to have little to no impact on performance and will not cause interruptions. In other cases, such as with Amazon ElastiCache, potential impacts depend on engine version, activity level, or configurations such as reserved memory. Consider performance and availability impacts and mitigation strategies.&lt;/li&gt;
&lt;li&gt;Manage the operational environment of a database solution&lt;br&gt;
A number of AWS database services are provided as fully managed database services where AWS manages many aspects of database management on your behalf. For example, this could include applying patches to the database engine or its underlying operating system. Ensure you understand how individual database services handle updates and configuration changes, as there are some differences between services.&lt;/li&gt;
&lt;li&gt;Determine monitoring and alerting strategies&lt;br&gt;
You’ll need to be familiar with the monitoring capabilities of the AWS databases and understand how they interact with the additional AWS monitoring and alerting tools, including Amazon CloudWatch, AWS CloudTrail, and the collection of custom metrics.&lt;br&gt;
Resource to consider:&lt;br&gt;
• Documentation: Performance Insights dashboard documentation&lt;/li&gt;
&lt;li&gt;Understand how you can optimize database performance&lt;br&gt;
For the exam, you’ll need to apply troubleshooting skills to database-performance issues, fine-tune database design and performance, and identify AWS tools and services that are most helpful and cost effective for database scenarios. The approach for optimizing performance and costs varies across AWS database services. For example, in DynamoDB, design your application for uniform activity across all logical partition keys in the table and its secondary indexes.&lt;br&gt;
Resources to consider:&lt;br&gt;
• Blog post: Sharding with Amazon Relational Database Service&lt;br&gt;
• Webpage: Amazon RDS Read Replicas&lt;/li&gt;
&lt;li&gt;Encrypt data at rest and in transit&lt;br&gt;
Encrypting data at rest is a key component of data protection on AWS. AWS database services are unique and often implement data protection in different ways. Understanding how to leverage AWS Key Management Service (AWS KMS) for encryption key management to create encryption keys and define the policies that control the use of these keys is a topic that you’re likely to encounter on the exam. Be sure to also spend some time reviewing options for the different database engines that support encryption of data in transit.&lt;br&gt;
Resource to consider:&lt;br&gt;
• Blog post: Select the right encryption options for Amazon RDS and Amazon Aurora database engines&lt;/li&gt;
&lt;li&gt;Determine access control and authentication mechanisms&lt;br&gt;
Authentication options vary by database service. Become familiar with the database services that support authentication via AWS Identity and Access Management (IAM). For configurations that rely on native-database authentication schemes, know how database credential management can be handled by using AWS Secrets Manager, which allows you to create secrets and use them in place of hard-coded credentials in your applications or infrastructure as code.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The value of AWS Certification&lt;br&gt;
Architects and IT engineering professionals have the chance to demonstrate their database management expertise and authenticate their knowledge with the AWS Certified Database - Specialty certification. Preparing for a certification exam is a great method to confirm your understanding of several technologies. I hope you'll think about taking this test. Don't forget to utilise the learning options at your disposal, such as our free online courses, free virtual webinars, and courses for exam preparation. Create a training account to take the suggested courses if you haven't already. I wish you luck!&lt;/p&gt;

</description>
      <category>offers</category>
      <category>marketing</category>
      <category>socialmedia</category>
    </item>
    <item>
      <title>Creating a Launch Template &amp; Auto Scaling Group for EC2 Creation and Automation</title>
      <dc:creator>Hammad Khan</dc:creator>
      <pubDate>Mon, 09 Jan 2023 21:20:58 +0000</pubDate>
      <link>https://dev.to/aws-builders/creating-launch-template-auto-scale-group-for-ec2-creation-and-automation-49l3</link>
      <guid>https://dev.to/aws-builders/creating-launch-template-auto-scale-group-for-ec2-creation-and-automation-49l3</guid>
      <description>&lt;p&gt;Our goal is to create an Auto Scaling group that can automatically launch a new EC2 instance from a launch template, based on EC2 health-check monitoring.&lt;br&gt;
It will terminate the unhealthy EC2 instance automatically and create a new one with the same configuration and the basic software installed by its user data script.&lt;br&gt;
Auto Scaling Group Creation &lt;/p&gt;

&lt;p&gt;Name your Auto Scaling group and create a launch template for it. &lt;/p&gt;

&lt;p&gt;Create a new launch template in the same AWS region where you will create your Auto Scaling group. The launch template lets you:&lt;br&gt;
• Choose an AMI (Amazon Machine Image) with the OS of your choice.&lt;br&gt;
• Apply essential tags to categorize your EC2 instances.&lt;br&gt;
• Select the instance type best suited to your workload.&lt;br&gt;
• Attach a key pair to access your machines remotely.&lt;br&gt;
• Select a subnet and security group for your EC2 instance, for connectivity and security.&lt;br&gt;
• Attach EBS volumes to your instance.&lt;br&gt;
• Add a user data script to install basic software and applications during instance launch.&lt;/p&gt;
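&lt;p&gt;The same launch-template settings can be scripted with the AWS CLI. A hedged sketch follows: the template name, AMI ID, and instance type are placeholder assumptions. User data must be base64-encoded, so the script prepares that first and leaves the actual create-launch-template call as a comment:&lt;/p&gt;

```shell
# User data script: installs a web server at first boot (example software choice).
cat > userdata.sh <<'EOF'
#!/bin/bash
yum install -y nginx
systemctl enable --now nginx
EOF

# Launch templates expect user data as base64 (GNU and BSD base64 flags differ).
USERDATA=$(base64 -w0 userdata.sh 2>/dev/null || base64 userdata.sh)
echo "$USERDATA"

# Placeholder call -- fill in a real AMI ID, key pair, and security group:
# aws ec2 create-launch-template \
#   --launch-template-name web-template \
#   --launch-template-data "{\"ImageId\":\"ami-12345678\",\"InstanceType\":\"t3.micro\",\"UserData\":\"$USERDATA\"}"
```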

&lt;p&gt;In the Auto Scaling group, select the launch template you created; it will show the template’s configuration. &lt;/p&gt;

&lt;p&gt;The Auto Scaling group must be created in the same VPC, with the same security group you specified in your launch template. For Availability Zones and subnets, choose one or more subnets in that VPC; using subnets in multiple Availability Zones provides high availability.&lt;/p&gt;

&lt;p&gt;You can register your Amazon EC2 instances with a load balancer, or choose the no-load-balancer option if you do not want LB configuration. If you use ELB, set the amount of time until Amazon EC2 Auto Scaling checks the Elastic Load Balancing health status of an instance after it enters the InService state.&lt;br&gt;
CloudWatch metrics provide measurements that can be useful for identifying potential issues, such as the number of terminating or pending instances.&lt;br&gt;
Enable default instance warmup so you can choose the warm-up time for your application on the EC2 instance.&lt;/p&gt;

&lt;p&gt;For Desired capacity, enter the initial number of instances to launch from the Auto Scaling group, and set the minimum and maximum number of instances you want it to run. &lt;br&gt;
To automatically scale the size of the Auto Scaling group, choose the Target tracking scaling policy.&lt;br&gt;
You can control whether the Auto Scaling group may terminate a particular instance when scaling in by enabling instance scale-in protection.&lt;/p&gt;
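&lt;p&gt;For reference, a target tracking policy like the one above can be supplied to the CLI as JSON, e.g. aws autoscaling put-scaling-policy --policy-type TargetTrackingScaling --target-tracking-configuration file://cpu50.json. The 50% CPU target below is an illustrative value:&lt;/p&gt;

```json
{
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  },
  "TargetValue": 50.0,
  "DisableScaleIn": false
}
```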

&lt;p&gt;Add notifications and tags for your convenience to get notifications and categorize applications accordingly.&lt;/p&gt;

&lt;p&gt;After review, the Auto Scaling group is created. &lt;/p&gt;

&lt;p&gt;Create a new instance from the launch template.&lt;/p&gt;

&lt;p&gt;After the EC2 instance is created, it is managed by its Auto Scaling group, which is integrated with the launch template; whenever a potential issue is detected, a replacement instance is spun up and starts working as defined by the user data script.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>community</category>
      <category>ec2</category>
      <category>automation</category>
    </item>
    <item>
      <title>10 study tips for the AWS Certified Database – Specialty Certification</title>
      <dc:creator>Hammad Khan</dc:creator>
      <pubDate>Sun, 27 Nov 2022 19:04:28 +0000</pubDate>
      <link>https://dev.to/aws-builders/10-study-tips-for-the-aws-certified-database-specialty-certification-49kj</link>
      <guid>https://dev.to/aws-builders/10-study-tips-for-the-aws-certified-database-specialty-certification-49kj</guid>
      <description>&lt;p&gt;I have completed five AWS Certifications and am currently preparing for the AWS Certified Database – Specialty certification.&lt;br&gt;
The AWS Certified Database - Specialty certification attests to a candidate's proficiency with the full range of AWS database services. There are more than 15 purpose-built database engines available on the AWS platform, including relational, key-value, document, in-memory, graph, time series, and ledger databases. With so many possibilities, it's crucial that you choose the best tool for the task at hand. Candidates have the chance to demonstrate their aptitude for requirement analysis and the creation of suitable database solutions by taking the AWS Certified Database - Specialty exam. This exam tests candidates' ability to design, recommend, and maintain the best AWS database solution for a given use case. I recently studied for this test and passed it. I previously worked as a database administrator, so I found the exam-study process especially interesting.&lt;/p&gt;

&lt;p&gt;In order to help you get ready for the certification, AWS Training and Certification offers a combination of free, on-demand digital courses and virtual/in-person instructor-led classroom training. It also offers a specific database study path. I urge you to make use of both the instruction and these ten suggestions.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Understand which workloads are best suited to each of the purpose-built database services on AWS
When incorporating a new database solution into an architecture, it’s important to consider the nature of the data, such as its storage, usage, volume, and velocity. The database services on AWS can vary widely in terms of performance, scalability, and availability characteristics. Understanding the strengths of each service and being able to match workload characteristics to the different services is an important skill that’s heavily tested on the exam. Workload-specific database design currently comprises 26% of the exam composition.
Resources to consider:
• AWS Public Sector Summit presentation: Building with Purpose-Built Databases: Match Your Workload to the Right Database
• Blog post: Build a Modern Application with Purpose-Built AWS Databases&lt;/li&gt;
&lt;li&gt;Understand strategies for disaster recovery and high availability
When deploying databases on AWS, it’s critical to understand how to configure database architectures to achieve recovery-point and recovery-time objectives. This topic is on the exam and covers high-availability and disaster-recovery configurations for a variety of the available database services. This subject area aligns closely to the Reliability Pillar of the AWS Well-Architected Framework.
Resources to consider:
• Webpage: Amazon Relational Database Service (Amazon RDS) high-availability architecture concepts
• Blog post: Cross-Region disaster recovery of Amazon RDS for SQL Server
• Documentation: Disaster-recovery strategies for Amazon Aurora databases&lt;/li&gt;
&lt;li&gt;Understand how database solution deployments can be automated
A best practice for deploying AWS resources is to use a configuration system that treats your infrastructure as code. AWS CloudFormation is one way you can do this on AWS. Infrastructure as code is a key enabler of DevOps practices and brings developers and operations together to collaborate on automating application delivery at scale. Because databases are typically stateful components in your architecture, it’s important to understand how you can use CloudFormation to provision new resources and manage them throughout their lifecycle.
Resources to consider:
• Blog post: Using CloudFormation to configure auto scaling for Amazon DynamoDB
• Documentation: CloudFormation User Guide&lt;/li&gt;
&lt;li&gt;Determine data preparation and migration strategies
The exam tests your knowledge of data-migration methods both into and within AWS. To address questions on this topic, it’s important to understand capabilities such as snapshots, database restores, and data-replication options. Ensure you understand which tools and services are most appropriate to maximize efficiency. Additionally, know how to prepare your data sources and targets and choose schema-conversion methods using tools such as the AWS Schema Conversion Tool.
Resource to consider:
• Blog post: Migrating a commercial database to open source with AWS SCT and AWS DMS&lt;/li&gt;
&lt;li&gt;Determine backup and restore strategies
To ensure business data is protected, you need to determine appropriate backup and restoration strategies. Depending on the AWS database services being used, backup and recovery strategies will vary. Data-protection strategies may include the ability to take manual snapshots and leverage automated backups or continuous backups. Resource impact resulting from backup and restoration activities can also vary. Backup solutions for AWS database services, such as Aurora and DynamoDB, are designed to have little to no impact on performance and will not cause interruptions. In other cases, such as with Amazon ElastiCache, potential impacts depend on engine version, activity level, or configurations such as reserved memory. Consider performance and availability impacts and mitigation strategies.&lt;/li&gt;
&lt;li&gt;Manage the operational environment of a database solution
A number of AWS database services are provided as fully managed database services where AWS manages many aspects of database management on your behalf. For example, this could include applying patches to the database engine or its underlying operating system. Ensure you understand how individual database services handle updates and configuration changes, as there are some differences between services.&lt;/li&gt;
&lt;li&gt;Determine monitoring and alerting strategies
You’ll need to be familiar with the monitoring capabilities of the AWS databases and understand how they interact with the additional AWS monitoring and alerting tools, including Amazon CloudWatch, AWS CloudTrail, and the collection of custom metrics.
Resource to consider:
• Documentation: Performance Insights dashboard documentation&lt;/li&gt;
&lt;li&gt;Understand how you can optimize database performance
For the exam, you’ll need to apply troubleshooting skills to database-performance issues, fine tune database design and performance, and identify AWS tools and services that are most helpful and cost effective for database scenarios. The approach for optimizing performance and costs varies across AWS database services. For example, in DynamoDB, design your application for uniform activity across all logical partition keys in the table and its secondary indexes.
Resources to consider:
• Blog post: Sharding with Amazon Relational Database Service
• Webpage: Amazon RDS Read Replicas&lt;/li&gt;
&lt;li&gt;Encrypt data at rest and in transit
Encrypting data at rest is a key component of data protection on AWS. AWS database services are unique and often implement data protection in different ways. Understanding how to leverage AWS Key Management Service (AWS KMS) for encryption key management to create encryption keys and define the policies that control the use of these keys is a topic that you’re likely to encounter on the exam. Be sure to also spend some time reviewing options for the different database engines that support encryption of data in transit.
Resource to consider:
• Blog post: Select the right encryption options for Amazon RDS and Amazon Aurora database engines&lt;/li&gt;
&lt;li&gt;Determine access control and authentication mechanisms
Authentication options vary by database service. Become familiar with the database services that support authentication via AWS Identity and Access Management (IAM). For configurations that rely on native-database authentication schemes, know how database credential management can be handled by using AWS Secrets Manager, which allows you to create secrets and use them in place of hard-coded credentials in your applications or infrastructure as code.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The value of AWS Certification&lt;br&gt;
Architects and IT engineering professionals have the chance to demonstrate their database management expertise and authenticate their knowledge with the AWS Certified Database - Specialty certification. Preparing for a certification exam is a great method to confirm your understanding of several technologies. I hope you'll think about taking this test. Don't forget to utilise the learning options at your disposal, such as our free online courses, free virtual webinars, and courses for exam preparation. Create a training account to take the suggested courses if you haven't already. I wish you luck!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>10 study areas for the AWS Certified Advanced Networking – Specialty exam</title>
      <dc:creator>Hammad Khan</dc:creator>
      <pubDate>Sun, 27 Nov 2022 18:56:08 +0000</pubDate>
      <link>https://dev.to/aws-builders/10-study-areas-for-the-aws-certified-advanced-networking-specialty-exam-2ijh</link>
      <guid>https://dev.to/aws-builders/10-study-areas-for-the-aws-certified-advanced-networking-specialty-exam-2ijh</guid>
      <description>&lt;p&gt;I have completed five AWS Certifications; most recently, I passed the AWS Certified Advanced Networking – Specialty exam.&lt;br&gt;
During the last few years working as a solutions architect at Systems Limited, I’ve had the opportunity to work with numerous customers building resilient network connectivity between their data centers and AWS regions. As my knowledge of networking in AWS increased, I decided to study for the AWS Certified Advanced Networking – Specialty exam. The exam validates networking expertise and verifies the learner’s ability to implement AWS network services to meet performance, cost, and security requirements.&lt;br&gt;
I have summarized a few topics from the AWS certification path and would like to share them here; I hope they will be helpful to those taking this exam in the future.&lt;/p&gt;

&lt;p&gt;The exam goes in-depth on a lot of networking-related topics. It is necessary to have a basic understanding of networking concepts including IP-routing logic, the Open Systems Interconnection (OSI) model, IPv4 addressing and Classless Inter-Domain Routing (CIDR), and subnetting. I would advise passing either the AWS Certified SysOps Administrator - Associate or AWS Certified Solutions Architect - Associate certification before taking this speciality test, based on my own experience.&lt;br&gt;
My understanding has been much improved by the experience of studying for the exam. I created networks on my AWS account and tested scenarios using tools like AWS Global Accelerator as part of my exam preparation. I find that using technology firsthand is a terrific method to reinforce my knowledge. I've discovered that I'm better equipped to assist customers in considering network-design options since taking the exam and earning the certification. Any architect or engineer interested in this field should pursue this certification, in my opinion.&lt;br&gt;
AWS Training and Certification offers a mix of free, on-demand digital courses, virtual/in-person instructor-led classroom training, virtual webinars, and an exam-readiness course to help you build your knowledge. I encourage you to utilize the training as well as my suggestions for 10 study areas to review as you prepare for the AWS Certified Advanced Networking – Specialty exam.&lt;br&gt;
Areas of study&lt;/p&gt;
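&lt;p&gt;Before the study areas, here is a minimal sketch of the CIDR and subnetting math the exam assumes, using Python's standard ipaddress module; the 10.0.0.0/16 block and /18 split are illustrative, not AWS defaults:&lt;/p&gt;

```python
import ipaddress

# Carve an illustrative /16 VPC range into four /18 subnets.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc_cidr.subnets(new_prefix=18))

for subnet in subnets:
    # num_addresses counts every address in the block, including the
    # network and broadcast addresses (AWS also reserves five per subnet).
    print(subnet, subnet.num_addresses)
```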

&lt;ol&gt;
&lt;li&gt;Edge network services
AWS edge-computing services provide infrastructure and software that move data processing and analysis as close to the endpoint as necessary. Amazon CloudFront is a global content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds. AWS Lambda is a compute service that allows you to run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second. Lambda@Edge allows you to run Node.js and Python Lambda functions to customize content that Amazon CloudFront delivers, executing the functions in AWS locations closer to the viewer. The functions run in response to CloudFront events without provisioning or managing servers. CloudFront integration with AWS Web Application Firewall (WAF) can mitigate network attacks that target different layers of the OSI model. AWS Global Accelerator can be used for network path optimization. Deploying compute closer to your users or optimizing the network path can simplify your architecture, increase security, and optimize the user experience through reduced latency. In addition to the network and transport layer protections that come with AWS Shield Standard, Shield Advanced provides additional detection and mitigation against large and sophisticated DDoS attacks, near real-time visibility into attacks, and integration with AWS WAF. Shield Advanced also gives protection against DDoS-related spikes in your EC2, ELB, CloudFront, Global Accelerator, and Route 53 charges.&lt;/li&gt;
&lt;li&gt;AWS global infrastructure and how to deploy foundational network elements
To pass the Advanced Networking – Specialty Certification exam, you’ll need a thorough understanding of how the AWS Global Infrastructure is designed and how the fundamental AWS networking components in a Virtual Private Cloud (VPC) work. Be sure to brush up on configuration options for foundational VPC design, including IPv4 and IPv6 CIDRs, subnets, route tables, network-access control lists (NACLs), security groups (SGs), and Dynamic Host Configuration Protocol (DHCP) configurations. As an architect, it’s also necessary to know how to best provide connectivity beyond the VPC, including NAT gateways (NGW), internet gateways (IGW), egress-only internet gateways (EIGW), and virtual gateways (VGW). Consider reviewing:
• Documentation: What is Amazon VPC?
• Documentation: Bring your own IP addresses and IPAM
• Blog post: Dual-stack IPv6 architectures for AWS and hybrid networks&lt;/li&gt;
&lt;li&gt;Hybrid network-connectivity options
Many AWS customers rely on VPNs or SD-WAN to provide private connectivity between infrastructure in AWS and on-premises resources. For use cases requiring higher bandwidth, consistent network performance, or increased privacy, AWS Direct Connect may be more appropriate. Traffic routing and failover are also important topics. These connectivity solutions are often critical in enabling migration to AWS. Consider the following resources:
• Whitepaper: Hybrid Connectivity
• Whitepaper: Building a Scalable and Secure Multi-VPC AWS Network Infrastructure
• Blog post: Introducing AWS Site-to-Site VPN Private IP VPN
• Blog post: Adding MACsec security to AWS Direct Connect connections
• Blog post: Simplify SD-WAN connectivity with AWS Transit Gateway Connect
• re:Invent video: AWS Direct Connect: Deep Dive&lt;/li&gt;
&lt;li&gt;Inter-VPC connectivity options
VPC peering provides a convenient way to connect multiple VPCs; however, at scale, there are considerable operational efficiencies of hub-and-spoke network designs using AWS Transit Gateway. Transit Gateway unlocks a variety of design options. Consider the following resources:
• re:Invent video: Transit Gateway architectures for many VPCs
• re:Invent video: Advanced VPC Design and New Capabilities for Amazon VPC
• Digital course: AWS Transit Gateway Networking and Scaling
• Blog post: Designing hyperscale Amazon VPC networks
• Blog post: Multicast with AWS Transit Gateway&lt;/li&gt;
&lt;li&gt;Automate network management using AWS CloudFormation
Infrastructure as code is the ability to build up and tear down entire environments programmatically and automatically. It enables rapid deployment of infrastructure, allowing organizations to operate with great agility. It also provides the ability to rebuild infrastructure rapidly, increasing resilience. CloudFormation is infrastructure as code: it lets you manage your network configuration through simple JSON or YAML. It’s important to understand how CloudFormation can deploy network infrastructure and how it can safely update configurations using features such as change sets and deletion policies. These features enable you to manage the entire lifecycle of your network components. Consider the following resources:
• Documentation: Updating stacks using change sets
• Documentation: How do I retain some of my resources when I delete an AWS CloudFormation stack?&lt;/li&gt;
&lt;li&gt;Integrate VPC networks with other AWS services
Preventing sensitive data, such as customer records, from traversing the internet is a requirement for some workloads, which have to maintain compliance with regulations, such as HIPAA, EU/US Privacy Shield, and PCI. AWS PrivateLink provides private connectivity between VPCs, AWS services, and your on-premises networks without exposing your traffic to the public internet. A common use case for customers is the need to provide communication between workloads deployed inside a VPC (e.g., EC2 instances) to other AWS services (e.g., an Amazon Simple Storage Service bucket or an Amazon Simple Queue Service queue). AWS enables this communication across a private network segment via Gateway and interface VPC endpoints powered by AWS PrivateLink. Endpoints can be used to improve the reliability and security of communications. Configuring VPC endpoints correctly requires knowledge of AWS Identity and Access Management (IAM), route tables, elastic network interfaces, security groups, and NACLs. With wide adoption of Kubernetes, it’s important to understand EKS networking. Consider the following resources:
• Blog post: Reduce Cost and Increase Security with Amazon VPC Endpoints
• Workshop: VPC Endpoint Workshop
• re:Invent video: Integrate Amazon EKS with your networking pattern&lt;/li&gt;
&lt;li&gt;Security and compliance
Many AWS customers deploy infrastructure accessed by a globally distributed user base. Network architects need to support access in a secure manner. AWS provides a variety of services that can help meet these security goals. Network Access Analyzer is a feature that identifies unintended network access to your resources on AWS. You can use Network Access Analyzer to specify your network access requirements and to identify potential network paths that do not meet your specified requirements. With AWS Network Firewall, you can filter traffic at the perimeter of your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or AWS Direct Connect. Consider the following resources:
• Blog post: Deployment models for AWS Network Firewall with VPC routing enhancements
• Documentation: VPC Security&lt;/li&gt;
&lt;li&gt;Methods to simplify network management and troubleshooting
AWS Firewall Manager is a security management service that allows you to centrally configure and manage firewall rules across your accounts and applications in AWS Organizations. With Network Manager, you can centrally manage and monitor your AWS Cloud WAN core network and Transit Gateway network across AWS accounts, Regions, and on-premises locations. Connectivity issues are common in real-world scenarios, arising when communication needs to occur within a VPC, between peered VPCs, or when working with VPNs or Direct Connect to on-premises networks. AWS provides a variety of data sources that increase visibility into network operations. These can aid common network-administration tasks such as troubleshooting network connectivity. You can use VPC Reachability Analyzer to determine whether a destination resource in your virtual private cloud (VPC) is reachable from a source resource. Logs include VPC flow logs, Transit Gateway flow logs, access logs for your Application Load Balancer, AWS WAF logs, AWS Network Firewall logs, and CloudFront logs. Additionally, Traffic Mirroring is an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of Amazon Elastic Compute Cloud (EC2) instances. You can then send the traffic to out-of-band security and monitoring appliances for content inspection, threat monitoring, and troubleshooting. Network administrators need to understand where these data sources are stored, how frequently data is written to them, and what information they contain to be effective at troubleshooting.
• Blog post: Analyzing VPC Flow Logs using Amazon Athena, and Amazon QuickSight
• Documentation: Analyzing VPC flow log with CloudWatch Logs Insights&lt;/li&gt;
&lt;li&gt;Network configuration options for high-performance applications
Certain application workloads, such as high-performance computing, may require lower latency, high bandwidth network connections between compute nodes. AWS provides configuration options to meet the needs of these workloads (i.e., placement groups, jumbo frames, Elastic Fabric Adapters (EFA) and Elastic Network Adapters (ENA)). High-performance computing workloads may require an operating system configuration to achieve the desired network performance. Consider reviewing this documentation, Network Performance.&lt;/li&gt;
&lt;li&gt;Designs for reliability
A design principle of the reliability pillar of the AWS Well-Architected Framework is to design systems that can automatically recover from failure. Network architects can build highly resilient, multi-Region designs using network services such as Amazon Route 53 and AWS Global Accelerator. These services can detect failure and route client traffic away from it, increasing availability. Similarly, traffic flows within an Amazon VPC can route around failure. AWS Elastic Load Balancing offers health-checking capabilities that can validate the health of compute components using a variety of network protocols (i.e., TCP, HTTP, HTTPS, and SSL). When integrated with Amazon CloudWatch, these capabilities provide operational alerting and can trigger automated remediation of failures.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The value of certification&lt;br&gt;
Network architecture is a crucial building block for businesses wishing to move workloads to AWS or create new workloads there. By earning the AWS Certified Advanced Networking - Specialty certification, IT engineering professionals can demonstrate their understanding of how to build cost-effective, secure, and performant networks on AWS. Preparing for a certification exam is a great way to strengthen your understanding of any technology. I hope you consider taking this exam and gain similar benefits. Create a training account and take the suggested courses if you haven't already. I wish you luck!&lt;/p&gt;

</description>
      <category>rust</category>
      <category>webassembly</category>
      <category>performance</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Create a launch template for an Auto Scaling group</title>
      <dc:creator>Hammad Khan</dc:creator>
      <pubDate>Sun, 27 Nov 2022 18:14:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-a-launch-template-for-an-auto-scaling-group-3b33</link>
      <guid>https://dev.to/aws-builders/create-a-launch-template-for-an-auto-scaling-group-3b33</guid>
      <description>&lt;p&gt;Before you can create an Auto Scaling group using a launch template, you must create a launch template with the parameters required to launch an EC2 instance. These parameters include the ID of the Amazon Machine Image (AMI) and an instance type.&lt;/p&gt;

&lt;p&gt;A launch template provides full functionality for Amazon EC2 Auto Scaling and also newer features of Amazon EC2 such as the current generation of Amazon EBS Provisioned IOPS volumes (io2), EBS volume tagging, T2 Unlimited instances, Elastic Inference, and Dedicated Hosts.&lt;/p&gt;

&lt;p&gt;To create new launch templates, use the following procedures.&lt;/p&gt;

&lt;p&gt;Contents&lt;/p&gt;

&lt;p&gt;Create your launch template (console)&lt;br&gt;
Change the default network interface settings&lt;br&gt;
Modify the storage configuration&lt;br&gt;
Configure advanced settings for your launch template&lt;br&gt;
Create a launch template from an existing instance (console)&lt;br&gt;
Additional information&lt;br&gt;
Limitations&lt;br&gt;
Important&lt;br&gt;
Launch template parameters are not fully validated when you create the launch template. If you specify incorrect values for parameters, or if you do not use supported parameter combinations, no instances can launch using this launch template. Be sure to specify the correct values for the parameters and use supported parameter combinations. For example, to launch instances with an Arm-based AWS Graviton or Graviton2 AMI, you must specify an Arm-compatible instance type.&lt;/p&gt;
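&lt;p&gt;The same parameter-combination caveat can be seen in code. Below is a hedged sketch of a parameter set shaped like the LaunchTemplateData argument that boto3's create_launch_template accepts; the AMI ID, key pair, and security group are placeholders, and the boto3 call itself appears only in comments:&lt;/p&gt;

```python
# Parameters shaped like boto3's LaunchTemplateData argument.
# All resource IDs below are placeholders, not real resources.
launch_template_data = {
    "ImageId": "ami-0123456789abcdef0",  # must be an Arm AMI if the
    "InstanceType": "t4g.micro",         # instance type is Graviton-based
    "KeyName": "my-key-pair",
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "TagSpecifications": [
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "Environment", "Value": "test"}],
        }
    ],
}

# With boto3 (not imported here), the call would look like:
#   ec2 = boto3.client("ec2")
#   ec2.create_launch_template(
#       LaunchTemplateName="demo-template",
#       LaunchTemplateData=launch_template_data,
#   )
print(sorted(launch_template_data))
```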

&lt;p&gt;Create your launch template (console)&lt;br&gt;
The following steps describe how to configure your launch template:&lt;/p&gt;

&lt;p&gt;Specify the Amazon Machine Image (AMI) from which to launch the instances.&lt;/p&gt;

&lt;p&gt;Choose an instance type that is compatible with the AMI that you specify.&lt;/p&gt;

&lt;p&gt;Specify the key pair to use when connecting to instances, for example, using SSH.&lt;/p&gt;

&lt;p&gt;Add one or more security groups to allow relevant access to the instances from an external network.&lt;/p&gt;

&lt;p&gt;Specify whether to attach additional volumes to each instance.&lt;/p&gt;

&lt;p&gt;Add custom tags (key-value pairs) to the instances and volumes.&lt;/p&gt;

&lt;p&gt;To create a launch template&lt;/p&gt;

&lt;p&gt;Open the Amazon EC2 console at &lt;a href="https://console.aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/ec2/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the navigation pane, under Instances, choose Launch Templates.&lt;/p&gt;

&lt;p&gt;Choose Create launch template. Enter a name and provide a description for the initial version of the launch template.&lt;/p&gt;

&lt;p&gt;Under Auto Scaling guidance, select the check box to have Amazon EC2 provide guidance to help create a template to use with Amazon EC2 Auto Scaling.&lt;/p&gt;

&lt;p&gt;Under Launch template contents, fill out each required field and any optional fields as needed.&lt;/p&gt;

&lt;p&gt;Application and OS Images (Amazon Machine Image): (Required) Choose the ID of the AMI for your instances. You can search through all available AMIs, or select an AMI from the Recents or Quick Start list. If you don't see the AMI that you need, choose Browse more AMIs to browse the full AMI catalog.&lt;/p&gt;

&lt;p&gt;To choose a custom AMI, you must first create your AMI from a customized instance. For more information, see Create an AMI in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;For Instance type, choose a single instance type that's compatible with the AMI that you specified.&lt;/p&gt;

&lt;p&gt;Alternatively, to launch an Auto Scaling group with multiple instance types, choose Advanced, Specify instance type attributes, and then specify the following options:&lt;/p&gt;

&lt;p&gt;Number of vCPUs: Enter the minimum and maximum number of vCPUs. To indicate no limits, enter a minimum of 0, and keep the maximum blank.&lt;/p&gt;

&lt;p&gt;Amount of memory (MiB): Enter the minimum and maximum amount of memory, in MiB. To indicate no limits, enter a minimum of 0, and keep the maximum blank.&lt;/p&gt;

&lt;p&gt;Expand Optional instance type attributes and choose Add attribute to further limit the types of instances that can be used to fulfill your desired capacity. For information about each attribute, see InstanceRequirementsRequest in the Amazon EC2 API Reference.&lt;/p&gt;

&lt;p&gt;Resulting instance types: You can view the instance types that match the specified compute requirements, such as vCPUs, memory, and storage.&lt;/p&gt;

&lt;p&gt;To exclude instance types, choose Add attribute. From the Attribute list, choose Excluded instance types. From the Attribute value list, select the instance types to exclude.&lt;/p&gt;

&lt;p&gt;For more information, see Create an Auto Scaling group using attribute-based instance type selection.&lt;/p&gt;
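&lt;p&gt;As a sketch of the attribute-based selection described above, the console fields map onto the InstanceRequirements structure in the EC2 API (field names per the API reference; the values here are illustrative). A minimum of 0 with no maximum means no limit, matching the console behavior:&lt;/p&gt;

```python
# Illustrative InstanceRequirements-shaped structure for
# attribute-based instance type selection.
instance_requirements = {
    "VCpuCount": {"Min": 2, "Max": 8},
    "MemoryMiB": {"Min": 4096},          # no Max key: memory is unbounded
    "ExcludedInstanceTypes": ["t2.*"],   # wildcard excludes a whole family
}
print(instance_requirements["VCpuCount"])
```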

&lt;p&gt;Key pair (login): For Key pair name, choose an existing key pair, or choose Create new key pair to create a new one. For more information, see Amazon EC2 key pairs and Linux instances in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;Network settings: For Firewall (security groups), use one or more security groups, or keep this blank and configure one or more security groups as part of the network interface. For more information, see Amazon EC2 security groups for Linux instances in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;If you don't specify any security groups in your launch template, Amazon EC2 uses the default security group for the VPC that your Auto Scaling group will launch instances into. By default, this security group doesn't allow inbound traffic from external networks. For more information, see Default security groups for your VPCs in the Amazon VPC User Guide.&lt;/p&gt;

&lt;p&gt;Do one of the following:&lt;/p&gt;

&lt;p&gt;Change the default network interface settings. For example, you can enable or disable the public IPv4 addressing feature, which overrides the auto-assign public IPv4 addresses setting on the subnet. For more information, see Change the default network interface settings.&lt;/p&gt;

&lt;p&gt;Skip this step to keep the default network interface settings.&lt;/p&gt;

&lt;p&gt;Do one of the following:&lt;/p&gt;

&lt;p&gt;Modify the storage configuration. For more information, see Modify the storage configuration.&lt;/p&gt;

&lt;p&gt;Skip this step to keep the default storage configuration.&lt;/p&gt;

&lt;p&gt;For Resource tags, specify tags by providing key and value combinations. If you specify instance tags in your launch template and then you choose to propagate your Auto Scaling group's tags to its instances, all the tags are merged. If the same tag key is specified for a tag in your launch template and a tag in your Auto Scaling group, then the tag value from the group takes precedence.&lt;/p&gt;
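&lt;p&gt;The tag-precedence rule above can be sketched as a small helper (an illustrative function, not an AWS API):&lt;/p&gt;

```python
def merge_tags(template_tags, group_tags):
    """Merge launch template instance tags with Auto Scaling group tags.

    Mirrors the precedence rule: when the same key appears in both,
    the group's value wins.
    """
    merged = dict(template_tags)
    merged.update(group_tags)  # group tags override template tags
    return merged

tags = merge_tags(
    {"Team": "platform", "Env": "dev"},  # from the launch template
    {"Env": "prod"},                     # from the Auto Scaling group
)
print(tags)  # Env comes from the group: {'Team': 'platform', 'Env': 'prod'}
```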

&lt;p&gt;(Optional) Configure advanced settings. For more information, see Configure advanced settings for your launch template.&lt;/p&gt;

&lt;p&gt;When you are ready to create the launch template, choose Create launch template.&lt;/p&gt;

&lt;p&gt;To create an Auto Scaling group, choose Create Auto Scaling group from the confirmation page.&lt;/p&gt;

&lt;p&gt;Change the default network interface settings&lt;br&gt;
This section shows you how to change the default network interface settings. For example, you can define whether you want to assign a public IPv4 address to each instance instead of defaulting to the auto-assign public IPv4 addresses setting on the subnet.&lt;/p&gt;

&lt;p&gt;Considerations and limitations&lt;/p&gt;

&lt;p&gt;When changing the default network interface settings, keep in mind the following considerations and limitations:&lt;/p&gt;

&lt;p&gt;You must configure the security groups as part of the network interface, not in the Security groups section of the template. You cannot specify security groups in both places.&lt;/p&gt;

&lt;p&gt;You cannot assign secondary private IP addresses, known as secondary IP addresses, to a network interface.&lt;/p&gt;

&lt;p&gt;If you specify an existing network interface ID, you can launch only one instance. To do this, you must use the AWS CLI or an SDK to create the Auto Scaling group. When you create the group, you must specify the Availability Zone, but not the subnet ID. Also, you can specify an existing network interface only if it has a device index of 0.&lt;/p&gt;

&lt;p&gt;You cannot auto-assign a public IPv4 address if you specify more than one network interface. You also cannot specify duplicate device indexes across network interfaces. Both the primary and secondary network interfaces reside in the same subnet. For more information, see Provide network connectivity for your Auto Scaling instances using Amazon VPC.&lt;/p&gt;

&lt;p&gt;When an instance launches, a private address is automatically allocated to each network interface. The address comes from the CIDR range of the subnet in which the instance is launched. For information on specifying CIDR blocks (or IP address ranges) for your VPC or subnet, see the Amazon VPC User Guide.&lt;/p&gt;
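&lt;p&gt;The allocation described above can be illustrated with Python's standard ipaddress module; the /24 subnet is an example, and note that AWS reserves five addresses per subnet beyond what the module excludes:&lt;/p&gt;

```python
import ipaddress

subnet = ipaddress.ip_network("10.0.1.0/24")

# hosts() yields the usable addresses (network and broadcast excluded);
# AWS additionally reserves the first four and the last address.
usable = list(subnet.hosts())
print(usable[0], usable[-1])  # 10.0.1.1 10.0.1.254
```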

&lt;p&gt;To change the default network interface settings&lt;/p&gt;

&lt;p&gt;Under Network settings, expand Advanced network configuration.&lt;/p&gt;

&lt;p&gt;Choose Add network interface to configure the primary network interface, paying attention to the following fields:&lt;/p&gt;

&lt;p&gt;Device index: Keep the default value, 0, to apply your changes to the primary network interface (eth0).&lt;/p&gt;

&lt;p&gt;Network interface: Keep the default value, New interface, to have Amazon EC2 Auto Scaling automatically create a new network interface when an instance is launched. Alternatively, you can choose an existing, available network interface with a device index of 0, but this limits your Auto Scaling group to one instance.&lt;/p&gt;

&lt;p&gt;Description: (Optional) Enter a descriptive name.&lt;/p&gt;

&lt;p&gt;Subnet: Keep the default Don't include in launch template setting.&lt;/p&gt;

&lt;p&gt;If you specify a subnet for the network interface while Auto Scaling guidance is turned on, this results in an error. We recommend turning off Auto Scaling guidance as a workaround. After you make this change, you will not receive an error message. However, regardless of where the subnet is specified, the subnet settings of the Auto Scaling group take precedence and cannot be overridden.&lt;/p&gt;

&lt;p&gt;Auto-assign public IP: Change whether your network interface with a device index of 0 receives a public IPv4 address. By default, instances in a default subnet receive a public IPv4 address, while instances in a nondefault subnet do not. Select Enable or Disable to override the subnet's default setting.&lt;/p&gt;

&lt;p&gt;Security groups: Choose one or more security groups for the network interface. Each security group must be configured for the VPC that your Auto Scaling group will launch instances into. For more information, see Amazon EC2 security groups for Linux instances in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;Delete on termination: Choose Yes to delete the network interface when the instance is terminated, or choose No to keep the network interface.&lt;/p&gt;

&lt;p&gt;Elastic Fabric Adapter: To support high performance computing (HPC) use cases, change the network interface into an Elastic Fabric Adapter network interface. For more information, see Elastic Fabric Adapter in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;Network card index: Choose 0 to attach the primary network interface to the network card with a device index of 0. If this option isn't available, keep the default value, Don't include in launch template. Attaching the network interface to a specific network card is available only for supported instance types. For more information, see Network cards in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;To add a secondary network interface, choose Add network interface.&lt;/p&gt;

&lt;p&gt;Modify the storage configuration&lt;br&gt;
You can modify the storage configuration for instances launched from an Amazon EBS-backed AMI or an instance store-backed AMI. You can also specify additional EBS volumes to attach to the instances. The AMI includes one or more volumes of storage, including the root volume (Volume 1 (AMI Root)).&lt;/p&gt;

&lt;p&gt;To modify the storage configuration&lt;/p&gt;

&lt;p&gt;In Configure storage, modify the size or type of volume.&lt;/p&gt;

&lt;p&gt;If the value you specify for volume size is outside the limits of the volume type, or smaller than the snapshot size, an error message is displayed. To help you address the issue, this message gives the minimum or maximum value that the field can accept.&lt;/p&gt;

&lt;p&gt;Only volumes associated with an Amazon EBS-backed AMI appear. To display information about the storage configuration for an instance launched from an instance store-backed AMI, choose Show details from the Instance store volumes section.&lt;/p&gt;

&lt;p&gt;To specify all EBS volume parameters, switch to the Advanced view in the top right corner.&lt;/p&gt;

&lt;p&gt;For advanced options, expand the volume that you want to modify and configure the volume as follows:&lt;/p&gt;

&lt;p&gt;Storage type: The type of volume (EBS or ephemeral) to associate with your instance. The instance store (ephemeral) volume type is only available if you select an instance type that supports it. For more information, see Amazon EC2 instance store and Amazon EBS volumes in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;Device name: Select from the list of available device names for the volume.&lt;/p&gt;

&lt;p&gt;Snapshot: Select the snapshot from which to create the volume. You can search for available shared and public snapshots by entering text into the Snapshot field.&lt;/p&gt;

&lt;p&gt;Size (GiB): For EBS volumes, you can specify a storage size. If you have selected an AMI and instance that are eligible for the free tier, keep in mind that to stay within the free tier, you must stay under 30 GiB of total storage. For more information, see Constraints on the size and configuration of an EBS volume in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;Volume type: For EBS volumes, choose the volume type. For more information, see Amazon EBS volume types in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;IOPS: If you have selected a Provisioned IOPS SSD (io1 and io2) or General Purpose SSD (gp3) volume type, then you can enter the number of I/O operations per second (IOPS) that the volume can support. This is required for io1, io2, and gp3 volumes. It is not supported for gp2, st1, sc1, or standard volumes.&lt;/p&gt;
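&lt;p&gt;The IOPS rule above can be encoded as a small validation helper (an illustrative function, not an AWS API):&lt;/p&gt;

```python
# IOPS is required for io1, io2, and gp3 volumes, and not supported
# for gp2, st1, sc1, or standard volumes, per the rule above.
IOPS_REQUIRED = {"io1", "io2", "gp3"}
IOPS_UNSUPPORTED = {"gp2", "st1", "sc1", "standard"}

def validate_iops(volume_type, iops=None):
    if volume_type in IOPS_REQUIRED and iops is None:
        raise ValueError(f"{volume_type} volumes require an IOPS value")
    if volume_type in IOPS_UNSUPPORTED and iops is not None:
        raise ValueError(f"{volume_type} volumes do not accept an IOPS value")
    return True

print(validate_iops("gp3", iops=3000))  # True
```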

&lt;p&gt;Delete on termination: For EBS volumes, choose Yes to delete the volume when the instance is terminated, or choose No to keep the volume.&lt;/p&gt;

&lt;p&gt;Encrypted: If the instance type supports EBS encryption, you can choose Yes to enable encryption for the volume. If you have enabled encryption by default in this Region, encryption is enabled for you. For more information, see Amazon EBS encryption and Encryption by default in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;The default effect of setting this parameter varies with the choice of volume source, as described in the following table. In all cases, you must have permission to use the specified AWS KMS key.&lt;/p&gt;

&lt;p&gt;Encryption outcomes&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;If Encrypted parameter is set to...&lt;/th&gt;&lt;th&gt;And if source of volume is...&lt;/th&gt;&lt;th&gt;Then the default encryption state is...&lt;/th&gt;&lt;th&gt;Notes&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;New (empty) volume&lt;/td&gt;&lt;td&gt;Unencrypted*&lt;/td&gt;&lt;td&gt;N/A&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Unencrypted snapshot that you own&lt;/td&gt;&lt;td&gt;Unencrypted*&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Encrypted snapshot that you own&lt;/td&gt;&lt;td&gt;Encrypted by same key&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Unencrypted snapshot that is shared with you&lt;/td&gt;&lt;td&gt;Unencrypted*&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;No&lt;/td&gt;&lt;td&gt;Encrypted snapshot that is shared with you&lt;/td&gt;&lt;td&gt;Encrypted by default KMS key&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;New volume&lt;/td&gt;&lt;td&gt;Encrypted by default KMS key&lt;/td&gt;&lt;td&gt;To use a non-default KMS key, specify a value for the KMS key parameter.&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Unencrypted snapshot that you own&lt;/td&gt;&lt;td&gt;Encrypted by default KMS key&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Encrypted snapshot that you own&lt;/td&gt;&lt;td&gt;Encrypted by same key&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Unencrypted snapshot that is shared with you&lt;/td&gt;&lt;td&gt;Encrypted by default KMS key&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Yes&lt;/td&gt;&lt;td&gt;Encrypted snapshot that is shared with you&lt;/td&gt;&lt;td&gt;Encrypted by default KMS key&lt;/td&gt;&lt;td&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;ul&gt;
&lt;li&gt;If encryption by default is enabled, all newly created volumes (whether or not the Encrypted parameter is set to Yes) are encrypted using the default KMS key. If you set both the Encrypted and KMS key parameters, then you can specify a non-default KMS key.&lt;/li&gt;
&lt;/ul&gt;
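&lt;p&gt;The encryption outcomes above can be encoded as a simple lookup; this sketch assumes encryption by default is not enabled (see the footnote) and uses shortened source labels:&lt;/p&gt;

```python
# Keys: (Encrypted parameter, volume source); values: resulting state.
ENCRYPTION_OUTCOMES = {
    ("No", "new volume"): "unencrypted",
    ("No", "unencrypted snapshot you own"): "unencrypted",
    ("No", "encrypted snapshot you own"): "encrypted by same key",
    ("No", "unencrypted snapshot shared with you"): "unencrypted",
    ("No", "encrypted snapshot shared with you"): "encrypted by default KMS key",
    ("Yes", "new volume"): "encrypted by default KMS key",
    ("Yes", "unencrypted snapshot you own"): "encrypted by default KMS key",
    ("Yes", "encrypted snapshot you own"): "encrypted by same key",
    ("Yes", "unencrypted snapshot shared with you"): "encrypted by default KMS key",
    ("Yes", "encrypted snapshot shared with you"): "encrypted by default KMS key",
}

print(ENCRYPTION_OUTCOMES[("Yes", "new volume")])
```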

&lt;p&gt;KMS key: If you chose Yes for Encrypted, then you must select a customer managed key to use to encrypt the volume. If you have enabled encryption by default in this Region, the default customer managed key is selected for you. You can select a different key or specify the ARN of any customer managed key that you previously created using the AWS Key Management Service.&lt;/p&gt;

&lt;p&gt;To specify additional volumes to attach to the instances launched by this launch template, choose Add new volume.&lt;/p&gt;

&lt;p&gt;Configure advanced settings for your launch template&lt;br&gt;
You can define any additional capabilities that your Auto Scaling instances need. For example, you can choose an IAM role that your application can use when it accesses other AWS resources or specify the instance user data that can be used to perform common automated configuration tasks after an instance starts.&lt;/p&gt;

&lt;p&gt;The following steps discuss the most useful settings to pay attention to. For more information about any of the settings under Advanced details, see Creating a launch template in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;To configure advanced settings&lt;/p&gt;

&lt;p&gt;For Advanced details, expand the section to view the fields.&lt;/p&gt;

&lt;p&gt;For Purchasing option, you can choose Request Spot Instances to request Spot Instances at the Spot price, capped at the On-Demand price, and choose Customize to change the default Spot Instance settings. For an Auto Scaling group, you must specify a one-time request with no end date (the default). For more information, see Request Spot Instances for fault-tolerant and flexible applications.&lt;/p&gt;

&lt;p&gt;Note&lt;br&gt;
Amazon EC2 Auto Scaling lets you override the instance type in your launch template to create an Auto Scaling group that uses multiple instance types and launches Spot and On-Demand Instances. To do so, you must leave Purchasing option unspecified in your launch template.&lt;/p&gt;

&lt;p&gt;If you try to create a mixed instances group using a launch template with Purchasing option specified, you get the following error.&lt;/p&gt;

&lt;p&gt;Incompatible launch template: You cannot use a launch template that is set to request Spot Instances (InstanceMarketOptions) when you configure an Auto Scaling group with a mixed instances policy. Add a different launch template to the group and try again.&lt;/p&gt;

&lt;p&gt;For information about creating mixed instances groups, see Auto Scaling groups with multiple instance types and purchase options.&lt;/p&gt;

&lt;p&gt;For IAM instance profile, you can specify an AWS Identity and Access Management (IAM) instance profile to associate with the instances. When you choose an instance profile, you associate the corresponding IAM role with the EC2 instances. For more information, see IAM role for applications that run on Amazon EC2 instances.&lt;/p&gt;

&lt;p&gt;For Termination protection, choose whether to protect instances from accidental termination. Enabling termination protection provides additional protection, but it does not protect from termination initiated by Amazon EC2 Auto Scaling. To control whether an Auto Scaling group can terminate a particular instance, use instance scale-in protection.&lt;/p&gt;

&lt;p&gt;For Detailed CloudWatch monitoring, choose whether to enable the instances to publish metric data at 1-minute intervals to Amazon CloudWatch. Additional charges apply. For more information, see Configure monitoring for Auto Scaling instances.&lt;/p&gt;
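Detailed CloudWatch monitoring is the Monitoring flag in the launch template data; a one-line sketch:

```python
template_data = {
    "ImageId": "ami-0123456789abcdef0",  # illustrative AMI ID
    "InstanceType": "t3.micro",
    # Detailed (1-minute) monitoring; additional CloudWatch charges apply.
    "Monitoring": {"Enabled": True},
}
```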

&lt;p&gt;For Elastic inference, choose an elastic inference accelerator to attach to your EC2 CPU instance. Additional charges apply. For more information, see Working with Amazon Elastic Inference in the Amazon Elastic Inference Developer Guide.&lt;/p&gt;

&lt;p&gt;For T2/T3 Unlimited, choose whether to enable applications to burst beyond the baseline for as long as needed. This field is only valid for T2, T3, and T3a instances. Additional charges may apply. For more information, see Using an Auto Scaling group to launch a burstable performance instance as Unlimited in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;
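The T2/T3 Unlimited setting corresponds to the CreditSpecification field in the launch template data, and it is only meaningful for burstable instance types. A sketch:

```python
template_data = {
    "InstanceType": "t3.micro",  # burstable (T2/T3/T3a) instance type
    # "unlimited" lets the instance burst past its CPU baseline for as long
    # as needed; sustained bursting above baseline can incur extra charges.
    # The alternative value is "standard".
    "CreditSpecification": {"CpuCredits": "unlimited"},
}
```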

&lt;p&gt;For Placement group name, you can specify a placement group in which to launch the instances. Not all instance types can be launched in a placement group. If you configure an Auto Scaling group using a CLI command that specifies a different placement group, the placement group for the Auto Scaling group takes precedence.&lt;/p&gt;

&lt;p&gt;For Capacity Reservation, you can specify whether to launch the instances into shared capacity, any open Capacity Reservation, a specific Capacity Reservation, or a Capacity Reservation group. For more information, see Launching instances into an existing capacity reservation in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;For Tenancy, you can choose to run your instances on shared hardware (Shared), on dedicated hardware (Dedicated), or when using a host resource group, on Dedicated Hosts (Dedicated host). Additional charges may apply.&lt;/p&gt;

&lt;p&gt;If you chose Dedicated Hosts, complete the following information:&lt;/p&gt;

&lt;p&gt;For Tenancy host resource group, you can specify a host resource group for a BYOL AMI to use on Dedicated Hosts. You do not need to allocate Dedicated Hosts in your account beforehand; your instances launch onto Dedicated Hosts automatically. Note that an AMI based on a license configuration association can be mapped to only one host resource group at a time. For more information, see Host resource groups in the AWS License Manager User Guide.&lt;/p&gt;

&lt;p&gt;For License configurations, specify the license configuration to use. You can launch instances against the specified license configuration to track your license usage. For more information, see Create a license configuration in the License Manager User Guide.&lt;/p&gt;

&lt;p&gt;To configure instance metadata options for all of the instances that are associated with this version of the launch template, do the following:&lt;/p&gt;

&lt;p&gt;For Metadata accessible, choose whether to enable or disable access to the HTTP endpoint of the instance metadata service. By default, the HTTP endpoint is enabled. If you disable the endpoint, access to your instance metadata is turned off entirely. The option to require IMDSv2 is available only when the HTTP endpoint is enabled.&lt;/p&gt;

&lt;p&gt;For Metadata version, you can choose to require the use of Instance Metadata Service Version 2 (IMDSv2) when requesting instance metadata. If you do not specify a value, the default is to support both IMDSv1 and IMDSv2.&lt;/p&gt;

&lt;p&gt;For Metadata token response hop limit, you can set the allowable number of network hops for the metadata token. If you do not specify a value, the default is 1.&lt;/p&gt;

&lt;p&gt;For more information, see Configuring the instance metadata service in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;
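The three metadata settings above map to the MetadataOptions field of the launch template data. A sketch of a template that keeps the endpoint enabled, requires IMDSv2, and raises the hop limit (the values shown are illustrative choices, not defaults):

```python
template_data = {
    "ImageId": "ami-0123456789abcdef0",  # illustrative AMI ID
    "InstanceType": "t3.micro",
    "MetadataOptions": {
        "HttpEndpoint": "enabled",      # metadata service reachable (the default)
        "HttpTokens": "required",       # require IMDSv2 session tokens
        "HttpPutResponseHopLimit": 2,   # e.g. allow containers one extra network hop
    },
}
```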

&lt;p&gt;For User data, you can add shell scripts and cloud-init directives to customize an instance at launch. For more information, see Run commands on your Linux instance at launch in the Amazon EC2 User Guide for Linux Instances.&lt;/p&gt;

&lt;p&gt;Note&lt;br&gt;
Running scripts at launch adds to the amount of time it takes for an instance to be ready for use. However, you can allow extra time for the scripts to complete before the instance enters the InService state by adding a lifecycle hook to the Auto Scaling group. For more information, see Amazon EC2 Auto Scaling lifecycle hooks.&lt;/p&gt;
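When you set user data programmatically rather than in the console, the UserData field of the launch template data must be base64-encoded. A sketch with an illustrative cloud-init shell script:

```python
import base64

# Shell script to run once at first boot (cloud-init user data); illustrative.
user_data_script = """#!/bin/bash
yum install -y nginx
systemctl enable --now nginx
"""

template_data = {
    "ImageId": "ami-0123456789abcdef0",  # illustrative AMI ID
    "InstanceType": "t3.micro",
    # In launch template data, UserData must be base64-encoded.
    "UserData": base64.b64encode(user_data_script.encode()).decode(),
}
```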

&lt;p&gt;Choose Create launch template.&lt;/p&gt;

&lt;p&gt;To create an Auto Scaling group, choose Create Auto Scaling group from the confirmation page.&lt;/p&gt;

&lt;p&gt;Create a launch template from an existing instance (console)&lt;br&gt;
To create a launch template from an existing instance&lt;/p&gt;

&lt;p&gt;Open the Amazon EC2 console at &lt;a href="https://console.aws.amazon.com/ec2/" rel="noopener noreferrer"&gt;https://console.aws.amazon.com/ec2/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;On the navigation pane, under Instances, choose Instances.&lt;/p&gt;

&lt;p&gt;Select the instance and choose Actions, Image and templates, Create template from instance.&lt;/p&gt;

&lt;p&gt;Provide a name and description.&lt;/p&gt;

&lt;p&gt;Under Auto Scaling guidance, select the check box.&lt;/p&gt;

&lt;p&gt;Adjust any settings as required, and choose Create launch template.&lt;/p&gt;

&lt;p&gt;To create an Auto Scaling group, choose Create Auto Scaling group from the confirmation page.&lt;/p&gt;
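The same template-from-instance flow can be approximated programmatically: the EC2 API exposes GetLaunchTemplateData, which derives launch template data from a running instance, and that data can be passed straight to CreateLaunchTemplate. A hedged sketch using boto3-style calls, with the client passed in so the function can be exercised without AWS credentials:

```python
def launch_template_from_instance(ec2, instance_id, template_name):
    """Derive launch template data from an existing instance and create a
    launch template from it. `ec2` is a boto3-style EC2 client."""
    data = ec2.get_launch_template_data(InstanceId=instance_id)
    return ec2.create_launch_template(
        LaunchTemplateName=template_name,
        LaunchTemplateData=data["LaunchTemplateData"],
    )

# Real usage would look like (not run here):
#   import boto3
#   launch_template_from_instance(boto3.client("ec2"),
#                                 "i-0123456789abcdef0",  # illustrative ID
#                                 "my-template")
```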

</description>
    </item>
  </channel>
</rss>
