<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karthik R</title>
    <description>The latest articles on DEV Community by Karthik R (@karthikrnair).</description>
    <link>https://dev.to/karthikrnair</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F590552%2Fae11cb5f-3ca0-4c53-8a4d-00f9d00e89cf.jpeg</url>
      <title>DEV Community: Karthik R</title>
      <link>https://dev.to/karthikrnair</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/karthikrnair"/>
    <language>en</language>
    <item>
      <title>Maximizing SPOT Instance Efficiency Strategies</title>
      <dc:creator>Karthik R</dc:creator>
      <pubDate>Mon, 19 Feb 2024 01:29:15 +0000</pubDate>
      <link>https://dev.to/aws-builders/maximizing-spot-instance-efficiency-strategies-2gcj</link>
      <guid>https://dev.to/aws-builders/maximizing-spot-instance-efficiency-strategies-2gcj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS EC2 Spot Instances, available since 2009, deliver cost efficiency through spare-capacity pricing, with discounts of 50% to 90% off On-Demand rates achievable. These instances function like On-Demand ones, but they are best suited to fault-tolerant workloads: AWS can reclaim a Spot Instance with only a two-minute interruption notice, or when the current Spot price exceeds your maximum price, and capacity constraints in specific Regions are an additional consideration. &lt;/p&gt;

&lt;p&gt;While this model offers the deepest discounts for running compute workloads, its interruptible nature, limited workload suitability, and the need for architecture alignment keep enterprise adoption low, despite the many features AWS has introduced to mitigate interruptions. In this blog, I'll delve into two features that significantly improve the Spot selection process and increase its benefits.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SPOT Placement Score&lt;/strong&gt;&lt;br&gt;
Before submitting a Spot Instance request for your workload, you can use this feature to identify the Regions and Availability Zones with the best potential matches and the lowest interruption probabilities. It serves as an invaluable guideline for Spot selection, enhancing instance reliability. The Spot placement score indicates the likelihood that a Spot request will succeed in a given Region or Availability Zone, although it guarantees neither availability nor reliability, and the score can fluctuate over time with demand. Nonetheless, it is an excellent starting point for initiating Spot Instance requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Quick tips&lt;/strong&gt;&lt;br&gt;
1)  Review the placement score regularly to understand your Spot request success ratio.&lt;br&gt;
2)  Specify capacity range attributes instead of individual instance types.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8apghvpsv6vomd40bj0.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8apghvpsv6vomd40bj0.jpg" alt="Image description" width="800" height="391"&gt;&lt;/a&gt;&lt;br&gt;
As depicted in the figure above, if your workload requires significant memory, it is advisable to opt for a capacity range spanning medium to large per instance. This range-based approach offers a wider selection of options than relying solely on fixed instance types. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9823liz6nlgpv6go4h1r.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9823liz6nlgpv6go4h1r.jpg" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;br&gt;
As shown in the figure, the range gives you many combinations that are close to your capacity requirements.&lt;/p&gt;

&lt;p&gt;3)  Region and Availability Zone recommendations:&lt;br&gt;
The Spot placement score offers Region and Availability Zone recommendations based on your instance and capacity needs. Scores range from 1 to 10, with 10 indicating a high (though not guaranteed) likelihood of Spot request success.&lt;br&gt;
For step-by-step guidance on using the Spot placement score, refer to &lt;a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-placement-score.html#sps-specify-instance-attributes-console"&gt;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-placement-score.html#sps-specify-instance-attributes-console&lt;/a&gt;&lt;/p&gt;
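&lt;p&gt;To make this concrete, here is a minimal Python sketch. The &lt;code&gt;get_spot_placement_scores&lt;/code&gt; EC2 API call and the response shape are real; the sample scores and the &lt;code&gt;best_regions&lt;/code&gt; helper are illustrative assumptions, not part of the AWS SDK:&lt;/p&gt;

```python
# Hypothetical helper: rank Regions by their Spot placement score.
# The response shape mirrors EC2's get_spot_placement_scores API;
# the sample data below is made up for illustration.

def best_regions(scores, top=3):
    """Return (Region, Score) pairs sorted by score, highest first."""
    ranked = sorted(scores, key=lambda s: s["Score"], reverse=True)
    return [(s["Region"], s["Score"]) for s in ranked[:top]]

# With AWS credentials configured, the real call would look like:
# import boto3
# ec2 = boto3.client("ec2")
# resp = ec2.get_spot_placement_scores(
#     TargetCapacity=96,
#     TargetCapacityUnitType="vcpu",
#     InstanceRequirementsWithMetadata={
#         "ArchitectureTypes": ["x86_64"],
#         "InstanceRequirements": {
#             "VCpuCount": {"Min": 4, "Max": 16},
#             "MemoryMiB": {"Min": 8192},
#         },
#     },
# )
# scores = resp["SpotPlacementScores"]

sample = [  # illustrative scores only
    {"Region": "us-east-1", "Score": 7},
    {"Region": "eu-west-1", "Score": 9},
    {"Region": "ap-south-1", "Score": 5},
]
print(best_regions(sample, top=2))
```

&lt;p&gt;Reviewing such a ranking periodically (tip 1 above) shows how the score drifts with demand before you commit to a Region.&lt;/p&gt;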

&lt;p&gt;&lt;strong&gt;Attribute-based selection&lt;/strong&gt;&lt;br&gt;
You must determine the instance requirements for your Spot request, upon which AWS allocates capacity. Expressing those requirements as attributes (vCPU, memory, and so on) rather than as a single instance type gives AWS a wider range of choices, resulting in higher success rates than relying solely on individual instance types. Figure 1, for instance, illustrates a fixed-instance model, where the likelihood of success is diminished by its inflexibility.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vfx3d16123022orvv7b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3vfx3d16123022orvv7b.jpg" alt="Image description" width="800" height="240"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The figure below shows attribute-based selection, which can give you a higher success ratio and better suggestions than manual selection.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg3d6mjrytvb24c5crhh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg3d6mjrytvb24c5crhh.jpg" alt="Image description" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Together, the placement score and the attribute-based model give you a higher success ratio with Spot Instances.&lt;/p&gt;
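&lt;p&gt;A minimal sketch of what attribute-based selection looks like in practice: the dict below follows the EC2 Fleet / Auto Scaling &lt;code&gt;InstanceRequirements&lt;/code&gt; structure, but the helper function and the specific numbers are illustrative assumptions:&lt;/p&gt;

```python
# Hypothetical sketch of attribute-based instance selection: instead of a
# fixed instance type, describe the capacity you need and let EC2 choose.
# The dict shape follows the EC2 Fleet "InstanceRequirements" structure;
# the numbers are illustrative only.

def instance_requirements(min_vcpu, max_vcpu, min_mem_mib, max_mem_mib):
    """Build an attribute-based requirements block for a Spot request."""
    return {
        "VCpuCount": {"Min": min_vcpu, "Max": max_vcpu},
        "MemoryMiB": {"Min": min_mem_mib, "Max": max_mem_mib},
    }

reqs = instance_requirements(2, 8, 4096, 32768)
# This block would be embedded in an EC2 Fleet launch template override:
# {"LaunchTemplateConfigs": [{"Overrides": [{"InstanceRequirements": reqs}]}]}
print(reqs)
```

&lt;p&gt;Every instance type whose vCPU and memory fall inside these ranges becomes a candidate, which is exactly why the success ratio rises compared with naming one type.&lt;/p&gt;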

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Leveraging AWS Spot Instances requires careful attention to placement strategy and instance type selection. By embracing capacity range attributes and the Region recommendations of the Spot placement score, you can improve reliability and optimize resource allocation. A flexible approach yields greater success rates than rigid, fixed-instance models, underscoring the importance of adaptability and informed decision-making in maximizing the benefits of Spot Instances on AWS.&lt;/p&gt;

</description>
      <category>awscommunity</category>
      <category>awscloud</category>
      <category>costoptimization</category>
    </item>
    <item>
      <title>Balancing between Cost and Performance of EFS. A guide to drive the best dollar performance using Amazon EFS.</title>
      <dc:creator>Karthik R</dc:creator>
      <pubDate>Wed, 01 Feb 2023 09:01:24 +0000</pubDate>
      <link>https://dev.to/aws-builders/balancing-between-cost-and-performance-of-efs-a-guide-to-drive-the-best-dollar-performance-using-amazon-efs-3kgg</link>
      <guid>https://dev.to/aws-builders/balancing-between-cost-and-performance-of-efs-a-guide-to-drive-the-best-dollar-performance-using-amazon-efs-3kgg</guid>
      <description>&lt;p&gt;Introduction&lt;/p&gt;

&lt;p&gt;Performance and cost optimization sit at opposite ends of a spectrum. Identifying your requirements and selecting the right performance characteristics are essential for a cost-optimized solution.&lt;/p&gt;

&lt;p&gt;AWS Elastic File System (EFS) is a shared file system offering from AWS that works over the Network File System (NFS) protocol. It resembles Network-Attached Storage (NAS) from traditional storage solutions. EFS's capabilities have diversified over the last few years, making it a fitting storage choice for Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Fargate, and Lambda functions, besides being mountable from EC2 instances over NFS.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For implementation details, refer to:&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Mount EFS in EKS:- &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-pods-encryption-efs/" rel="noopener noreferrer"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/eks-pods-encryption-efs/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; Mount EFS in ECS:- &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-mount-efs-containers-tasks/" rel="noopener noreferrer"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-mount-efs-containers-tasks/&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt; Mount EFS in lambda:- &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/efs-mount-with-lambda-function/" rel="noopener noreferrer"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/efs-mount-with-lambda-function/&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Unique capabilities of EFS.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This shared file system has several distinctive strengths that speed its adoption as file storage.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Automatic capacity management (elastic)&lt;/strong&gt;:- Zero storage capacity management is one of its greatest features; a file system can grow from zero to terabytes of capacity transparently. AWS allocates storage initially, monitors capacity demand, and grows or shrinks the allocation as usage changes.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Truly elastic billing&lt;/strong&gt;:- AWS resizes the storage allocation as you add or remove files, and storage cost is based on the capacity actually used.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Multi-Region replication&lt;/strong&gt;:- Out-of-the-box configuration is available to replicate your EFS file system to another Region or to other AZs within the Region. Amazon EFS Replication is nearly continuous, designed to provide a recovery point objective (RPO) and a recovery time objective (RTO) of minutes, making it a storage of choice when multi-Region recovery is in the picture.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Multi-client support&lt;/strong&gt;:- EFS shares can be accessed within the Region, across Regions, and from on-premises data centers through Direct Connect.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For details, refer : &lt;a href="https://aws.amazon.com/efs/features/" rel="noopener noreferrer"&gt;https://aws.amazon.com/efs/features/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An EFS file system has two levers: performance (throughput modes) and cost (storage classes). EFS offers three throughput modes and a few options for cost optimization.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Throughput modes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EFS provides 3 throughput modes for each file system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pxcvqj2t6pj41ir9d1h.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0pxcvqj2t6pj41ir9d1h.PNG" alt="Image description"&gt;&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bursting (default)&lt;/strong&gt;:- This is the default throughput mode; it scales throughput with storage capacity at a rate of 50 KiB/s per GiB of storage. In this mode, burst credits accrue whenever the file system consumes less than its base throughput rate and are deducted whenever throughput exceeds the base rate.&lt;/p&gt;

&lt;p&gt;When burst credits are available, a file system can drive throughput up to 100 MiBps per TiB of storage, up to the Amazon EFS Region's limit, with a minimum of 100 MiBps. If no burst credits are available, a file system can drive up to 50 MiBps per TiB of storage, with a minimum of 1 MiBps.&lt;/p&gt;

&lt;p&gt;To deep dive, refer:- &lt;a href="https://docs.aws.amazon.com/efs/latest/ug/performance.html#bursting" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/efs/latest/ug/performance.html#bursting&lt;/a&gt;&lt;/p&gt;
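&lt;p&gt;The Bursting-mode rules above reduce to simple arithmetic. The figures (50 MiB/s per TiB baseline, 100 MiB/s per TiB with credits, and their floors) come from the text above; the helper function itself is an illustrative sketch, not an official formula:&lt;/p&gt;

```python
# Sketch of the Bursting-mode throughput rules: 50 MiB/s per TiB baseline
# (floor 1 MiB/s), and 100 MiB/s per TiB when burst credits are available
# (floor 100 MiB/s; Region-level limits still apply).

def bursting_throughput_mibps(storage_tib, has_burst_credits):
    if has_burst_credits:
        # Up to 100 MiB/s per TiB, with a floor of 100 MiB/s.
        return max(100.0, 100.0 * storage_tib)
    # Baseline: 50 MiB/s per TiB, with a floor of 1 MiB/s.
    return max(1.0, 50.0 * storage_tib)

# A 10 GiB file system still gets the 100 MiB/s burst floor:
print(bursting_throughput_mibps(10 / 1024, True))   # 100.0
# A 4 TiB file system bursts to 400 MiB/s and sustains 200 MiB/s:
print(bursting_throughput_mibps(4, True))           # 400.0
print(bursting_throughput_mibps(4, False))          # 200.0
```

&lt;p&gt;The takeaway: small file systems depend almost entirely on burst credits, which is why exhausting them hurts small deployments the most.&lt;/p&gt;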

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp45mhel61jbxf5tb7ca.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frp45mhel61jbxf5tb7ca.png" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Elastic&lt;/strong&gt;:- In contrast to the baseline-and-burst-credits model of Bursting mode, Elastic throughput mode automatically scales throughput up or down to meet your workload's activity. It suits spiky workloads whose throughput cannot be predicted and changes dynamically. Elastic throughput can drive up to 3 GiB/s for read operations and 1 GiB/s for write operations per file system, in all AWS Regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Provisioned Throughput&lt;/strong&gt;:- In this mode, you define the throughput requirement of your file system irrespective of its storage capacity, and you pay for the provisioned throughput separately, alongside the storage cost.&lt;/p&gt;

&lt;p&gt;Provisioned Throughput can drive up to 3 GiBps for read operations and 1 GiBps for write operations per file system, in all Regions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Re-configuring the throughput mode&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A file system's throughput mode can be changed, with certain restrictions. To drive the required performance from the file system, switch modes after evaluating the performance characteristics and cost factors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Mode change restrictions&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following actions are restricted for a 24-hour period:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Switching from Provisioned Throughput mode to Bursting or Elastic Throughput mode.&lt;/li&gt;
&lt;li&gt;Decreasing the provisioned throughput amount.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;How to determine the right throughput mode&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;EFS reports its performance usage to AWS CloudWatch. The metric most relevant to choosing the right throughput mode is “PermittedThroughput”, which measures the throughput allowed for the file system. Its value is derived as follows:&lt;/p&gt;

&lt;p&gt;For file systems in Bursting Throughput mode, the value is a function of the file system size and BurstCreditBalance. If BurstCreditBalance sits consistently at or near zero, consider switching to Elastic or Provisioned Throughput mode for additional throughput.&lt;/p&gt;

&lt;p&gt;For file systems in Elastic Throughput mode, the value reflects the maximum write throughput of the file system. You can also use “MeteredIOBytes” alongside “PermittedThroughput”: when the two are equal, your file system is consuming all available throughput. For file systems in Provisioned Throughput mode, you can then provision additional throughput.&lt;/p&gt;
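&lt;p&gt;The check described above can be automated. The CloudWatch metric names are real; the &lt;code&gt;throughput_constrained&lt;/code&gt; helper and its 95% threshold are illustrative assumptions:&lt;/p&gt;

```python
# Hypothetical helper: compare the MeteredIOBytes and PermittedThroughput
# CloudWatch metrics (both expressed here in MiB/s) to decide whether a
# file system is throughput-constrained. The threshold is illustrative.

def throughput_constrained(metered_mibps, permitted_mibps, threshold=0.95):
    """True when consumption is at (or near) the permitted throughput."""
    return metered_mibps >= permitted_mibps * threshold

# With credentials, the metric values could be fetched via boto3, e.g.:
# import boto3
# cw = boto3.client("cloudwatch")
# cw.get_metric_statistics(Namespace="AWS/EFS",
#                          MetricName="PermittedThroughput", ...)

print(throughput_constrained(99.0, 100.0))  # constrained: consider a mode change
print(throughput_constrained(40.0, 100.0))  # headroom remains
```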

&lt;p&gt;Just as there are several throughput options, there are several storage classes; based on access patterns, files can be moved to the Infrequent Access (IA) classes through lifecycle management. Selecting the right storage class gives you the best dollar performance. Let us take a closer look at the tiers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnulfvu30oggtipc7rp2v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnulfvu30oggtipc7rp2v.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Data stored in EFS Standard or EFS One Zone-Standard can be moved to EFS IA or One Zone-IA through lifecycle management. AWS re:Invent 2022 announced a &lt;strong&gt;1-day&lt;/strong&gt; transition policy, added to the existing 7, 14, 30, 60, and 90-day options. This is a simple configuration for moving data between tiers, from Standard to EFS IA. The price of data stored in the IA classes is &lt;em&gt;47 percent lower than the standard tiers&lt;/em&gt;.&lt;br&gt;
EFS One Zone storage is an optimal solution for non-critical workloads that can tolerate Availability Zone (AZ) failures. Typical use cases are non-production file systems, reproducible files, temporary storage, and the like.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon EFS Intelligent-Tiering&lt;/strong&gt; is a lifecycle management feature that moves files between tiers based on their access patterns. When a file in an IA class is accessed, Intelligent-Tiering transparently moves it back to the Standard tier, which helps reduce the retrieval charges of the IA classes.&lt;/p&gt;
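&lt;p&gt;As a sketch, the 1-day IA transition and the Intelligent-Tiering move-back can be expressed together in one lifecycle configuration. The &lt;code&gt;put_lifecycle_configuration&lt;/code&gt; API and the policy values are real; the file system ID is a placeholder:&lt;/p&gt;

```python
# Lifecycle configuration matching the text above: move files to IA after
# 1 day without access, and back to Standard on first access
# (Intelligent-Tiering). Policy values follow the EFS API; the file
# system ID below is a placeholder.

lifecycle_policies = [
    {"TransitionToIA": "AFTER_1_DAY"},
    {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
]

# With credentials configured:
# import boto3
# efs = boto3.client("efs")
# efs.put_lifecycle_configuration(
#     FileSystemId="fs-12345678",       # placeholder ID
#     LifecyclePolicies=lifecycle_policies,
# )

print(lifecycle_policies)
```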

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Finding the right balance between throughput mode and storage class is crucial to getting the best dollar performance from EFS.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>A Guide on selecting Amazon Aurora Serverless and provisioned Database cluster</title>
      <dc:creator>Karthik R</dc:creator>
      <pubDate>Tue, 05 Jul 2022 17:34:59 +0000</pubDate>
      <link>https://dev.to/aws-builders/a-guide-on-selecting-amazon-aurora-serverless-and-provisioned-database-cluster-1e1h</link>
      <guid>https://dev.to/aws-builders/a-guide-on-selecting-amazon-aurora-serverless-and-provisioned-database-cluster-1e1h</guid>
      <description>&lt;h3&gt;
  
  
  Introduction
&lt;/h3&gt;

&lt;p&gt;AWS Aurora is a managed database offering from AWS. It provides enterprise-grade capabilities and availability, and can fairly be called a true cloud database. Its capabilities include Aurora Global Database, deep integration with most AWS services, custom reader endpoints, up to 15 replicas, the highest availability (99.99%) in the managed SQL instance family, granular point-in-time recovery, and cross-Region replication with latency typically under a second. The feature suite improved many fold when serverless capability was introduced for Aurora, with substantial further improvement in Serverless v2; refer to the Serverless v2 documentation for details. &lt;br&gt;
With the new v2 version, serverless adoption has scaled from non-production use to production-ready.&lt;br&gt;
This blog presents serverless adoption use cases to help you select the right database capacity model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Serverless and Provisioned Capacity Aurora Clusters&lt;/strong&gt;&lt;br&gt;
You can configure an Aurora DB cluster as provisioned (defined capacity) or serverless when you spin it up. A provisioned cluster is a DB instance with preconfigured CPU, memory, and storage IOPS; you plan the compute requirement up front, although you can manually scale capacity up or down as demand changes. For serverless, you instead define the minimum and maximum compute requirements for the cluster, and AWS monitors demand and adjusts compute capacity dynamically. This scaling adjusts the compute (memory and corresponding CPU) of the cluster, in contrast to the read-only node auto scaling available in a provisioned Aurora cluster. &lt;br&gt;
&lt;strong&gt;Advantages of serverless&lt;/strong&gt;&lt;br&gt;
In the context of this blog, the major advantages of Aurora Serverless DB instances are scalability, cost efficiency, and minimal operational overhead. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to Start Using Serverless&lt;/strong&gt;&lt;br&gt;
You can start a serverless Aurora cluster by providing just a few details:&lt;br&gt;
1)  The MySQL- or PostgreSQL-compatible database engine edition. &lt;br&gt;
2)  The database engine version (note: not all MySQL and PostgreSQL versions are compatible; selecting an older version, e.g. MySQL 5.7, grays out the “Serverless” option). &lt;br&gt;
3)  The minimum and maximum Aurora Capacity Units (ACUs).&lt;br&gt;
The following diagram shows the configuration window for Aurora Serverless.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqd9kg5kvnindyzxzuje6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqd9kg5kvnindyzxzuje6.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;br&gt;
Once deployment is complete, you can connect to the Aurora Serverless DB cluster using the endpoints provided; you get dedicated reader and writer endpoints. The screenshot below marks the reader and writer endpoint details of the Aurora DB cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau45jjznr60fb5r9sire.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fau45jjznr60fb5r9sire.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;
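&lt;p&gt;The same setup can be scripted. The boto3 API names and the &lt;code&gt;ServerlessV2ScalingConfiguration&lt;/code&gt; parameter are real; the identifiers, engine version, and capacity values below are placeholders for illustration:&lt;/p&gt;

```python
# Hypothetical sketch of creating an Aurora Serverless v2 cluster with
# boto3. Identifiers and capacity values are placeholders.

scaling = {"MinCapacity": 0.5, "MaxCapacity": 8.0}  # in ACUs

# With credentials configured:
# import boto3
# rds = boto3.client("rds")
# rds.create_db_cluster(
#     DBClusterIdentifier="demo-serverless",      # placeholder
#     Engine="aurora-postgresql",
#     EngineVersion="14.6",                       # a v2-compatible version
#     MasterUsername="admin_user",
#     ManageMasterUserPassword=True,
#     ServerlessV2ScalingConfiguration=scaling,
# )
# rds.create_db_instance(
#     DBInstanceIdentifier="demo-serverless-1",   # placeholder
#     DBClusterIdentifier="demo-serverless",
#     Engine="aurora-postgresql",
#     DBInstanceClass="db.serverless",            # marks the instance as v2
# )

# Sanity-check the capacity range against the documented ACU limits.
assert 0.5 <= scaling["MinCapacity"] <= scaling["MaxCapacity"] <= 128
print(scaling)
```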

&lt;p&gt;&lt;strong&gt;Selection Criteria Between Serverless and Provisioned&lt;/strong&gt;&lt;br&gt;
The table below lists a few common use cases. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft55d2kaa24igogltcscw.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft55d2kaa24igogltcscw.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How Aurora Serverless Scales the Capacity&lt;/strong&gt;&lt;br&gt;
AWS scales the compute capacity of Reader and writer independently based on the dynamic resource demand. With Aurora Serverless v2, your cluster can contain readers in addition to the writer. You can also have mix of reader and writer aurora instances, i.e., provisioned writer and Aurora Serverless v2 readers or vice versa.  Each Aurora Serverless v2 writer and reader can scale between the minimum and maximum capacity values. Thus, the total capacity of your Aurora Serverless v2 cluster depends on both the capacity range that you defined for your DB cluster and the number of writers and readers in the cluster. The capacity is defined at Aurora Capacity Unit (ACU), where each ACU is approximately 2 gibibytes (GiB) of memory, corresponding CPU, and networking. You can scale from 0.5 ACU to 128 ACUs in the increment of 0.5 ACU units. &lt;br&gt;
For details, refer: &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html&lt;/a&gt;&lt;/p&gt;
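&lt;p&gt;As a rough sketch (not an official AWS example), the ACU arithmetic above can be expressed in a few lines of Python; the assumption of roughly 2 GiB per ACU and the 0.5-128 range come from the documentation linked above, and the boto3 parameter shape in the comment is the standard ServerlessV2ScalingConfiguration payload:&lt;/p&gt;

```python
# Sketch of Aurora Serverless v2 capacity math (assumption: roughly 2 GiB of
# memory per ACU, with the range 0.5-128 in 0.5-ACU steps).
GIB_PER_ACU = 2
ALLOWED_ACUS = [i / 2 for i in range(1, 257)]  # 0.5, 1.0, ..., 128.0

def capacity_range_memory(min_acu, max_acu):
    """Return the (min, max) memory in GiB implied by an ACU capacity range."""
    if min_acu not in ALLOWED_ACUS or max_acu not in ALLOWED_ACUS:
        raise ValueError("ACUs run from 0.5 to 128 in increments of 0.5")
    if min_acu != min(min_acu, max_acu):
        raise ValueError("min_acu must not exceed max_acu")
    return (min_acu * GIB_PER_ACU, max_acu * GIB_PER_ACU)

# The same range would be passed to boto3's rds client, e.g.:
#   create_db_cluster(..., ServerlessV2ScalingConfiguration=
#                     {"MinCapacity": 0.5, "MaxCapacity": 16})
print(capacity_range_memory(0.5, 16))  # (1.0, 32)
```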

&lt;p&gt;&lt;strong&gt;Size your minimum and maximum requirements.&lt;/strong&gt;&lt;br&gt;
Although it is tempting to set the permissible minimum (0.5 ACUs) as the minimum capacity, doing so can create challenges in scaling up quickly. Go through this AWS documentation on guidelines for selecting minimum and maximum ACUs: &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html#aurora-serverless-v2.min_capacity_considerations" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.setting-capacity.html#aurora-serverless-v2.min_capacity_considerations&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitor Your Aurora Serverless&lt;/strong&gt;&lt;br&gt;
AWS scales Aurora capacity behind the scenes and always ensures that capacity demands are met. It is still important to understand how capacity scales, so you can revisit the choice between a serverless and a provisioned Aurora cluster and convert if necessary.&lt;br&gt;
You can monitor the capacity consumed by each Aurora Serverless v2 DB instance in CloudWatch using the ServerlessDatabaseCapacity and ACUUtilization metrics. &lt;br&gt;
Because Aurora Serverless is priced higher than provisioned capacity, it is essential to re-validate the serverless choice against those metrics. &lt;/p&gt;
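&lt;p&gt;A minimal sketch of pulling the ACUUtilization metric mentioned above, expressed as the request payload for CloudWatch's get_metric_statistics API; the instance identifier is hypothetical, and in a live environment the dict would be passed to a boto3 "cloudwatch" client:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

# Request shape for cloudwatch.get_metric_statistics targeting ACUUtilization.
# "my-serverless-writer" is a hypothetical DB instance identifier.
now = datetime.now(timezone.utc)
acu_request = {
    "Namespace": "AWS/RDS",
    "MetricName": "ACUUtilization",
    "Dimensions": [{"Name": "DBInstanceIdentifier",
                    "Value": "my-serverless-writer"}],
    "StartTime": now - timedelta(days=7),
    "EndTime": now,
    "Period": 3600,                       # hourly datapoints
    "Statistics": ["Average", "Maximum"],
}
# Live call: boto3.client("cloudwatch").get_metric_statistics(**acu_request)
```

&lt;p&gt;If the weekly Average stays pinned near the maximum (or near the minimum), that is the signal to revisit the serverless-versus-provisioned decision.&lt;/p&gt;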

&lt;p&gt;&lt;strong&gt;How to migrate between provisioned and serverless Aurora&lt;/strong&gt;&lt;br&gt;
You can migrate a provisioned Aurora cluster to serverless, and vice versa, by adding a serverless (or provisioned) replica to the existing cluster and then promoting that replica to the writer instance through a failover. This simple, transparent approach converts an existing provisioned cluster to a serverless cluster, or the reverse.&lt;/p&gt;
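&lt;p&gt;As a sketch under assumed names, the two-step conversion can be expressed as boto3 "rds" request payloads; the cluster and instance identifiers are hypothetical, and db.serverless is the instance class that marks an Aurora instance as Serverless v2:&lt;/p&gt;

```python
# Step 1: add a Serverless v2 reader to the existing provisioned cluster.
add_serverless_reader = {
    "DBInstanceIdentifier": "my-cluster-serverless-1",
    "DBClusterIdentifier": "my-cluster",
    "Engine": "aurora-postgresql",
    "DBInstanceClass": "db.serverless",   # Serverless v2 instance class
}
# Step 2: promote that reader to writer via a managed failover.
promote_via_failover = {
    "DBClusterIdentifier": "my-cluster",
    "TargetDBInstanceIdentifier": "my-cluster-serverless-1",
}
# Live calls:
#   rds.create_db_instance(**add_serverless_reader)
#   rds.failover_db_cluster(**promote_via_failover)  # serverless becomes writer
```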

&lt;p&gt;&lt;strong&gt;Pricing of Serverless Cluster&lt;/strong&gt;&lt;br&gt;
Aurora Serverless measures database capacity in Aurora Capacity Units (ACUs) and bills per second. Serverless v2 is priced at $0.18 per ACU-hour in the Mumbai Region. If I run an average of 8 ACUs (i.e., 16 GiB of memory) for one month (730 hours), the bill is 730 * 0.18 (per ACU-hour) * 8 (average ACUs), which is $1,051.20. &lt;br&gt;
A provisioned Aurora instance with 16 GiB of memory (db.r6g.large) is priced at $189.80 per month on demand. This simple calculation puts serverless pricing at more than five times the provisioned cluster.&lt;br&gt;
The comparison above is not the real way to calculate the TCO of running a serverless Aurora cluster, because the cluster is expected to dial down toward the minimum of 0.5 ACUs (i.e., 1 GiB) during idle periods, which eventually brings the cost down. The calculation also covers only the DB instance itself; other cost factors, such as monitoring and management, should be considered as well. &lt;/p&gt;
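&lt;p&gt;The arithmetic above can be reproduced directly; note that the $0.18 per ACU-hour and $189.80 figures come from this article, not from live AWS pricing, so treat them as illustrative inputs:&lt;/p&gt;

```python
# Reproducing the cost arithmetic from the article (illustrative prices).
ACU_HOUR_PRICE = 0.18        # Serverless v2, Mumbai Region, per the article
HOURS_PER_MONTH = 730
PROVISIONED_MONTHLY = 189.80  # db.r6g.large (16 GiB), on demand

def serverless_monthly(avg_acu):
    """Monthly Aurora Serverless v2 bill for a given average ACU consumption."""
    return round(HOURS_PER_MONTH * ACU_HOUR_PRICE * avg_acu, 2)

print(serverless_monthly(8))  # 1051.2
print(round(serverless_monthly(8) / PROVISIONED_MONTHLY, 1))  # 5.5
# Average ACU level below which serverless undercuts the provisioned price:
print(round(PROVISIONED_MONTHLY / (HOURS_PER_MONTH * ACU_HOUR_PRICE), 2))  # 1.44
```

&lt;p&gt;The break-even line is the useful output here: at these list prices, serverless only wins if the cluster averages below roughly 1.44 ACUs over the month.&lt;/p&gt;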

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Select the Amazon Aurora capacity model best suited to your database use cases. Aurora Serverless v2 is a good fit for spiky workloads and many specific use cases; however, its cost is higher, so leverage the capability where appropriate.  &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Learn 15+ unique exciting AWS RDS features to empower your application demand!!!</title>
      <dc:creator>Karthik R</dc:creator>
      <pubDate>Tue, 07 Jun 2022 17:31:09 +0000</pubDate>
      <link>https://dev.to/aws-builders/learn-15-unique-exciting-aws-rds-features-to-empower-your-application-demand-1138</link>
      <guid>https://dev.to/aws-builders/learn-15-unique-exciting-aws-rds-features-to-empower-your-application-demand-1138</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;br&gt;
AWS added the Relational Database Service (RDS) to its portfolio in 2009, starting with MySQL as a managed service. The offering later widened with the enterprise-grade databases Microsoft SQL Server and Oracle, and the open-source databases PostgreSQL and MariaDB. As RDS adoption intensified, AWS introduced its cloud-native relational database, Aurora, with plenty of advanced features. &lt;br&gt;
A plethora of features has been added to RDS over the last couple of years. Through this blog, I intend to organize these rich feature sets, mapped to the AWS Well-Architected Framework pillars. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why the AWS Well-Architected Framework&lt;/strong&gt;&lt;br&gt;
The AWS Well-Architected Framework is a well-established set of architectural best practices for designing and operating applications in a secure, reliable, efficient, and cost-effective fashion in the AWS Cloud. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--6FUssmTV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jp0mlurvlneagwzu0wwr.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--6FUssmTV--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jp0mlurvlneagwzu0wwr.jpg" alt="Image description" width="798" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Centralized backup through AWS Backup&lt;/strong&gt;&lt;br&gt;
AWS Backup is a centralized, automated data protection service that backs up RDS instances across your accounts and Regions. It offers a cost-effective, fully managed, policy-based service that further simplifies data protection at scale. &lt;br&gt;
Features that enhance the protection of your critical databases include:&lt;br&gt;
1)  Multiple backups: schedule dedicated weekly, monthly, and yearly full backups through AWS Backup policies, per your organization's protection policy&lt;br&gt;
2)  Cross-Region and cross-account replication: keep one copy in a central, dedicated account so you can restore after account compromise, ransomware attacks, or a Regional failure. &lt;br&gt;
3)  Backup Vault Lock: protect the backup vault per your compliance policy, both to prevent accidental deletion of your backup copies and to meet compliance requirements.&lt;/p&gt;
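&lt;p&gt;A minimal sketch of a backup plan combining the first two practices: scheduled full backups plus a copy action to a vault in a dedicated recovery account. Vault names and the ARN are hypothetical; the dict follows the AWS Backup create_backup_plan shape and would be passed to boto3's "backup" client:&lt;/p&gt;

```python
# Hypothetical AWS Backup plan: weekly and monthly fulls, with the weekly copy
# replicated to a vault in another account/Region for ransomware resilience.
backup_plan = {
    "BackupPlanName": "rds-protection",
    "Rules": [
        {
            "RuleName": "weekly-full",
            "TargetBackupVaultName": "central-vault",
            "ScheduleExpression": "cron(0 2 ? * SUN *)",  # Sundays 02:00 UTC
            "Lifecycle": {"DeleteAfterDays": 90},
            "CopyActions": [{
                # destination vault in a dedicated recovery account (hypothetical ARN)
                "DestinationBackupVaultArn":
                    "arn:aws:backup:us-west-2:111122223333:backup-vault:dr-vault",
                "Lifecycle": {"DeleteAfterDays": 90},
            }],
        },
        {
            "RuleName": "monthly-full",
            "TargetBackupVaultName": "central-vault",
            "ScheduleExpression": "cron(0 2 1 * ? *)",    # 1st of each month
            "Lifecycle": {"DeleteAfterDays": 365},
        },
    ],
}
# Live call: boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```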

&lt;p&gt;Please refer this blog post for AWS backup capabilities. &lt;a href="https://www.linkedin.com/pulse/how-sustain-from-ransomware-attacks-using-aws-backup-technics-nair/?trackingId=jJlZdHitYDTw6trbOS86%2FA%3D%3D"&gt;https://www.linkedin.com/pulse/how-sustain-from-ransomware-attacks-using-aws-backup-technics-nair/?trackingId=jJlZdHitYDTw6trbOS86%2FA%3D%3D&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring and alert management&lt;/strong&gt;&lt;br&gt;
You should utilize Performance Insights, a database-aware deep monitoring capability, along with the default Amazon CloudWatch, for monitoring, event recording, and alert management on your RDS instances. Performance Insights gives a single dashboard with detailed metrics such as database load, active sessions, wait events, top SQL statements, etc.&lt;br&gt;
To Learn more about Performance insight, Visit:- &lt;a href="https://aws.amazon.com/rds/performance-insights/"&gt;https://aws.amazon.com/rds/performance-insights/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Log management&lt;/strong&gt;&lt;br&gt;
It is a best practice to store database transaction logs for a longer duration, based on your company policy. By default, these logs are rotated according to the specific database engine configuration. You can publish database logs to Amazon CloudWatch Logs.&lt;/p&gt;

&lt;p&gt;Please refer this link to enable log publishing: - &lt;a href="https://aws.amazon.com/premiumsupport/knowledge-center/rds-aurora-mysql-logs-cloudwatch/"&gt;https://aws.amazon.com/premiumsupport/knowledge-center/rds-aurora-mysql-logs-cloudwatch/&lt;/a&gt;&lt;/p&gt;
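&lt;p&gt;As a sketch, the log-publishing switch from the link above boils down to one modify_db_instance payload; the instance identifier is hypothetical, and the log type names shown are the MySQL/MariaDB ones (they differ per engine):&lt;/p&gt;

```python
# Request shape for publishing MySQL logs to CloudWatch Logs via
# rds.modify_db_instance. Log type names vary by DB engine.
enable_log_export = {
    "DBInstanceIdentifier": "my-mysql-instance",   # hypothetical
    "CloudwatchLogsExportConfiguration": {
        "EnableLogTypes": ["error", "general", "slowquery", "audit"],
    },
    "ApplyImmediately": True,
}
# Live call: boto3.client("rds").modify_db_instance(**enable_log_export)
```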

&lt;p&gt;&lt;strong&gt;Event Management&lt;/strong&gt;&lt;br&gt;
It is imperative to know the status and events of your RDS instances and act swiftly to curtail downtime. AWS event subscriptions can be configured to alert on status changes in snapshots, instances, security groups, clusters, and parameter groups. Some best practices are:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--_6YTIb12--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/74xdlzmrgc3efvxgyr4v.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--_6YTIb12--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/74xdlzmrgc3efvxgyr4v.jpg" alt="Image description" width="489" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Auto start and Stop using Systems manager&lt;/strong&gt;&lt;br&gt;
To drive the best cost savings on your non-production RDS instances, configure automatic start and stop. In parallel, you can also manually stop RDS instances when they are not in use. &lt;br&gt;
Start and Stop using AWS Systems manager: - &lt;a href="https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html"&gt;https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-stop-and-start-an-amazon-rds-db-instance-using-aws-systems-manager-maintenance-windows.html&lt;/a&gt;&lt;/p&gt;
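&lt;p&gt;The Systems Manager pattern linked above ultimately automates two RDS API calls on a schedule; a manual sketch of the same payloads (hypothetical identifier) looks like this:&lt;/p&gt;

```python
# Manual stop/start payloads for the boto3 "rds" client. Note that RDS
# automatically restarts a stopped instance after seven days, so scheduled
# automation is still needed for long idle periods.
stop_request = {"DBInstanceIdentifier": "my-dev-instance"}   # hypothetical
start_request = {"DBInstanceIdentifier": "my-dev-instance"}
# Live calls:
#   rds.stop_db_instance(**stop_request)
#   rds.start_db_instance(**start_request)
```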

&lt;p&gt;&lt;strong&gt;Amazon Aurora Autoscaling&lt;/strong&gt;&lt;br&gt;
Provisioning compute power based on real need, the foundational principle of cloud computing, extends to Amazon Aurora (both MySQL and PostgreSQL) through the addition of replica nodes in the cluster. When connectivity or workload (the CPU threshold) decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas.&lt;br&gt;
How to configure Aurora autoscaling:- &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html#Aurora.Integrating.AutoScaling.AddConsole"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Integrating.AutoScaling.html#Aurora.Integrating.AutoScaling.AddConsole&lt;/a&gt;&lt;/p&gt;
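&lt;p&gt;Under the hood, Aurora replica auto scaling uses the Application Auto Scaling API; a sketch of the two payloads (cluster name hypothetical) that the console steps in the link above would produce:&lt;/p&gt;

```python
# Register the replica count of an Aurora cluster as a scalable target, then
# attach a target-tracking policy on average reader CPU.
target = {
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:my-aurora-cluster",        # hypothetical cluster
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "MinCapacity": 1,
    "MaxCapacity": 5,
}
policy = {
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "ServiceNamespace": "rds",
    "ResourceId": "cluster:my-aurora-cluster",
    "ScalableDimension": "rds:cluster:ReadReplicaCount",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,   # add replicas above ~60% average reader CPU
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization",
        },
    },
}
# Live calls (boto3 "application-autoscaling" client):
#   aas.register_scalable_target(**target)
#   aas.put_scaling_policy(**policy)
```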

&lt;p&gt;&lt;strong&gt;RDS High Availability and Disaster Recovery&lt;/strong&gt;&lt;br&gt;
Based on the RDS DB engine (Aurora or others), distinct methodologies are available for high availability. Both options use a replica/standby DB instance in a distinct Availability Zone (relative to the primary/writer DB instance) and transparently, near-instantaneously switch away from the primary DB instance in the event of a failure.&lt;/p&gt;

&lt;p&gt;The default model provides HA within a Region; however, DR capabilities can be extended across multiple Regions using read replicas for both RDS and Aurora. Read replicas can be created in any Region(s) and asynchronously replicate change records from the primary instance. Promoting a read replica to a read/write instance is the activity that must be triggered in a DR scenario. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aurora Global Databases&lt;/strong&gt;&lt;br&gt;
Aurora Global Database is a single Amazon Aurora database that spans multiple AWS Regions. An Aurora global database has a primary DB cluster in one Region and up to five secondary DB clusters in different Regions. Globally distributed applications and recovery from Regional failure are a few use cases where Aurora Global Database can be leveraged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aurora Fault Injection Simulator&lt;/strong&gt;&lt;br&gt;
You can test the fault tolerance of your Aurora PostgreSQL DB cluster by using fault injection queries. Fault injection queries are issued as SQL commands to an Amazon Aurora instance. Fault injection queries enable you to schedule simulated tests of the following events:&lt;/p&gt;

&lt;p&gt;• Testing an instance crash&lt;br&gt;
• Testing an Aurora Replica failure&lt;br&gt;
• Testing a disk failure&lt;br&gt;
• Testing disk congestion&lt;/p&gt;

&lt;p&gt;When a fault injection query specifies a crash, it forces a crash of the Aurora PostgreSQL DB instance. The other fault injection queries result in simulations of failure events, but don't cause the event to occur. When you submit a fault injection query, you also specify an amount of time for the failure event simulation to occur.&lt;/p&gt;
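&lt;p&gt;As an illustration, the crash-type fault injection queries for Aurora PostgreSQL are plain SQL statements run through any PostgreSQL client connected to the cluster. The aurora_inject_crash function name comes from the Aurora documentation; treat the argument values as a sketch and confirm them against the docs for your engine version:&lt;/p&gt;

```python
# Aurora PostgreSQL crash-simulation statements, collected as SQL strings.
# Run each through psql (or any client) in a dedicated test window and
# observe how quickly the cluster recovers.
crash_tests = [
    "SELECT aurora_inject_crash('instance');",    # crash the DB instance
    "SELECT aurora_inject_crash('dispatcher');",  # crash the dispatcher
    "SELECT aurora_inject_crash('node');",        # crash the storage node
]
for query in crash_tests:
    print(query)
```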

&lt;p&gt;&lt;strong&gt;AWS Native Security Controls&lt;/strong&gt;&lt;br&gt;
You can leverage AWS native security controls such as security groups to protect at the instance level, i.e., you can control which source IPs, subnets, or security groups can communicate with your DB instance. While security groups provide network-level protection for your DB instances (RDS), the Key Management Service (KMS) provides encryption for them. You can bring your own keys or customer-generated key material as features to enhance security controls. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Identity and Access Control (IAM)&lt;/strong&gt;&lt;br&gt;
AWS IAM can be used to authenticate at the database level, in addition to the default authentication capabilities at the RDS (platform) level. This credential management is a better choice than managing credentials in each individual database layer. With this authentication method, you don't use a password when you connect to a DB instance. Instead, you use an authentication token: a unique string of characters that Amazon RDS generates on request using AWS Signature Version 4, with a lifetime of 15 minutes.&lt;/p&gt;

&lt;p&gt;Please refer: &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.DBAccounts.html&lt;/a&gt;&lt;br&gt;
There are limitations for this feature, To know about limitation, please refer:- &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Availability"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Availability&lt;/a&gt;&lt;/p&gt;
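&lt;p&gt;A sketch of the token flow with boto3: the rds client's generate_db_auth_token signs a short-lived token locally with SigV4, and that token is then used as the database password. The hostname and user below are hypothetical:&lt;/p&gt;

```python
# Arguments for rds.generate_db_auth_token; host and user are hypothetical.
# The resulting token is valid for 15 minutes and is used as the DB password.
token_request = {
    "DBHostname": "my-db.abc123.ap-south-1.rds.amazonaws.com",
    "Port": 3306,
    "DBUsername": "iam_app_user",  # a DB account enabled for IAM auth
}
# Live flow:
#   token = boto3.client("rds").generate_db_auth_token(**token_request)
#   connect(host=..., user="iam_app_user", password=token, ssl=...)  # SSL required
```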

&lt;p&gt;&lt;strong&gt;Secret Manager for RDS&lt;/strong&gt;&lt;br&gt;
AWS Secrets Manager protects the secrets needed to access your RDS database instances. The service enables you to easily rotate, manage, and retrieve database credentials and other secrets throughout their lifecycle, and it eliminates the need to store DB credentials as plain text in application configuration files.&lt;br&gt;
To learn more and integration with RDS: &lt;a href="https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html"&gt;https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html&lt;/a&gt;&lt;/p&gt;
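&lt;p&gt;A minimal sketch of consuming such a secret: the JSON payload below mirrors the standard RDS secret shape, with example-only values; in a live environment the string would come from secretsmanager.get_secret_value(SecretId=...) via boto3:&lt;/p&gt;

```python
import json

# Example-only secret document in the shape Secrets Manager stores for RDS.
secret_string = json.dumps({
    "engine": "mysql",
    "host": "my-db.abc123.ap-south-1.rds.amazonaws.com",  # hypothetical
    "username": "app_user",
    "password": "example-only",
    "port": 3306,
})
# Live: secret_string = sm.get_secret_value(SecretId="prod/rds")["SecretString"]
creds = json.loads(secret_string)
dsn = f"{creds['username']}@{creds['host']}:{creds['port']}"
print(dsn)
```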

&lt;p&gt;&lt;strong&gt;SSL Enforcement on RDS instance&lt;/strong&gt;&lt;br&gt;
To enhance security, use SSL or Transport Layer Security (TLS) when connecting from your source systems to a DB instance running MySQL, MariaDB, Microsoft SQL Server, Oracle, or PostgreSQL. Amazon RDS creates an SSL certificate, signed by a certificate authority, and installs it on the DB instance. &lt;br&gt;
You should configure the RDS DB instance to accept ONLY SSL connections through the RDS parameter group; this ensures that any non-SSL connection attempt always fails. &lt;br&gt;
 Note: each DB engine's configuration is different. The screenshot below is of a MySQL parameter group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--rw9r_qPz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5jt0mffrz511h3c26qf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--rw9r_qPz--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5jt0mffrz511h3c26qf.jpg" alt="Image description" width="625" height="198"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--fQNFWVPp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fclrtxxv658dp024yeei.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--fQNFWVPp--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fclrtxxv658dp024yeei.jpg" alt="Image description" width="625" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above configuration is of PostgreSQL 11.0&lt;/p&gt;
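&lt;p&gt;The two screenshots above correspond to one parameter each; as a sketch with hypothetical group names, the same SSL enforcement can be applied through rds.modify_db_parameter_group payloads (require_secure_transport for MySQL, rds.force_ssl for PostgreSQL):&lt;/p&gt;

```python
# Parameter-group payloads that force SSL-only connections.
mysql_force_ssl = {
    "DBParameterGroupName": "my-mysql-params",      # hypothetical
    "Parameters": [{
        "ParameterName": "require_secure_transport",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
}
postgres_force_ssl = {
    "DBParameterGroupName": "my-postgres-params",   # hypothetical
    "Parameters": [{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
}
# Live call: boto3.client("rds").modify_db_parameter_group(**mysql_force_ssl)
```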

&lt;p&gt;&lt;strong&gt;Read Replicas&lt;/strong&gt;&lt;br&gt;
Segregating read and write requests into separate databases (while ensuring data consistency) is one model adopted to achieve the best performance. Read replicas are available for MySQL, PostgreSQL, MariaDB, and Aurora. To extend read replicas for HA, deploy them into multiple Availability Zones (AZs); this provides AZ-level failure protection. &lt;/p&gt;

&lt;p&gt;AWS Aurora supports load balancing across read replicas and quick promotion of a read replica to the writer instance in the event of a primary failure (HA). You can also independently scale replicas based on your read requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Aurora Serverless&lt;/strong&gt;&lt;br&gt;
It was a breakthrough when autoscaling succeeded for databases; the incredible value of the cloud became reality when elasticity reached database technologies through serverless. With Aurora Serverless, you create a database, specify the desired capacity range, and connect your applications. Amazon Aurora Serverless scales instantly to hundreds of thousands of transactions in a fraction of a second. As it scales, it adjusts capacity in fine-grained increments to provide the right amount of database resources that the application needs.&lt;/p&gt;

&lt;p&gt;If you want to explore serverless Aurora on your existing provisioned Aurora cluster, you can add a new reader node as serverless. Through this model, you can leverage the capability of serverless in your existing cluster and learn how often the reader DB instances scale up and down. &lt;/p&gt;

&lt;p&gt;Distinct strategies are being adopted across the industry for serverless. Use cases range from applications with infrequent, intermittent, or unpredictable workloads to the most demanding, business-critical applications that require high scale and rapid incremental scaling, as well as unpredictable database capacity needs and infrequently used critical applications.&lt;/p&gt;

&lt;p&gt;To learn more about Aurora serverless, please refer: &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-serverless-v2.how-it-works.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
As detailed above, there is a wealth of capabilities and features added to the AWS RDS platform to superbly support your application demand, scalability needs, security requirements, and cost efficiency.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Data points you need to know about ARM for your application code migration.</title>
      <dc:creator>Karthik R</dc:creator>
      <pubDate>Tue, 03 May 2022 09:26:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/data-points-you-need-to-know-about-arm-for-your-application-code-migration-5c0f</link>
      <guid>https://dev.to/aws-builders/data-points-you-need-to-know-about-arm-for-your-application-code-migration-5c0f</guid>
      <description>&lt;p&gt;&lt;strong&gt;1. Introduction&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Advanced RISC Machines, or ARM,&lt;/strong&gt; architecture has seen wide adoption in the compute field over the last decade. Due to ARM’s built-in architectural capabilities, early adoption was strongest in mobile phones, tablets, set-top boxes, smart TVs, electronic wearables, and special hardware such as IoT devices, where ARM chips serve as microcontrollers or minicomputers. Low power consumption and reduced heat generation later made ARM a great fit in the server compute field as well. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How is ARM different from x86_64&lt;/strong&gt;&lt;br&gt;
The existing x86_64 architecture, widely adopted in desktops, laptops, servers, and other general-purpose compute infrastructure, evolved from the traditional 8085 and 8086 microprocessors. The basic difference between x86 (Intel) and ARM is that x86 is CISC (Complex Instruction Set Computing) based, while ARM is RISC (Reduced Instruction Set Computing) based.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ARM on AWS&lt;/strong&gt;&lt;br&gt;
AWS is one of the early adopters of the ARM architecture in its compute offering (Amazon EC2). AWS has developed its own processor family, Graviton, based on the ARM architecture and designed to deliver the best price performance. AWS made further enhancements with Graviton2, which delivers up to 40% better price performance in contrast to comparable x86-based instances. Graviton2 processors are custom built by AWS using 64-bit Arm Neoverse cores. All Graviton processors include dedicated cores &amp;amp; caches for each vCPU, along with additional security features courtesy of the AWS Nitro System; the Graviton2 processors add support for always-on memory encryption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Latest ARM based technology in AWS&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Graviton3-based instance families (the "7g" generation) are in preview and are expected to give a 25% performance improvement over the Graviton2 ("6g") family. The improvement will vary with workload characteristics; higher gains are expected in machine learning (ML) and cryptographic workloads due to wider floating-point support. DDR5 memory in Graviton3 will complement the performance improvement. &lt;br&gt;
Read details on &lt;a href="https://aws.amazon.com/blogs/aws/join-the-preview-amazon-ec2-c7g-instances-powered-by-new-aws-graviton3-processors/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/join-the-preview-amazon-ec2-c7g-instances-powered-by-new-aws-graviton3-processors/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS services that use ARM (Graviton) behind the scenes&lt;/strong&gt;&lt;br&gt;
Plenty of AWS managed services use Graviton processors behind the scenes, including Amazon Aurora, &lt;u&gt;Amazon ElastiCache, Amazon EMR, AWS Lambda, and AWS Fargate.&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ARM options in AWS EC2&lt;/strong&gt;&lt;br&gt;
AWS added Graviton-based "g"-suffixed instance types to the EC2 family in 2019, extending the choice of instance models. This gave many customers the flexibility to explore ARM's capabilities and benchmark performance improvements. Graviton instances are available in most of the common instance families, denoted by a "g" in the type name: for example, t4g for burstable general purpose, m6g for general-purpose workloads, c6g for compute-intensive workloads, r6g for memory-intensive workloads, and im4gn/is4gen for storage-intensive workloads. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Software programs available in ARM&lt;/strong&gt;&lt;br&gt;
There are plenty of programming languages with ARM support, which eases the adoption of your application platform. The latest editions of these languages have ARM support packages/libraries available; the list below shows the supported versions of some of them. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e23g0ymz5mmhiejuwv7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e23g0ymz5mmhiejuwv7.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reference: - &lt;a href="https://segmentfault.com/a/1190000041272174/en" rel="noopener noreferrer"&gt;https://segmentfault.com/a/1190000041272174/en&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An application already compiled for x86_64 may not port directly to ARM; you would need to upgrade its packages and, ideally, recompile the source code either as multi-architecture or for ARM alone. If your application code was developed recently using the latest version of its language, it may already have the needed capabilities and can go directly to a test bed, followed by sanity checks.&lt;/p&gt;
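&lt;p&gt;A small runtime guard is often useful while porting, for scripts that must pick an architecture-specific binary or wheel at run time; this sketch uses only the standard library:&lt;/p&gt;

```python
import platform

# Normalize the host machine string so deployment scripts can branch on it,
# e.g. before downloading an architecture-specific artifact.
def host_arch():
    machine = platform.machine().lower()
    if machine in ("aarch64", "arm64"):
        return "arm64"
    if machine in ("x86_64", "amd64"):
        return "x86_64"
    return machine  # something else, e.g. armv7l

print(host_arch())
```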

&lt;p&gt;There are also third-party solutions available for recompiling jobs. &lt;/p&gt;

&lt;p&gt;AWS has the Graviton Ready program, which validates software products built by its partners that integrate with specific services such as AWS Graviton.&lt;br&gt;
Details on:- &lt;a href="https://aws.amazon.com/blogs/apn/introducing-the-aws-graviton-ready-program-for-graviton-enabled-software-products/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/apn/introducing-the-aws-graviton-ready-program-for-graviton-enabled-software-products/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operating systems for ARM&lt;/strong&gt;&lt;br&gt;
To run your code, the operating system (platform) must also support ARM. An existing x86_64 operating system cannot run the ARM version of the code, so you need the corresponding ARM build of the operating system as well. Most Linux distributions have ARM support in their latest versions, among them &lt;u&gt;Ubuntu 20.04, Red Hat Enterprise Linux 7.4 for ARM, Debian GNU/Linux 9, Fedora, Linaro, and openSUSE&lt;/u&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Database engines available on ARM&lt;/strong&gt;&lt;br&gt;
Most popular open-source database technologies, such as MySQL, PostgreSQL, and MariaDB, already have ARM versions available. ARM has already demonstrated higher performance compared to the x86_64 architecture, and databases are among the technologies that benefit. Performance benchmarking of database workloads on Intel and ARM has shown a significant advantage for ARM under high-concurrency activity. &lt;/p&gt;

&lt;p&gt;Database Performance benchmarking report is available in &lt;a href="https://mysqlonarm.github.io/MySQL-on-x86-vs-ARM/" rel="noopener noreferrer"&gt;https://mysqlonarm.github.io/MySQL-on-x86-vs-ARM/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MySQL&lt;/strong&gt;: MySQL 8.0 supports ARM on the Oracle Linux 8 / Red Hat Enterprise Linux 8 / CentOS 8 operating systems. MySQL 8.0 also has a Docker image for Oracle Linux on the ARM architecture. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;PostgreSQL&lt;/strong&gt;: versions 13, and 12.3 &amp;amp; higher, support ARM-based processors on multiple operating system platforms. &lt;br&gt;
Multiple PostgreSQL Docker images are available in this repo: &lt;a href="https://hub.docker.com/_/postgres?tab=tags" rel="noopener noreferrer"&gt;https://hub.docker.com/_/postgres?tab=tags&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MariaDB&lt;/strong&gt;: MariaDB Enterprise Server (10.2-10.6) supports ARM64.&lt;br&gt;
&lt;a href="https://mysqlonarm.github.io/" rel="noopener noreferrer"&gt;https://mysqlonarm.github.io/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS managed databases on ARM&lt;/strong&gt;&lt;br&gt;
When deploying a supported database engine on AWS RDS, Graviton2-based DB instances are available alongside the Intel-based instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wdbpzuyf3a4vg0wv9tc.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6wdbpzuyf3a4vg0wv9tc.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the above diagram, db.m5.16xlarge is a DB instance with the x86 architecture and db.m6g.large is ARM (Graviton2) based. Although the instance selection is just a matter of choice, there are fundamental differences in how the two architectures work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Migrating X86 Architecture to Graviton-2&lt;/strong&gt;&lt;br&gt;
Migrating DB instances to Graviton2 is quite a simple task, considering that zero ecosystem changes are needed on the application and connector side of the database layer. &lt;br&gt;
Please go through the given link for the DB migration steps.&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/database/key-considerations-in-moving-to-graviton2-for-amazon-rds-and-amazon-aurora-databases/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/database/key-considerations-in-moving-to-graviton2-for-amazon-rds-and-amazon-aurora-databases/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container platform in ARM&lt;/strong&gt;&lt;br&gt;
As most Linux distributions are available on ARM, container platforms have great flexibility in ARM adoption. As captured above, Fargate, the serverless model of the AWS container services, uses ARM-based EC2 instances behind the scenes to spin up compute resources for your tasks. &lt;br&gt;
In both AWS managed container services (ECS and EKS), you can use Graviton2-based instances in your container cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq1j9qcxkyrscjv3slfe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqq1j9qcxkyrscjv3slfe.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above screenshot is from creating an ECS cluster, where you can select an instance type from the "g" families. Along similar lines, when creating a node group in an EKS cluster, you can select Amazon Linux 2 ARM or Bottlerocket ARM 64 for Graviton2-based systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccty0usrnxjfb8osg8jh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccty0usrnxjfb8osg8jh.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Running Mix of X86 and Graviton on ECS cluster&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/compute/supporting-aws-graviton2-and-x86-instance-types-in-the-same-auto-scaling-group/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/compute/supporting-aws-graviton2-and-x86-instance-types-in-the-same-auto-scaling-group/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/compute/how-to-quickly-setup-an-experimental-environment-to-run-containers-on-x86-and-aws-graviton2-based-amazon-ec2-instances-effort-to-port-a-container-based-application-from-x86-to-graviton2/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/compute/how-to-quickly-setup-an-experimental-environment-to-run-containers-on-x86-and-aws-graviton2-based-amazon-ec2-instances-effort-to-port-a-container-based-application-from-x86-to-graviton2/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Approach to Move to ARM&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To benefit from the performance improvement, most application code can be re-ported/recompiled for ARM from its existing code base. There are no hard-and-fast rules for migrating or porting application code; the approach is entirely up to the customer's priorities. Some adoption approaches are shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnih6qw7ggomzqfj2ix8.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvnih6qw7ggomzqfj2ix8.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The compute space is embracing ARM-based architectures for their performance benefits and cost efficiencies. The software application space is also adopting this change at large scale; it is time for end customers and their business applications to adopt this performant compute infrastructure. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>awscloud</category>
      <category>cloud</category>
      <category>serverless</category>
    </item>
  </channel>
</rss>
