<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hari Karthigasu</title>
    <description>The latest articles on DEV Community by Hari Karthigasu (@harik8).</description>
    <link>https://dev.to/harik8</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F718421%2F00fe4402-bf9a-47d2-8774-d0518b849963.jpg</url>
      <title>DEV Community: Hari Karthigasu</title>
      <link>https://dev.to/harik8</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/harik8"/>
    <language>en</language>
    <item>
      <title>Boost Developer productivity and DBOps efficiency with AWS Aurora Cloning</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Wed, 12 Nov 2025 10:39:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/boost-developer-productivity-and-dbops-efficiency-with-aws-aurora-cloning-5cc8</link>
      <guid>https://dev.to/aws-builders/boost-developer-productivity-and-dbops-efficiency-with-aws-aurora-cloning-5cc8</guid>
      <description>&lt;h2&gt;
  
  
  Context
&lt;/h2&gt;

&lt;p&gt;There are numerous situations where we need an independent, isolated database: developers want to test their code, the Ops team needs to run upgrade tests, and so on. The common approach is restoring a snapshot. However, for repetitive tasks this becomes expensive and inefficient over time.&lt;/p&gt;

&lt;h2&gt;
  
  
  AWS Aurora Clone
&lt;/h2&gt;

&lt;p&gt;The "AWS Aurora Clone" becomes handy in this situation, where it provides a more efficient solution in terms of operation and cost.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;An Aurora clone uses a &lt;code&gt;copy-on-write&lt;/code&gt; protocol: the source and clone cluster(s) share the same storage volume, but updates are visible only to the cluster that made them.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Below, I share how Aurora cloning has been integrated into our GH actions to let developers spin up, on demand, an independent and isolated Aurora RDS cluster containing the same data as a given (source) cluster to test their code against.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution
&lt;/h2&gt;

&lt;p&gt;In my organization, we have dedicated AWS accounts for GH runners and applications. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyypx666i9zizf2w8p70.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyypx666i9zizf2w8p70.jpg" alt=" " width="800" height="692"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Application accounts have two lambda functions. &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Aurora Clone - Invoked by the GH action; performs the Aurora clone operation.&lt;/li&gt;
&lt;li&gt;Aurora Clone Purge - Purges the cloned DB cluster after a defined number of days, once it is no longer needed; invoked by an EventBridge scheduler.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z9aotozc6k5b6o4kdhf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5z9aotozc6k5b6o4kdhf.jpg" alt=" " width="371" height="541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A developer creates a PR with a label &lt;code&gt;clone-db&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;GH action invokes the Lambda function, which requires the &lt;code&gt;SOURCE_DB_CLUSTER&lt;/code&gt; name and the &lt;code&gt;PR_NUMBER&lt;/code&gt; as inputs.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda invoke \
    --function-name arn:aws:lambda:$AWS_REGION:$AWS_ACCOUNT_ID:function:aurora_clone \
    --region $AWS_REGION \
    --cli-binary-format raw-in-base64-out \
    --payload "{ \"SOURCE_DB_CLUSTER\": \"$SOURCE_DB_CLUSTER\", \"PR_NUMBER\": \"$PR_NUMBER\" }" response.json
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;response.json&lt;/code&gt; file contains the endpoint of the clone cluster.&lt;/li&gt;
&lt;li&gt;The deployment manifest will be updated with the clone cluster's endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Even though &lt;code&gt;aws lambda invoke&lt;/code&gt; is a synchronous call, it won't block the pipeline, because the Aurora cloning operation itself runs asynchronously.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
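&lt;p&gt;To make the flow concrete, here is a minimal Python sketch of what an &lt;code&gt;aurora_clone&lt;/code&gt; Lambda handler could look like. This is &lt;em&gt;not&lt;/em&gt; our actual code: the clone naming convention and the return shape are assumptions. Aurora cloning is exposed through the &lt;code&gt;restore_db_cluster_to_point_in_time&lt;/code&gt; API with &lt;code&gt;RestoreType="copy-on-write"&lt;/code&gt;, and the call returns before the clone is ready.&lt;/p&gt;

```python
# Hypothetical sketch of an aurora_clone Lambda handler; identifiers and
# return shape are illustrative, not the article's actual implementation.

def clone_identifier(source_cluster: str, pr_number: str) -> str:
    """Derive a deterministic clone name from the source cluster and PR number."""
    return f"{source_cluster}-clone-pr-{pr_number}"

def handler(event, context):
    import boto3  # imported lazily so the helper above is usable standalone

    rds = boto3.client("rds")
    source = event["SOURCE_DB_CLUSTER"]
    clone = clone_identifier(source, event["PR_NUMBER"])

    # copy-on-write is what makes this a clone rather than a full restore;
    # the API call returns immediately while AWS provisions the clone.
    rds.restore_db_cluster_to_point_in_time(
        DBClusterIdentifier=clone,
        SourceDBClusterIdentifier=source,
        RestoreType="copy-on-write",
        UseLatestRestorableTime=True,
    )
    return {"clone_cluster": clone}
```

&lt;p&gt;Note that a real handler would also create at least one DB instance in the clone cluster and wait for (or poll) its endpoint before returning it to the pipeline.&lt;/p&gt;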

&lt;p&gt;HAPPY CLONING!  &lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Managing.Clone.html" rel="noopener noreferrer"&gt;Aurora clone documentation&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/harik8/PaC/tree/main/tofu/src/lambda_aurora_clone" rel="noopener noreferrer"&gt;Source code of Lambdas&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/harik8/PaC/blob/main/tofu/lambda.aurora-clone.tofu" rel="noopener noreferrer"&gt;Tofu code to deploy the stack&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>githubactions</category>
      <category>rds</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Why S3 Intelligent-Tiering Should Be Your Default Storage Class for Large-Scale Buckets?</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Wed, 27 Aug 2025 20:46:52 +0000</pubDate>
      <link>https://dev.to/aws-builders/why-s3-intelligent-tiering-should-be-your-default-storage-class-for-large-scale-buckets-417m</link>
      <guid>https://dev.to/aws-builders/why-s3-intelligent-tiering-should-be-your-default-storage-class-for-large-scale-buckets-417m</guid>
      <description>&lt;h1&gt;
  
  
  Context
&lt;/h1&gt;

&lt;p&gt;Storing objects in S3 for an extended period increases both storage size and cost, especially when you don't access them frequently. You can configure lifecycle rules or implement a custom solution to move objects between storage classes, but either may come with drawbacks.&lt;/p&gt;

&lt;h1&gt;
  
  
  S3 Intelligent-Tiering
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;S3 Intelligent-Tiering&lt;/strong&gt; moves an object to low-cost S3 storage based on its access frequency pattern, while preserving low latency and high throughput.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frequent Access, Infrequent Access, and Archive Instant Access&lt;/strong&gt; are the three access tiers across which the S3 Intelligent-Tiering storage class automatically stores objects. &lt;/p&gt;

&lt;p&gt;The &lt;strong&gt;Frequent Access tier&lt;/strong&gt; is optimized for frequent access, the &lt;strong&gt;Infrequent Access tier&lt;/strong&gt; is a lower-cost tier optimized for infrequent access, and the &lt;strong&gt;Archive Instant Access tier&lt;/strong&gt; is a very low-cost tier optimized for rarely accessed data. &lt;/p&gt;

&lt;p&gt;Your objects will float across these three tiers depending on their access patterns. All three tiers provide low-latency, high-throughput access to your objects. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If an object in the Infrequent Access tier or Archive Instant Access tier is accessed later, it is automatically moved back to the Frequent Access tier.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The Archive Access tier and the Deep Archive Access tier are optional and operate only when activated; use them if you want the lowest storage cost for data that can tolerate retrieval times of minutes to hours.&lt;/p&gt;

&lt;p&gt;If they are not activated, your objects stay in the Archive Instant Access tier, which still provides low-latency, high-throughput access to your objects.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Frequent Access Tier
         |
- Object has not been accessed for 30 days
         |
Infrequent Access Tier
         |
- Object has not been accessed for 90 days
         |
Archive Instant Access Tier 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Activate S3 Intelligent-Tiering
&lt;/h1&gt;

&lt;p&gt;When you create a new S3 bucket, straight away you can select the storage class as S3 Intelligent-Tiering. For an existing bucket, create a life cycle rule to move the objects to S3 Intelligent-Tiering.&lt;/p&gt;
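&lt;p&gt;For the existing-bucket case, the lifecycle rule can also be applied programmatically. A hedged boto3 sketch (bucket name and rule ID are placeholders, not from this article):&lt;/p&gt;

```python
# Hedged sketch: apply a lifecycle rule that transitions all objects to
# INTELLIGENT_TIERING. The bucket name and rule ID are placeholders.

def intelligent_tiering_rule(rule_id: str = "to-intelligent-tiering", days: int = 0) -> dict:
    """Build a lifecycle rule moving every object into S3 Intelligent-Tiering."""
    return {
        "ID": rule_id,
        "Status": "Enabled",
        "Filter": {},  # empty filter = apply to the whole bucket
        "Transitions": [{"Days": days, "StorageClass": "INTELLIGENT_TIERING"}],
    }

def apply_rule(bucket: str) -> None:
    import boto3  # lazy import so the rule builder above is usable standalone

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [intelligent_tiering_rule()]},
    )
```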

&lt;h1&gt;
  
  
  Visualization
&lt;/h1&gt;

&lt;p&gt;After activating S3 Intelligent-Tiering for an existing bucket, the image below shows objects being transferred to S3 Intelligent-Tiering while the bucket continues to grow. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66l70xb8oipmolepdjnc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F66l70xb8oipmolepdjnc.png" alt=" " width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;Orange - Standard&lt;/code&gt; &lt;code&gt;Rest   - S3 Intelligent-Tiering&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;However, you may be surprised by the huge spike in the following month's bill: it can be double the previous month's amount.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xjuxvh6ie1wq3t09cy5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5xjuxvh6ie1wq3t09cy5.png" alt="Cost of an Intelligent-Tier enabled S3 bucket" width="501" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;S3 Intelligent-Tiering was enabled in month 7, and the cost doubled in month 8.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;WHY?&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During the first month(s), the life cycle rule transfers the objects from the STANDARD to the S3 INTELLIGENT-TIERING class.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;$0.01 per 1,000 transitions to Intelligent-Tiering&lt;/code&gt; -&amp;gt; So, if you have 200 million objects, you'll pay &lt;code&gt;~$2,000&lt;/code&gt;. In addition, &lt;br&gt;
S3 Intelligent-Tiering monitors objects larger than &lt;code&gt;128KB&lt;/code&gt; in order to move them between access tiers, at &lt;code&gt;$0.0025 per 1,000 objects per month in Intelligent-Tiering&lt;/code&gt;.&lt;/p&gt;
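&lt;p&gt;The arithmetic behind those figures, as a quick sanity check (using the prices quoted above; always verify against current AWS pricing):&lt;/p&gt;

```python
# Back-of-envelope check of the figures above, using the per-1,000-object
# prices quoted in this article (verify against current AWS pricing).

def transition_cost(objects: int, price_per_1000: float = 0.01) -> float:
    """One-time charge for lifecycle transitions into Intelligent-Tiering."""
    return objects / 1_000 * price_per_1000

def monthly_monitoring_cost(objects: int, price_per_1000: float = 0.0025) -> float:
    """Monthly monitoring charge for objects larger than 128 KB."""
    return objects / 1_000 * price_per_1000

print(transition_cost(200_000_000))          # one-time: 2000.0
print(monthly_monitoring_cost(200_000_000))  # per month: 500.0
```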

&lt;p&gt;Nevertheless, it'll be hugely beneficial in the long run for large-scale S3 buckets whose objects have unpredictable or fluctuating access patterns.&lt;br&gt;
&lt;strong&gt;As the graph shows, the cost slowly declines after the 8th month, and by the 16th month it is lower than in the 0th month.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>s3</category>
      <category>costoptmization</category>
      <category>devops</category>
    </item>
    <item>
      <title>AWS MSK IAM Authentication CLI commands</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Sun, 10 Aug 2025 21:02:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-msk-iam-authentication-cli-commands-4il4</link>
      <guid>https://dev.to/aws-builders/aws-msk-iam-authentication-cli-commands-4il4</guid>
      <description>&lt;p&gt;When you have a Kafka cluster in AWS MSK with IAM auth, there will be situations where you need to interact with its CLI to view the resources or for troubleshooting. During authentication, you should pass a properties file containing auth parameters. &lt;/p&gt;

&lt;p&gt;This bash script will set up the Kafka CLI to connect to the MSK cluster.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# variables
BROKER_ENDPOINT=$MSK_ENDPOINT
KAFKA_VERSION=3.8.1
BINARY_VERSION=2.13
IAM_AUTH_CLI_VERSION=2.13.1

# Download Kafka Binary
wget https://archive.apache.org/dist/kafka/$KAFKA_VERSION/kafka_$BINARY_VERSION-$KAFKA_VERSION.tgz
tar -zxvf kafka_$BINARY_VERSION-$KAFKA_VERSION.tgz
cd kafka_$BINARY_VERSION-$KAFKA_VERSION
cd libs/

# Download AWS MSK IAM CLI
wget https://github.com/aws/aws-msk-iam-auth/releases/download/v$IAM_AUTH_CLI_VERSION/aws-msk-iam-auth-$IAM_AUTH_CLI_VERSION-all.jar
cd ../bin/

# AWS IAM Auth file 
cat &amp;lt;&amp;lt;EOF &amp;gt; client.properties
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd kafka_$IAM_AUTH_CLI_VERSION-$KAFKA_VERSION/bin
./kafka-topics.sh --bootstrap-server $BROKER_ENDPOINT --command-config client.properties --list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>msk</category>
      <category>kafka</category>
    </item>
    <item>
      <title>Set custom configuration in AWS EKS CoreDNS Addon</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Fri, 18 Jul 2025 21:32:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/set-custom-configuration-in-aws-eks-coredns-addon-fh2</link>
      <guid>https://dev.to/aws-builders/set-custom-configuration-in-aws-eks-coredns-addon-fh2</guid>
      <description>&lt;p&gt;When you enable managed addons in EKS, they come with predefined configurations. Nevertheless, there are situations where we have to override them. This &lt;strong&gt;gist&lt;/strong&gt; shows how to set custom configuration for the CoreDNS addon using &lt;code&gt;terraform-aws-modules/terraform-aws-eks&lt;/code&gt; and via the AWS console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;...
addons = {
    coredns = {
      addon_version = "v1.11.4-eksbuild.2"
      most_recent   = true
      configuration_values = &amp;lt;&amp;lt;EOT
      {
        "corefile": ".:53 {\n  errors\n  health {\n    lameduck 5s\n  }\n  ready\n  kubernetes cluster.local in-addr.arpa ip6.arpa {\n    pods insecure\n    fallthrough in-addr.arpa ip6.arpa\n  }\n  prometheus :9153\n  forward . /etc/resolv.conf\n  cache 30\n  loop\n  reload\n  loadbalance\n}",
        "autoScaling": {
          "enabled": true,
          "minReplicas": 4,
          "maxReplicas": 8
        },
        "tolerations": [
          {
            "key": "AppsOnly",
            "effect": "NoSchedule",
            "operator": "Equal",
            "value": "apps"
          },
          {
            "key": "CriticalAddonsOnly",
            "effect": "NoSchedule",
            "operator": "Exists"
          }
        ]
      }
      EOT
    }
  }
...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "eks" {
  source = "terraform-aws-modules/eks/aws"
  ...
  cluster_addons = var.addons
  ...
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7reoylvq8ex4j5ddgi47.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7reoylvq8ex4j5ddgi47.png" alt=" " width="800" height="783"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;&lt;strong&gt;OpsGist - Tried‑and‑worked snippets and insights I’ve come across.&lt;/strong&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>aws</category>
      <category>eks</category>
      <category>opsgist</category>
    </item>
    <item>
      <title>AWS Cross-Account Read-Only RDS access via Private Link</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Sun, 06 Jul 2025 20:21:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-cross-account-read-only-rds-access-via-private-link-4l8o</link>
      <guid>https://dev.to/aws-builders/aws-cross-account-read-only-rds-access-via-private-link-4l8o</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;CONTEXT&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Cross-account resource sharing is one of the critical operations in AWS. I have elaborated on the solution to grant READ-ONLY RDS database access to an external AWS account.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;SOLUTION&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy79oji490j7w1u1n3q44.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy79oji490j7w1u1n3q44.jpg" alt=" " width="800" height="332"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;DESIGN RATIONALE&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1.&lt;/strong&gt; VPC Endpoint Service with Private Link&lt;/p&gt;

&lt;p&gt;AWS VPC Endpoint Service, powered by PrivateLink, enables secure and effortless connectivity between two VPCs with fine-grained access controls. Unlike VPC peering or Transit Gateway (TGW) integration, which provides broader network access, PrivateLink ensures a more restricted and secure connection.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Check out my article &lt;a href="https://dev.to/aws-builders/aws-vpc-endpoint-services-for-nlb-powered-by-private-link-5b2j"&gt;AWS VPC endpoint services for NLB powered by Private Link&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;2.&lt;/strong&gt; RDS Proxy&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;RDS Proxy Read-Only endpoint&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The most important requirement is to grant read-only access to the RDS. This can be achieved by creating a database user with SELECT-only privileges at the database level. However, human error could modify these permissions. The RDS Proxy read-only endpoint enforces read-only operations for clients by routing traffic exclusively to read-replica instances in the backend, regardless of the user’s database-level permissions, providing an extra layer of protection.&lt;/p&gt;
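&lt;p&gt;For reference, a read-only proxy endpoint can be created with the &lt;code&gt;create_db_proxy_endpoint&lt;/code&gt; API; the sketch below is illustrative only (proxy/endpoint names and subnet IDs are placeholders, not our deployment code):&lt;/p&gt;

```python
# Hypothetical sketch: create a read-only endpoint on an existing RDS Proxy.
# Proxy/endpoint names and subnet IDs are placeholders.

def endpoint_request(proxy: str, endpoint: str, subnet_ids: list) -> dict:
    """Request body for create_db_proxy_endpoint; READ_ONLY routes to readers only."""
    return {
        "DBProxyName": proxy,
        "DBProxyEndpointName": endpoint,
        "VpcSubnetIds": subnet_ids,
        "TargetRole": "READ_ONLY",
    }

def create_readonly_endpoint(proxy: str, endpoint: str, subnet_ids: list) -> dict:
    import boto3  # lazy import so endpoint_request stays usable standalone

    return boto3.client("rds").create_db_proxy_endpoint(
        **endpoint_request(proxy, endpoint, subnet_ids)
    )
```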

&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Static IPs&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An RDS Proxy endpoint maintains a static IP address throughout its lifecycle. This allows you to create a target group behind a Network Load Balancer (NLB) using the RDS proxy’s read-only endpoint IPs, enabling consistent and reliable connectivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3.&lt;/strong&gt; A Secrets Manager secret encrypted with a CMK&lt;/p&gt;

&lt;p&gt;RDS Proxy requires database credentials to connect to the database. These credentials are stored in AWS Secrets Manager and encrypted using a CMK, as they need to be shared with an external account for access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4.&lt;/strong&gt; Lambda function&lt;/p&gt;

&lt;p&gt;To enhance security, the database credentials should be rotated on a regular schedule. A Lambda function handles the rotation by updating the credentials both in AWS Secrets Manager and the database.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Once the VPC Endpoint Service is established between two accounts or VPCs, the external account can connect to our database using one of the endpoints provided by the service.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;CHALLENGES&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Even though IAM authentication is enabled on the RDS cluster, the RDS Proxy (or any client connecting through it) still requires database credentials to establish a connection.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As a standard practice in my organization, we use SSM Parameter Store to manage and store secrets. In this case, using AWS Secrets Manager added an additional layer. To align with our standards, I tried to implement referencing &lt;a href="https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html" rel="noopener noreferrer"&gt;AWS Secrets Manager secrets from Parameter Store parameters&lt;/a&gt;. However, the integration was unsuccessful due to a bug, and I have raised a ticket with AWS Support. Currently, there is no ETA for resolution.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;If this approach had been successful, we could have created Advanced Tier Parameter Store entries that reference Secrets Manager, making it easy to share them with the external account. This would have reduced direct calls to Secrets Manager from the external application.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;After a successful implementation in the development environment, the same setup encountered issues in production: a user was unable to execute queries when connecting to the RDS via RDS Proxy. The issue was resolved after recreating the RDS Proxy. We contacted AWS Support, but the root cause remains unclear.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>aws</category>
      <category>rds</category>
    </item>
    <item>
      <title>AWS SSM Association - Schedule Stop and Start RDS</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Sat, 14 Jun 2025 18:49:57 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-ssm-association-schedule-stop-and-start-rds-power-of-aws-ssm-ep-1-6g4</link>
      <guid>https://dev.to/aws-builders/aws-ssm-association-schedule-stop-and-start-rds-power-of-aws-ssm-ep-1-6g4</guid>
      <description>&lt;p&gt;&lt;strong&gt;CONTEXT&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS RDS is an essential and high-cost service. Improving its cost efficiency will help control an AWS account's overall expenses. For non-production environments, it is advisable to shut down RDS databases outside of working hours to reduce the unnecessary costs they incur. &lt;/p&gt;

&lt;p&gt;Usually, we utilize an EventBridge scheduler to start and stop an RDS service via a Lambda function. This post shows, step by step, the Terraform code to implement the same with an AWS Systems Manager (SSM) Association instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;SOLUTION&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_iam_policy_document" "iam_ssm_policy_stop_aurora_cluster" {
  count = var.environment == "prod" ? 0 : 1

  statement {
    sid    = "StopAuroraCluster"
    effect = "Allow"
    actions = [
      "rds:StopDBCluster",
      "rds:StartDBCluster"
    ]
    resources = ["arn:aws:rds:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:cluster:${var.rds_cluster_name}"]
  }

  statement {
    sid    = "DescribeAuroraClusters"
    effect = "Allow"
    actions = [
      "rds:DescribeDBClusters"
    ]
    resources = ["*"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "iam_ssm_policy_stop_aurora_cluster" {
  count   = var.environment == "prod" ? 0 : 1
  source  = "terraform-aws-modules/iam/aws//modules/iam-policy"

  name        = "rds-start-stop-aurora-cluster-policy"
  path        = "/"
  description = "IAM Policy to allow SSM to start and stop cluster."

  policy = data.aws_iam_policy_document.iam_ssm_policy_stop_aurora_cluster[0].json
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "iam_assumable_role_stop_aurora_cluster" {
  count   = var.environment == "prod" ? 0 : 1
  source  = "terraform-aws-modules/iam/aws//modules/iam-assumable-role"

  create_role             = true
  create_instance_profile = false

  role_name              = "start-stop-aurora-cluster-role"
  role_requires_mfa      = false
  allow_self_assume_role = false

  trusted_role_services = [
    "ssm.amazonaws.com"
  ]

  custom_role_policy_arns = concat(
    [
      module.iam_ssm_policy_stop_aurora_cluster[0].arn,
    ]
  )
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ssm_association" "ssm_stop_aurora_cluster_association" {
  count = var.environment == "prod" ? 0 : 1

  name                = "AWS-StartStopAuroraCluster"
  association_name    = "stop-aurora-cluster"
  schedule_expression = "cron(0 18 * * ? *)"

  parameters = {
    ClusterName          = "${var.rds_cluster_name}"
    AutomationAssumeRole = module.iam_assumable_role_stop_aurora_cluster[0].iam_role_arn
    Action               = "Stop"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_ssm_association" "ssm_start_aurora_cluster_association" {
  count = var.environment == "prod" ? 0 : 1

  name                = "AWS-StartStopAuroraCluster"
  association_name    = "start-aurora-cluster"
  schedule_expression = "cron(0 8 * * ? *)"

  parameters = {
    ClusterName          = "${var.rds_cluster_name}"
    AutomationAssumeRole = module.iam_assumable_role_stop_aurora_cluster[0].iam_role_arn
    Action               = "Start"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
    </item>
    <item>
      <title>AWS VPC endpoint services for NLB powered by Private Link.</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Tue, 07 Jan 2025 23:02:42 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-vpc-endpoint-services-for-nlb-powered-by-private-link-5b2j</link>
      <guid>https://dev.to/aws-builders/aws-vpc-endpoint-services-for-nlb-powered-by-private-link-5b2j</guid>
      <description>&lt;p&gt;Recently in my organization, there was a requirement to connect to a private endpoint in &lt;strong&gt;Account A&lt;/strong&gt; from &lt;strong&gt;Account B&lt;/strong&gt;. When such a requirement comes, VPC peering is the first solution that comes to our mind. However, if the given endpoint is hosted behind an NLB, it can simply connected via a VPC endpoint service which is powered by AWS Private Link. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;In Account A&lt;/em&gt;&lt;/strong&gt;, create the NLB and the VPC endpoint service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc_endpoint_service" "this" {
  # The ARN of the NLB
  network_load_balancer_arns = [module.nlb.arn]

  # DNS of the private endpoint
  private_dns_name    = var.private_dns_name

  # Accept or Reject endpoint connections from other AWS accounts
  acceptance_required = true

  tags = {
    Name = "${terraform.workspace}-nlb"
  }
}

resource "aws_vpc_endpoint_service_allowed_principal" "this" {
  vpc_endpoint_service_id = aws_vpc_endpoint_service.this.id

  # Allow principal to create endpoint connection
  principal_arn = "arn:aws:iam::${var.account_b_id}:root"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;Service name&lt;/code&gt; is required when we configure the VPC endpoint in Account B.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhlcarjyv7o2h6dbaft5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwhlcarjyv7o2h6dbaft5.png" alt="vpc endpoint service" width="800" height="49"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add the &lt;code&gt;TXT&lt;/code&gt; record to your domain. After a successful validation, the &lt;code&gt;Domain verification status&lt;/code&gt; will be shown as &lt;strong&gt;Verified&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oylb1teeaac2dfgi0f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4oylb1teeaac2dfgi0f5.png" alt="private dns" width="800" height="84"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;In Account B&lt;/em&gt;&lt;/strong&gt;, create a VPC endpoint for the VPC endpoint service created above.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "vpc_endpoints" {
  source  = "terraform-aws-modules/vpc/aws//modules/vpc-endpoints"
  ...
  endpoints = {
    "nlb" = {
      service_name        = "com.amazonaws.vpce.eu-north-1.vpce-svc-0f61ad0e435a4680c"
      subnet_ids          = module.vpc.private_subnets
      private_dns_enabled = true
      service_type        = "Interface"
      tags                = { Name = "${terraform.workspace}-nlb" }
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dcnjt72wfnemjlrq4yv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4dcnjt72wfnemjlrq4yv.png" alt="vpc endpoint" width="800" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go back to  &lt;strong&gt;Account A&lt;/strong&gt; and accept the endpoint connection request that comes from &lt;strong&gt;Account B&lt;/strong&gt;, under the &lt;strong&gt;Endpoint connections&lt;/strong&gt; tab in &lt;strong&gt;Endpoint services&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Now try to access the private endpoint hosted in &lt;strong&gt;Account A&lt;/strong&gt; from &lt;strong&gt;Account B&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ curl nlb.petproject.my
&amp;lt;!DOCTYPE html&amp;gt;
&amp;lt;html&amp;gt;
&amp;lt;head&amp;gt;
&amp;lt;title&amp;gt;Welcome to nginx!&amp;lt;/title&amp;gt;
&amp;lt;style&amp;gt;
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
&amp;lt;/style&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
&amp;lt;h1&amp;gt;Welcome to nginx!&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;For online documentation and support please refer to
&amp;lt;a href="http://nginx.org/"&amp;gt;nginx.org&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
Commercial support is available at
&amp;lt;a href="http://nginx.com/"&amp;gt;nginx.com&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Thank you for using nginx.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/body&amp;gt;
&amp;lt;/html&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thank you for reading!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>nlb</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>Understanding self-assumption and scoped-down policy in AWS IAM</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Mon, 08 Jul 2024 21:13:55 +0000</pubDate>
      <link>https://dev.to/aws-builders/understanding-self-assumption-and-scoped-down-policy-in-aws-iam-2io</link>
      <guid>https://dev.to/aws-builders/understanding-self-assumption-and-scoped-down-policy-in-aws-iam-2io</guid>
<description>&lt;p&gt;AWS IAM is a foundational service for all 200+ AWS services, as it governs how AWS principals interact with them. An AWS IAM role consists of two components: a policy and a trust relationship. The trust relationship handles authentication, while the policy handles authorization.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e3k60zid5nx0s4f1ab5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9e3k60zid5nx0s4f1ab5.jpg" alt=" " width="441" height="151"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The trust relationship specifies which AWS principals are allowed to assume the role. What does it mean to assume a role? In a nutshell, an entity uses AWS STS to assume the role, for example by calling &lt;code&gt;aws sts assume-role&lt;/code&gt;. Once an entity assumes the role, it can execute the actions specified in the attached policy. It's therefore important to follow best practices and choose suitable patterns when implementing IAM. &lt;/p&gt;
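&lt;p&gt;As an illustration, the assume-role call and the temporary credentials it returns can be sketched in Python; the helper is my own, and &lt;code&gt;sts&lt;/code&gt; is assumed to be a boto3 STS client (&lt;code&gt;boto3.client("sts")&lt;/code&gt;):&lt;/p&gt;

```python
# Sketch: assume a role via STS and reuse the temporary credentials.
# `sts` is assumed to be a boto3 STS client, e.g. sts = boto3.client("sts").
def assume_role(sts, role_arn, session_name):
    """Assume `role_arn` and return kwargs for constructing a new boto3 client."""
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
    creds = resp["Credentials"]
    return {
        "aws_access_key_id": creds["AccessKeyId"],
        "aws_secret_access_key": creds["SecretAccessKey"],
        "aws_session_token": creds["SessionToken"],
    }

# The returned dict can be passed straight to boto3.client("s3", **kwargs),
# so every subsequent call runs with the assumed role's permissions.
```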

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Self-assumption&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Have you ever encountered a scenario where an IAM role assumes itself? It may sound odd, yet it's a real pattern. An IAM role needs to be explicitly allowed to assume itself, as it doesn't have self-assumption capability by default. This requirement improves the consistency and visibility of a role's privileges.&lt;/p&gt;

&lt;p&gt;To elaborate, I have an IAM role &lt;code&gt;GHAction-Role&lt;/code&gt; whose trust policy allows &lt;code&gt;AssumeRoleWithWebIdentity&lt;/code&gt; so that GitHub Actions can authenticate to AWS, and a GitHub Actions workflow that assumes it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GithubOidcAuth",
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:AssumeRoleWithWebIdentity"
            ],
            "Condition": {
                "StringLike": {
                    "token.actions.githubusercontent.com:sub": "repo:harik8/services:*"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;  STS:
    runs-on: ubuntu-latest
    needs: [CI]
    steps:
    - name: Git clone the repository
      uses: actions/checkout@v4

    - name: configure aws credentials
      uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: ${{ vars.IAM_ROLE_ARN }} # ARN of the GHAction-Role
        aws-region: ${{ vars.AWS_REGION }}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output of the above action:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::123456789012:role/GHAction-Role
    aws-region: eu-north-1
    audience: sts.amazonaws.com
Assuming role with OIDC
Authenticated as assumedRoleId AROASIGA2HTHJOXZFKTPL:GitHubActions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The GitHub workflow is able to assume the role using WebIdentity. However, if the GitHub workflow tries to perform &lt;code&gt;sts:AssumeRole&lt;/code&gt; against the &lt;code&gt;GHAction-Role&lt;/code&gt;, it will encounter an issue.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws sts assume-role --role-arn arn:aws:iam::123456789012:role/GHAction-Role --role-session-name GitHubActions 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::12345678912:assumed-role/GHAction-Role/GitHubActions is not authorized to perform: sts:AssumeRole on resource: arn:aws:sts::12345678912:assumed-role/GHAction-Role/GitHubActions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The trust policy of the &lt;code&gt;GHAction-Role&lt;/code&gt; doesn't currently allow it to assume the role. To resolve this, the &lt;code&gt;GHAction-Role&lt;/code&gt; needs to be able to assume itself. Therefore, the &lt;code&gt;GHAction-Role&lt;/code&gt;'s ARN &lt;code&gt;arn:aws:iam::123456789012:role/GHAction-Role&lt;/code&gt; should be added to the trust policy to permit this action as shown below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GithubOidcAuth",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/GHAction-Role",
                "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:AssumeRoleWithWebIdentity"
            ],
            "Condition": {
                "StringLike": {
                    "token.actions.githubusercontent.com:sub": "repo:harik8/services:*"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Scoped down policy&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3k9ejfqdxu3cde6y9txs.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3k9ejfqdxu3cde6y9txs.jpg" alt=" " width="461" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If a given GitHub Action runs more than one job and requires different permissions for each, using a single &lt;code&gt;GHAction-Role&lt;/code&gt; with maximum permissions is not a good design practice as it violates IAM's principle of least privilege. This is where scoped-down policies come into play.&lt;/p&gt;

&lt;p&gt;A scoped-down policy refers to a policy that grants the minimum set of permissions required for a user, group, or role to perform their necessary tasks.&lt;/p&gt;

&lt;p&gt;Instead of having one generic role, &lt;code&gt;GHAction-Role&lt;/code&gt;, with all required policies attached, we should create a specific role for each job with the necessary least privileges. For example, we would have roles like &lt;code&gt;GHAction-Role-S3&lt;/code&gt;, &lt;code&gt;GHAction-Role-EC2&lt;/code&gt;, and &lt;code&gt;GHAction-Role-EKS&lt;/code&gt;. These roles would be assumed by &lt;code&gt;GHAction-Role&lt;/code&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1qkptmbke6r8jxftxhe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh1qkptmbke6r8jxftxhe.jpg" alt=" " width="461" height="511"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, the trust policy for the above roles will look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GHAction-S3",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/GHAction-Role"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
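&lt;p&gt;When several such scoped-down roles exist, their trust policies differ only in the Sid, so generating them keeps the roles consistent. A small illustrative sketch (the helper and parameter names are my own):&lt;/p&gt;

```python
import json

def scoped_role_trust_policy(assuming_role_arn, sid):
    """Build a trust policy allowing `assuming_role_arn` to assume this role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": sid,
                "Effect": "Allow",
                "Principal": {"AWS": assuming_role_arn},
                "Action": "sts:AssumeRole",
            }
        ],
    }

# One generated policy per job-specific role (S3, EC2, EKS, ...).
policy = scoped_role_trust_policy(
    "arn:aws:iam::123456789012:role/GHAction-Role", "GHAction-S3"
)
print(json.dumps(policy, indent=4))
```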



&lt;p&gt;Even though self-assumption is suitable for certain use cases, scoped-down policies generally provide more secure, controlled, and manageable permissions.&lt;/p&gt;

&lt;p&gt;HAPPY ASSUMING!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>security</category>
      <category>githubactions</category>
    </item>
    <item>
      <title>Share Securely</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Tue, 25 Oct 2022 18:34:19 +0000</pubDate>
      <link>https://dev.to/aws-builders/share-securely-32j3</link>
      <guid>https://dev.to/aws-builders/share-securely-32j3</guid>
      <description>&lt;p&gt;Securely sharing confidential information between team members is one of the critical tasks we have to perform during our day-to-day life.&lt;/p&gt;

&lt;p&gt;There are platforms we can use to share passwords or sensitive data, such as onetimesecret.com and scrt.link. Primarily, they provide a one-time link to access your secret; the link disappears once it is accessed.&lt;/p&gt;

&lt;p&gt;In this article, I’ll be illustrating how we can implement a similar application via the AWS Serverless ecosystem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnj65hax5rivs0m3rr87.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnnj65hax5rivs0m3rr87.jpeg" alt="Architecture" width="576" height="625"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown above in the diagram, the web application has been hosted in AWS Amplify. It allows users to store and read their secrets.&lt;/p&gt;

&lt;p&gt;The web application is backed by two Lambda functions, which handle the DB operations. (For demo purposes, I have used Lambda function URLs.)&lt;/p&gt;

&lt;p&gt;The data will be stored in DynamoDB.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7xxe9izsi0j2il342ls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn7xxe9izsi0j2il342ls.png" alt="add_secret_1" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxhuay5e1nvaj90qtvo1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjxhuay5e1nvaj90qtvo1.png" alt="add_secret_2" width="786" height="527"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Add a Secret&lt;/strong&gt;&lt;br&gt;
Adding a secret has three steps,&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter the Message.&lt;/li&gt;
&lt;li&gt;Enter a secret key to protect your message.&lt;/li&gt;
&lt;li&gt;Select the expiration time for the secret.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After submission, the web application invokes a Lambda function URL to insert the data into DynamoDB.&lt;/p&gt;
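&lt;p&gt;A minimal sketch of how the write Lambda might shape the DynamoDB item. The attribute names match the table layout described later in this article, but the helper itself and the choice to hash the secret key are my own assumptions, not the app's actual code:&lt;/p&gt;

```python
import hashlib
import time
import uuid

def build_secret_item(message, secret_key, ttl_seconds, now=None):
    """Shape a DynamoDB item for a new secret (illustrative sketch).

    The secret key is stored as a SHA-256 hash so the plaintext key never
    lands in the table; ExpirationTime doubles as the DynamoDB TTL attribute.
    """
    now = int(time.time()) if now is None else now
    return {
        "SecretID": str(uuid.uuid4()),
        "ExpirationTime": now + ttl_seconds,
        "Message": message,
        "SecretKey": hashlib.sha256(secret_key.encode()).hexdigest(),
    }

# In the Lambda, the item would then be written with table.put_item(Item=item).
```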

&lt;p&gt;&lt;strong&gt;Read a Secret&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access the shared link&lt;/li&gt;
&lt;li&gt;Enter the provided Secret Key&lt;/li&gt;
&lt;li&gt;Your secret will be displayed.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r8cuyillh885no6pclr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0r8cuyillh885no6pclr.png" alt="read_secret_1" width="800" height="242"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faz3lreba134z0shanxhp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faz3lreba134z0shanxhp.png" alt="read_secret_2" width="786" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After a successful retrieval, the secret is deleted from the database immediately.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv65svwfe3hvcg6172nfw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv65svwfe3hvcg6172nfw.png" alt="secret_share_workflow" width="607" height="1551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The DynamoDB table has four attributes: &lt;em&gt;SecretID&lt;/em&gt; (PK), &lt;em&gt;ExpirationTime&lt;/em&gt; (SK), &lt;em&gt;Message&lt;/em&gt;, and &lt;em&gt;SecretKey&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;The TTL has been enabled on the ExpirationTime attribute. DynamoDB deletes the record once it reaches the TTL value. &lt;strong&gt;This operation doesn’t consume write capacity&lt;/strong&gt;. However, DynamoDB TTL is not real-time; it can take 24 to 48 hours to remove a record from the DB. The Lambda that reads the data therefore has logic to validate whether the requested secret has expired.&lt;/p&gt;
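&lt;p&gt;Because TTL deletion is delayed, the read Lambda cannot trust an item's mere presence. A minimal sketch of that validation, assuming ExpirationTime holds an epoch timestamp (the helper name is my own):&lt;/p&gt;

```python
import time

def is_readable(item, now=None):
    """Reject secrets past their TTL: DynamoDB TTL deletion can lag by up to
    48 hours, so an expired item may still be returned by a read."""
    now = int(time.time()) if now is None else now
    return item["ExpirationTime"] > now
```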

&lt;p&gt;&lt;strong&gt;Demo&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;URL : &lt;a href="https://secretshare.forexample.link" rel="noopener noreferrer"&gt;https://secretshare.forexample.link&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=q4W8R18ItzI" rel="noopener noreferrer"&gt;https://www.youtube.com/watch?v=q4W8R18ItzI&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Source&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://github.com/harik8/temp-secret-share" rel="noopener noreferrer"&gt;https://github.com/harik8/temp-secret-share&lt;/a&gt;&lt;/p&gt;

</description>
      <category>lambda</category>
      <category>aws</category>
      <category>security</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Discount Portal - Send Discount Newsletters</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Mon, 10 Jan 2022 19:51:35 +0000</pubDate>
      <link>https://dev.to/harik8/discount-portal-send-discount-newsletters-1e02</link>
      <guid>https://dev.to/harik8/discount-portal-send-discount-newsletters-1e02</guid>
      <description>&lt;h3&gt;
  
  
  Overview of My Submission
&lt;/h3&gt;

&lt;p&gt;Discount Portal is a web-based application that sends discount newsletters to a subscriber or a list of subscribers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Submission Category:
&lt;/h3&gt;

&lt;p&gt;Choose Your Own Adventure&lt;/p&gt;

&lt;h3&gt;
  
  
  Link to Code
&lt;/h3&gt;


&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fassets.dev.to%2Fassets%2Fgithub-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/harik8" rel="noopener noreferrer"&gt;
        harik8
      &lt;/a&gt; / &lt;a href="https://github.com/harik8/discount-portal" rel="noopener noreferrer"&gt;
        discount-portal
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Discount Portal is a Web application that sends product discount newsletters to subscribers.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Discount Portal&lt;/h1&gt;

&lt;/div&gt;
&lt;p&gt;Discount Portal is a Web application that sends product discount newsletters to subscribers. Click &lt;a href="https://portal.forexample.link/" rel="nofollow noopener noreferrer"&gt;here&lt;/a&gt; to access the portal.&lt;/p&gt;
&lt;p&gt;This application was developed for the MongoDB Atlas Hackathon 2021/2022.&lt;/p&gt;
&lt;p&gt;(The portal requires Google Auth to log in. Currently, only my own user is allowed.)&lt;/p&gt;
&lt;p&gt;Click &lt;a href="https://dev.to/harik8/discount-portal-send-discount-newsletters-1e02" rel="nofollow"&gt;here&lt;/a&gt; to view the dev.to page about this application.&lt;/p&gt;
&lt;br&gt;
&lt;p&gt;
  &lt;a rel="noopener noreferrer" href="https://github.com/harik8/discount-portalassets/screenshots/discount.png"&gt;&lt;img width="200" height="200" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fharik8%2Fdiscount-portalassets%2Fscreenshots%2Fdiscount.png"&gt;&lt;/a&gt;
&lt;/p&gt;



&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Folder Structure&lt;/h1&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;assets    - Screenshots and images.&lt;/li&gt;
&lt;li&gt;backend   - Backend service (server.py) which adds records to the Mongo Atlas database via the Data API.&lt;/li&gt;
&lt;li&gt;functions - AWS Lambda function to send mails and Realm scheduler function to clean DB.&lt;/li&gt;
&lt;li&gt;templates - AWS SES Email template.&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Technologies Used&lt;/h1&gt;

&lt;/div&gt;

&lt;ul&gt;
&lt;li&gt;React JS             - Front end&lt;/li&gt;
&lt;li&gt;Python               - Back end&lt;/li&gt;
&lt;li&gt;Mongo Atlas Services&lt;/li&gt;
&lt;/ul&gt;



&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;Service Flow Chart&lt;/h1&gt;

&lt;/div&gt;

&lt;p&gt;&lt;a rel="noopener noreferrer" href="https://github.com/harik8/discount-portalassets/screenshots/discount-portal.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fgithub.com%2Fharik8%2Fdiscount-portalassets%2Fscreenshots%2Fdiscount-portal.png" alt="image"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;/div&gt;
&lt;br&gt;
&lt;br&gt;
  &lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/harik8/discount-portal" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;
&lt;br&gt;


&lt;h3&gt;
  
  
  Additional Resources / Info
&lt;/h3&gt;

&lt;p&gt;React JS - Frontend&lt;br&gt;
Python - Backend&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.atlas.mongodb.com/" rel="noopener noreferrer"&gt;Mongo Atlas&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.mongodb.com/realm/functions/" rel="noopener noreferrer"&gt;Realm Functions&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.mongodb.com/realm/triggers/trigger-types/" rel="noopener noreferrer"&gt;Triggers&lt;/a&gt;&lt;br&gt;
&lt;a href="https://docs.atlas.mongodb.com/api/data-api/" rel="noopener noreferrer"&gt;Data API&lt;/a&gt;&lt;br&gt;
&lt;a href="https://draw.io" rel="noopener noreferrer"&gt;Diagrams and Image&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/NthhIFHp0rA"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h4&gt;
  
  
  Flow Chart Diagram
&lt;/h4&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuocbd4xd6wmjtfueqr5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuocbd4xd6wmjtfueqr5l.png" alt="Discount Portal Flow Chart" width="731" height="1541"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(Please click on the image to view it clearly, or view it on GitHub.)&lt;/p&gt;

</description>
      <category>atlashackathon</category>
    </item>
    <item>
      <title>Journey from Elastic Cloud to AWS Elastic Search</title>
      <dc:creator>Hari Karthigasu</dc:creator>
      <pubDate>Tue, 26 Oct 2021 12:19:50 +0000</pubDate>
      <link>https://dev.to/aws-builders/journey-from-elastic-cloud-to-aws-elastic-search-13ao</link>
      <guid>https://dev.to/aws-builders/journey-from-elastic-cloud-to-aws-elastic-search-13ao</guid>
<description>&lt;p&gt;This article demonstrates the steps we should follow when migrating Elasticsearch data from Elastic Cloud to AWS. &lt;/p&gt;

&lt;p&gt;Let's get started.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Elastic Cloud supports &lt;strong&gt;Major.Minor.Patch&lt;/strong&gt; (e.g., x.y.z) versions of Elasticsearch, whereas AWS supports only &lt;strong&gt;Major.Minor&lt;/strong&gt; (e.g., x.y).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When you build the target cluster on AWS, ensure you create it using the next minor version after the source cluster's. (If the source cluster version is 7.7.1, the target cluster version must be 7.8.)&lt;/p&gt;
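&lt;p&gt;That rule (next minor version, patch dropped) is easy to misread, so here it is as a tiny illustrative Python helper:&lt;/p&gt;

```python
def target_minor_version(source_version):
    """Given the source cluster's Major.Minor.Patch version, return the
    Major.Minor version the AWS target cluster should be built with
    (the next minor after the source)."""
    major, minor = source_version.split(".")[:2]
    return f"{major}.{int(minor) + 1}"
```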

&lt;p&gt;Elasticsearch doesn’t allow migrating from a higher version to a lower one.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{"error":{"root_cause":[{"type":"snapshot_restore_exception","reason":"[test:cloud-snapshot-2021.06.01-ucssrgkdq9oyci-vbbfyfw/QYwmbA6TTJOXG2T83AtTTw] the snapshot was created with Elasticsearch version
 [7.7.1] which is higher than the version of this node [7.7.0]"}],"type":"snapshot_restore_exception","reason":"[test:cloud-snapshot-2021.06.01-ucssrgkdq9oyci-vbbfyfw/QYwmbA6TTJOXG2T83AtTTw] the snapsh
ot was created with Elasticsearch version [7.7.1] which is higher than the version of this node [7.7.0]"},"status":500}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfmx8bmumcs7ahbpqpwt.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvfmx8bmumcs7ahbpqpwt.jpeg" alt="Elasticsearch to AWS Opensearch" width="521" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;At first, create an S3 bucket in your AWS account as a custom repository to store ES snapshots/backups.&lt;/p&gt;

&lt;p&gt;You can find the steps to set up a custom repository on Elastic Cloud in their documentation, so I won't describe them here.&lt;/p&gt;

&lt;p&gt;Create an EC2 instance that will be used to execute commands during the data restoration. (Even a t2.nano is sufficient.)&lt;/p&gt;

&lt;p&gt;Create two roles such as &lt;strong&gt;Role-S3&lt;/strong&gt;, &lt;strong&gt;Role-ES&lt;/strong&gt; and two policies such as &lt;strong&gt;Policy-S3&lt;/strong&gt; and &lt;strong&gt;Policy-ES&lt;/strong&gt; and attach them to the roles respectively. &lt;/p&gt;

&lt;h6&gt;
  
  
  Policy-S3
&lt;/h6&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [ 
 "arn:aws:s3:::&amp;lt;es-snapshot-bucket-name&amp;gt;"
            ]
        },
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [ 
  "arn:aws:s3::: &amp;lt;es-snapshot-bucket-name&amp;gt;/*”
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h6&gt;
  
  
  Policy-ES
&lt;/h6&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "arn:aws:iam::&amp;lt;account_id&amp;gt;:role/Role-S3"  
            },
            {
                "Effect": "Allow",
                "Action": "es:ESHttpPut",
                "Resource": [
                         "arn:aws:es:&amp;lt;region&amp;gt;:&amp;lt;account_id&amp;gt;:domain/&amp;lt;es_domain_name&amp;gt;/*"
                  ]
            }
        ]
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Attach the role Role-ES to the EC2 instance and log in to the EC2.&lt;/p&gt;

&lt;p&gt;Run a curl command against the AWS ES endpoint to check whether the cluster is reachable.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl &amp;lt;es_endpoint&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install the following Python libraries on the EC2 instance.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;$ pip3 install boto3
$ pip3 install requests
$ pip3 install requests_aws4auth
$ pip3 install --upgrade requests
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Copy the Python script below, change the appropriate values, and run it.&lt;/p&gt;

&lt;p&gt;This script will register the S3 bucket as a snapshot repository for the ES cluster that we created.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import requests
from requests_aws4auth 
import AWS4Auth

host = &amp;lt;es_domain_endpoint/&amp;gt;    # Enter the ES domain endpoint and trailing ‘/’
region = &amp;lt;region&amp;gt;       
service = 'es'

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

# Steps to Register snapshot repository
path = '_snapshot/&amp;lt;repository-name&amp;gt;'   # the ES API endpoint
url = host + path
payload = {
  "type": "s3",
  "settings": {
    "bucket": &amp;lt;es_snapshot_bucket_name&amp;gt;, 
    "region": &amp;lt;region&amp;gt;,  # Specify region for S3 bucket. If the S3 bucket is in the us-east-1 region use endpoint
    "endpoint": "s3.amazonaws.com", 
    "role_arn": "arn:aws:iam::&amp;lt;account_id&amp;gt;:role/Role-S3
  }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script would print output as shown below if it is executed successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;200
{"acknowledged":true}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the snapshots.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl &amp;lt;es_endpoint&amp;gt;/_snapshot/&amp;lt;repository_name&amp;gt;/_all?pretty
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restore a snapshot&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -XPOST &amp;lt;es_endpoint&amp;gt;/_snapshot/&amp;lt;repository-name&amp;gt;/&amp;lt;snapshot_name&amp;gt;/_restore
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Restore a specific index from a snapshot with settings&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -XPOST &amp;lt;es_endpoint&amp;gt;/_snapshot/&amp;lt;repository-name&amp;gt;/&amp;lt;snapshot_name&amp;gt;/_restore -d '{
"indices": "&amp;lt;index_name&amp;gt;",
"index_settings": {
  "index.routing.allocation.require.data": null
  }
}' -H 'Content-Type: application/json'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List the indices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -XGET &amp;lt;es_endpoint&amp;gt;/_cat/indices
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If the restoration is successful, you'll be able to view the indices.&lt;/p&gt;

&lt;h6&gt;
  
  
  Key Points
&lt;/h6&gt;

&lt;blockquote&gt;
&lt;p&gt;Don't restore the .kibana or other system indices; doing so can throw errors. &lt;br&gt;
Don't delete the .kibana index in the target cluster when restoring.&lt;/p&gt;
&lt;/blockquote&gt;

</description>
      <category>elasticsearch</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
