<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: gurpreet kaur</title>
    <description>The latest articles on DEV Community by gurpreet kaur (@gurpreet_kaur_29).</description>
    <link>https://dev.to/gurpreet_kaur_29</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3258611%2Ff32e6508-b6ce-4b8e-aabe-63f28913ab81.png</url>
      <title>DEV Community: gurpreet kaur</title>
      <link>https://dev.to/gurpreet_kaur_29</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gurpreet_kaur_29"/>
    <language>en</language>
    <item>
      <title>🚨 Incident Response in AWS: A Step-by-Step Guide for Beginners</title>
      <dc:creator>gurpreet kaur</dc:creator>
      <pubDate>Mon, 28 Jul 2025 23:14:48 +0000</pubDate>
      <link>https://dev.to/gurpreet_kaur_29/incident-response-in-aws-a-step-by-step-guide-for-beginners-291e</link>
      <guid>https://dev.to/gurpreet_kaur_29/incident-response-in-aws-a-step-by-step-guide-for-beginners-291e</guid>
      <description>&lt;p&gt;Cloud environments are dynamic and powerful, but they also open the door to security incidents if not monitored effectively. Imagine this: a suspicious login attempt is detected in your AWS infrastructure—what would you do?&lt;/p&gt;

&lt;p&gt;In this blog, we’ll walk through how to detect, respond to, and isolate a potentially compromised EC2 instance using AWS native tools like CloudWatch, SNS, Lambda, and Systems Manager.&lt;/p&gt;

&lt;p&gt;By the end, you’ll not only learn how to set up an automated incident response pipeline but also understand the “why” behind each step—even if you're new to AWS.&lt;/p&gt;

&lt;p&gt;🎯 &lt;strong&gt;Objectives of this Lab&lt;/strong&gt;&lt;br&gt;
Here’s what we’ll accomplish:&lt;br&gt;
✅ Understand what infrastructure incidents are and why they matter.&lt;br&gt;
✅ Configure your application to log events into Amazon CloudWatch.&lt;br&gt;
✅ Create alarms and notifications when malicious activity is detected.&lt;br&gt;
✅ Automate EC2 isolation using Lambda and Security Groups.&lt;br&gt;
✅ Notify stakeholders using Amazon SNS.&lt;/p&gt;

&lt;p&gt;🛠 &lt;strong&gt;AWS Services You’ll Use&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;Amazon CloudWatch&lt;/em&gt; – For monitoring logs and creating alarms.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AWS Lambda&lt;/em&gt; – To automate response actions (like isolating EC2).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Amazon SNS (Simple Notification Service)&lt;/em&gt; – To notify stakeholders.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AWS Systems Manager (Fleet Manager &amp;amp; Run Command)&lt;/em&gt; – For managing EC2 without SSH.&lt;/p&gt;

&lt;p&gt;Pro tip for beginners: each of these is a fully managed service, so you don’t need to provision or maintain any servers to use them!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nmxmt7egui0t63ulo33.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7nmxmt7egui0t63ulo33.png" alt=" " width="800" height="359"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔍&lt;strong&gt;Step 1: Understand Infrastructure Incidents&lt;/strong&gt;&lt;br&gt;
An incident in cloud security typically means unexpected or unauthorized activity within your environment—think failed login attempts, privilege escalations, or abnormal traffic spikes.&lt;/p&gt;

&lt;p&gt;In AWS, such incidents are often detected through logs (CloudTrail, application logs) and metrics (unusual CPU/network usage). That’s why centralized logging and monitoring are essential.&lt;/p&gt;

&lt;p&gt;📝 &lt;strong&gt;Step 2: Configure CloudWatch Logging&lt;/strong&gt;&lt;br&gt;
Before you can detect anything, you need visibility! Let’s enable CloudWatch Agent to collect logs:&lt;/p&gt;

&lt;p&gt;Commands to Install and Configure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This wizard will:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask for OS type (choose Linux if using EC2 Linux AMI).&lt;/li&gt;
&lt;li&gt;Configure user permissions (root or cwagent).&lt;/li&gt;
&lt;li&gt;Enable log collection (provide your application log path, e.g., /home/ssm-user/record.log).&lt;/li&gt;
&lt;li&gt;Save and push configuration to AWS Systems Manager Parameter Store for consistency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After running, your logs are shipped to CloudWatch → Log Groups where they can be monitored.&lt;/p&gt;

&lt;p&gt;🚨 &lt;strong&gt;Step 3: Detect Suspicious Activity with CloudWatch Alarms&lt;/strong&gt;&lt;br&gt;
Now that logs are centralized:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to CloudWatch → Logs → Log Groups → record.log.&lt;/li&gt;
&lt;li&gt;Create a metric filter for suspicious patterns (e.g., HTTP 401 Unauthorized errors).&lt;/li&gt;
&lt;li&gt;Name it something meaningful like "LabApplications".&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Why?&lt;br&gt;
This filter will scan logs in real-time for your defined pattern. If a match is found, CloudWatch triggers an alarm.&lt;/p&gt;
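&lt;p&gt;Conceptually, the filter is just pattern matching over the log stream. The toy Python sketch below (illustrative names, not an AWS API) mimics how a filter pattern such as "401" turns matching log lines into the count that feeds the alarm metric:&lt;/p&gt;

```python
# Toy model of a CloudWatch metric filter: count log lines matching a pattern.
# Function and variable names here are illustrative, not part of any AWS SDK.
def count_matches(log_lines, pattern="401"):
    return sum(1 for line in log_lines if pattern in line)

logs = [
    '203.0.113.7 - - "POST /login" 401 199',
    '203.0.113.7 - - "GET /home" 200 512',
    '203.0.113.7 - - "POST /login" 401 199',
]
print(count_matches(logs))  # 2: two unauthorized attempts feed the metric
```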

&lt;p&gt;📢 &lt;strong&gt;Step 4: Set Up Notifications (Amazon SNS)&lt;/strong&gt;&lt;br&gt;
Create an SNS topic named IncidentResponse-Alerts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to Amazon SNS → Topics → Create Topic.&lt;/li&gt;
&lt;li&gt;Choose "Standard".&lt;/li&gt;
&lt;li&gt;Add email subscriptions for stakeholders (e.g., &lt;a href="mailto:security@company.com"&gt;security@company.com&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now, link your CloudWatch Alarm (from Step 3) to this SNS topic. This ensures real-time alerts when an incident occurs.&lt;/p&gt;
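&lt;p&gt;When the alarm fires, SNS hands subscribers the alarm details as a JSON string nested inside the SNS event. A minimal sketch of how a Lambda subscriber unwraps it (the sample payload is trimmed to the fields used here):&lt;/p&gt;

```python
import json

# Trimmed example of the event a Lambda receives from an SNS-backed alarm.
sample_event = {
    "Records": [{
        "Sns": {
            "Message": json.dumps({
                "AlarmName": "LabApplications-401-Alarm",
                "NewStateValue": "ALARM",
                "Trigger": {"MetricName": "LabApplications"},
            })
        }
    }]
}

def parse_alarm(event):
    # SNS wraps the alarm JSON as a string inside Records[0].Sns.Message
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    return message["AlarmName"], message["NewStateValue"]

print(parse_alarm(sample_event))
```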

&lt;p&gt;🖥 &lt;strong&gt;Step 5: Automate Instance Isolation with AWS Lambda&lt;/strong&gt;&lt;br&gt;
Here’s where automation shines. Instead of manually logging in and isolating the instance (which wastes precious time during an incident), let Lambda do it for you.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Lambda Function 1&lt;/em&gt; – Traffic Generator (for testing):&lt;br&gt;
This function simulates failed login attempts to trigger alarms:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3, requests, logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)
ec2_client = boto3.client('ec2')

def lambda_handler(event, context):
    pub_ip = get_public_ip("App-Server")
    url = f"http://{pub_ip}:8443/"
    for _ in range(40):
        requests.post(url, data="username=admin&amp;amp;password=test123")

def get_public_ip(tag):
    response = ec2_client.describe_instances(Filters=[{'Name':'tag:Name','Values':[tag]}])
    return response['Reservations'][0]['Instances'][0]['PublicIpAddress']
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Lambda Function 2&lt;/em&gt; – Instance Isolation:&lt;br&gt;
Triggered by the alarm SNS notification, it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Removes IAM role (to cut off API access).&lt;/li&gt;
&lt;li&gt;Attaches an Isolated Security Group (no ingress/egress).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def lambda_handler(event, context):
    message = json.loads(event['Records'][0]["Sns"]['Message'])
    instance_id = message['Trigger']['MetricName']
    isolate_instance(instance_id)

def isolate_instance(instance_id):
    ec2_client.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[create_isolated_sg()]
    )

def create_isolated_sg():
    sg = ec2_resource.create_security_group(GroupName="Isolated_SG", VpcId=vpc_id)
    sg.revoke_egress(IpPermissions=[{'IpProtocol':'-1','IpRanges':[{'CidrIp':'0.0.0.0/0'}]}])
    return sg.id
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔒&lt;strong&gt;Step 6: Validate the Response&lt;/strong&gt;&lt;br&gt;
Once triggered:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check EC2 IAM Role – it should now be removed.&lt;/li&gt;
&lt;li&gt;Check Security Groups – EC2 should be assigned Isolated_SG (no ingress/egress rules).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At this stage, the instance is quarantined, preventing lateral movement by attackers while you perform forensic analysis.&lt;/p&gt;

&lt;p&gt;🛠 &lt;strong&gt;Step 7: Manage Instances Securely (Systems Manager Fleet Manager)&lt;/strong&gt;&lt;br&gt;
Instead of SSH/RDP:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use AWS Systems Manager Fleet Manager to run commands securely (ideal for restricted or isolated instances).&lt;/li&gt;
&lt;li&gt;Run automated scripts or view logs directly without opening risky network ports.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅&lt;strong&gt;What Did We Achieve?&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized logging using CloudWatch.&lt;/li&gt;
&lt;li&gt;Real-time threat detection with alarms &amp;amp; metric filters.&lt;/li&gt;
&lt;li&gt;Automated incident response via SNS + Lambda.&lt;/li&gt;
&lt;li&gt;Instance isolation to stop threats instantly.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This workflow is scalable and serverless, making it perfect for enterprises and even AWS learners experimenting in a sandbox.&lt;/p&gt;

&lt;p&gt;🔥 &lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
Security incidents are inevitable, but preparedness makes the difference. By combining AWS-native services, you can build an automated, proactive incident response pipeline without external tools.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automating Resource Tagging in AWS: Lambda, AWS Config &amp; Systems Manager</title>
      <dc:creator>gurpreet kaur</dc:creator>
      <pubDate>Wed, 23 Jul 2025 23:26:39 +0000</pubDate>
      <link>https://dev.to/gurpreet_kaur_29/automating-resource-tagging-in-aws-lambda-aws-config-systems-manager-2iap</link>
      <guid>https://dev.to/gurpreet_kaur_29/automating-resource-tagging-in-aws-lambda-aws-config-systems-manager-2iap</guid>
      <description>&lt;p&gt;In cloud environments, enforcing compliance is not just a best practice—it’s a necessity. One simple yet powerful way to do this in AWS is through resource tagging. Tags allow you to identify, organize, and control your resources by assigning meaningful key-value metadata such as Environment: Prod.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll walk through a hands-on approach to automatically enforce compliance using a combination of AWS services including Lambda, EC2, AWS Config, and Systems Manager (SSM). Our goal: ensure all EC2 instances are correctly tagged with Environment: Prod.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy9qsfg3ctfu3aou47f7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzy9qsfg3ctfu3aou47f7.png" alt=" " width="760" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧠 &lt;strong&gt;Understanding the AWS Services Involved&lt;/strong&gt;&lt;br&gt;
To automate compliance enforcement using tagging, we’ll leverage the following AWS services:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AWS Lambda&lt;/em&gt;:&lt;br&gt;
A serverless compute service that runs your code in response to events—like changes in AWS resources—without needing to manage servers. It’s ideal for lightweight automation tasks.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AWS Systems Manager (SSM)&lt;/em&gt;:&lt;br&gt;
A management service that helps you automate operational tasks across AWS resources. In this context, it triggers Lambda functions as part of automated remediation workflows.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;AWS Config&lt;/em&gt;:&lt;br&gt;
A monitoring service that tracks AWS resource configurations and evaluates them against compliance rules. It alerts and remediates when resources don’t meet defined standards—like missing required tags.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps involved:&lt;/strong&gt;&lt;br&gt;
🏷️&lt;em&gt;&lt;strong&gt;Step 1&lt;/strong&gt;: Tag Your EC2 Instance&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Start by launching an EC2 instance and apply a few initial tags manually (such as Owner: DevOps, Project: Alpha, etc.). This forms the baseline for what compliant tagging should look like.&lt;/p&gt;
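&lt;p&gt;To make “compliant tagging” concrete, here is a small illustrative helper (not part of the lab’s code) that compares an instance’s tags, in the EC2 API’s Key/Value list shape, against the required set:&lt;/p&gt;

```python
# Illustrative compliance check: which required tags are missing or wrong?
REQUIRED_TAGS = {"Environment": "Prod"}

def missing_tags(instance_tags, required=REQUIRED_TAGS):
    # instance_tags uses the EC2 API shape: a list of {"Key": ..., "Value": ...}
    current = {t["Key"]: t["Value"] for t in instance_tags}
    return {k: v for k, v in required.items() if current.get(k) != v}

tags = [{"Key": "Owner", "Value": "DevOps"}, {"Key": "Project", "Value": "Alpha"}]
print(missing_tags(tags))  # {'Environment': 'Prod'}: this instance is non-compliant
```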

&lt;p&gt;🧠 &lt;em&gt;&lt;strong&gt;Step 2&lt;/strong&gt;: Create a Lambda Function to Auto-Tag Resources&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Next, create an AWS Lambda function in Python that auto-applies the Environment: Prod tag to a given EC2 instance. Here's the core functionality:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Accepts an instance ID as input.&lt;/li&gt;
&lt;li&gt;Constructs the correct resource ARN.&lt;/li&gt;
&lt;li&gt;Uses the tag_resources method of boto3’s resourcegroupstaggingapi to apply tags.&lt;/li&gt;
&lt;li&gt;Returns a compliance annotation.
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info(event)

    client = boto3.client('sts')
    account_id = client.get_caller_identity()['Account']

    instance_id = event['instanceId']
    resource_arn = f"arn:aws:ec2:us-east-1:{account_id}:instance/{instance_id}"

    tagging_client = boto3.client('resourcegroupstaggingapi')
    try:
        response = tagging_client.tag_resources(
            ResourceARNList=[resource_arn],
            Tags={'Environment': 'Prod'}
        )
        logger.info(response)
    except Exception as e:
        logger.exception(e)

    return {
        "compliance_type": "COMPLIANT",
        "annotation": "This resource is compliant with the rule."
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;⚙️ &lt;em&gt;&lt;strong&gt;Step 3&lt;/strong&gt;: Create an SSM Automation Document&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Head over to AWS Systems Manager and create a custom SSM Automation Document that invokes the Lambda function created in Step 2. This automation will apply the required Environment: Prod tag to any EC2 instance found non-compliant.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;schemaVersion: '0.3'
parameters:
  InstanceId:
    type: String
    description: ID of the instance to be tagged
mainSteps:
  - name: updatetags
    action: aws:invokeLambdaFunction
    isEnd: true
    inputs:
      InvocationType: Event
      Payload: |
        {
          "instanceId": "{{ InstanceId }}"
        }
      FunctionName: arn:aws:lambda:us-east-1:&amp;lt;account_id&amp;gt;:function:labFunction
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📊 &lt;em&gt;&lt;strong&gt;Step 4&lt;/strong&gt;: Set Up AWS Config to Monitor Compliance&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In AWS Config, create a configuration recorder that logs changes in your resource states and sends them to an S3 bucket.&lt;/p&gt;

&lt;p&gt;Make sure to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose an S3 bucket prefixed with config-bucket-.&lt;/li&gt;
&lt;li&gt;Set a prefix such as Config to help organize config data.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ Tip: AWS Config's Rules Development Kit (RDK) is highly useful for implementing compliance-as-code patterns, especially when working with custom Lambda-backed rules.&lt;/p&gt;

&lt;p&gt;🛠️ &lt;em&gt;&lt;strong&gt;Step 5&lt;/strong&gt;: Create a Config Rule to Enforce Tags&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Let’s now create a rule in AWS Config that checks whether EC2 instances have the required Environment: Prod tag.&lt;/p&gt;

&lt;p&gt;Resource type: AWS::EC2::Instance&lt;br&gt;
Parameter tag1Key: Environment&lt;br&gt;
Parameter tag1Value: Prod&lt;/p&gt;

&lt;p&gt;⚠️ Note: Tag keys and values are case-sensitive in AWS Config.&lt;br&gt;
Once set, this rule automatically flags any EC2 instance that lacks the specified tag as non-compliant.&lt;/p&gt;
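&lt;p&gt;Because matching is case-sensitive, an instance tagged environment: prod will still be flagged. A tiny Python check makes the point:&lt;/p&gt;

```python
# Case-sensitive tag comparison, mirroring how the required-tags rule matches.
def tag_satisfies(tags, key, value):
    return tags.get(key) == value

print(tag_satisfies({"environment": "prod"}, "Environment", "Prod"))  # False: wrong case
print(tag_satisfies({"Environment": "Prod"}, "Environment", "Prod"))  # True
```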

&lt;p&gt;🔁 &lt;em&gt;&lt;strong&gt;Step 6&lt;/strong&gt;: Automate Remediation Through AWS Config&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;After creating the rule:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Click the rule and choose Manage Remediation.&lt;/li&gt;
&lt;li&gt;Select Manual remediation.&lt;/li&gt;
&lt;li&gt;Choose the remediation action that runs your SSM document (from Step 3).&lt;/li&gt;
&lt;li&gt;Map the Resource ID parameter to instanceId.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Finally:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select the instance from the Resources in scope.&lt;/li&gt;
&lt;li&gt;Click Remediate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After a few minutes, refresh the page. The resource's compliance status should change to Compliant once the tag is successfully applied.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Outcome&lt;/strong&gt;: Automated Compliance for EC2 Instances&lt;br&gt;
With this setup, any EC2 instance missing the Environment: Prod tag is automatically detected and remediated, ensuring your environment stays compliant with organizational tagging policies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbxmc75ueqwkn5pt54sr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjbxmc75ueqwkn5pt54sr.png" alt=" " width="800" height="158"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This approach leverages:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lambda for automation,&lt;/li&gt;
&lt;li&gt;AWS Config for compliance monitoring,&lt;/li&gt;
&lt;li&gt;SSM for remediation,&lt;/li&gt;
&lt;li&gt;And tags as the foundation of governance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🚀 &lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
Tagging isn’t just for cost management or organization—it’s also a crucial part of security and compliance enforcement. By combining native AWS services, you can ensure that your cloud environment remains compliant, auditable, and easy to manage at scale.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🔐 Securing Amazon RDS Credentials with AWS Secrets Manager</title>
      <dc:creator>gurpreet kaur</dc:creator>
      <pubDate>Sat, 19 Jul 2025 19:17:26 +0000</pubDate>
      <link>https://dev.to/gurpreet_kaur_29/securing-amazon-rds-credentials-with-aws-secrets-manager-5f7m</link>
      <guid>https://dev.to/gurpreet_kaur_29/securing-amazon-rds-credentials-with-aws-secrets-manager-5f7m</guid>
      <description>&lt;p&gt;In cloud-native environments, secrets management is critical. Hardcoding database credentials or API keys within code repositories is not only bad practice—it’s a serious &lt;em&gt;security risk&lt;/em&gt;. In this guide, I’ll walk you through how to securely manage Amazon RDS credentials using AWS Secrets Manager, including automatic secret rotation with AWS Lambda.&lt;/p&gt;

&lt;p&gt;As part of my hands-on learning, I implemented this solution to secure database credentials for an application deployed in AWS Lambda. This walkthrough covers storing, retrieving, and rotating secrets using native AWS integrations—enabling secure, uninterrupted database connectivity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🔧 Why Use AWS Secrets Manager?&lt;/strong&gt;&lt;br&gt;
AWS Secrets Manager allows you to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Securely store and encrypt secrets (e.g., database credentials).&lt;/li&gt;
&lt;li&gt;Programmatically retrieve secrets via applications or scripts.&lt;/li&gt;
&lt;li&gt;Enable automated rotation of secrets to meet compliance needs.&lt;/li&gt;
&lt;li&gt;Eliminate hardcoded secrets in your codebase.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🧩 Architecture Overview&lt;/strong&gt;&lt;br&gt;
Here’s what the architecture looks like:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application runs inside AWS Lambda.&lt;/li&gt;
&lt;li&gt;Lambda retrieves secrets from AWS Secrets Manager.&lt;/li&gt;
&lt;li&gt;Secret contains RDS credentials and is set up for automated rotation.&lt;/li&gt;
&lt;li&gt;Rotation is handled using another Lambda function.&lt;/li&gt;
&lt;li&gt;Interface VPC Endpoints are used to securely access Secrets Manager  inside the VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmecvh10rt58ah7xgd95t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmecvh10rt58ah7xgd95t.png" alt=" " width="800" height="488"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;🪜 Step-by-Step Implementation&lt;/strong&gt;&lt;br&gt;
✅ &lt;em&gt;Step 1: Launch an RDS Instance&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create an Amazon RDS (MySQL/Aurora) instance.&lt;/li&gt;
&lt;li&gt;Ensure the Security Group attached to the instance allows inbound traffic on TCP port 3306 (MySQL/Aurora default).&lt;/li&gt;
&lt;li&gt;Note down the DB endpoint and credentials (we'll use them in the secret).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ &lt;em&gt;Step 2: Store Credentials in AWS Secrets Manager&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to AWS Secrets Manager &amp;gt; Store a new secret.&lt;/li&gt;
&lt;li&gt;Choose Credentials for RDS database.&lt;/li&gt;
&lt;li&gt;Add the database username, password, and connection details.&lt;/li&gt;
&lt;li&gt;Provide an encryption key (KMS) and link to the RDS database from Step 1.&lt;/li&gt;
&lt;li&gt;Name your secret (e.g., MyRDS/ProdApp).&lt;/li&gt;
&lt;/ul&gt;
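&lt;p&gt;Behind the scenes, an RDS secret is stored as a JSON document in the SecretString field. The sketch below shows the standard field names (all values are placeholders), which is why the retrieval code later can read secret['host'], secret['username'], and secret['password']:&lt;/p&gt;

```python
import json

# Example SecretString as stored for an RDS secret (placeholder values only).
secret_string = json.dumps({
    "username": "admin",
    "password": "example-password",
    "engine": "mysql",
    "host": "mydb.abc123.us-east-1.rds.amazonaws.com",
    "port": 3306,
    "dbname": "appdb",
})

secret = json.loads(secret_string)
print(secret["host"], secret["port"])
```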

&lt;p&gt;✅ &lt;em&gt;Step 3: Create a VPC Interface Endpoint&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate to VPC &amp;gt; Endpoints &amp;gt; Create Endpoint.&lt;/li&gt;
&lt;li&gt;Choose com.amazonaws.region.secretsmanager.&lt;/li&gt;
&lt;li&gt;Select the VPC and subnets from each AZ.&lt;/li&gt;
&lt;li&gt;Attach a Security Group that allows Lambda functions in the VPC to access the endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This enables AWS PrivateLink connectivity to Secrets Manager.&lt;/p&gt;

&lt;p&gt;✅ &lt;em&gt;Step 4: Create a Lambda to Retrieve Secrets&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a Lambda function inside the same VPC.&lt;/li&gt;
&lt;li&gt;Attach IAM permissions: secretsmanager:GetSecretValue (plus rds-db:connect if you use IAM database authentication).&lt;/li&gt;
&lt;li&gt;Install PyMySQL library via Lambda layers or zip package.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sample code snippet:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import pymysql
import json
import os

def lambda_handler(event, context):
    secret_name = os.environ['SECRET_NAME']
    region = os.environ['AWS_REGION']
    client = boto3.client('secretsmanager', region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    secret = json.loads(response['SecretString'])

    connection = pymysql.connect(
        host=secret['host'],
        user=secret['username'],
        password=secret['password'],
        db='your_db_name',
        port=3306
    )
    print("Connection successful!")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the Lambda to ensure it connects to RDS using the secret.&lt;/p&gt;

&lt;p&gt;✅ &lt;em&gt;Step 5: Set Up Rotation Lambda&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a second Lambda function to handle rotation logic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This function follows four steps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;createSecret: Generate new credentials.&lt;/li&gt;
&lt;li&gt;setSecret: Update credentials in the RDS DB.&lt;/li&gt;
&lt;li&gt;testSecret: Test connectivity.&lt;/li&gt;
&lt;li&gt;finishSecret: Promote new secret to current.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Grant the following resource-based policy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Sid": "secret-rotation",
  "Effect": "Allow",
  "Principal": {
    "Service": "secretsmanager.amazonaws.com"
  },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:region:account-id:function:rotation-function-name"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Add the PyMySQL layer as a custom Lambda layer.&lt;/p&gt;

&lt;p&gt;✅ &lt;em&gt;Step 6: Enable Secret Rotation&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Go to the secret created in Step 2.&lt;/li&gt;
&lt;li&gt;Enable automatic rotation.&lt;/li&gt;
&lt;li&gt;Assign the Lambda rotation function from Step 5.&lt;/li&gt;
&lt;li&gt;Choose a rotation interval (e.g., every 30 days).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;✅ &lt;em&gt;Step 7: Test Secret Rotation&lt;/em&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trigger a manual rotation.&lt;/li&gt;
&lt;li&gt;Check the rotation function’s logs in CloudWatch Logs.&lt;/li&gt;
&lt;li&gt;Confirm the secret was rotated to a new version.&lt;/li&gt;
&lt;li&gt;Confirm the rotation function updated the password in the RDS database.&lt;/li&gt;
&lt;li&gt;Confirm your application still connects successfully using the new credentials.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;🧪 Validating Rotation via AWS CLI&lt;/strong&gt;&lt;br&gt;
You can also retrieve secrets using the CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws secretsmanager get-secret-value --secret-id MyRDS/ProdApp
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;🔄 How Secret Rotation Works (Under the Hood)&lt;/strong&gt;&lt;br&gt;
AWS Secrets Manager uses Lambda and version labels:&lt;/p&gt;

&lt;p&gt;AWSPENDING: New version created by rotation function.&lt;br&gt;
AWSCURRENT: Active version used by applications.&lt;br&gt;
AWSPREVIOUS: Previous version before rotation.&lt;/p&gt;

&lt;p&gt;Each rotation function must:&lt;br&gt;
Generate new credentials.&lt;br&gt;
Update the RDS DB.&lt;br&gt;
Test connectivity.&lt;br&gt;
Finalize the rotation and update version labels.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;✅ Benefits of This Approach&lt;/strong&gt;&lt;br&gt;
✔ No hardcoded credentials in code&lt;br&gt;
✔ Automated compliance with rotation policies&lt;br&gt;
✔ Reduced risk of credential leakage&lt;br&gt;
✔ Secure, programmatic access to secrets from inside VPC&lt;br&gt;
✔ Seamless integration with Lambda and RDS&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;📝 Final Thoughts&lt;/strong&gt;&lt;br&gt;
Secrets management isn’t just a best practice—it’s a necessity. AWS Secrets Manager, coupled with Lambda and RDS, provides a powerful solution to automate secret handling and reduce security risks.&lt;/p&gt;

&lt;p&gt;I’ve personally implemented this solution to securely manage database access for cloud applications—and it's become a foundational security building block in my AWS learning journey.&lt;/p&gt;

&lt;p&gt;🔒 &lt;em&gt;Stay secure, automate wisely, and always validate with logs!&lt;/em&gt;&lt;br&gt;
Let me know in comments if you’ve tried this or plan to implement it.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🔐 Encrypting and Decrypting Files Using AWS KMS and Data Keys</title>
      <dc:creator>gurpreet kaur</dc:creator>
      <pubDate>Thu, 17 Jul 2025 22:06:38 +0000</pubDate>
      <link>https://dev.to/gurpreet_kaur_29/encrypting-and-decrypting-files-using-aws-kms-and-data-keys-4m8b</link>
      <guid>https://dev.to/gurpreet_kaur_29/encrypting-and-decrypting-files-using-aws-kms-and-data-keys-4m8b</guid>
      <description>&lt;p&gt;A Practical Guide to Using KMS Keys, Data Keys, and Envelope Encryption in AWS&lt;/p&gt;

&lt;p&gt;In today’s cloud-first world, security is paramount — especially when handling sensitive data. AWS Key Management Service (KMS) provides a secure and scalable way to create and manage cryptographic keys and control their usage across a wide range of AWS services and applications.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll walk through a step-by-step process to encrypt and decrypt files using AWS KMS keys and generated data keys, covering both symmetric encryption and envelope encryption scenarios.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Step 1: Create an EC2 Instance with IAM Role&lt;/strong&gt;&lt;br&gt;
Launch an EC2 instance and attach an IAM role that allows it to communicate with other AWS services securely.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Step 2: Attach IAM Policy to EC2 Role&lt;/strong&gt;&lt;br&gt;
Add the following IAM policy to allow access to EC2, KMS, and S3:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeInstances",
        "kms:*",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📝 Note: IAM policies define what actions identities (like roles or users) are allowed to perform on resources.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Step 3: Create a Symmetric KMS Key&lt;/strong&gt;&lt;br&gt;
In the AWS KMS console, create a symmetric key in a specific region.&lt;br&gt;
Symmetric keys use the same key for both encryption and decryption. These 256-bit keys never leave AWS unencrypted.&lt;/p&gt;

&lt;p&gt;🔐 &lt;em&gt;Key Concepts:&lt;/em&gt;&lt;br&gt;
Symmetric Key: Same key for encrypt/decrypt.&lt;br&gt;
Asymmetric Key: Public-private key pair for encryption or signing.&lt;br&gt;
Key Rotation: Ensure it is enabled for better key lifecycle security.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Step 4: Review the Automatically Generated Key Policy&lt;/strong&gt;&lt;br&gt;
When assigning Key Administrators and Key Users, AWS automatically generates a policy like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Sid": "Allow use of the key",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::&amp;lt;account-id&amp;gt;:role/YourRole"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:GenerateDataKey*"
  ],
  "Resource": "*"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Key policy governs access control to KMS keys.&lt;/p&gt;

&lt;p&gt;📌 &lt;strong&gt;Step 5: Connect to EC2 Using Session Manager&lt;/strong&gt;&lt;br&gt;
Use AWS Session Manager to connect securely without exposing ports.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo su ec2-user
cd ../../home/ec2-user
echo "This is my Secret Text to encrypt." &amp;gt; samplesecret.txt
cat samplesecret.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔐 &lt;em&gt;Generate a Data Key Using AWS KMS&lt;/em&gt;&lt;br&gt;
Run the command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms generate-data-key \
  --key-id alias/myKMSKey \
  --key-spec AES_256 \
  --encryption-context project=practice \
  --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This returns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A plaintext data key (for encryption)&lt;/li&gt;
&lt;li&gt;A ciphertext blob (encrypted data key, safe to store)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Save both to files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo '&amp;lt;PlaintextKey&amp;gt;' | base64 --decode &amp;gt; datakeyPlainText.txt
echo '&amp;lt;CiphertextBlob&amp;gt;' | base64 --decode &amp;gt; datakeyEncrypted.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔐 &lt;em&gt;Encrypt a File Using the Data Key&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;openssl enc -e -aes256 -in samplesecret.txt -out encryptedSecret.txt -k fileb://datakeyPlainText.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔍 &lt;em&gt;View encrypted data:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;more encryptedSecret.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🧹 &lt;em&gt;Remove plaintext key for security:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;rm datakeyPlainText.txt

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🔓 &lt;em&gt;Decrypt the File&lt;/em&gt;&lt;br&gt;
First, decrypt the data key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms decrypt \
  --encryption-context project=practice \
  --ciphertext-blob fileb://datakeyEncrypted.txt \
  --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then decode and use it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo '&amp;lt;DecryptedPlaintextKey&amp;gt;' | base64 --decode &amp;gt; datakeyPlainText.txt
openssl enc -d -aes256 -in encryptedSecret.txt -kfile datakeyPlainText.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;🎉 &lt;em&gt;You should now see the original text — decryption successful!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;🛠️ &lt;em&gt;Encrypting Without Generating Data Keys (Direct KMS Encryption)&lt;/em&gt;&lt;br&gt;
Create a new file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo "New secret file: encrypt without using a data key." &amp;gt; NewSecretFile.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Encrypt it directly using your KMS key:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms encrypt \
  --key-id alias/myKMSKey \
  --plaintext fileb://NewSecretFile.txt \
  --encryption-context project=practice \
  --output text \
  --query CiphertextBlob \
  --region us-east-1 | base64 --decode &amp;gt; NewSecretsEncryptedFile.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Decrypt it with:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws kms decrypt \
  --ciphertext-blob fileb://NewSecretsEncryptedFile.txt \
  --encryption-context project=practice \
  --output text \
  --query Plaintext \
  --region us-east-1 | base64 --decode &amp;gt; NewSecretsDecryptedFile.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Verify:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat NewSecretsDecryptedFile.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;📌 &lt;strong&gt;Important Notes on AWS KMS&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS KMS encrypts at most 4 KB of data directly per request.&lt;/li&gt;
&lt;li&gt;Use envelope encryption: encrypt large data with a data key, and encrypt the data key with KMS.&lt;/li&gt;
&lt;li&gt;Always delete plaintext data keys from disk and memory after use.&lt;/li&gt;
&lt;li&gt;KMS does not store or manage your data keys; you manage them yourself using tools like OpenSSL or the AWS Encryption SDK.&lt;/li&gt;
&lt;/ul&gt;
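&lt;p&gt;The envelope pattern can also be sketched entirely locally with OpenSSL, with no AWS calls: a random hex string stands in for the data key that KMS would normally generate and wrap, and the file names are purely illustrative:&lt;/p&gt;

```shell
# Local envelope-encryption sketch: a random "data key" encrypts the payload.
# In a real workflow, KMS would wrap this key instead of it sitting on disk.
openssl rand -hex 32 > datakey.hex                  # stand-in for a KMS data key
echo "large payload" > payload.txt
openssl enc -e -aes-256-cbc -pbkdf2 -in payload.txt -out payload.enc -kfile datakey.hex
decrypted=$(openssl enc -d -aes-256-cbc -pbkdf2 -in payload.enc -kfile datakey.hex)
echo "$decrypted"
rm datakey.hex                                      # discard the plaintext key after use
```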

&lt;p&gt;🔐 &lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
Using AWS KMS with data keys allows you to build highly secure, scalable encryption workflows. Whether you’re protecting secrets, encrypting files, or automating secure data flows, mastering KMS is a crucial cloud skill.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>🔐 Amazon S3 Security: Best Practices for Data Protection</title>
      <dc:creator>gurpreet kaur</dc:creator>
      <pubDate>Sun, 13 Jul 2025 19:31:02 +0000</pubDate>
      <link>https://dev.to/gurpreet_kaur_29/amazon-s3-security-best-practices-for-data-protection-4b0n</link>
      <guid>https://dev.to/gurpreet_kaur_29/amazon-s3-security-best-practices-for-data-protection-4b0n</guid>
      <description>&lt;p&gt;As cloud adoption continues to rise, securing data stored in Amazon S3 becomes a top priority for organizations. This post explores a comprehensive approach to S3 security using encryption, versioning, replication, and lifecycle policies—ensuring your data is protected from unauthorized access, loss, or corruption.&lt;/p&gt;

&lt;p&gt;🛡️ &lt;strong&gt;Core Security Features Implemented&lt;/strong&gt;&lt;br&gt;
The following solution demonstrates how to secure files stored in Amazon S3:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Encryption at Rest&lt;/em&gt;:&lt;br&gt;
Files uploaded to the primary S3 bucket are automatically encrypted using Server-Side Encryption with S3-Managed Keys (SSE-S3).&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Versioning Enabled&lt;/em&gt;:&lt;br&gt;
When users upload updated versions of a file, Amazon S3 maintains previous versions. This protects against accidental deletions or overwrites.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Cross-Region Replication (CRR)&lt;/em&gt;:&lt;br&gt;
Live replication ensures that every new object uploaded to the primary S3 bucket is automatically copied to a secondary backup bucket in another region.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Server Access Logging&lt;/em&gt;:&lt;br&gt;
All requests made to the primary bucket are logged to a designated logging bucket. These logs are essential for security audits and access tracking.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Lifecycle Policies for Archival&lt;/em&gt;:&lt;br&gt;
Older versions of files are automatically transitioned to a cheaper storage class using S3 Lifecycle rules, reducing storage costs while maintaining data durability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlydpdyt4imdojoh7oz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frlydpdyt4imdojoh7oz4.png" alt=" " width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📁 &lt;strong&gt;Step-by-Step: S3 Security Lab&lt;/strong&gt;&lt;br&gt;
To implement this S3 security setup, follow these practical steps:&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Create an S3 Bucket&lt;/em&gt;:&lt;br&gt;
Enable encryption, versioning, and server access logging during setup.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltuuzm5xfvvhhhby3yt0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fltuuzm5xfvvhhhby3yt0.png" alt=" " width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wstzxngnqwkq35mkus4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3wstzxngnqwkq35mkus4.png" alt=" " width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Configure a Lifecycle Policy&lt;/em&gt;:&lt;br&gt;
Define rules to automatically transition older versions of files to Amazon S3 Glacier or Glacier Deep Archive for long-term storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fylt2tv7jzb24alur0m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8fylt2tv7jzb24alur0m.png" alt=" " width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz99wqm9ntp302rfhg1e9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz99wqm9ntp302rfhg1e9.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Enable Server Access Logging&lt;/em&gt;:&lt;br&gt;
Choose a destination logging bucket. Grant Log Delivery Group write access to the target bucket to receive logs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpobeitkk88bb4doj0dg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvpobeitkk88bb4doj0dg.png" alt=" " width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs35pvrdwbmzn5ptktxyl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs35pvrdwbmzn5ptktxyl.png" alt=" " width="800" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8964q51j6vjvgl03ap0f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8964q51j6vjvgl03ap0f.png" alt=" " width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Upload a File&lt;/em&gt;:&lt;br&gt;
Upload a sample file (e.g., record.txt) containing private information. It will be encrypted automatically using SSE-S3.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqoib21wyetkjhscg4vc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faqoib21wyetkjhscg4vc.png" alt=" " width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Update and Re-Upload the File&lt;/em&gt;:&lt;br&gt;
Modify the file and upload it again. S3 will retain the previous version while treating the new upload as the current version.&lt;/p&gt;
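&lt;p&gt;You can confirm that both versions exist from the CLI (a sketch; the bucket name my-primary-bucket is a placeholder for your own primary bucket):&lt;/p&gt;

```shell
# List current and noncurrent versions of the re-uploaded object;
# IsLatest is true only for the most recent upload.
aws s3api list-object-versions --bucket my-primary-bucket --prefix record.txt \
  --query 'Versions[].{Key:Key,VersionId:VersionId,IsLatest:IsLatest}'
```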

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6fxgd2kv07nu803jnek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb6fxgd2kv07nu803jnek.png" alt=" " width="800" height="192"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Enable Cross-Region Replication&lt;/em&gt;:&lt;br&gt;
Configure replication rules to copy files from the primary bucket to the backup bucket automatically.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw7ckl36wk2qzuqevwih.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw7ckl36wk2qzuqevwih.png" alt=" " width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4qou2qto0l8z35icjz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb4qou2qto0l8z35icjz4.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaiv56bxcsn05k1rqujl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feaiv56bxcsn05k1rqujl.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔐 &lt;strong&gt;Managing Access with ACLs&lt;/strong&gt;&lt;br&gt;
Amazon S3 provides Access Control Lists (ACLs) to define access permissions at the bucket and object levels.&lt;br&gt;
Historically, if another AWS account uploaded an object to your bucket, that account owned the object (newly created buckets now disable ACLs by default and enforce bucket-owner ownership).&lt;br&gt;
ACLs let you grant read/write access to specific AWS accounts or predefined groups.&lt;/p&gt;

&lt;p&gt;📦 &lt;strong&gt;Understanding S3 Lifecycle Policies&lt;/strong&gt;&lt;br&gt;
S3 Lifecycle configurations automate storage management:&lt;br&gt;
Transition Actions: Move objects to different storage classes (e.g., S3 Standard → Glacier).&lt;br&gt;
Expiration Actions: Automatically delete outdated or unnecessary objects.&lt;/p&gt;
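&lt;p&gt;A transition rule for noncurrent versions can be applied from the CLI; the sketch below assumes a placeholder bucket name, and the 30-day window is illustrative:&lt;/p&gt;

```shell
# Write a lifecycle rule that moves noncurrent object versions to Glacier
# after 30 days, then apply it to the bucket.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-primary-bucket --lifecycle-configuration file://lifecycle.json
```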

&lt;p&gt;❄️ &lt;strong&gt;Amazon S3 Glacier for Archival&lt;/strong&gt;&lt;br&gt;
For long-term storage and compliance requirements:&lt;br&gt;
S3 Glacier Flexible Retrieval provides low-cost storage with expedited, standard, and bulk retrieval options.&lt;br&gt;
S3 Glacier Deep Archive is designed for data that is rarely accessed but must be retained for years.&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Practice Lab Goals&lt;/strong&gt;&lt;br&gt;
To reinforce your understanding, here are your practice goals:&lt;/p&gt;

&lt;p&gt;✅ Create an Amazon S3 bucket with logging, encryption, and versioning.&lt;/p&gt;

&lt;p&gt;✅ Upload and re-upload a file to simulate version control.&lt;/p&gt;

&lt;p&gt;✅ Enable replication to a secondary bucket.&lt;/p&gt;

&lt;p&gt;✅ Create an S3 Lifecycle rule to transition previous versions to an archival class.&lt;/p&gt;

&lt;p&gt;✅ View and analyze S3 server access logs.&lt;/p&gt;

&lt;p&gt;💡 &lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
Implementing S3 security best practices not only protects sensitive data but also helps you meet compliance and governance requirements. By combining encryption, access control, versioning, replication, and lifecycle policies, you're building a highly resilient and secure storage strategy within AWS.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Custom Amazon VPC from Scratch 🚀</title>
      <dc:creator>gurpreet kaur</dc:creator>
      <pubDate>Wed, 25 Jun 2025 12:47:40 +0000</pubDate>
      <link>https://dev.to/gurpreet_kaur_29/building-a-custom-amazon-vpc-from-scratch-15ek</link>
      <guid>https://dev.to/gurpreet_kaur_29/building-a-custom-amazon-vpc-from-scratch-15ek</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj6srafz70irmfwwv79h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj6srafz70irmfwwv79h.png" alt="Image description" width="800" height="688"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Amazon VPC?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Virtual Private Cloud (VPC) is a logically isolated section of the AWS Cloud where you can define and control your networking environment. It allows you to launch AWS resources securely, controlling access and connectivity without exposing everything to the public internet.&lt;br&gt;
In short, VPCs enable secure, private communication between your cloud resources while providing flexibility in network design.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;How I Used Amazon VPC in This Project&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Today, I built a custom Amazon VPC setup that included:&lt;br&gt;
• A public subnet with internet access&lt;br&gt;
• A private subnet isolated from the internet&lt;br&gt;
• Internet Gateway to connect the VPC to the internet&lt;br&gt;
• Route tables for directing traffic between subnets and internet&lt;br&gt;
• Security Groups and Network ACLs for layered security&lt;br&gt;
• EC2 instances deployed in both public and private subnets&lt;br&gt;
• Connectivity setup between EC2 instances for internal communication&lt;/p&gt;

&lt;p&gt;This project gave me hands-on experience with networking fundamentals on AWS and how different VPC components work together.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Key Concepts Explored&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Virtual Private Cloud (VPC)&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
A VPC is your private network within AWS. It uses a CIDR block (IP address range) to organize resources and enables secure resource communication without exposing them publicly. AWS provides default VPCs, but creating your own gives you full control.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Subnets&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Subnets partition the VPC into smaller network segments.&lt;br&gt;
• Public subnet: Connected to the internet via an Internet Gateway; EC2 instances here can have public IPs.&lt;br&gt;
• Private subnet: No direct internet access; used for sensitive or backend resources.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Internet Gateway&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
This gateway attaches to your VPC and allows resources in public subnets to access the internet and receive inbound connections.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Route Tables&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Route tables control the traffic flow within your VPC. They contain rules (routes) that determine where network traffic is directed—whether to other subnets, the internet gateway, or virtual private gateways. Each subnet is associated with a route table that defines how its traffic is routed.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Security Groups&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Security Groups act as virtual firewalls at the instance level. They control inbound and outbound traffic by allowing or denying specific IP addresses, ports, and protocols. Unlike NACLs, security groups are stateful, meaning they automatically allow response traffic for requests initiated by the instance.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Network ACLs (NACLs)&lt;/strong&gt;&lt;/em&gt;&lt;br&gt;
Network Access Control Lists act as a firewall at the subnet level. They provide an additional layer of security by controlling inbound and outbound traffic for subnets using stateless rules, which are evaluated before security groups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nntw5p4w5599x1yv3id.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1nntw5p4w5599x1yv3id.png" alt="Image description" width="786" height="161"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Step-by-Step: What I Built&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;☁️ Create a VPC: You've taken your first steps by setting up a Virtual Private Cloud (VPC) using Amazon VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgmeuxoz0h6wscubrsc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwgmeuxoz0h6wscubrsc1.png" alt="Image description" width="685" height="517"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🥅 Create subnets: Moving deeper into your VPC, you created subnets, which act like neighborhoods within your city, each with unique access rules. You learned the difference between public and private subnets and set up a subnet to allow instances within it to automatically receive public IP addresses, making them accessible from the internet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1iyii8hizyin7t9dtgo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj1iyii8hizyin7t9dtgo.png" alt="Image description" width="800" height="113"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚪 Set up an internet gateway: Lastly, you added an internet gateway to your VPC, acting as the main gate that allows data to flow in and out. This setup is essential for any applications that require internet access, such as web servers. You've configured the gateway and linked it to your VPC, ensuring your public instances can reach the outside world and vice versa.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7baopgylkpupokjxoy5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm7baopgylkpupokjxoy5.png" alt="Image description" width="800" height="89"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚏 Bonus - configure IP addresses and CIDR blocks: You've configured your VPC with an IPv4 CIDR block, understanding that IP addresses are like street addresses for your resources! You explored how different CIDR blocks dictate the size and scale of your VPC.&lt;/p&gt;

&lt;p&gt;🚏 Set up route tables: You configured a route table in your VPC to send Internet-bound traffic to your internet gateway, turning your subnet into a public subnet.&lt;/p&gt;
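&lt;p&gt;The same step can be sketched with the CLI; the rtb-, igw-, and subnet- IDs below are placeholders for the resources created in the console:&lt;/p&gt;

```shell
# Add the public route: send internet-bound traffic (0.0.0.0/0) to the
# internet gateway, then associate the route table with the public subnet.
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0
```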

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslcwe2wd3gxxz7o7gu9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslcwe2wd3gxxz7o7gu9c.png" alt="Image description" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👮‍♀️ Implement security groups: You created a security group to control inbound and outbound traffic at a resource level, specifying allowed IP addresses, protocols, and ports.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j5rt8vx4lnz3zjtse6h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3j5rt8vx4lnz3zjtse6h.png" alt="Image description" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📋 Deploy network ACLs: You set up network ACLs as an additional layer of security, managing both incoming and outgoing traffic at the subnet level.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46oyfzd3tsw8dck1fe2r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46oyfzd3tsw8dck1fe2r.png" alt="Image description" width="800" height="276"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚷 Create a private subnet: You created a new subnet and set its CIDR block to avoid an overlap with your public subnet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faefkgul8px396fpyatrh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faefkgul8px396fpyatrh.png" alt="Image description" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚧 Create a private route table: You also made this subnet private by assigning it to a dedicated route table that doesn't route traffic to an internet gateway!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpo02ea32roohwf9fl2t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqpo02ea32roohwf9fl2t.png" alt="Image description" width="800" height="258"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🚔 Create a private network ACL: Then, you set up custom network ACLs to control inbound and outbound traffic for this private subnet - denying all traffic by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdcjffjk6d3b6jrmz4fu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqdcjffjk6d3b6jrmz4fu.png" alt="Image description" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💻 Launch a public EC2 instance: You launched an EC2 instance in your public subnet, set up the appropriate AMI and instance type, and configured key pairs for secure access.&lt;/p&gt;

&lt;p&gt;🤐 Launch a private EC2 instance: You launched an EC2 instance in your private subnet, created a security group within the same flow, and used the same key pair for access.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;How Traffic Flows in My VPC&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• 💻 Client/User: A user enters the URL of your website into their web browser and hits enter.&lt;br&gt;
• 🚪 Internet Gateway: The request is sent from the user's browser through the internet and reaches your internet gateway.&lt;br&gt;
• 🌐 VPC: The internet gateway forwards the user's request to the VPC it's attached to.&lt;br&gt;
• 🚏 Route Table: Your VPC has a route table for your public subnet, which directs traffic to your EC2 instance hosting the website. Inside the VPC, the request is delivered over the route table's local route.&lt;br&gt;
• 📋 Network ACL: While en route to your EC2 instance, the request has to pass through the network ACL associated with your public subnet. The network ACL has an inbound rule (rule 100) that lets in traffic from anywhere (0.0.0.0/0), so your request is let through.&lt;br&gt;
• 🥅 Public Subnet: The request enters your public subnet and travels to your EC2 instance within the subnet.&lt;br&gt;
• 🔙 Response: The response is sent back along the reverse path, through the security group, subnet, route table, VPC, and internet gateway.&lt;br&gt;
This flow ensures that traffic is filtered and secured at multiple points.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What I Learned&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;• The importance of subnet segmentation for security and organization&lt;br&gt;
• How routing and gateways connect your VPC to the internet safely&lt;br&gt;
• The role of Security Groups and Network ACLs in protecting your cloud environment&lt;br&gt;
• Setting up EC2 instances in both public and private contexts&lt;br&gt;
• Hands-on experience with configuring and visualizing network components in AWS&lt;/p&gt;




&lt;p&gt;Building this custom VPC gave me a foundational understanding of AWS networking essentials and prepared me for more complex cloud infrastructure projects ahead.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Have you worked with AWS VPCs before? What’s your go-to architecture setup? Let me know in the comments!&lt;/strong&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>💬 Building BankerBot: A Conversational Banking Assistant Using Amazon Lex, Lambda &amp; CloudFormation</title>
      <dc:creator>gurpreet kaur</dc:creator>
      <pubDate>Sat, 14 Jun 2025 19:03:33 +0000</pubDate>
      <link>https://dev.to/gurpreet_kaur_29/building-bankerbot-a-conversational-banking-assistant-using-amazon-lex-lambda-cloudformation-56m3</link>
      <guid>https://dev.to/gurpreet_kaur_29/building-bankerbot-a-conversational-banking-assistant-using-amazon-lex-lambda-cloudformation-56m3</guid>
      <description>&lt;p&gt;In today’s fast-paced digital world, users expect quick, intelligent, and accessible banking services—often without speaking to a human agent. That’s exactly what inspired BankerBot: a cloud-native chatbot designed to help users check account balances and transfer funds between accounts seamlessly via natural conversation.   &lt;/p&gt;

&lt;p&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybandjz2msdc0gak5ewu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fybandjz2msdc0gak5ewu.png" alt="Image description" width="796" height="409"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🚀 &lt;strong&gt;Project Overview&lt;/strong&gt;&lt;br&gt;
BankerBot is a voice/text-enabled chatbot developed using Amazon Lex, capable of assisting users with everyday banking queries such as:&lt;br&gt;
• Checking account balances for credit, savings, and checking accounts&lt;br&gt;
• Initiating fund transfers between accounts&lt;br&gt;
• Responding to general inquiries with fallback handling&lt;br&gt;
The bot was integrated with AWS Lambda to handle backend logic and uses CloudFormation for automated deployment—making it a robust and scalable cloud-based solution.&lt;/p&gt;




&lt;p&gt;🧰 &lt;strong&gt;Tools &amp;amp; Services Used&lt;/strong&gt;&lt;br&gt;
• Amazon Lex – Conversational interface to understand and respond to user input&lt;br&gt;
• AWS Lambda – Serverless compute to fulfill intent logic&lt;br&gt;
• Amazon CloudWatch – Monitoring logs and debugging&lt;br&gt;
• AWS CloudFormation – Infrastructure-as-Code to deploy chatbot components&lt;/p&gt;




&lt;p&gt;🛠️ &lt;strong&gt;Step-by-Step Implementation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;📌 &lt;em&gt;Project 1: Welcome &amp;amp; Fallback Intents&lt;/em&gt;&lt;br&gt;
• Defined a basic WelcomeIntent with a friendly greeting.&lt;br&gt;
• Configured FallbackIntent to catch unrecognized input.&lt;br&gt;
• Used MessageGroups to randomize welcome messages.&lt;br&gt;
• Tested interactions using both text and speech input.&lt;/p&gt;
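
&lt;p&gt;Conceptually, a MessageGroup with several variations behaves like a random pick from a list. A minimal sketch (the greetings below are made up for illustration):&lt;/p&gt;

```python
import random

# Sketch of what a Lex MessageGroup with multiple variations does:
# each turn, one greeting is chosen at random. Messages are hypothetical.
WELCOME_MESSAGES = [
    "Hi! I'm BankerBot. How can I help you today?",
    "Welcome back! What would you like to do?",
    "Hello! Ask me about balances or transfers.",
]

def pick_welcome(rng=random):
    """Return one randomly chosen welcome message."""
    return rng.choice(WELCOME_MESSAGES)
```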

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrg2dbetvs0nvnpprivs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrg2dbetvs0nvnpprivs.png" alt="Image description" width="720" height="243"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;💳 &lt;em&gt;Project 2: CheckBalance Intent&lt;/em&gt;&lt;br&gt;
• Introduced a custom slot type for account types (credit, savings, checking).&lt;br&gt;
• Captured slot values directly from the user’s utterance instead of prompting for them separately.&lt;br&gt;
• Allowed users to say things like "What's my savings balance?" so the bot infers the account type from context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y3u2f0ag2s65scbcmiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6y3u2f0ag2s65scbcmiu.png" alt="Image description" width="762" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧠 &lt;em&gt;Project 3: Lambda Integration&lt;/em&gt;&lt;br&gt;
• Created and deployed a Lambda function to fulfill backend logic.&lt;br&gt;
• Integrated the Lambda function with the chatbot using code hooks.&lt;br&gt;
• Enabled the bot to return live balance details by simulating a backend response.&lt;/p&gt;
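
&lt;p&gt;A minimal sketch of such a fulfillment Lambda, assuming the Lex V2 event shape and a slot named accountType (the slot name and the simulated balance are assumptions for illustration, not the lab's exact code):&lt;/p&gt;

```python
import random

def lambda_handler(event, context):
    """Lex V2 fulfillment sketch: read the accountType slot and return
    a simulated balance (no real backend is queried)."""
    intent = event["sessionState"]["intent"]
    account = intent["slots"]["accountType"]["value"]["interpretedValue"]
    balance = round(random.uniform(100, 5000), 2)   # simulated backend lookup
    intent["state"] = "Fulfilled"
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},      # end the dialog turn
            "intent": intent,
        },
        "messages": [
            {"contentType": "PlainText",
             "content": f"Your {account} balance is ${balance}"},
        ],
    }
```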

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa94wpz0mvvf7nn58fjyz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa94wpz0mvvf7nn58fjyz.png" alt="Image description" width="700" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔁 &lt;em&gt;Project 4: Context Carryover&lt;/em&gt;&lt;br&gt;
• Implemented slot value carryover between related intents.&lt;br&gt;
• Maintained context across multi-turn conversations, improving UX.&lt;/p&gt;
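
&lt;p&gt;Slot carryover can be approximated by copying resolved slot values into session attributes, so a follow-up intent can reuse them without re-prompting the user. A sketch (the helper name is hypothetical; the slot shape follows Lex V2):&lt;/p&gt;

```python
def carry_over_slots(session_attributes, resolved_slots):
    """Copy each resolved Lex V2 slot value into session attributes so a
    follow-up intent can read it back instead of re-prompting the user."""
    for name, slot in resolved_slots.items():
        if slot is not None:                        # skip unfilled slots
            session_attributes[name] = slot["value"]["interpretedValue"]
    return session_attributes
```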

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uljtz7ulcievw6ce62w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4uljtz7ulcievw6ce62w.png" alt="Image description" width="680" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔄 &lt;em&gt;Project 5: Fund Transfer Flow&lt;/em&gt;&lt;br&gt;
• Configured multiple slots to collect transfer details (source, destination, amount).&lt;br&gt;
• Added confirmation prompts before executing simulated transfers.&lt;br&gt;
• Enhanced user flow using Lex's visual conversation builder.&lt;br&gt;
• Automated deployment using CloudFormation templates.&lt;/p&gt;
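
&lt;p&gt;The confirmation step can be sketched as a dialog code hook: if the user has not yet confirmed, return a ConfirmIntent dialog action so Lex reads the details back and asks; otherwise close the intent. This assumes the Lex V2 event shape, and the function name is hypothetical:&lt;/p&gt;

```python
def transfer_dialog_hook(event):
    """Confirmation sketch for a TransferFunds-style intent."""
    intent = event["sessionState"]["intent"]
    if intent.get("confirmationState", "None") == "None":
        # Not yet confirmed: have Lex read back the details and ask.
        return {"sessionState": {"dialogAction": {"type": "ConfirmIntent"},
                                 "intent": intent}}
    # Confirmed or denied: close the intent accordingly.
    confirmed = intent["confirmationState"] == "Confirmed"
    intent["state"] = "Fulfilled" if confirmed else "Failed"
    return {"sessionState": {"dialogAction": {"type": "Close"},
                             "intent": intent}}
```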

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq0n8ed6dp1519ljqqdi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwq0n8ed6dp1519ljqqdi.png" alt="Image description" width="800" height="259"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;🧩 &lt;strong&gt;Challenges &amp;amp; Learnings&lt;/strong&gt;&lt;br&gt;
One major hurdle emerged during Lambda integration:&lt;br&gt;
After deploying the bot, Amazon Lex couldn't invoke the Lambda function because it lacked permission to do so.&lt;br&gt;
Solution:&lt;br&gt;
In the Lambda function’s Permissions tab, I added a resource-based policy statement allowing Lex to invoke the function, which resolved the issue and restored full bot functionality.&lt;br&gt;
This experience taught me the importance of cross-service permissions in AWS and how essential it is to understand IAM policies when working in a serverless environment.&lt;/p&gt;
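
&lt;p&gt;The fix corresponds to a resource-based policy statement along these lines (REGION, ACCOUNT_ID, the function name, and the bot-alias ARN are placeholders; for Lex V2 the invoking service principal is lexv2.amazonaws.com):&lt;/p&gt;

```json
{
  "Sid": "AllowLexToInvokeBankerBot",
  "Effect": "Allow",
  "Principal": {"Service": "lexv2.amazonaws.com"},
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:BankerBotFunction",
  "Condition": {
    "ArnLike": {"AWS:SourceArn": "arn:aws:lex:REGION:ACCOUNT_ID:bot-alias/BOT_ID/ALIAS_ID"}
  }
}
```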




&lt;p&gt;📘 &lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Projects like BankerBot demonstrate the powerful synergy between AI and cloud services. By combining natural language processing (Amazon Lex) with serverless execution (AWS Lambda) and IaC (CloudFormation), we can rapidly develop intelligent applications that scale effortlessly.&lt;br&gt;
As AI becomes more embedded into user-facing systems, cloud-native chatbot development offers a glimpse into the future of customer experience, automation, and intelligent assistance—driving efficiency and user satisfaction across industries.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>banking</category>
      <category>cloudformation</category>
    </item>
  </channel>
</rss>
