<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gurudev Prasad Teketi</title>
    <description>The latest articles on DEV Community by Gurudev Prasad Teketi (@gurudev_prasadteketi).</description>
    <link>https://dev.to/gurudev_prasadteketi</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1503509%2F7dbf9b23-6220-4c83-8451-15739a8aabe9.png</url>
      <title>DEV Community: Gurudev Prasad Teketi</title>
      <link>https://dev.to/gurudev_prasadteketi</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gurudev_prasadteketi"/>
    <language>en</language>
    <item>
      <title>AWS Cost Reporting: Why Parquet Data Doesn't Match Your Console (And How to Fix It)</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Mon, 13 Oct 2025 20:46:18 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/aws-cost-reporting-why-parquet-data-doesnt-match-your-console-and-how-to-fix-it-5dlh</link>
      <guid>https://dev.to/gurudev_prasadteketi/aws-cost-reporting-why-parquet-data-doesnt-match-your-console-and-how-to-fix-it-5dlh</guid>
      <description>&lt;h2&gt;
  
  
  1. The Problem
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Real production issue: Lambda showing 10x higher costs than console&lt;/li&gt;
&lt;li&gt;User expectation vs reality ($1,262 console vs $12,918 Lambda)&lt;/li&gt;
&lt;/ul&gt;
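&lt;p&gt;As a quick sanity check (illustrative arithmetic only, using the figures above), the overall discrepancy works out to roughly 10x:&lt;/p&gt;

```python
# Ratio between the Lambda-reported and console-reported month-to-date
# totals quoted above (the article's numbers, not fresh data).
console_total = 1262.0   # USD, from the AWS console
lambda_total = 12918.0   # USD, from the parquet-based Lambda report

print(round(lambda_total / console_total, 1))  # 10.2
```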

&lt;h2&gt;
  
  
  2. Initial Investigation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Athena/parquet approach seemed logical&lt;/li&gt;
&lt;li&gt;Adjustment factors didn't solve service-level discrepancies&lt;/li&gt;
&lt;li&gt;Individual service ratios were inconsistent (4.0x to 5.7x)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Research &amp;amp; Discovery
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS documentation deep-dive&lt;/li&gt;
&lt;li&gt;Key differences between Cost Explorer API vs Cost and Usage Reports&lt;/li&gt;
&lt;li&gt;Data freshness, calculation methods, service groupings&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. The Solution
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Cost Explorer API implementation&lt;/li&gt;
&lt;li&gt;Exact console matching with proper error handling&lt;/li&gt;
&lt;li&gt;99.999% accuracy achieved&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Lessons Learned
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;When to use CUR vs Cost Explorer API&lt;/li&gt;
&lt;li&gt;AWS billing reconciliation best practices&lt;/li&gt;
&lt;li&gt;Real-world cost considerations ($0.30/month vs engineering time)&lt;/li&gt;
&lt;/ul&gt;
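&lt;p&gt;The $0.30/month figure follows directly from the Cost Explorer API's pay-per-request pricing (a sketch assuming the standard $0.01-per-request rate and one request per daily report; check current AWS pricing for your account):&lt;/p&gt;

```python
# Estimated monthly spend for the daily report, assuming the standard
# $0.01-per-request Cost Explorer API price (an assumption to verify).
PRICE_PER_REQUEST = 0.01  # USD per get_cost_and_usage request
REQUESTS_PER_DAY = 1      # one report run per day
DAYS_PER_MONTH = 30

monthly_cost = PRICE_PER_REQUEST * REQUESTS_PER_DAY * DAYS_PER_MONTH
print(f"~${monthly_cost:.2f}/month")  # ~$0.30/month
```

At that price, the trade-off against engineering hours spent maintaining adjustment factors is not close.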

&lt;h2&gt;
  
  
  Technical Value
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Practical code examples in Python/boto3&lt;/li&gt;
&lt;li&gt;AWS architecture decisions with reasoning&lt;/li&gt;
&lt;li&gt;Performance optimization and cost trade-offs&lt;/li&gt;
&lt;li&gt;Enterprise-grade error handling&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article will be especially valuable for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DevOps/FinOps teams building cost monitoring&lt;/li&gt;
&lt;li&gt;AWS architects choosing between cost data sources&lt;/li&gt;
&lt;li&gt;Developers debugging billing discrepancies&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;99.999% console accuracy using Cost Explorer API&lt;/li&gt;
&lt;li&gt;Enterprise error handling with proper logging&lt;/li&gt;
&lt;li&gt;Cost-effective: ~$0.30/month for daily reports&lt;/li&gt;
&lt;li&gt;No adjustment factors needed&lt;/li&gt;
&lt;li&gt;Production-ready with comprehensive validation&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import logging
import os
from datetime import datetime, timedelta
from typing import Dict, List
import urllib3

logger = logging.getLogger()
logger.setLevel(os.getenv('LOG_LEVEL', 'INFO'))

def lambda_handler(event, context):

    try:
        logger.info("Starting daily cost usage report processing with Cost Explorer API")

        # Initialize Cost Explorer client (must be us-east-1)
        ce_client = boto3.client('ce', region_name='us-east-1')

        # Get environment variables
        slack_webhook_url = os.getenv('SLACK_WEBHOOK_URL')

        if not slack_webhook_url:
            raise ValueError("SLACK_WEBHOOK_URL environment variable is required")

        # Get current month date range for Cost Explorer API
        today = datetime.now()
        month_start = datetime(today.year, today.month, 1)

        # Format dates for Cost Explorer API (YYYY-MM-DD format)
        start_date = month_start.strftime('%Y-%m-%d')
        end_date = (today + timedelta(days=1)).strftime('%Y-%m-%d')  # API end date is exclusive

        logger.info(f"Processing costs from {start_date} to {end_date} using Cost Explorer API")

        # Get cost data using Cost Explorer API
        cost_data = get_cost_data_from_cost_explorer(
            ce_client,
            start_date,
            end_date
        )

        # Format and send Slack message
        slack_message = format_slack_message(cost_data, month_start, today)
        send_slack_message(slack_webhook_url, slack_message)

        logger.info("Successfully completed daily cost usage report")

        return {
            'statusCode': 200,
            'body': json.dumps({
                'message': 'Cost report sent successfully',
                'total_cost': cost_data['total_cost'],
                'services_count': len(cost_data['services'])
            })
        }

    except Exception as e:
        logger.error(f"Error processing cost report: {str(e)}")
        return {
            'statusCode': 500,
            'body': json.dumps({
                'error': str(e)
            })
        }

def get_cost_data_from_cost_explorer(ce_client, start_date: str, end_date: str) -&amp;gt; Dict:
    logger.info(f"Querying Cost Explorer API from {start_date} to {end_date}")

    try:
        # Cost Explorer API call - matches exactly what AWS console uses
        response = ce_client.get_cost_and_usage(
            TimePeriod={
                'Start': start_date,
                'End': end_date
            },
            Granularity='MONTHLY',
            Metrics=['UnblendedCost'],
            GroupBy=[
                {
                    'Type': 'DIMENSION',
                    'Key': 'SERVICE'
                }
            ]
        )

        # Parse Cost Explorer API response
        services = []
        total_cost = 0.0

        # Validate response structure
        if not response.get('ResultsByTime') or len(response['ResultsByTime']) == 0:
            logger.warning("No cost data in Cost Explorer API response")
            return {'services': [], 'total_cost': 0.0, 'currency': 'USD'}

        # Get the first (and only) time period result
        result = response['ResultsByTime'][0]

        # Process each service group
        for group in result.get('Groups', []):
            try:
                service_name = group['Keys'][0] if group.get('Keys') else 'Unknown'

                # Get unblended cost amount
                cost_data = group.get('Metrics', {}).get('UnblendedCost', {})
                cost_amount_str = cost_data.get('Amount', '0')
                currency = cost_data.get('Unit', 'USD')

                # Safely parse cost amount
                try:
                    cost_amount = float(cost_amount_str)
                except (ValueError, TypeError):
                    logger.warning(f"Invalid cost amount for service {service_name}: {cost_amount_str}")
                    continue

                if cost_amount &amp;gt; 0.01:  # Only include services with meaningful cost
                    services.append({
                        'name': service_name,
                        'cost': round(cost_amount, 2),
                        'currency': currency
                    })
                    total_cost += cost_amount

            except Exception as service_error:
                logger.warning(f"Error processing service group: {str(service_error)}")
                continue

        # Sort by cost (highest first)
        services.sort(key=lambda x: x['cost'], reverse=True)

        logger.info(f"Retrieved {len(services)} services with total cost: ${total_cost:.2f}")

        return {
            'services': services,
            'total_cost': round(total_cost, 2),
            'currency': 'USD'
        }

    except Exception as e:
        logger.error(f"Error calling Cost Explorer API: {str(e)}")
        raise Exception(f"Cost Explorer API call failed: {str(e)}")

def format_slack_message(cost_data: Dict, start_date: datetime, end_date: datetime) -&amp;gt; Dict:

    total_cost = cost_data['total_cost']
    services = cost_data['services']
    currency = cost_data.get('currency', 'USD')

    # Header
    date_range = f"{start_date.strftime('%b %d')} - {end_date.strftime('%b %d, %Y')}"
    header = f"💰 *AWS Cost Report: {date_range}*\n"
    header += f"💳 *Total Cost: ${total_cost:,.2f} {currency}*\n\n"

    # Services table
    if services:
        services_text = "```\n"
        services_text += f"{'AWS Service':&amp;lt;30} {'Cost':&amp;lt;12}\n"
        services_text += f"{'-' * 30} {'-' * 12}\n"

        # Show top 15 services
        for service in services[:15]:
            cost = service['cost']
            name = service['name']

            # Format service name for readability
            display_name = name.replace('Amazon', '').replace('AWS', '').strip()
            if not display_name:
                display_name = name

            # Truncate long service names
            if len(display_name) &amp;gt; 29:
                display_name = display_name[:26] + "..."

            services_text += f"{display_name:&amp;lt;30} ${cost:&amp;gt;11,.2f}\n"

        services_text += f"{'-' * 30} {'-' * 12}\n"
        services_text += f"{'TOTAL':&amp;lt;30} ${total_cost:&amp;gt;11,.2f}\n"
        services_text += "```"

        if len(services) &amp;gt; 15:
            remaining = len(services) - 15
            services_text += f"\n_+ {remaining} more services with smaller costs_"
    else:
        services_text = "No cost data available for this period."

    message_text = header + services_text

    return {
        "text": f"AWS Daily Cost Report - ${total_cost:,.2f}",
        "blocks": [
            {
                "type": "section",
                "text": {
                    "type": "mrkdwn",
                    "text": message_text
                }
            }
        ]
    }

def send_slack_message(webhook_url: str, message: Dict):

    http = urllib3.PoolManager()

    logger.info("Sending message to Slack channel")

    response = http.request(
        'POST',
        webhook_url,
        body=json.dumps(message),
        headers={'Content-Type': 'application/json'}
    )

    if response.status == 200:
        logger.info("Successfully sent message to Slack")
    else:
        logger.error(f"Failed to send Slack message: {response.status}")
        raise Exception(f"Slack webhook failed with status {response.status}")

Required IAM Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ce:GetCostAndUsage",
                "ce:GetUsageReport"
            ],
            "Resource": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Environment Variables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SLACK_WEBHOOK_URL: Your Slack webhook URL&lt;/li&gt;
&lt;li&gt;LOG_LEVEL: INFO (optional)&lt;/li&gt;
&lt;/ul&gt;
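&lt;p&gt;The exclusive end-date handling is the part that most often trips people up, so here is the month-to-date window logic from the handler isolated into a small, testable helper (a sketch; the function name is illustrative):&lt;/p&gt;

```python
from datetime import datetime, timedelta

def month_to_date_range(today):
    # Start of the current month; the Cost Explorer End date is exclusive,
    # so tomorrow's date is needed to include all of today's usage.
    start = datetime(today.year, today.month, 1).strftime("%Y-%m-%d")
    end = (today + timedelta(days=1)).strftime("%Y-%m-%d")
    return start, end

print(month_to_date_range(datetime(2025, 10, 13)))
# ('2025-10-01', '2025-10-14')
```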

</description>
      <category>api</category>
      <category>aws</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Building a True Dual-Destination Analytics Pipeline: Real-Time Streaming with S3 Backup and Recovery</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Fri, 10 Oct 2025 19:26:36 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/building-a-true-dual-destination-analytics-pipeline-real-time-streaming-with-s3-backup-and-recovery-php</link>
      <guid>https://dev.to/gurudev_prasadteketi/building-a-true-dual-destination-analytics-pipeline-real-time-streaming-with-s3-backup-and-recovery-php</guid>
      <description>&lt;h2&gt;
  
  
  Executive Summary
&lt;/h2&gt;

&lt;p&gt;This article details the implementation of a dual-destination analytics pipeline that streams data to AWS Redshift Serverless in real time while writing a parallel copy to S3 for backup and recovery. The architecture delivers immediate analytics insights together with comprehensive data protection and quality-assurance mechanisms.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Objective
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Primary Goal&lt;/strong&gt;: Design and implement a resilient dual-destination data pipeline that delivers real-time analytics capabilities while maintaining comprehensive data backup, recovery, and quality assurance processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Requirements&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-time analytics&lt;/strong&gt; with sub-5-minute data availability&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive data backup&lt;/strong&gt; in S3 for recovery and compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero data loss&lt;/strong&gt; guarantee during failures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data quality assurance&lt;/strong&gt; with validation and reconciliation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-effective&lt;/strong&gt; architecture balancing performance and expenses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable&lt;/strong&gt; solution supporting growing data volumes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational simplicity&lt;/strong&gt; with automated monitoring and recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;h3&gt;
  
  
  True Dual-Destination Pattern
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;📱 Reading App
    ↓
🔄 Enhanced Lambda Function
    ↓                    ↓
🔥 Redshift Firehose   🔥 S3 Firehose
    ↓                    ↓
🏭 Redshift (Real-time) 📦 S3 (Backup)
    ↑                    ↓
    ←── ETL Reconciliation ──
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Core Principle&lt;/strong&gt;: &lt;strong&gt;Write Twice, Query Once&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Every data record flows to both destinations simultaneously&lt;/li&gt;
&lt;li&gt;Real-time analytics from Redshift&lt;/li&gt;
&lt;li&gt;Backup and recovery from S3&lt;/li&gt;
&lt;li&gt;Periodic reconciliation ensures data consistency&lt;/li&gt;
&lt;/ul&gt;
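&lt;p&gt;The reconciliation step can be sketched as a simple set comparison over record IDs pulled from each destination (a hypothetical helper; &lt;code&gt;reconcile&lt;/code&gt; and the sample IDs are illustrative, not part of the pipeline code in this article):&lt;/p&gt;

```python
def reconcile(redshift_ids, s3_ids):
    # Compare record IDs seen by each destination; anything present in the
    # S3 backup but missing from Redshift is a candidate for replay.
    redshift_ids, s3_ids = set(redshift_ids), set(s3_ids)
    return {
        "missing_in_redshift": sorted(s3_ids - redshift_ids),
        "missing_in_s3": sorted(redshift_ids - s3_ids),
    }

print(reconcile(["a", "b"], ["a", "b", "c"]))
# {'missing_in_redshift': ['c'], 'missing_in_s3': []}
```

In the real pipeline the two ID lists would come from a Redshift query and an S3 scan over the backup prefix for the same time window.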

&lt;h2&gt;
  
  
  Enhanced Lambda Function Design
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Dual-Destination Lambda Architecture
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;timezone&lt;/span&gt;

&lt;span class="n"&gt;firehose_client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;boto3&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;client&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;firehose&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Dual-destination configuration
&lt;/span&gt;&lt;span class="n"&gt;REDSHIFT_FIREHOSE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analytics-realtime-delivery-dev&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;S3_FIREHOSE_BOOK_ACTIVITY&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analytics-backup-s3-dev-book-activity&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;S3_FIREHOSE_DYNAMIC&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analytics-backup-s3-dev-dynamic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;No request body provided&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})}&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JSONDecodeError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Invalid JSON format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})}&lt;/span&gt;

    &lt;span class="n"&gt;records&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;isinstance&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;records&lt;/span&gt; &lt;span class="ow"&gt;is&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Invalid data format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})}&lt;/span&gt;

    &lt;span class="c1"&gt;# Prepare dual-destination records
&lt;/span&gt;    &lt;span class="n"&gt;s3_records&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;S3 Bucket&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dynamic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[]}&lt;/span&gt;
    &lt;span class="n"&gt;redshift_records&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;records&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Validation
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recordId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ledgerCategory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;400&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Missing required fields&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;})}&lt;/span&gt;

        &lt;span class="c1"&gt;# Add server timestamp
&lt;/span&gt;        &lt;span class="n"&gt;utc_now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;timezone&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;utc&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;serverTimestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;utc_now&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%dT%H:%M:%SZ&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pipelineTimestamp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;utc_now&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%dT%H:%M:%SZ&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# S3 backup records (original format)
&lt;/span&gt;        &lt;span class="n"&gt;s3_target&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;S3 Bucket&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Category&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;S3 Bucket&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dynamic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="n"&gt;s3_record&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)}&lt;/span&gt;
        &lt;span class="n"&gt;s3_records&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;s3_target&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s3_record&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Redshift real-time records (optimized format)
&lt;/span&gt;        &lt;span class="n"&gt;redshift_record&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recordId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recordId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ledgerCategory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ledgerCategory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;userId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;userId&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;eventData&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;processedAt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;utc_now&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;strftime&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;%Y-%m-%dT%H:%M:%SZ&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="n"&gt;redshift_records&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;redshift_record&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Dual-destination delivery
&lt;/span&gt;        &lt;span class="n"&gt;delivery_results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

        &lt;span class="c1"&gt;# 1. Send to S3 Firehose (backup)
&lt;/span&gt;        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;firehose_name&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;S3 Bucket&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;S3_FIREHOSE_BOOK_ACTIVITY&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dynamic&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;S3_FIREHOSE_DYNAMIC&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;]:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;s3_records&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
                &lt;span class="n"&gt;s3_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;firehose_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put_record_batch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="n"&gt;DeliveryStreamName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;firehose_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                    &lt;span class="n"&gt;Records&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;s3_records&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
                &lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="n"&gt;delivery_results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;s3_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;delivered&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s3_records&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;target&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;s3_response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FailedPutCount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;failed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;s3_response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FailedPutCount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
                &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# 2. Send to Redshift Firehose (real-time)
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;redshift_records&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;redshift_response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;firehose_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;put_record_batch&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                &lt;span class="n"&gt;DeliveryStreamName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;REDSHIFT_FIREHOSE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="n"&gt;Records&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;redshift_records&lt;/span&gt;
            &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;delivery_results&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;redshift_realtime&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;delivered&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;redshift_records&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;redshift_response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FailedPutCount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;failed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;redshift_response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FailedPutCount&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;

        &lt;span class="c1"&gt;# Check for failures
&lt;/span&gt;        &lt;span class="n"&gt;total_failed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;failed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;delivery_results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;values&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;total_failed&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Partial delivery failure&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;details&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;delivery_results&lt;/span&gt;
            &lt;span class="p"&gt;})}&lt;/span&gt;

        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;status&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;success&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;delivery_results&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;delivery_results&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dual_destination_enabled&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;
        &lt;span class="p"&gt;})}&lt;/span&gt;

    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;statusCode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;body&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;error&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dual-destination delivery failed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;message&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)[:&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="p"&gt;})}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
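One caveat with `put_record_batch`: a 200 response can still report per-record failures, and the handler above surfaces them only as an overall 500. As a sketch (pure Python, no AWS call needed), the failed records can be isolated from the positional `RequestResponses` array for selective retry:

```python
def collect_failed_records(records, response):
    """Return only the records that failed in a put_record_batch call.

    RequestResponses is positional: entry i corresponds to records[i],
    and failed entries carry an ErrorCode key.
    """
    if response.get("FailedPutCount", 0) == 0:
        return []
    return [
        record
        for record, result in zip(records, response["RequestResponses"])
        if "ErrorCode" in result
    ]


# Example with a mocked response shape (no AWS call):
records = [{"Data": b"a"}, {"Data": b"b"}]
response = {
    "FailedPutCount": 1,
    "RequestResponses": [
        {"RecordId": "1"},
        {"ErrorCode": "ServiceUnavailableException", "ErrorMessage": "retry"},
    ],
}
retry_batch = collect_failed_records(records, response)  # → [{"Data": b"b"}]
```

The retry batch can then be resent with the same `put_record_batch` call, ideally with backoff, instead of failing the whole invocation.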



&lt;h2&gt;
  
  
  &lt;strong&gt;Redshift Direct Delivery Configuration&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Real-Time Firehose to Redshift Setup&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"DeliveryStreamName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"analytics-realtime-delivery-dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"RedshiftDestinationConfiguration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"RoleARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::123456789012:role/AnalyticsFirehoseRedshiftRole"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ClusterJDBCURL"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"jdbc:redshift://analytics-cluster.us-east-2.redshift.amazonaws.com:5439/analytics"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"CopyCommand"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"DataTableName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"events.realtime_staging"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"DataTableColumns"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"record_id,ledger_category,user_id,event_data,processed_at"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"CopyOptions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FORMAT AS JSON 'auto' TIMEFORMAT 'auto' TRUNCATECOLUMNS ACCEPTINVCHARS"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Username"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"firehose_user"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Password"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SecurePassword123"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"S3Configuration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"RoleARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:iam::123456789012:role/AnalyticsFirehoseS3Role"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"BucketARN"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:s3:::analytics-backup-dev"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Prefix"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"redshift-backup/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"BufferingHints"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"SizeInMBs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;5&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"IntervalInSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"CompressionFormat"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GZIP"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ProcessingConfiguration"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"Processors"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="nl"&gt;"Type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Lambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="nl"&gt;"Parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                            &lt;/span&gt;&lt;span class="nl"&gt;"ParameterName"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"LambdaArn"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                            &lt;/span&gt;&lt;span class="nl"&gt;"ParameterValue"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:us-east-2:123456789012:function:analytics-data-transformer"&lt;/span&gt;&lt;span class="w"&gt;
                        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"RetryOptions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"DurationInSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3600&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
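A configuration like the one above can also be assembled and created programmatically. This is a minimal sketch assuming boto3 and the placeholder ARNs from the example; only the payload builder runs without AWS credentials, and the actual `create_delivery_stream` call is left commented out:

```python
def build_redshift_stream_config(stream_name, jdbc_url, role_arn, table, columns):
    """Assemble a trimmed-down RedshiftDestinationConfiguration payload.

    All ARNs and names here are placeholders mirroring the example above,
    not real resources.
    """
    return {
        "DeliveryStreamName": stream_name,
        "RedshiftDestinationConfiguration": {
            "RoleARN": role_arn,
            "ClusterJDBCURL": jdbc_url,
            "CopyCommand": {
                "DataTableName": table,
                # COPY column order must match the JSON keys emitted by the Lambda
                "DataTableColumns": ",".join(columns),
                "CopyOptions": "FORMAT AS JSON 'auto' TIMEFORMAT 'auto'",
            },
        },
    }


config = build_redshift_stream_config(
    "analytics-realtime-delivery-dev",
    "jdbc:redshift://analytics-cluster.us-east-2.redshift.amazonaws.com:5439/analytics",
    "arn:aws:iam::123456789012:role/AnalyticsFirehoseRedshiftRole",
    "events.realtime_staging",
    ["record_id", "ledger_category", "user_id", "event_data", "processed_at"],
)

# To create the stream (requires boto3 and AWS credentials):
# import boto3
# boto3.client("firehose").create_delivery_stream(**config)
```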



&lt;h3&gt;
  
  
  &lt;strong&gt;Real-Time Staging Table Design&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Real-time staging table for immediate analytics&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;realtime_staging&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;record_id&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ledger_category&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="nb"&gt;INTEGER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;event_data&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65535&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;processed_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;load_timestamp&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;GETDATE&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;DISTKEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;SORTKEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;processed_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ledger_category&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Production analytics views&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;VIEW&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;realtime_activity&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt;
    &lt;span class="n"&gt;record_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;JSON_EXTRACT_PATH_TEXT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'bookID'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;book_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;JSON_EXTRACT_PATH_TEXT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'eventType'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;event_type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;JSON_EXTRACT_PATH_TEXT&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'pageNumber'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;page_number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;processed_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;load_timestamp&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;realtime_staging&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;ledger_category&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'book_activity'&lt;/span&gt;
&lt;span class="k"&gt;AND&lt;/span&gt; &lt;span class="n"&gt;processed_at&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;DATEADD&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;hour&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;24&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;GETDATE&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
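The view can then be queried from application code without a JDBC driver. A sketch assuming the Redshift Data API (boto3's `redshift-data` client) and a hypothetical cluster identifier; only the query builder runs without AWS access:

```python
def recent_activity_query(hours=24):
    """Build an aggregate query against the events.realtime_activity view.

    Column names match the view definition above; the hours window is
    cast to int to keep the interpolated SQL safe.
    """
    return (
        "SELECT user_id, book_id, event_type, COUNT(*) AS events "
        "FROM events.realtime_activity "
        f"WHERE processed_at >= DATEADD(hour, -{int(hours)}, GETDATE()) "
        "GROUP BY user_id, book_id, event_type"
    )


# Run via the Redshift Data API (requires boto3 and cluster access):
# import boto3
# client = boto3.client("redshift-data")
# client.execute_statement(
#     ClusterIdentifier="analytics-cluster",  # assumed identifier
#     Database="analytics",
#     Sql=recent_activity_query(),
# )
```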



&lt;h2&gt;
  
  
  Monitoring and Alerting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;CloudWatch Dashboard Configuration&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"widgets"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"metric"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"metrics"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"AWS/Firehose"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DeliveryToRedshift.Records"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DeliveryStreamName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"analytics-realtime-delivery-dev"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"AWS/Firehose"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DeliveryToS3.Records"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"DeliveryStreamName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"analytics-backup-s3-dev-book-activity"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"AWS/Lambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Invocations"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FunctionName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"analytics-dual-destination-processor"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
                    &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"AWS/Lambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Errors"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"FunctionName"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"analytics-reconciliation-processor"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"period"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"stat"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Sum"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Dual-Destination Pipeline Health"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"log"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"properties"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"query"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"SOURCE '/aws/lambda/analytics-dual-destination-processor'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;| fields @timestamp, @message&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;| filter @message like /ERROR/&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;| sort @timestamp desc&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;| limit 100"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"region"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"us-east-2"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                &lt;/span&gt;&lt;span class="nl"&gt;"title"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Recent Pipeline Errors"&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
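The same dashboard can be managed as code via `put_dashboard`, which takes the dashboard body as a JSON string. A minimal sketch covering just the first metric widget; the stream name and region match the example above, while the dashboard name is an assumption:

```python
import json


def build_dashboard_body(stream_name, region="us-east-2"):
    """Build a one-widget dashboard body mirroring the layout above."""
    return json.dumps({
        "widgets": [{
            "type": "metric",
            "properties": {
                # Firehose delivery metrics live in the AWS/Firehose namespace
                "metrics": [[
                    "AWS/Firehose",
                    "DeliveryToRedshift.Records",
                    "DeliveryStreamName",
                    stream_name,
                ]],
                "period": 300,
                "stat": "Sum",
                "region": region,
                "title": "Dual-Destination Pipeline Health",
            },
        }]
    })


body = build_dashboard_body("analytics-realtime-delivery-dev")

# Publish it (requires boto3 and CloudWatch permissions):
# import boto3
# boto3.client("cloudwatch").put_dashboard(
#     DashboardName="dual-destination-pipeline",  # assumed name
#     DashboardBody=body,
# )
```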



&lt;h3&gt;
  
  
  &lt;strong&gt;Automated Alerting&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# CloudWatch alarm for dual-destination failures
&lt;/span&gt;&lt;span class="n"&gt;aws&lt;/span&gt; &lt;span class="n"&gt;cloudwatch&lt;/span&gt; &lt;span class="n"&gt;put&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;metric&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;alarm&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;alarm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dual-Destination-Pipeline-Failures&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;alarm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Alert on dual-destination delivery failures&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;metric&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Errors&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;namespace&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AWS/Lambda&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;statistic&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Sum&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;threshold&lt;/span&gt; &lt;span class="mi"&gt;5&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;comparison&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;operator&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GreaterThanThreshold&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;dimensions&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;FunctionName&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="n"&gt;Value&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;analytics&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dual&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;destination&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;processor&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;alarm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;actions&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;arn:aws:sns:us-east-2:123456789012:analytics-alerts&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# Data discrepancy alarm
&lt;/span&gt;&lt;span class="n"&gt;aws&lt;/span&gt; &lt;span class="n"&gt;cloudwatch&lt;/span&gt; &lt;span class="n"&gt;put&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;metric&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;alarm&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;alarm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Data-Reconciliation-Discrepancy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;alarm&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Alert on high data discrepancy between S3 and Redshift&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;metric&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DiscrepancyPercentage&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;namespace&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analytics/Reconciliation&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;statistic&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Average&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;period&lt;/span&gt; &lt;span class="mi"&gt;3600&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;threshold&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt; \
    &lt;span class="o"&gt;--&lt;/span&gt;&lt;span class="n"&gt;comparison&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;operator&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GreaterThanThreshold&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
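The second alarm above watches a custom metric, DiscrepancyPercentage, which nothing emits on its own. Below is a minimal sketch of the publishing side; the function names are illustrative assumptions, while the namespace and metric name come from the alarm definition.

```python
def discrepancy_percentage(s3_count, redshift_count):
    """Percent difference between S3 and Redshift row counts."""
    if s3_count == 0:
        return 0.0
    return abs(s3_count - redshift_count) / s3_count * 100.0


def publish_discrepancy(s3_count, redshift_count, region="us-east-2"):
    """Push the metric that feeds the Data-Reconciliation-Discrepancy alarm."""
    import boto3  # imported lazily so the pure calculation needs no AWS deps

    cloudwatch = boto3.client("cloudwatch", region_name=region)
    cloudwatch.put_metric_data(
        Namespace="Analytics/Reconciliation",
        MetricData=[{
            "MetricName": "DiscrepancyPercentage",
            "Value": discrepancy_percentage(s3_count, redshift_count),
            "Unit": "Percent",
        }],
    )
```

Since the alarm averages the metric over a 3600-second period, publishing once per reconciliation run is sufficient.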



&lt;h2&gt;
  
  
  &lt;strong&gt;Deployment Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Infrastructure as Code (CloudFormation)&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;AWSTemplateFormatVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2010-09-09'&lt;/span&gt;
&lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Dual-Destination&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Analytics&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Pipeline'&lt;/span&gt;

&lt;span class="na"&gt;Parameters&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;Environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;String&lt;/span&gt;
    &lt;span class="na"&gt;Default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dev&lt;/span&gt;
    &lt;span class="na"&gt;AllowedValues&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;dev&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;qa&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;prod&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="c1"&gt;# Redshift Serverless Namespace&lt;/span&gt;
  &lt;span class="na"&gt;RedshiftNamespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::RedshiftServerless::Namespace&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;NamespaceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;analytics-ledger-${Environment}'&lt;/span&gt;
      &lt;span class="na"&gt;AdminUsername&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
      &lt;span class="na"&gt;AdminUserPassword&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RedshiftPassword&lt;/span&gt;
      &lt;span class="na"&gt;DbName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;analytics&lt;/span&gt;

  &lt;span class="c1"&gt;# Redshift Serverless Workgroup&lt;/span&gt;
  &lt;span class="na"&gt;RedshiftWorkgroup&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::RedshiftServerless::Workgroup&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;WorkgroupName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;analytics-datalake-${Environment}'&lt;/span&gt;
      &lt;span class="na"&gt;NamespaceName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RedshiftNamespace&lt;/span&gt;
      &lt;span class="na"&gt;PubliclyAccessible&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;BaseCapacity&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8&lt;/span&gt;

  &lt;span class="c1"&gt;# Dual-Destination Lambda&lt;/span&gt;
  &lt;span class="na"&gt;DualDestinationLambda&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Lambda::Function&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;FunctionName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;analytics-dual-destination-${Environment}'&lt;/span&gt;
      &lt;span class="na"&gt;Runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python3.9&lt;/span&gt;
      &lt;span class="na"&gt;Handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dual_destination.lambda_handler&lt;/span&gt;
      &lt;span class="na"&gt;Role&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;LambdaExecutionRole.Arn&lt;/span&gt;
      &lt;span class="na"&gt;Code&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;ZipFile&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
          &lt;span class="s"&gt;# Lambda function code here&lt;/span&gt;
      &lt;span class="na"&gt;Environment&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;Variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;REDSHIFT_FIREHOSE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RedshiftFirehose&lt;/span&gt;
          &lt;span class="na"&gt;S3_FIREHOSE_BOOK&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;S3FirehoseBook&lt;/span&gt;
          &lt;span class="na"&gt;S3_FIREHOSE_DYNAMIC&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;S3FirehoseDynamic&lt;/span&gt;

  &lt;span class="c1"&gt;# Redshift Firehose Stream&lt;/span&gt;
  &lt;span class="na"&gt;RedshiftFirehose&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::KinesisFirehose::DeliveryStream&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;DeliveryStreamName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;analytics-realtime-delivery-${Environment}'&lt;/span&gt;
      &lt;span class="na"&gt;DeliveryStreamType&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;DirectPut&lt;/span&gt;
      &lt;span class="na"&gt;RedshiftDestinationConfiguration&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;RoleARN&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;FirehoseRole.Arn&lt;/span&gt;
        &lt;span class="na"&gt;ClusterJDBCURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;jdbc:redshift://${RedshiftEndpoint}:5439/analytics'&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;RedshiftEndpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;RedshiftWorkgroup.Workgroup.Endpoint.Address&lt;/span&gt;
        &lt;span class="na"&gt;CopyCommand&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;DataTableName&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;events.realtime_staging&lt;/span&gt;
          &lt;span class="na"&gt;CopyOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;FORMAT&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;AS&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;JSON&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'auto'&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;TIMEFORMAT&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;'auto'"&lt;/span&gt;
        &lt;span class="na"&gt;Username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;admin&lt;/span&gt;
        &lt;span class="na"&gt;Password&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Ref&lt;/span&gt; &lt;span class="s"&gt;RedshiftPassword&lt;/span&gt;

  &lt;span class="c1"&gt;# EventBridge Rule for Reconciliation&lt;/span&gt;
  &lt;span class="na"&gt;ReconciliationSchedule&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Events::Rule&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;analytics-reconciliation-${Environment}'&lt;/span&gt;
      &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Daily&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;reconciliation&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;between&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;S3&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;and&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Redshift'&lt;/span&gt;
      &lt;span class="na"&gt;ScheduleExpression&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;cron(0&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;6&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;?&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;*)'&lt;/span&gt;  &lt;span class="c1"&gt;# Daily at 6 AM&lt;/span&gt;
      &lt;span class="na"&gt;State&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ENABLED&lt;/span&gt;
      &lt;span class="na"&gt;Targets&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Arn&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;ReconciliationLambda.Arn&lt;/span&gt;
          &lt;span class="na"&gt;Id&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ReconciliationTarget&lt;/span&gt;

&lt;span class="na"&gt;Outputs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;RedshiftEndpoint&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Description&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Redshift&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Serverless&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;Endpoint'&lt;/span&gt;
    &lt;span class="na"&gt;Value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!GetAtt&lt;/span&gt; &lt;span class="s"&gt;RedshiftWorkgroup.Workgroup.Endpoint.Address&lt;/span&gt;
    &lt;span class="na"&gt;Export&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;!Sub&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;${AWS::StackName}-RedshiftEndpoint'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Performance Optimization
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Redshift Table Design Best Practices
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="c1"&gt;-- Optimized real-time staging table&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="k"&gt;TABLE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;realtime_staging&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;record_id&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;256&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;ledger_category&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="nb"&gt;INTEGER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;event_data&lt;/span&gt; &lt;span class="nb"&gt;VARCHAR&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;65535&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;processed_at&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;NOT&lt;/span&gt; &lt;span class="k"&gt;NULL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;load_timestamp&lt;/span&gt; &lt;span class="nb"&gt;TIMESTAMP&lt;/span&gt; &lt;span class="k"&gt;DEFAULT&lt;/span&gt; &lt;span class="n"&gt;GETDATE&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
    &lt;span class="n"&gt;partition_date&lt;/span&gt; &lt;span class="nb"&gt;DATE&lt;/span&gt; &lt;span class="k"&gt;GENERATED&lt;/span&gt; &lt;span class="n"&gt;ALWAYS&lt;/span&gt; &lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;DATE&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;processed_at&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;DISTKEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;SORTKEY&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;partition_date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;processed_at&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ledger_category&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;-- Automatic table maintenance&lt;/span&gt;
&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;EVENT&lt;/span&gt; &lt;span class="n"&gt;analytics_table_maintenance&lt;/span&gt;
&lt;span class="k"&gt;ON&lt;/span&gt; &lt;span class="n"&gt;SCHEDULE&lt;/span&gt; &lt;span class="s1"&gt;'cron(0 2 * * ? *)'&lt;/span&gt;  &lt;span class="c1"&gt;-- Daily at 2 AM&lt;/span&gt;
&lt;span class="k"&gt;AS&lt;/span&gt; &lt;span class="err"&gt;$$&lt;/span&gt;
  &lt;span class="c1"&gt;-- Vacuum and analyze tables&lt;/span&gt;
  &lt;span class="k"&gt;VACUUM&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;realtime_staging&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="k"&gt;ANALYZE&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;realtime_staging&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

  &lt;span class="c1"&gt;-- Drop old partitions (older than 90 days)&lt;/span&gt;
  &lt;span class="k"&gt;DELETE&lt;/span&gt; &lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;events&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;realtime_staging&lt;/span&gt;
  &lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;partition_date&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;DATEADD&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;day&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;GETDATE&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
&lt;span class="err"&gt;$$&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
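Because the maintenance pass has to be scheduled outside the warehouse, a natural home for it is a small Lambda invoked by EventBridge that calls the Redshift Data API. A sketch under that assumption (the workgroup and database names follow the template earlier in the post):

```python
def maintenance_sql(table="events.realtime_staging"):
    """Statements for the nightly pass: VACUUM re-sorts rows and reclaims
    space, ANALYZE refreshes planner statistics."""
    return [f"VACUUM {table};", f"ANALYZE {table};"]


def run_maintenance(workgroup="analytics-datalake-dev", database="analytics"):
    """Execute each maintenance statement through the Redshift Data API."""
    import boto3  # lazy import keeps maintenance_sql testable offline

    client = boto3.client("redshift-data")
    for sql in maintenance_sql():
        client.execute_statement(
            WorkgroupName=workgroup, Database=database, Sql=sql
        )
```

ExecuteStatement is asynchronous, so a production version would also poll DescribeStatement or subscribe to the completion event before reporting success.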



&lt;h3&gt;
  
  
  &lt;strong&gt;Firehose Optimization Settings&lt;/strong&gt;
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"BufferingHints"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"SizeInMBs"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"IntervalInSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"CompressionFormat"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GZIP"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"DataTransformation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"ProcessorType"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Lambda"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Parameters"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"LambdaArn"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"arn:aws:lambda:us-east-2:123456789012:function:analytics-data-optimizer"&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"DynamicPartitioning"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"Enabled"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"RetryOptions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
            &lt;/span&gt;&lt;span class="nl"&gt;"DurationInSeconds"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;3600&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
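The buffering hints interact: Firehose delivers a batch as soon as either the 128 MB size limit or the 60-second interval is reached, whichever comes first. A quick back-of-envelope model of the flush cadence (the throughput figures in the examples below are assumptions):

```python
SECONDS_PER_DAY = 86400


def flush_interval_seconds(throughput_mb_per_s, size_mb=128, interval_s=60):
    """Seconds between deliveries: the size limit only wins when the
    buffer fills before the time limit expires."""
    seconds_to_fill = size_mb / throughput_mb_per_s
    return min(seconds_to_fill, interval_s)


def objects_per_day(throughput_mb_per_s, size_mb=128, interval_s=60):
    """Rough count of delivery objects written per day."""
    return int(SECONDS_PER_DAY / flush_interval_seconds(
        throughput_mb_per_s, size_mb, interval_s))
```

At an assumed 1 MB/s the time limit dominates (a flush every 60 s, about 1,440 objects per day); at 4 MB/s the 128 MB size limit kicks in first, flushing every 32 s. Larger buffers therefore mean fewer, bigger objects and more efficient Redshift COPY batches, at the cost of latency.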



&lt;h2&gt;
  
  
  &lt;strong&gt;Cost Analysis and Optimization&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Cost Breakdown
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Component&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Monthly Cost (Dev)&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Monthly Cost (Prod)&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Redshift Serverless&lt;/td&gt;
&lt;td&gt;$45-75&lt;/td&gt;
&lt;td&gt;$200-400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Kinesis Firehose&lt;/td&gt;
&lt;td&gt;$15-25&lt;/td&gt;
&lt;td&gt;$50-100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lambda Executions&lt;/td&gt;
&lt;td&gt;$5-10&lt;/td&gt;
&lt;td&gt;$20-40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;S3 Storage&lt;/td&gt;
&lt;td&gt;$10-20&lt;/td&gt;
&lt;td&gt;$50-100&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Data Transfer&lt;/td&gt;
&lt;td&gt;$5-15&lt;/td&gt;
&lt;td&gt;$25-50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$80-145&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$345-690&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
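The Total row is simply the per-column sum of the component ranges; a quick sanity check of the arithmetic:

```python
# (low, high) monthly USD ranges copied from the cost table above
DEV = [(45, 75), (15, 25), (5, 10), (10, 20), (5, 15)]
PROD = [(200, 400), (50, 100), (20, 40), (50, 100), (25, 50)]


def total_range(components):
    """Sum the lows and highs independently to get the total range."""
    low = sum(lo for lo, _ in components)
    high = sum(hi for _, hi in components)
    return low, high
```

total_range(DEV) gives (80, 145) and total_range(PROD) gives (345, 690), matching the table's Total row.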

&lt;h3&gt;
  
  
  &lt;strong&gt;Cost Optimization Strategies&lt;/strong&gt;
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Redshift Serverless Scaling&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;   &lt;span class="c1"&gt;-- Set appropriate base capacity&lt;/span&gt;
   &lt;span class="k"&gt;ALTER&lt;/span&gt; &lt;span class="n"&gt;WORKGROUP&lt;/span&gt; &lt;span class="n"&gt;analytics&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;datalake&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;dev&lt;/span&gt;
   &lt;span class="k"&gt;SET&lt;/span&gt; &lt;span class="n"&gt;base_capacity&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;  &lt;span class="c1"&gt;-- Start small, auto-scale up&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;S3 Lifecycle Policies&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="w"&gt;   &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="nl"&gt;"Rules"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="nl"&gt;"Status"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Enabled"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="nl"&gt;"Transitions"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                       &lt;/span&gt;&lt;span class="nl"&gt;"Days"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                       &lt;/span&gt;&lt;span class="nl"&gt;"StorageClass"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"STANDARD_IA"&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
                       &lt;/span&gt;&lt;span class="nl"&gt;"Days"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;90&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
                       &lt;/span&gt;&lt;span class="nl"&gt;"StorageClass"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"GLACIER"&lt;/span&gt;&lt;span class="w"&gt;
                   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
               &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
           &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
       &lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
   &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
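To gauge what the tiering buys, here is a rough blended per-GB-month cost for data that spends 30 days in Standard, 60 days in Standard-IA, and the remainder of a year in Glacier. The prices are illustrative assumptions, not current AWS rates, and retrieval/transition fees are ignored:

```python
# Illustrative per-GB-month prices (assumptions; check current S3 pricing)
PRICE = {"STANDARD": 0.023, "STANDARD_IA": 0.0125, "GLACIER": 0.004}


def blended_gb_month(days_standard=30, days_ia=60, days_glacier=275):
    """Time-weighted average monthly cost per GB across the storage tiers."""
    total_days = days_standard + days_ia + days_glacier
    cost = (days_standard * PRICE["STANDARD"]
            + days_ia * PRICE["STANDARD_IA"]
            + days_glacier * PRICE["GLACIER"]) / total_days
    return round(cost, 4)
```

Under these assumed prices the blended cost comes to roughly $0.007 per GB-month versus $0.023 for keeping everything in Standard, so the lifecycle rules cut long-tail storage cost by around two-thirds.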



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Firehose Buffer Optimization&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Increase buffer size to reduce frequency&lt;/li&gt;
&lt;li&gt;Use compression to reduce storage costs&lt;/li&gt;
&lt;li&gt;Optimize batch sizes for Redshift COPY operations&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Advantages of Dual-Destination Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;✅ Real-Time Analytics with Safety Net&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Sub-5-minute latency&lt;/strong&gt; for business-critical dashboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete data backup&lt;/strong&gt; in S3 for recovery scenarios&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero data loss&lt;/strong&gt; guarantee through dual writes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic reconciliation&lt;/strong&gt; ensures data consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;✅ Operational Resilience&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fault tolerance&lt;/strong&gt;: Failure in one stream doesn't affect the other&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Recovery capabilities&lt;/strong&gt;: Can rebuild Redshift from S3 if needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disaster recovery&lt;/strong&gt;: Geographic backup through S3 replication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Version control&lt;/strong&gt;: S3 maintains historical data versions&lt;/li&gt;
&lt;/ul&gt;
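&lt;p&gt;The "rebuild Redshift from S3" recovery path ultimately comes down to a COPY statement. A minimal sketch, with a hypothetical table, bucket, and IAM role:&lt;/p&gt;

```python
# Sketch: generate the Redshift COPY statement used to rebuild a table from
# the S3 backup. Table, bucket, prefix, and role ARN are all placeholders.
def build_copy_statement(table, bucket, prefix, iam_role):
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS JSON 'auto' GZIP;"
    )

stmt = build_copy_statement(
    "analytics.events",
    "events-backup-bucket",
    "events/2025/10/",
    "arn:aws:iam::123456789012:role/RedshiftCopyRole",
)
```

&lt;p&gt;Because Firehose partitions the backup by date, recovery can be scoped to the exact window that was lost rather than replaying the whole stream.&lt;/p&gt;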

&lt;h3&gt;
  
  
  &lt;strong&gt;✅ Performance Benefits&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Optimized queries&lt;/strong&gt;: Purpose-built Redshift tables for analytics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Concurrent workloads&lt;/strong&gt;: S3 and Redshift serve different use cases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reduced load&lt;/strong&gt;: Redshift not used for data lake operations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable architecture&lt;/strong&gt;: Each component scales independently&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;✅ Business Value&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Immediate insights&lt;/strong&gt;: Real-time user behavior analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Historical analysis&lt;/strong&gt;: Deep-dive analytics using S3 data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance ready&lt;/strong&gt;: Audit trails and data lineage tracking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future-proof&lt;/strong&gt;: Ready for ML and advanced analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Considerations and Trade-offs&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;❌ Increased Complexity&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;More components&lt;/strong&gt;: Additional Firehose streams and monitoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dual maintenance&lt;/strong&gt;: Both S3 and Redshift schemas need updates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handling&lt;/strong&gt;: More failure scenarios to handle&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost overhead&lt;/strong&gt;: ~30-50% increase over single-destination&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;❌ Operational Overhead&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Monitoring complexity&lt;/strong&gt;: Track multiple data flows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reconciliation requirements&lt;/strong&gt;: Daily consistency checks needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema coordination&lt;/strong&gt;: Changes must be synchronized&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging challenges&lt;/strong&gt;: Multiple data paths to troubleshoot&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;❌ Resource Utilization&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute costs&lt;/strong&gt;: Additional Lambda executions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage duplication&lt;/strong&gt;: Data exists in both S3 and Redshift&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Network bandwidth&lt;/strong&gt;: Higher data transfer volumes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Always-on costs&lt;/strong&gt;: Redshift Serverless base capacity charges&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;When to Choose Dual-Destination&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Ideal Use Cases&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mission-critical analytics&lt;/strong&gt; requiring real-time insights&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High-value data&lt;/strong&gt; where loss is unacceptable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance requirements&lt;/strong&gt; for data retention and auditability&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mixed workloads&lt;/strong&gt; requiring both real-time and batch processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Growing organizations&lt;/strong&gt; planning to scale analytics capabilities&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Not Recommended When&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple analytics needs&lt;/strong&gt; satisfied by batch processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost-sensitive environments&lt;/strong&gt; where latency is acceptable&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Small data volumes&lt;/strong&gt; that don't justify complexity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Limited operational resources&lt;/strong&gt; for monitoring multiple systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Future Enhancements and Roadmap&lt;/strong&gt;
&lt;/h2&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Phase 2: Advanced Analytics&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Machine Learning Integration&lt;/strong&gt;: Real-time ML inference on streaming data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex Event Processing&lt;/strong&gt;: Multi-stream correlation and pattern detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictive Analytics&lt;/strong&gt;: User behavior prediction models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anomaly Detection&lt;/strong&gt;: Real-time fraud and abuse detection&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Phase 3: Multi-Region Architecture&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Global data replication&lt;/strong&gt; for disaster recovery&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regional analytics clusters&lt;/strong&gt; for performance optimization&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-region reconciliation&lt;/strong&gt; for data consistency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance-specific regions&lt;/strong&gt; for data sovereignty&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Phase 4: Advanced Data Governance&lt;/strong&gt;
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Data lineage tracking&lt;/strong&gt; across all pipeline components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated data quality&lt;/strong&gt; monitoring and alerting&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PII detection and masking&lt;/strong&gt; for privacy compliance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated schema evolution&lt;/strong&gt; and compatibility checking&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;The dual-destination analytics pipeline represents a sophisticated approach to modern data architecture, balancing the immediate needs for real-time analytics with the long-term requirements for data durability, compliance, and recovery. This architecture pattern is particularly valuable for organizations that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cannot afford data loss&lt;/strong&gt; in their analytics pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Need both real-time and historical analytics&lt;/strong&gt; capabilities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plan to scale&lt;/strong&gt; their data and analytics operations significantly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Value operational resilience&lt;/strong&gt; over architectural simplicity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Success Factors&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive monitoring&lt;/strong&gt; across all pipeline components&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated reconciliation&lt;/strong&gt; to ensure data consistency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Proper cost management&lt;/strong&gt; to justify the additional complexity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clear operational procedures&lt;/strong&gt; for failure scenarios and recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Expected Outcomes&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;99.9% data availability&lt;/strong&gt; with sub-5-minute analytics latency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero data loss&lt;/strong&gt; through redundant storage and processing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Scalable foundation&lt;/strong&gt; supporting future analytics and ML workloads&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational confidence&lt;/strong&gt; through comprehensive backup and recovery&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This architecture serves as a robust foundation for organizations transitioning from batch-oriented data lakes to real-time analytics platforms while maintaining the safety and compliance benefits of traditional data lake approaches.&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>analytics</category>
      <category>architecture</category>
      <category>aws</category>
    </item>
    <item>
      <title>Automating Publisher Reporting on AWS: A Serverless Architecture with Slack Alerts</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Wed, 03 Sep 2025 03:27:51 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/automating-publisher-reporting-on-aws-a-serverless-architecture-with-slack-alerts-3004</link>
      <guid>https://dev.to/gurudev_prasadteketi/automating-publisher-reporting-on-aws-a-serverless-architecture-with-slack-alerts-3004</guid>
      <description>&lt;h2&gt;
  
  
  Overview
&lt;/h2&gt;

&lt;p&gt;In this article, I walk you through building an automated reporting pipeline using AWS services. The goal was to generate daily summary reports on publisher readership, detect missing metadata (like publishers), store the results as CSVs in S3, and deliver structured Slack notifications to internal stakeholders — all without manual intervention.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjt361b6z0v9flvw60ui.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frjt361b6z0v9flvw60ui.png" alt=" " width="800" height="570"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow:&lt;/strong&gt;&lt;br&gt;
1. Amazon EventBridge (Scheduler) triggers the workflow daily.&lt;/p&gt;

&lt;p&gt;2. Lambda Function #1: Report Generator&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs a named query on Amazon Athena that calculates reading activity data.&lt;/li&gt;
&lt;li&gt;Stores results in an S3 bucket using a structured naming convention.&lt;/li&gt;
&lt;li&gt;Sends a Slack message once the report is ready.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;3. Lambda Function #2: Publisher Summary Aggregator&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fetches the latest CSV report from S3.&lt;/li&gt;
&lt;li&gt;Aggregates per-publisher read counts by quarter.&lt;/li&gt;
&lt;li&gt;Posts a clean, readable table to Slack showing publisher performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;4. Lambda Function #3: Missing Publisher Detector (optional)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Runs a separate Athena query to find books with missing publisher info.&lt;/li&gt;
&lt;li&gt;Sends a notification to Slack with a direct link to the generated file in S3.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;publisher-reporting/
│
├── deploy.sh                         # Infra provisioning and Lambda packaging
├── config.yaml                       # Runtime config (bucket names, cron schedules)
│
├── lambda/
│   ├── report_generator/
│   │   └── handler.py                # Runs Athena query and stores report in S3
│   ├── summary_report_notifier/
│   │   └── handler.py                # Aggregates data and posts to Slack
│   └── missing_publisher_report/
│       └── handler.py                # Detects and reports missing metadata
│
└── terraform/                        # Infra as Code (Lambda, EventBridge, IAM, etc.)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Code Snippets
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Lambda: Report Generator&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;exec_response = athena.start_query_execution(
    QueryString=query_string,
    QueryExecutionContext={"Database": ATHENA_DATABASE},
    ResultConfiguration={
        "OutputLocation": f"s3://{ATHENA_OUTPUT_BUCKET}/temporary-athena-query-results/"
    }
)
...
s3.copy_object(
    Bucket=TARGET_REPORT_BUCKET,
    CopySource={"Bucket": source_bucket, "Key": source_key},
    Key=final_key
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Lambda: Slack Summary Formatter&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;for row in reader:
    if row["book_read_counts"].strip().upper() == "TRUE":
        yq = row["year_quarter"].strip()
        pub = row["publisher"].strip() or "Unknown"
        counts[yq][pub] += 1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Scheduling
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+------------------------------+------------------------------------------+-------------------------+
| Lambda Function              | Purpose                                  | Schedule (Cron Format)  |
+------------------------------+------------------------------------------+-------------------------+
| report_generator             | Run Athena query and save CSV to S3      | cron(0 0 * * ? *)       |
| summary_report_notifier      | Read latest CSV and post Slack summary   | cron(10 0 * * ? *)      |
| missing_publisher_report     | Detect books without publisher info      | cron(15 0 * * ? *)      |
+------------------------------+------------------------------------------+-------------------------+

# Notes:
# - Times are in UTC
# - 10–15 min stagger prevents overlap and race conditions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Challenges &amp;amp; Fixes
&lt;/h2&gt;

&lt;p&gt;1. Slack showed the same data daily&lt;/p&gt;

&lt;p&gt;Issue:&lt;br&gt;
The Slack message posted the same publisher data every day.&lt;/p&gt;

&lt;p&gt;Fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EventBridge schedule was set to run only on the 1st of each month.
Updated to run daily using: cron(0 0 * * ? *)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2. No new publisher summary files after Aug 1&lt;/p&gt;

&lt;p&gt;Issue:&lt;br&gt;
S3 bucket had no updated files after August 1st.&lt;/p&gt;

&lt;p&gt;Fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Found that the publisher report generator Lambda was not running.
Corrected the EventBridge schedule to trigger daily.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;3. Lambda race condition&lt;/p&gt;

&lt;p&gt;Issue:&lt;br&gt;
The Slack posting Lambda was sometimes reading an older CSV file instead of the one just generated.&lt;/p&gt;

&lt;p&gt;Fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Introduced a 10-minute delay between the generator Lambda and the Slack reporter Lambda using separate schedules.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
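&lt;p&gt;A schedule gap reduces the race but doesn't eliminate it; a more robust variant is to select the newest object explicitly. A sketch, with the listing hard-coded in the shape &lt;code&gt;list_objects_v2&lt;/code&gt; returns (the bucket layout and keys are hypothetical):&lt;/p&gt;

```python
from datetime import datetime, timezone

# Sketch: pick the most recently written CSV from an S3 listing instead of
# relying on a fixed schedule gap. "objects" mimics list_objects_v2()["Contents"].
def latest_csv_key(objects):
    csvs = [o for o in objects if o["Key"].endswith(".csv")]
    if not csvs:
        return None
    return max(csvs, key=lambda o: o["LastModified"])["Key"]

objects = [
    {"Key": "reports/2025-08-01.csv", "LastModified": datetime(2025, 8, 1, tzinfo=timezone.utc)},
    {"Key": "reports/2025-08-02.csv", "LastModified": datetime(2025, 8, 2, tzinfo=timezone.utc)},
    {"Key": "reports/readme.txt", "LastModified": datetime(2025, 8, 3, tzinfo=timezone.utc)},
]
```

&lt;p&gt;The notifier Lambda can then read whichever report is truly newest, even if the generator ran late.&lt;/p&gt;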



&lt;p&gt;4. Slack output was hard to read&lt;/p&gt;

&lt;p&gt;Issue:&lt;br&gt;
The publisher read counts in Slack were misaligned and difficult to follow.&lt;/p&gt;

&lt;p&gt;Fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Formatted the message using Slack-compatible triple backticks to wrap the table in a preformatted block.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
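&lt;p&gt;A minimal sketch of that formatting step (column widths and publisher names are illustrative): pad each row to fixed widths, then wrap the whole table in triple backticks so Slack renders it in a monospaced block:&lt;/p&gt;

```python
# Sketch: format publisher read counts as an aligned table wrapped in triple
# backticks so Slack shows it as a monospaced code block.
def format_slack_table(counts):
    fence = "`" * 3  # Slack's code-block delimiter
    lines = ["Publisher            Reads"]
    for pub, n in sorted(counts.items()):
        lines.append(f"{pub:20} {n:5d}")
    return fence + "\n" + "\n".join(lines) + "\n" + fence

table = format_slack_table({"Acme Press": 42, "Beta Books": 7})
```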



&lt;p&gt;5. S3 bucket getting cluttered&lt;/p&gt;

&lt;p&gt;Issue:&lt;br&gt;
Temporary Athena result files were crowding the output bucket.&lt;/p&gt;

&lt;p&gt;Fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Moved results to a 'temporary-athena-query-results/' folder prefix
Added a lifecycle policy to auto-delete them after 3 days.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
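&lt;p&gt;The lifecycle rule from the fix can be expressed as a &lt;code&gt;put_bucket_lifecycle_configuration&lt;/code&gt; payload. A sketch (the rule ID and bucket name are hypothetical):&lt;/p&gt;

```python
# Sketch: S3 lifecycle rule that auto-deletes temporary Athena results after
# 3 days. This dict is the payload for put_bucket_lifecycle_configuration.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-temp-athena-results",
            "Status": "Enabled",
            "Filter": {"Prefix": "temporary-athena-query-results/"},
            "Expiration": {"Days": 3},
        }
    ]
}

# Applying it (requires boto3 and AWS credentials):
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="athena-output-bucket",
#     LifecycleConfiguration=lifecycle_config,
# )
```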



&lt;h2&gt;
  
  
  Impact
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Eliminated all manual report generation&lt;/li&gt;
&lt;li&gt;Improved team visibility into reader engagement&lt;/li&gt;
&lt;li&gt;Ensured scalable, serverless infrastructure using AWS best practices&lt;/li&gt;
&lt;li&gt;Automated alerts improved issue tracking and data consistency&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This project was a great example of combining Athena, Lambda, S3, and EventBridge into a cost-efficient, automated reporting pipeline. If you’re working with serverless data workflows, this pattern is easily adaptable to product analytics, user activity tracking, sales dashboards, and more.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kickstarting Generative AI with Amazon Bedrock: Building a No-Code Product Description Generator</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Sat, 31 May 2025 05:03:22 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/kickstarting-generative-ai-with-amazon-bedrock-building-a-no-code-product-description-generator-1blh</link>
      <guid>https://dev.to/gurudev_prasadteketi/kickstarting-generative-ai-with-amazon-bedrock-building-a-no-code-product-description-generator-1blh</guid>
      <description>&lt;h2&gt;
  
  
  Introduction: What and Why
&lt;/h2&gt;

&lt;p&gt;In today’s fast-moving digital world, the ability to generate personalized, high-quality content at scale is a game-changer for businesses across industries. Whether it’s marketing copy, product descriptions, chatbot responses, or internal documentation, Generative AI has the potential to significantly reduce content creation time while improving consistency and creativity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That’s where Amazon Bedrock comes in.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;strong&gt;Amazon Bedrock&lt;/strong&gt; is a fully managed service that lets developers build and scale generative AI applications using foundation models (FMs) from leading AI providers like Anthropic (Claude), AI21, and Amazon Titan—without needing to manage any infrastructure.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This project explores how non-developers and beginners can quickly prototype a generative AI solution directly from the AWS Console. Our use case: generating fun, compelling product descriptions with no code using the Claude model via Bedrock.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use Case:&lt;/strong&gt;&lt;br&gt;
Generate AI-powered product descriptions for e-commerce products (e.g., a waterproof smartwatch for kids).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech Stack:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Bedrock (for text generation)&lt;/li&gt;
&lt;li&gt;Claude model (Anthropic's foundation model)&lt;/li&gt;
&lt;li&gt;AWS Console (no programming required)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Step-by-Step Instructions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Log in to the AWS Console&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;Go to Amazon Bedrock → Model Access → Enable Claude &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use Claude 3 Sonnet&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It offers a great balance between speed, cost, and reasoning power.&lt;/li&gt;
&lt;li&gt;Perfect for text generation tasks like product descriptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu39nyfjgjzyo0hj1k9ym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu39nyfjgjzyo0hj1k9ym.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lt41gh6n8fmy9nwl380.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5lt41gh6n8fmy9nwl380.png" alt="Image description" width="800" height="358"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg11k674ngdjrbkhkjwsw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg11k674ngdjrbkhkjwsw.png" alt="Image description" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Playground&lt;/li&gt;
&lt;li&gt;Use the “Text Generation” option&lt;/li&gt;
&lt;li&gt;Prompt:
"Generate a fun, engaging product description for a waterproof smartwatch designed for children ages 6 to 12."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eoxp7bibjcyz11zxe30.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4eoxp7bibjcyz11zxe30.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv2nz93wx71k9ubyqdn4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv2nz93wx71k9ubyqdn4.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0mzqrkfqm0ihjm5zz37.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo0mzqrkfqm0ihjm5zz37.png" alt="Image description" width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Explore variations with different Prompts and their results&lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;Generate a fun, engaging product description for a waterproof smartwatch designed for children ages 6 to 12.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here's a fun, engaging product description for a waterproof smartwatch designed for children ages 6 to 12:&lt;/p&gt;

&lt;p&gt;Dive into Adventure with the Aqua Kiddo Smartwatch!&lt;/p&gt;

&lt;p&gt;Calling all little adventurers! Get ready to make a splash with the Aqua Kiddo Smartwatch – the ultimate waterproof companion for your wild and wacky explorations. Whether you're splashing in the pool, braving the sprinklers, or caught in a surprise rain shower, this smartwatch is built to keep up with your aquatic antics.&lt;/p&gt;

&lt;p&gt;With its vibrant, customizable watchfaces and fun games, the Aqua Kiddo is more than just a timepiece – it's a portable playground strapped to your wrist! Challenge your friends to epic underwater dance-offs, or race against the clock in exciting tap-tastic games. And when you're not making waves, stay connected with your crew through fun emoji messages and reminders.&lt;/p&gt;

&lt;p&gt;But the fun doesn't stop there! This smartwatch is also a trusty sidekick for your land adventures. Track your steps, set fitness goals, and even unlock cool virtual rewards for your active accomplishments. And with its durable, water-resistant design, you don't have to worry about a little mud, sand, or spills slowing you down.&lt;/p&gt;

&lt;p&gt;So what are you waiting for? Dive headfirst into endless fun and excitement with the Aqua Kiddo Smartwatch – the ultimate aquatic accessory for kids who love to make a splash!&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Write a playful and humorous product description for a kids’ waterproof smartwatch.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Model: Here's a playful and humorous product description for a kids' waterproof smartwatch:&lt;/p&gt;

&lt;p&gt;Unleash the Aqua Adventure with the Splish-Splash SmartKid Watch!&lt;/p&gt;

&lt;p&gt;Are your little ones always making a splash, whether in the pool, at the beach, or even in the bathtub? Fear not, because the Splish-Splash SmartKid Watch is here to ensure their aquatic escapades are both fun and functional! This waterproof wonder is designed to withstand even the most epic water battles, complete with cannonball dives and whirlpool whirls.&lt;/p&gt;

&lt;p&gt;But wait, there's more! This smartwatch isn't just a sidekick for your child's underwater antics; it's a multi-talented marvel that'll keep them entertained and engaged, even when they're back on dry land. With its vibrant display and kid-friendly interface, they can track their steps, set reminders for playdates and chores (good luck with that one!), and even challenge their friends to virtual pet-raising competitions.&lt;/p&gt;

&lt;p&gt;And let's not forget the coolest feature of all – the built-in water detector! That's right, this watch knows when your little fish has taken the plunge, and it'll automatically switch to "aqua mode," complete with underwater games and challenges to keep them occupied while they're making waves.&lt;/p&gt;

&lt;p&gt;So, whether your kiddo is a budding marine biologist or just a professional puddle-jumper, the Splish-Splash SmartKid Watch is the perfect companion for their wet and wild adventures. Just be prepared for them to never want to leave the water (or your bathtub) again!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Other Options to Try&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Temperature&lt;/strong&gt; - Controls creativity. Lower = focused, Higher = diverse/random - Try 0.7 (default), 0.3 (precise), 0.9 (creative)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Top P&lt;/strong&gt; - Nucleus sampling: restricts choices to the smallest set of tokens whose cumulative probability reaches P - Keep it at 1.0 unless experimenting&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Max Tokens&lt;/strong&gt; - Controls length of output - Start with 300–500&lt;/p&gt;
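&lt;p&gt;When you graduate from the playground to the API, these knobs become fields in the request body. A sketch of a Claude 3 Messages-API body for Bedrock's &lt;code&gt;InvokeModel&lt;/code&gt; (the values mirror the guidance above and are illustrative):&lt;/p&gt;

```python
import json

# Sketch: request body for Claude 3 Sonnet via the Bedrock Messages API.
# Parameter values follow the guidance above and are illustrative.
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 400,    # output length: start with 300-500
    "temperature": 0.7,   # creativity: 0.3 precise, 0.9 creative
    "top_p": 1.0,         # leave at 1.0 unless experimenting
    "messages": [{
        "role": "user",
        "content": "Generate a fun, engaging product description for a "
                   "waterproof smartwatch designed for children ages 6 to 12.",
    }],
})

# Invocation (requires boto3, credentials, and Bedrock model access):
# bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = bedrock.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0", body=body)
```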

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04dwxhg9vs3my7p3tdob.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04dwxhg9vs3my7p3tdob.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Takeaways&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;No-code start: You don’t need to write any code to start experimenting with generative AI on AWS.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Prompt engineering matters: The way you phrase your input prompt greatly affects the quality and tone of the output.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Great for rapid prototyping: Bedrock is ideal for trying different use cases like product content, customer support scripts, or email generation.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This mini-project demonstrates how easy and powerful it is to prototype generative AI use cases using Amazon Bedrock and Claude 3 Sonnet—without writing a single line of code. It's a perfect starting point for anyone exploring content automation, prompt engineering, or AI-driven creativity on AWS.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Automate AWS Cost Monitoring &amp; Explanations using EventBridge + Bedrock</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Fri, 25 Apr 2025 13:54:43 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/automate-aws-cost-monitoring-explanations-using-eventbridge-bedrock-57l1</link>
      <guid>https://dev.to/gurudev_prasadteketi/automate-aws-cost-monitoring-explanations-using-eventbridge-bedrock-57l1</guid>
      <description>&lt;h2&gt;
  
  
Key Concepts We'll Explore
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Cost Tools - Cost Explorer, Budgets, Trusted Advisor, Compute Optimizer&lt;/li&gt;
&lt;li&gt;EventBridge - Rule-based automation, event pattern filtering&lt;/li&gt;
&lt;li&gt;Bedrock + LangChain (lightweight use) - Trigger an LLM to explain cost anomalies&lt;/li&gt;
&lt;li&gt;Lambda - Glue logic between AWS services&lt;/li&gt;
&lt;li&gt;SNS - Cost alerts via email/SMS&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;🛠️ Architecture:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F796ni38a1zi33dlwthdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F796ni38a1zi33dlwthdn.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1:
&lt;/h2&gt;

&lt;p&gt;Enable AWS Cost Explorer&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64havsnbfdqwjjjxktoi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F64havsnbfdqwjjjxktoi.png" alt="Image description" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmdidddefd0tll92no0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhmdidddefd0tll92no0i.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2:
&lt;/h2&gt;

&lt;p&gt;Configure AWS Budgets with a per-service cost threshold (e.g., EC2 &amp;gt; $20/day)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5xmztq2ico9p0jxdywo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5xmztq2ico9p0jxdywo.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;
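&lt;p&gt;If you later want to provision the same budget programmatically, it can be sketched as a &lt;code&gt;create_budget&lt;/code&gt; payload (the account ID, topic ARN, and amounts are placeholders):&lt;/p&gt;

```python
# Sketch: AWS Budgets payload for a $20/day EC2 cost budget that notifies an
# SNS topic once actual spend reaches 100% of the limit. IDs are placeholders.
budget = {
    "BudgetName": "ec2-daily-cost",
    "BudgetLimit": {"Amount": "20", "Unit": "USD"},
    "TimeUnit": "DAILY",
    "BudgetType": "COST",
    "CostFilters": {"Service": ["Amazon Elastic Compute Cloud - Compute"]},
}

notification = {
    "NotificationType": "ACTUAL",
    "ComparisonOperator": "GREATER_THAN",
    "Threshold": 100.0,
    "ThresholdType": "PERCENTAGE",
}

# Creating it (requires boto3 and AWS credentials):
# budgets = boto3.client("budgets")
# budgets.create_budget(
#     AccountId="123456789012",
#     Budget=budget,
#     NotificationsWithSubscribers=[{
#         "Notification": notification,
#         "Subscribers": [{"SubscriptionType": "SNS",
#                          "Address": "arn:aws:sns:us-east-1:123456789012:CostAlertsTopic"}],
#     }],
# )
```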

&lt;h2&gt;
  
  
  Step 3:
&lt;/h2&gt;

&lt;p&gt;Create an SNS Topic (e.g., CostAlertsTopic) and subscribe via email&lt;/p&gt;
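The same topic and email subscription can be set up with boto3 — a sketch (the subscriber must still click the confirmation link before receiving alerts):

```python
def create_cost_alerts_topic(email):
    """Create CostAlertsTopic and subscribe an email endpoint to it."""
    import boto3  # local import so the sketch can be read without boto3 installed
    sns = boto3.client("sns")
    topic_arn = sns.create_topic(Name="CostAlertsTopic")["TopicArn"]
    # Email subscriptions stay "pending confirmation" until the link is clicked
    sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint=email)
    return topic_arn
```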

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabvqxs2c8bbqc51z5t45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabvqxs2c8bbqc51z5t45.png" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4:
&lt;/h2&gt;

&lt;p&gt;Enable Amazon Bedrock, which is available in regions including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;us-east-1 (N. Virginia)&lt;/li&gt;
&lt;li&gt;us-west-2 (Oregon)&lt;/li&gt;
&lt;li&gt;eu-central-1 (Frankfurt)&lt;/li&gt;
&lt;li&gt;ap-southeast-1 (Singapore)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Recommended Model: Anthropic Claude v2 (via Amazon Bedrock)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Why Claude v2?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Excellent at summarization and natural explanations&lt;/li&gt;
&lt;li&gt;Fast and reliable via Bedrock API&lt;/li&gt;
&lt;li&gt;Supports secure inference with no fine-tuning needed&lt;/li&gt;
&lt;li&gt;Default model in many Bedrock-based examples&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Model Details for Bedrock Invocation&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Model ID: "anthropic.claude-v2"&lt;/li&gt;
&lt;li&gt;Provider: Anthropic&lt;/li&gt;
&lt;li&gt;API Method: InvokeModel&lt;/li&gt;
&lt;li&gt;Region: us-east-1 or other supported region&lt;/li&gt;
&lt;li&gt;Input format: Anthropic-specific prompt (not plain OpenAI style)&lt;/li&gt;
&lt;/ul&gt;
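That last point matters in practice: the Claude v2 text-completion API expects the Anthropic turn markers (`Human:` / `Assistant:`) in the prompt, not a bare instruction string. A small helper for building a valid request body:

```python
import json

def build_claude_v2_body(user_text, max_tokens=500):
    """Wrap user text in the Human/Assistant turns that anthropic.claude-v2 expects."""
    prompt = f"\n\nHuman: {user_text}\n\nAssistant:"
    return json.dumps({"prompt": prompt, "max_tokens_to_sample": max_tokens})
```

Pass the returned string as the `body` argument to `InvokeModel`; the model's reply comes back in the `completion` field of the response body.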

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvflj97hrht8yeed2yvxq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvflj97hrht8yeed2yvxq.png" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5:
&lt;/h2&gt;

&lt;p&gt;Set up an EventBridge rule with the Lambda function as its target&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "source": ["aws.budgets"],
  "detail-type": ["Budget Threshold Breached"]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
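The rule and its Lambda target can also be created with boto3 — a sketch where the rule name and Lambda ARN are placeholders:

```python
import json

# Event pattern matching the budget alert events, as configured above
BUDGET_EVENT_PATTERN = {
    "source": ["aws.budgets"],
    "detail-type": ["Budget Threshold Breached"],
}

def create_cost_alert_rule(lambda_arn, rule_name="cost-spike-rule"):
    """Create the EventBridge rule and point it at the analysis Lambda."""
    import boto3  # local import so the sketch can be read without boto3 installed
    events = boto3.client("events")
    events.put_rule(Name=rule_name, EventPattern=json.dumps(BUDGET_EVENT_PATTERN))
    events.put_targets(Rule=rule_name, Targets=[{"Id": "cost-lambda", "Arn": lambda_arn}])
    # Remember to also grant EventBridge permission to invoke the function
    # (lambda add-permission with principal events.amazonaws.com).
```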



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa69oxlov4fi7jnkyjvo6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa69oxlov4fi7jnkyjvo6.png" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8l067a4zvaivfaqkn64b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8l067a4zvaivfaqkn64b.png" alt="Image description" width="800" height="356"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6:
&lt;/h2&gt;

&lt;p&gt;Lambda Function: Analyze &amp;amp; Explain Cost Spike&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import os

def lambda_handler(event, context):
    bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    anomaly_detail = json.dumps(event['detail'], indent=2)
    prompt = f"Explain the following AWS cost anomaly in plain English:\n\n{anomaly_detail}"

    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-v2",  # or other Bedrock-supported LLM
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 500}),
        contentType="application/json"
    )

    output = json.loads(response['body'].read())
    explanation = output.get("completion", "No explanation generated.")

    sns = boto3.client('sns')
    sns.publish(
        TopicArn=os.environ["SNS_TOPIC_ARN"],
        Subject="AWS Cost Alert 📈",
        Message=explanation
    )

    return {"statusCode": 200, "body": explanation}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Note: Set SNS_TOPIC_ARN as an environment variable on the Lambda function
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4pxy45yvo2f4mcuryqr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fe4pxy45yvo2f4mcuryqr.png" alt="Image description" width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpws8lfmzsgr20ksa655g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpws8lfmzsgr20ksa655g.png" alt="Image description" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this hands-on project, we combined the power of AWS automation and Generative AI to create an intelligent, event-driven cost monitoring system. &lt;br&gt;
By leveraging AWS Budgets, EventBridge, Lambda, and Amazon Bedrock, we not only detected cost anomalies in real time but also generated human-readable explanations using LLMs — making cloud spend analysis more accessible and actionable.&lt;/p&gt;

&lt;p&gt;This is just the beginning of what’s possible when AI meets Cloud FinOps.&lt;br&gt;
Imagine extending this to proactive optimization suggestions, ticket creation, or even Slack alerts — all powered by serverless workflows and GenAI.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Design and Manage Pub/Sub Messaging with Amazon Simple Notification Service (SNS)</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Fri, 21 Mar 2025 03:14:14 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/design-and-manage-pubsub-messaging-with-amazon-simple-notification-service-sns-42en</link>
      <guid>https://dev.to/gurudev_prasadteketi/design-and-manage-pubsub-messaging-with-amazon-simple-notification-service-sns-42en</guid>
      <description>&lt;h2&gt;
  
  
  Amazon Simple Notification Service (Amazon SNS)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amazon Simple Notification Service (Amazon SNS) is a fully managed messaging service that enables the coordination of message delivery from publishers to subscribers. &lt;/li&gt;
&lt;li&gt;It provides a highly available, secure, and scalable messaging infrastructure, allowing applications to send notifications to various endpoints, such as email addresses, mobile devices, or AWS services. &lt;/li&gt;
&lt;li&gt;With Amazon SNS, you can decouple message producers from consumers, enabling asynchronous communication and event-driven architectures.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Objectives
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Configure message filtering policies in Amazon SNS to control message delivery to subscribers.&lt;/li&gt;
&lt;li&gt;Set up CloudWatch metrics to monitor SNS message activity.&lt;/li&gt;
&lt;li&gt;Create SNS topic subscriptions with multi-attribute filtering rules.&lt;/li&gt;
&lt;li&gt;Use CloudWatch metrics to monitor message delivery and filtering effectiveness.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Task 1: Subscribe to an SNS topic
&lt;/h2&gt;

&lt;p&gt;Set up multiple subscriptions to the Amazon SNS topic to simulate different teams receiving notifications. You’ll start with a basic subscription and then create additional subscriptions for the customer support and inventory management teams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task 1.1: Configure Initial Subscription&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open Simple Notification Service.&lt;/li&gt;
&lt;li&gt;Select Topics.&lt;/li&gt;
&lt;li&gt;Select ecommerce-notifications.&lt;/li&gt;
&lt;li&gt;On the ecommerce-notifications topic page, choose Create subscription.&lt;/li&gt;
&lt;li&gt;On the Create subscription page, configure the following settings:&lt;/li&gt;
&lt;li&gt;Topic ARN: The ARN of the ecommerce-notifications topic is already filled in. Leave this unchanged.&lt;/li&gt;
&lt;li&gt;Protocol: Choose Email.&lt;/li&gt;
&lt;li&gt;Endpoint: Enter an email address where you can receive notifications during this lab.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Note&lt;/strong&gt;: You can create multiple subscriptions for a single topic, and each subscription can use different protocols including email, SMS (text messages), or other AWS services. This allows you to send the same message to multiple destinations simultaneously.&lt;/li&gt;
&lt;li&gt;Choose Create subscription.&lt;/li&gt;
&lt;li&gt;Check your email for a message from AWS Notifications.&lt;/li&gt;
&lt;li&gt;In the email body, choose the Confirm subscription link.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Note&lt;/strong&gt;: Now that your email address is subscribed to the topic, you will receive a notification whenever a new message is published to it.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjaewl45wcjs3f0vdd4x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjaewl45wcjs3f0vdd4x.png" alt="Image description" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2iet35hgdv6td2kk65th.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2iet35hgdv6td2kk65th.png" alt="Image description" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n42j9po8f21foq83xak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6n42j9po8f21foq83xak.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task 1.2: Create Team-Specific Subscriptions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;From the breadcrumbs at the top of the screen, choose the ecommerce-notifications link.&lt;/li&gt;
&lt;li&gt;On the ecommerce-notifications topic page, choose Create subscription.&lt;/li&gt;
&lt;li&gt;Configure the following settings:

&lt;ul&gt;
&lt;li&gt; Protocol: Choose Email&lt;/li&gt;
&lt;li&gt; Endpoint: Enter your email address with +support added before the @ symbol. Example: &lt;a href="mailto:student+support@example.com"&gt;student+support@example.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Choose Create subscription.&lt;/li&gt;

&lt;li&gt;Repeat the process for the inventory management team:&lt;/li&gt;

&lt;li&gt;Return to the ecommerce-notifications page.&lt;/li&gt;

&lt;li&gt;Choose Create subscription

&lt;ul&gt;
&lt;li&gt; Protocol: Choose Email&lt;/li&gt;
&lt;li&gt; Endpoint: Enter your email address with +inventory added before the @ symbol. Example: &lt;a href="mailto:student+inventory@example.com"&gt;student+inventory@example.com&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Choose Create subscription&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Task 1.3: Verify All Subscriptions&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check your email inbox for two additional confirmation messages from AWS Notifications.&lt;/li&gt;
&lt;li&gt;Confirm both new subscriptions by clicking the Confirm subscription link in each email.&lt;/li&gt;
&lt;li&gt;Return to the SNS console and select Subscriptions from the left navigation menu.&lt;/li&gt;
&lt;li&gt;Verify that you see the following three confirmed subscriptions:

&lt;ul&gt;
&lt;li&gt;Your original email address&lt;/li&gt;
&lt;li&gt;Your email address with +support&lt;/li&gt;
&lt;li&gt;Your email address with +inventory&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr843jae6onlu004d1sq8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr843jae6onlu004d1sq8.png" alt="Image description" width="800" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l7idr6kma8s3yq2mbh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh0l7idr6kma8s3yq2mbh.png" alt="Image description" width="800" height="441"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz8srd227zkl0dj8l0y8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzz8srd227zkl0dj8l0y8.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 2: Monitor SNS Performance with CloudWatch
&lt;/h2&gt;

&lt;p&gt;Amazon CloudWatch provides valuable insights into the performance and health of your SNS topics.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Let’s create a dashboard to monitor key metrics, including delivery and failure rates.&lt;/li&gt;
&lt;li&gt;This visibility helps you ensure your notification system is working efficiently and allows you to quickly identify any potential issues.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;By tracking metrics like the number of messages published, successful deliveries, and failed notifications, you can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Verify that messages are being delivered to subscribers&lt;/li&gt;
&lt;li&gt;Identify potential delivery problems&lt;/li&gt;
&lt;li&gt;Monitor the overall health of your SNS topic&lt;/li&gt;
&lt;li&gt;Make data-driven decisions about your notification system&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Open CloudWatch.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Metrics, choose All metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the Metrics search bar, enter SNS and press Enter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the filtered list of SNS Metrics, select SNS &amp;gt; Topic Metrics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A list of topics and available metrics appears.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Identify the metrics for the ecommerce-notifications topic and select the checkboxes next to the following metrics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;NumberOfMessagesPublished&lt;/li&gt;
&lt;li&gt;NumberOfNotificationsDelivered&lt;/li&gt;
&lt;li&gt;NumberOfNotificationsFailed
&lt;strong&gt;WARNING: Do not select the PublishSize metric.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Now that you’ve selected the metrics to track, create a new CloudWatch dashboard and add them to it.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Open the Actions menu at the top of the screen and select Add to dashboard.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;In the Add to dashboard popup window, choose Create new.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Configure the following for your dashboard:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create new dashboard: Enter SNSMetricsDashboard.&lt;/li&gt;
&lt;li&gt;Widget type: Choose Stacked area.&lt;/li&gt;
&lt;li&gt;Customize widget title: Enter SNS Performance and Health Metrics.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Choose the Create button.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Choose Add to dashboard button.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Choose Save.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
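The dashboard metrics above can also be read programmatically, which is handy for scripting health checks. A sketch using the CloudWatch API (the topic name is a placeholder):

```python
import datetime

def sns_delivered_last_hour(topic_name):
    """Sum NumberOfNotificationsDelivered for an SNS topic over the last hour."""
    import boto3  # local import so the sketch can be read without boto3 installed
    cw = boto3.client("cloudwatch")
    now = datetime.datetime.utcnow()
    resp = cw.get_metric_statistics(
        Namespace="AWS/SNS",
        MetricName="NumberOfNotificationsDelivered",
        Dimensions=[{"Name": "TopicName", "Value": topic_name}],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=300,                # 5-minute buckets
        Statistics=["Sum"],
    )
    return sum(dp["Sum"] for dp in resp["Datapoints"])
```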

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qage5ymq7vlz9qnhmq0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qage5ymq7vlz9qnhmq0.png" alt="Image description" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7imqg0ek43jmp5nran9c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7imqg0ek43jmp5nran9c.png" alt="Image description" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fool8yzp51zwstu4hkt98.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fool8yzp51zwstu4hkt98.png" alt="Image description" width="800" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fbj35yj174bi8z8ycl5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2fbj35yj174bi8z8ycl5.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx93m8bljkrfwfu3m8v5k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx93m8bljkrfwfu3m8v5k.png" alt="Image description" width="800" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3i9x978y77c67twairo2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3i9x978y77c67twairo2.png" alt="Image description" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 3: Implement Message Filtering
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Implement message filtering to ensure that each team receives only the notifications relevant to its responsibilities. You work with an existing Lambda function that publishes various types of messages to your SNS topic:&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;High-value order notifications for the sales team&lt;/li&gt;
&lt;li&gt;Support tickets for the customer service team&lt;/li&gt;
&lt;li&gt;Inventory alerts for the warehouse team&lt;/li&gt;
&lt;/ol&gt;

&lt;ul&gt;
&lt;li&gt;By configuring message filtering policies, you’ll ensure that each team receives only the notifications they need to act upon, reducing noise and improving operational efficiency.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Task 3.1: Configure Filtering Policies&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open Simple Notification Service.&lt;/li&gt;
&lt;li&gt;Choose Subscriptions.&lt;/li&gt;
&lt;li&gt;You start by updating the main subscription (your original email address).&lt;/li&gt;
&lt;li&gt;Select the radio button next to the subscription with your original email address and choose the Edit button at the top of the screen.&lt;/li&gt;
&lt;li&gt;On the Edit subscription screen, expand the Subscription filter policy - optional section.&lt;/li&gt;
&lt;li&gt;Use the Subscription filter policy toggle button to enable filtering for this subscription.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Note&lt;/strong&gt;: The filter policy scope in Amazon SNS determines whether filter conditions are applied to message attributes or the message body. Choosing Message attributes applies filtering based on metadata, whereas Message body filters based on content, allowing precise control over message delivery to subscribers.&lt;/li&gt;
&lt;li&gt;Choose Message attributes.&lt;/li&gt;
&lt;li&gt;In the JSON editor text box, enter the following policy:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
     "category": ["high_value"],
     "message_type": ["order_notification"]
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Save&lt;/li&gt;
&lt;li&gt;Repeat the same steps for the +support and +inventory subscriptions, using the following filter policies respectively.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "message_type": ["support_ticket"],
  "category": ["customer_service"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "message_type": ["inventory_alert"],
  "category": ["inventory"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
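The three console edits above can also be applied with boto3, which is useful when managing many subscriptions. A sketch (the subscription ARN is a placeholder; the team keys are illustrative):

```python
import json

# Filter policies from Task 3.1, keyed by subscription
FILTER_POLICIES = {
    "main": {"category": ["high_value"], "message_type": ["order_notification"]},
    "support": {"category": ["customer_service"], "message_type": ["support_ticket"]},
    "inventory": {"category": ["inventory"], "message_type": ["inventory_alert"]},
}

def apply_filter_policy(subscription_arn, team):
    """Attach a team's filter policy to an existing subscription."""
    import boto3  # local import so the sketch can be read without boto3 installed
    boto3.client("sns").set_subscription_attributes(
        SubscriptionArn=subscription_arn,
        AttributeName="FilterPolicy",
        AttributeValue=json.dumps(FILTER_POLICIES[team]),
    )
```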



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9x3qoc9nq7p3farh6ym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh9x3qoc9nq7p3farh6ym.png" alt="Image description" width="800" height="343"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfoavj1c7rfo8uqp6k04.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnfoavj1c7rfo8uqp6k04.png" alt="Image description" width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5x1kfbhwtn9pmzgy8zk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg5x1kfbhwtn9pmzgy8zk.png" alt="Image description" width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3a89t3hz7qnoftaq3fj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu3a89t3hz7qnoftaq3fj.png" alt="Image description" width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztpfxd5ae65cuoh8g162.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fztpfxd5ae65cuoh8g162.png" alt="Image description" width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task 3.2: Test Message Filtering&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Return to the Topics page and select your topic.&lt;/li&gt;
&lt;li&gt;Choose Publish message.&lt;/li&gt;
&lt;li&gt;Start by confirming that your main email address receives notifications when high value orders are placed.&lt;/li&gt;
&lt;li&gt;Choose the Publish message button at the top of the screen and create a test message for a high value order:

&lt;ul&gt;
&lt;li&gt;Subject: Enter Test - High value order alert&lt;/li&gt;
&lt;li&gt;Message body: Enter An order for $750 has been placed&lt;/li&gt;
&lt;li&gt;Add message attribute:&lt;/li&gt;
&lt;li&gt;Type: Choose String&lt;/li&gt;
&lt;li&gt;Name: Enter category&lt;/li&gt;
&lt;li&gt;Value: Enter high_value&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Choose Add another attribute

&lt;ul&gt;
&lt;li&gt;Add message attribute:&lt;/li&gt;
&lt;li&gt;Type: Choose String&lt;/li&gt;
&lt;li&gt;Name: Enter message_type&lt;/li&gt;
&lt;li&gt;Value: Enter order_notification&lt;/li&gt;
&lt;li&gt;Choose Publish message&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Check your email inbox:&lt;/li&gt;

&lt;li&gt;Look at the email header and confirm that it was sent to your original email address.&lt;br&gt;
The +support and +inventory aliases should not receive this message.&lt;br&gt;
&lt;strong&gt;Note&lt;/strong&gt;: When using multiple attributes in a filter policy, SNS applies an AND operation between them. This means a subscription will only receive a message if ALL attributes in its filter policy match the message’s attributes. For example, given the policy you just created:
&lt;/li&gt;

&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
     "category": ["high_value"],
     "message_type": ["order_notification"]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your email address will only receive notifications that have both:&lt;br&gt;
&lt;em&gt;category = high_value AND&lt;br&gt;
message_type = order_notification&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This powerful filtering capability has many applications beyond e-commerce. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A weather alert system could filter by both severity (critical) AND location (west-coast)&lt;/li&gt;
&lt;li&gt;A security monitoring system could filter by both threat level (high) AND asset type (database)&lt;/li&gt;
&lt;li&gt;A manufacturing system could filter by both machine status (maintenance-required) AND facility location (plant-A)&lt;/li&gt;
&lt;li&gt;This allows you to create very specific routing rules for your messages, ensuring that subscribers only receive the exact subset of messages they need to act upon.&lt;/li&gt;
&lt;/ul&gt;
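The console publish steps from Task 3.2 can be sketched with boto3 as well — the topic ARN is a placeholder, and the helper names are illustrative:

```python
def string_attrs(**kwargs):
    """Build an SNS MessageAttributes dict from plain string values."""
    return {k: {"DataType": "String", "StringValue": v} for k, v in kwargs.items()}

def publish_high_value_order(topic_arn):
    """Publish a test message that matches the high-value order filter policy."""
    import boto3  # local import so the sketch can be read without boto3 installed
    return boto3.client("sns").publish(
        TopicArn=topic_arn,
        Subject="Test - High value order alert",
        Message="An order for $750 has been placed",
        MessageAttributes=string_attrs(
            category="high_value",
            message_type="order_notification",
        ),
    )
```

Only subscriptions whose filter policy matches both attributes will receive the message.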

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasfhu62xaqxi8fzlyalp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fasfhu62xaqxi8fzlyalp.png" alt="Image description" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1wjg233bcw8y5p651ww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1wjg233bcw8y5p651ww.png" alt="Image description" width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg7ctuvc6yu06nrz1p6d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpg7ctuvc6yu06nrz1p6d.png" alt="Image description" width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Now test that your support team subscription receives customer service tickets.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Choose the Publish message button at the top of the screen and create the following test message for support tickets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subject: Enter Test - Customer support ticket&lt;/li&gt;
&lt;li&gt;Message body: Enter New support ticket opened&lt;/li&gt;
&lt;li&gt;Add message attribute:&lt;/li&gt;
&lt;li&gt;Type: Choose String&lt;/li&gt;
&lt;li&gt;Name: Enter category&lt;/li&gt;
&lt;li&gt;Value: Enter customer_service&lt;/li&gt;
&lt;li&gt;Choose Add another attribute&lt;/li&gt;
&lt;li&gt;Add message attribute:&lt;/li&gt;
&lt;li&gt;Type: Choose String&lt;/li&gt;
&lt;li&gt;Name: Enter message_type&lt;/li&gt;
&lt;li&gt;Value: Enter support_ticket&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Choose Publish message&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Check your email inbox:&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Look at the email header and confirm that it was sent to your +support email address&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Your original email and +inventory alias should not receive this message&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Create a test message for inventory alerts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subject: Enter Test - Inventory alert&lt;/li&gt;
&lt;li&gt;Message body: Enter Low stock warning&lt;/li&gt;
&lt;li&gt;First message attribute: for Type choose String, for Name enter category, for Value enter inventory&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Choose Add another attribute&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Second message attribute: for Type choose String, for Name enter message_type, for Value enter inventory_alert&lt;/li&gt;
&lt;li&gt;Choose Publish message&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Check your email inbox:&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Look at the email header and confirm that it was sent to your +inventory email address&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Your original email and +support alias should not receive this message&lt;br&gt;&lt;br&gt;
The following table shows a summary of your filtering configuration:&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;
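&lt;p&gt;Behind these steps, SNS compares each message's attributes against every subscription's filter policy and delivers only to subscriptions that match. The Python sketch below illustrates the exact-string matching used in this lab; the &lt;code&gt;matches&lt;/code&gt; helper is illustrative, not part of the SNS API, and real filter policies also support prefix, anything-but, and numeric matching:&lt;/p&gt;

```python
# Minimal sketch of SNS filter-policy matching using exact string comparison.
# Real SNS policies also support prefix, anything-but, and numeric matching.

def matches(filter_policy, message_attributes):
    """A message matches when every policy key is present in the message
    attributes and the attribute value is in the policy's allowed list."""
    return all(
        key in message_attributes and message_attributes[key] in allowed
        for key, allowed in filter_policy.items()
    )

# Filter policies of the two subscriptions in this lab
support_policy = {"category": ["customer_service"]}
inventory_policy = {"category": ["inventory"]}

# The test message published for support tickets
ticket_attrs = {"category": "customer_service", "message_type": "support_ticket"}

print(matches(support_policy, ticket_attrs))    # True: +support receives it
print(matches(inventory_policy, ticket_attrs))  # False: +inventory does not
```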

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ciiuemkfh05trwlf8oe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ciiuemkfh05trwlf8oe.png" alt="Image description" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 4: Analyze SNS Performance Using CloudWatch
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Task 4.1: Review and update CloudWatch Dashboard&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose CloudWatch.&lt;/li&gt;
&lt;li&gt;Choose Dashboards.&lt;/li&gt;
&lt;li&gt;Select the dashboard named SNSMetricsDashboard that you created earlier.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Note&lt;/strong&gt;: To adjust the dashboard size, look for the resize handle in the bottom-right corner. You can drag this handle to increase the viewing area of your dashboard.&lt;/li&gt;
&lt;li&gt;In the time series bar above the dashboard, select Custom and then choose 30 minutes.&lt;/li&gt;
&lt;li&gt;In the dashboard, observe the following metrics:

&lt;ul&gt;
&lt;li&gt;NumberOfMessagesPublished: Shows how many messages the Lambda function is publishing&lt;/li&gt;
&lt;li&gt;NumberOfNotificationsDelivered: Shows successful message deliveries to subscribers&lt;/li&gt;
&lt;li&gt;NumberOfNotificationsFailed: Shows failed delivery attempts&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; &lt;strong&gt;Note&lt;/strong&gt;: You should see:

&lt;ul&gt;
&lt;li&gt;A largely consistent NumberOfMessagesPublished, because publishing is driven by a Lambda function that triggers at regular intervals&lt;/li&gt;
&lt;li&gt;Few or no NotificationsFailed, indicating successful delivery&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;

&lt;li&gt;Hover over any data point in the graphs to see exact values at that time.&lt;/li&gt;

&lt;li&gt;Now, let’s investigate how filtering has affected your SNS topic.&lt;/li&gt;

&lt;li&gt;In the upper-right corner of your dashboard, select the icon with three vertical dots.&lt;/li&gt;

&lt;li&gt;From the menu that appears on screen, choose Edit.&lt;/li&gt;

&lt;li&gt;Choose the Browse tab.&lt;/li&gt;

&lt;li&gt;Enter NumberOfNotificationsFilteredOut-MessageAttributes in the search bar and press Enter.&lt;/li&gt;

&lt;li&gt;Choose SNS &amp;gt; Topic Metrics.&lt;/li&gt;

&lt;li&gt;Select the checkbox next to ecommerce-notifications and then choose Update widget.&lt;/li&gt;

&lt;li&gt;Choose Save.&lt;/li&gt;

&lt;li&gt;With the additional metric added to your dashboard, you can now see how filtering has affected your SNS topic.&lt;/li&gt;

&lt;/ul&gt;
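&lt;p&gt;A quick way to interpret these metrics is to derive a delivery success rate. The sketch below uses made-up sample numbers, not values from the lab account; note that deliveries can exceed publishes because one published message fans out to every matching subscriber:&lt;/p&gt;

```python
# Sketch: derive a delivery success rate from the dashboard metrics.
# The numbers are illustrative samples, not values from the lab account.

def delivery_success_rate(delivered, failed):
    """Fraction of delivery attempts that succeeded."""
    attempts = delivered + failed
    return delivered / attempts if attempts else 1.0

published = 120   # NumberOfMessagesPublished
delivered = 238   # NumberOfNotificationsDelivered (fan-out: one publish,
                  # several subscriber deliveries)
failed = 2        # NumberOfNotificationsFailed

print(f"success rate: {delivery_success_rate(delivered, failed):.1%}")
```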

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flee6tjv70sjcjhxws6ee.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flee6tjv70sjcjhxws6ee.png" alt="Image description" width="800" height="372"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5g9fufzl53uljp107ze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5g9fufzl53uljp107ze.png" alt="Image description" width="800" height="1041"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff36prux5at74jvrd4bp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff36prux5at74jvrd4bp4.png" alt="Image description" width="800" height="851"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task 4.2: Create CloudWatch Alarms&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the CloudWatch console, expand the navigation menu by choosing the menu icon  in the upper-left corner, then open the Alarms menu and select All alarms.&lt;/li&gt;
&lt;li&gt;Choose Create alarm.&lt;/li&gt;
&lt;li&gt;Choose Select metric.&lt;/li&gt;
&lt;li&gt;Select the Browse tab.&lt;/li&gt;
&lt;li&gt;Enter NumberOfNotificationsFailed in the search bar and press Enter.&lt;/li&gt;
&lt;li&gt;Choose SNS &amp;gt; Topic Metrics.&lt;/li&gt;
&lt;li&gt;Select the checkbox next to ecommerce-notifications and then choose Select metric.&lt;/li&gt;
&lt;li&gt;Configure the following alarm conditions:&lt;/li&gt;
&lt;li&gt;Under Metric, leave the default settings&lt;/li&gt;
&lt;li&gt;Under Conditions:

&lt;ul&gt;
&lt;li&gt;For Threshold type, choose Static&lt;/li&gt;
&lt;li&gt;For Whenever NumberOfNotificationsFailed is…, choose Greater&lt;/li&gt;
&lt;li&gt;For than…, enter 0&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; Note: This alarm will trigger whenever any message fails to deliver. Setting the threshold to 0 ensures you’re notified of even a single failure, which is important for maintaining reliable message delivery in a production environment.&lt;/li&gt;

&lt;li&gt;Choose Next.&lt;/li&gt;

&lt;li&gt;Configure the notification:&lt;/li&gt;

&lt;li&gt;Under Notification, choose Create new topic

&lt;ul&gt;
&lt;li&gt;For Create a new topic, enter sns-alarms&lt;/li&gt;
&lt;li&gt;For Email endpoints, enter your email address&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Choose Create topic&lt;/li&gt;

&lt;li&gt;Choose Next.&lt;/li&gt;

&lt;li&gt;For Alarm name, enter SNSFailedNotifications.&lt;/li&gt;

&lt;li&gt;Choose Next.&lt;/li&gt;

&lt;li&gt;Review your alarm configuration and choose Create alarm.&lt;/li&gt;

&lt;li&gt;Check your email and confirm the subscription to the new SNS topic.&lt;/li&gt;

&lt;/ul&gt;
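&lt;p&gt;The console steps above correspond to the parameters of CloudWatch's PutMetricAlarm API. The sketch below mirrors this lab's alarm as a boto3-style parameter dictionary; the Statistic, Period, and AlarmActions ARN are assumed placeholders standing in for the console defaults:&lt;/p&gt;

```python
# Sketch: the alarm created in the console, expressed as parameters in the
# shape of CloudWatch's PutMetricAlarm API (e.g. boto3 put_metric_alarm).
# Statistic, Period, and the AlarmActions ARN are assumed placeholders.

alarm = {
    "AlarmName": "SNSFailedNotifications",
    "Namespace": "AWS/SNS",
    "MetricName": "NumberOfNotificationsFailed",
    "Dimensions": [{"Name": "TopicName", "Value": "ecommerce-notifications"}],
    "Statistic": "Sum",
    "Period": 300,             # assumed 5-minute default period
    "EvaluationPeriods": 1,
    "Threshold": 0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:REGION:ACCOUNT_ID:sns-alarms"],
}

# With Threshold 0 and GreaterThanThreshold, even a single failed
# delivery in a period puts the alarm into the ALARM state.
print(alarm["AlarmName"], "fires when failures exceed", alarm["Threshold"])
```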

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqtf7iyyicexgzf6x0i2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjqtf7iyyicexgzf6x0i2.png" alt="Image description" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa54xyrsr1ir6oe8d4ub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqa54xyrsr1ir6oe8d4ub.png" alt="Image description" width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73mbqgjt318nncfllgex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F73mbqgjt318nncfllgex.png" alt="Image description" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Review
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Configured message filtering policies in Amazon SNS to control message delivery to subscribers.&lt;/li&gt;
&lt;li&gt;Set up CloudWatch metrics to monitor SNS message activity.&lt;/li&gt;
&lt;li&gt;Created SNS topic subscriptions with multi-attribute filtering rules.&lt;/li&gt;
&lt;li&gt;Used CloudWatch metrics to monitor message delivery and filtering effectiveness.&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Zero Trust Architecture for Service-to-Service Workload in AWS</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Thu, 20 Mar 2025 10:25:13 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/zero-trust-architecture-for-service-to-service-workload-in-aws-57j3</link>
      <guid>https://dev.to/gurudev_prasadteketi/zero-trust-architecture-for-service-to-service-workload-in-aws-57j3</guid>
      <description>&lt;h2&gt;
  
  
  Zero Trust Architecture in AWS
&lt;/h2&gt;

&lt;p&gt;is a security framework that implements Zero Trust principles across AWS services and workloads. It’s designed to verify every request for access to resources—regardless of its origin—based on identity, device state, and security posture.&lt;/p&gt;

&lt;p&gt;This hands-on lab focuses on applying Zero Trust principles to the &lt;strong&gt;service-to-service architectures&lt;/strong&gt; used in many microservice or distributed environments today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;High-level architecture of the lab&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o0ucgc12jh3bk7jaqjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4o0ucgc12jh3bk7jaqjc.png" alt="Image description" width="800" height="677"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The scenario comprises two service components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;calling services&lt;/strong&gt; component has multiple compute nodes that make API calls to the target services. The API calls are made by a fleet of Amazon Elastic Compute Cloud (Amazon EC2) instances and AWS Lambda functions. These compute nodes reside in different subnets within multiple Amazon VPCs.&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;target services&lt;/strong&gt; component has multiple REST APIs hosted by API Gateway, multiple backend Lambda functions, and other backend services such as Amazon DynamoDB tables.&lt;/li&gt;
&lt;li&gt;Because the REST APIs are set as private, VPC Endpoints are configured within the calling services subnets to provide a network path for the API calls from the calling services using AWS PrivateLink.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Task 1: Review the lab scenario current state service-to-service architecture
&lt;/h2&gt;

&lt;p&gt;Recommendation service (ServiceA): Generates recommendations based on previous orders retrieved from the Order History service (ServiceB). ServiceA's valid compute nodes are represented by the EC2 instance denoted as Expected Caller.&lt;/p&gt;

&lt;p&gt;Order History service (ServiceB): Provides access to customers' past orders. This is represented by the ServiceB API. The API communicates at the backend with a Lambda function and a DynamoDB table that stores the order data.&lt;/p&gt;

&lt;p&gt;In addition to the above, there are other unwanted callers and unknown APIs that perform other functions but should not be able to interact with the two main services above.&lt;/p&gt;

&lt;p&gt;There are many conditions that you can use to allow or block the API call using different security controls:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Does the caller need to make an API call to the target service?&lt;/li&gt;
&lt;li&gt;Does the API call have a valid path to the target service?&lt;/li&gt;
&lt;li&gt;Is the API call sourced from a valid CIDR IP range?&lt;/li&gt;
&lt;li&gt;Does the caller have the correct IAM permissions to make the API call via IAM role?&lt;/li&gt;
&lt;li&gt;Is the API call authenticated (signed) using the AWS Signature Version 4 (SigV4) algorithm?&lt;/li&gt;
&lt;li&gt; Note: AWS Signature Version 4 is an algorithm used to sign API calls. When enforced, it requires that the caller use valid IAM credentials to sign the API calls.&lt;/li&gt;
&lt;/ul&gt;
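&lt;p&gt;SigV4 derives a signing key from the caller's secret access key through a chain of HMAC-SHA256 operations and then signs a canonical form of the request. The sketch below shows only the key-derivation step, following the public SigV4 specification; the secret key is AWS's published documentation example, not a real credential:&lt;/p&gt;

```python
# Sketch of the SigV4 signing-key derivation, per the public specification:
# kSigning = HMAC(HMAC(HMAC(HMAC("AWS4" + secret, date), region), service),
#                 "aws4_request")
import hashlib
import hmac

def hmac_sha256(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret, date, region, service):
    k_date = hmac_sha256(("AWS4" + secret).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# AWS's published example secret key (a documentation value, not a credential)
key = derive_signing_key(
    "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
    "20150830", "us-east-1", "execute-api",
)
print(key.hex())  # 32-byte key used to sign the request's string-to-sign
```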

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrl285va4d51uoinwcer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrl285va4d51uoinwcer.png" alt="Image description" width="800" height="196"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 2: Review the existing security controls
&lt;/h2&gt;

&lt;p&gt;There are two places where you can apply security controls in the solution:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;API Gateway:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;IAM authorization on the API level: This is where you can require that the caller has valid IAM credentials and the API call is signed using SigV4. If turned off, the API can be called by any entity without having valid IAM credentials.&lt;/li&gt;
&lt;li&gt;Resource policy: This is where you can apply an IAM resource policy with specific conditions to restrict making calls to the API.&lt;/li&gt;
&lt;li&gt;API Endpoint Type: You can control where the API can be called from depending on the type you choose. In the lab scenario, both the ServiceB API and Unknown API are set to Private. This means that calls must be initiated from an API Gateway VPC endpoint within a VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;VPC Endpoint:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Resource policy: This is where you can apply an IAM resource policy with specific conditions to restrict making calls to the API. This is similar to the API Gateway resource policy but configured closer to the source of the API call.&lt;/li&gt;
&lt;li&gt;Security Group: You can define inbound/outbound rules to control the traffic traversing through the VPC endpoint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Open API Gateway&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; In the list of APIs, under the name column, choose the ServiceBAPI link.&lt;br&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; In the Resources pane, under /orders, choose GET.&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; In the /orders-GET-Method Execution section, choose the Method request tab.&lt;br&gt;
&lt;strong&gt;Step 5:&lt;/strong&gt; In the Settings page, verify that Authorization is set to NONE.&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt; In the navigation pane at the left of the page, in the API:ServiceBAPI section, choose Resource policy.&lt;br&gt;
&lt;strong&gt;Step 7:&lt;/strong&gt; Check the existing IAM resource policy configured on the ServiceBAPI. The policy allows any call sourced from an IP address within the 10.199.0.0/24 range. This range corresponds to Private subnet 1 in the ServiceA VPC and also to Private subnet 2 in the Other VPC. You can verify the ranges using the Detailed architecture diagram and/or the Callers table presented earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sqdildkfn3gqy1y0wup.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6sqdildkfn3gqy1y0wup.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50ebwp4b7o0g4czp1ar6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50ebwp4b7o0g4czp1ar6.png" alt="Image description" width="800" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Open VPC&lt;br&gt;
&lt;strong&gt;Step 9:&lt;/strong&gt; In the navigation pane at the left of the page, in the Virtual private cloud section, choose Endpoints&lt;br&gt;
&lt;strong&gt;Step 10:&lt;/strong&gt; In the Endpoints page, select the endpoint with a VPC endpoint ID that matches the APIGWVPCEndpointID value that is listed to the left of these instructions.&lt;br&gt;
&lt;strong&gt;Step 11:&lt;/strong&gt; In the VPC endpoint Details section under the list of Endpoints, choose the Tags tab, then choose Manage tags.&lt;br&gt;
&lt;strong&gt;Step 12:&lt;/strong&gt; Choose Add new tag.&lt;br&gt;
&lt;strong&gt;Step 13:&lt;/strong&gt; On the Manage tags page, in the Tags section:&lt;br&gt;
For Key, enter Name.&lt;br&gt;
For Value - optional, enter ServiceA-APIGW-EP.&lt;br&gt;
&lt;strong&gt;Step 14:&lt;/strong&gt; Choose Save. Then, with the ServiceA-APIGW-EP endpoint still selected, choose the Policy tab.&lt;br&gt;
Note that the existing policy allows all traffic to traverse the endpoint and call any Private API in the same Region, with no restrictions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqwv8kxnqzldurpn25fg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqwv8kxnqzldurpn25fg.png" alt="Image description" width="800" height="352"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ahc1zcd22w72aixywpw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ahc1zcd22w72aixywpw.png" alt="Image description" width="800" height="347"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Task 3: Run an assessment to evaluate your current security posture
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Connect to the ServiceA EC2 instance.&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; Run the runscanner command to start the scan.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w0gnz8xjyxoqprrqqk8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2w0gnz8xjyxoqprrqqk8.png" alt="Image description" width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Based on the assessment results above, only two of the six API call patterns comply with the desired security outcome. This is alarming and does not align with the Zero Trust principles that you need to comply with.&lt;/p&gt;
&lt;h2&gt;
  
  
  Task 4: Improve your security posture using IAM authorization on the API Gateway
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Choose API Gateway.&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; In the list of APIs, under the name column, choose the ServiceBAPI link.&lt;br&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; In the Resources panel, under /orders, choose GET.&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; In the /orders-GET-Method Execution section, choose the Method request tab.&lt;br&gt;
&lt;strong&gt;Step 5:&lt;/strong&gt; In the Method request settings section, choose Edit, and then from the Authorization drop-down menu, select AWS IAM.&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt; To save your selection, choose Save.&lt;br&gt;
&lt;strong&gt;Step 7:&lt;/strong&gt; After saving your selection, you need to deploy the API for your changes to take effect.&lt;br&gt;
&lt;strong&gt;Step 8:&lt;/strong&gt; In the Resources panel, choose Deploy API.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Stage, select api.&lt;/li&gt;
&lt;li&gt;Choose Deploy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frf4l623z4uspcvp9jxnr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frf4l623z4uspcvp9jxnr.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgqtydxbfwgn5y2cgpyu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frgqtydxbfwgn5y2cgpyu.png" alt="Image description" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Task 5: Improve your security posture using API Gateway resource policy
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Return to the API Gateway Console browser tab.&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; In the ServiceBAPI left panel, choose Resource policy.&lt;br&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; Choose Edit, and then delete the entire policy.&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; In the Policy details pane, add the following policy after making these two changes:&lt;br&gt;
Replace the INSERT_AWS_ACCOUNT_ID placeholder value with the AWSAccountID value that is listed to the left of these instructions (keep the quotes).&lt;br&gt;
Replace the INSERT_APIGW_VPC_ENDPOINT_ID placeholder value with the APIGWVPCEndpointID value that is listed to the left of these instructions (keep the quotes).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:*:*:*"
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:*:*:*",
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalAccount": "INSERT_AWS_ACCOUNT_ID"
                }
            }
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:*:*:*",
            "Condition": {
                "StringNotEquals": {
                    "aws:SourceVpce": "INSERT_APIGW_VPC_ENDPOINT_ID"
                }
            }
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 5:&lt;/strong&gt; Choose Save.&lt;br&gt;
After saving the policy, you need to deploy the API for your changes to take effect.&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt; In the navigation pane at the left of the page, in the API:ServiceBAPI section, choose Resources.&lt;br&gt;
&lt;strong&gt;Step 7:&lt;/strong&gt; In the Resources panel, choose Deploy API.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Stage, select api.&lt;/li&gt;
&lt;li&gt;Choose Deploy.&lt;/li&gt;
&lt;/ul&gt;
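&lt;p&gt;This policy follows a common pattern: a blanket Allow followed by Deny statements with conditions. Because an explicit Deny always overrides an Allow in IAM evaluation, a call succeeds only when it originates from the expected account and traverses the expected VPC endpoint. A minimal sketch of that evaluation logic, with illustrative placeholder values (the evaluator is not IAM itself):&lt;/p&gt;

```python
# Sketch: how the policy's two Deny statements gate each call. An explicit
# Deny overrides the blanket Allow, so both conditions must hold. The
# expected values below are illustrative placeholders.

EXPECTED_ACCOUNT = "111122223333"   # stands in for INSERT_AWS_ACCOUNT_ID
EXPECTED_VPCE = "vpce-0abc123"      # stands in for INSERT_APIGW_VPC_ENDPOINT_ID

def call_allowed(principal_account, source_vpce):
    # Each StringNotEquals condition triggers its Deny on a mismatch.
    denied = (principal_account != EXPECTED_ACCOUNT) or (source_vpce != EXPECTED_VPCE)
    return not denied

print(call_allowed("111122223333", "vpce-0abc123"))  # expected caller: allowed
print(call_allowed("999999999999", "vpce-0abc123"))  # wrong account: denied
print(call_allowed("111122223333", "vpce-0zzz999"))  # wrong endpoint: denied
```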

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3azfmcen0kb0alhi5h06.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3azfmcen0kb0alhi5h06.png" alt="Image description" width="800" height="398"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 8:&lt;/strong&gt; Open the Session Manager terminal session to the ServiceA-Instance.&lt;br&gt;
&lt;strong&gt;Step 9:&lt;/strong&gt; Run the runscanner command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feichfm4km9nn7t3j7nlo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feichfm4km9nn7t3j7nlo.png" alt="Image description" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 6: Improve the security posture using VPC endpoint policy
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; Return to the API Gateway Console browser tab.&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; At the top of the AWS Management Console, in the search bar, search for and choose VPC.&lt;br&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; In the navigation pane at the left of the page, in the Virtual private cloud section, choose Endpoints.&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; In the list of Endpoints, under the Name column, locate the ServiceA-APIGW-EP and select it.&lt;br&gt;
&lt;strong&gt;Step 5:&lt;/strong&gt; In the VPC endpoint Details section under the list of Endpoints, choose the Policy tab.&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt; In the Policy pane, choose Edit Policy.&lt;br&gt;
&lt;strong&gt;Step 7:&lt;/strong&gt; In the Edit policy page, ensure that  Custom is selected.&lt;br&gt;
&lt;strong&gt;Step 8:&lt;/strong&gt; Select the existing policy and delete it.&lt;br&gt;
&lt;strong&gt;Step 9:&lt;/strong&gt; In the Policy pane, add the following policy after making these two changes:&lt;/p&gt;

&lt;p&gt;Replace the INSERT_SERVICEA_INSTANCE_ROLE_ARN placeholder value with the ServiceAInstanceRoleARN value that is listed to the left of these instructions (keep the quotes).&lt;br&gt;
Replace the INSERT_API_METHOD_ARN placeholder value with the APIMethodARN value that is listed to the left of these instructions (keep the quotes).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftur9tnlvp2ijgkwtvv45.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftur9tnlvp2ijgkwtvv45.png" alt="Image description" width="800" height="376"&gt;&lt;/a&gt;&lt;/p&gt;
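&lt;p&gt;The policy itself appears only in the screenshot above. Sketched below is a typical least-privilege shape for such a VPC endpoint policy; it is an assumption based on the placeholders described, with the INSERT_ values kept exactly as in the lab instructions:&lt;/p&gt;

```python
# Assumed shape of the VPC endpoint policy from the screenshot: allow only
# the ServiceA instance role to invoke only the ServiceB GET method. The
# INSERT_ placeholders are kept exactly as in the lab instructions.
import json

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "INSERT_SERVICEA_INSTANCE_ROLE_ARN"},
            "Action": "execute-api:Invoke",
            "Resource": "INSERT_API_METHOD_ARN",
        }
    ],
}

print(json.dumps(endpoint_policy, indent=4))
```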

&lt;p&gt;&lt;strong&gt;Step 10:&lt;/strong&gt; Choose Save.&lt;br&gt;
&lt;strong&gt;Step 11:&lt;/strong&gt; Open TerminalA and run the runscanner command.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6je8fwlurc5syr2l9k2y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6je8fwlurc5syr2l9k2y.png" alt="Image description" width="800" height="199"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 7: Improve the security posture by tuning the VPC endpoint security group
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Step 1:&lt;/strong&gt; In the list of Endpoints, select ServiceA-APIGW-EP endpoint if not already selected.&lt;br&gt;
&lt;strong&gt;Step 2:&lt;/strong&gt; Choose the Security Groups tab. There is one security group applied to the VPC endpoint.&lt;br&gt;
&lt;strong&gt;Step 3:&lt;/strong&gt; Under the Group ID, choose the security group link.&lt;br&gt;
&lt;strong&gt;Step 4:&lt;/strong&gt; On the Security Groups page, choose the security group link.&lt;br&gt;
&lt;strong&gt;Step 5:&lt;/strong&gt; In the Security Group Details section, choose the Inbound rules tab and examine the existing inbound rule.&lt;br&gt;
&lt;strong&gt;Step 6:&lt;/strong&gt; The current rule allows all HTTPS (TCP 443) traffic inbound to the VPC endpoint from the whole ServiceA VPC CIDR range of 10.199.0.0/16. This covers every subnet in the ServiceA VPC even though the expected caller resides only in Private subnet 1. This is a broad range of IP addresses that you need to narrow down.&lt;br&gt;
&lt;strong&gt;Step 7:&lt;/strong&gt; You first delete the current rule and then add a new rule.&lt;br&gt;
&lt;strong&gt;Step 8:&lt;/strong&gt; Choose Edit inbound rules.&lt;br&gt;
&lt;strong&gt;Step 9:&lt;/strong&gt; To delete the existing rule, choose Delete.&lt;br&gt;
&lt;strong&gt;Step 10:&lt;/strong&gt; To add a new rule, choose Add rule.&lt;br&gt;
&lt;strong&gt;Step 11:&lt;/strong&gt; For the new rule, configure the following settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Type, choose HTTPS&lt;/li&gt;
&lt;li&gt;For Source, choose Custom, then in the search field, select the security group that has ServiceASecurityGroup in its name.&lt;/li&gt;
&lt;li&gt;For Description - optional, enter From ServiceA Instances Security Group&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Step 12:&lt;/strong&gt; Choose Save rules.&lt;br&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hrj2z46epvamz3vetqb.png" alt="Image description" width="800" height="262"&gt;&lt;/p&gt;
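&lt;p&gt;The rule change in Steps 8 through 12 can also be sketched with boto3. This is only an illustration of the same change, not part of the lab's console-based instructions; the security group IDs below are hypothetical placeholders.&lt;/p&gt;

```python
# Hypothetical IDs for illustration; substitute the VPC endpoint's
# security group ID and the ServiceASecurityGroup ID from your account.
ENDPOINT_SG_ID = "sg-0123456789abcdef0"
SERVICEA_SG_ID = "sg-0fedcba9876543210"

def https_rule(source):
    """Build an HTTPS (TCP 443) ingress permission from source,
    which is either a CIDR string or a security-group ID."""
    perm = {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443}
    if source.startswith("sg-"):
        perm["UserIdGroupPairs"] = [{
            "GroupId": source,
            "Description": "From ServiceA Instances Security Group",
        }]
    else:
        perm["IpRanges"] = [{"CidrIp": source}]
    return perm

def narrow_endpoint_ingress(ec2):
    # Delete the broad rule that admits the whole VPC CIDR range...
    ec2.revoke_security_group_ingress(
        GroupId=ENDPOINT_SG_ID,
        IpPermissions=[https_rule("10.199.0.0/16")])
    # ...then allow HTTPS only from the ServiceA instances' group.
    ec2.authorize_security_group_ingress(
        GroupId=ENDPOINT_SG_ID,
        IpPermissions=[https_rule(SERVICEA_SG_ID)])
```

&lt;p&gt;With credentials configured, you would pass in &lt;code&gt;boto3.client('ec2')&lt;/code&gt;; for this lab, the console steps above are the expected path.&lt;/p&gt;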

&lt;p&gt;&lt;strong&gt;Step 13:&lt;/strong&gt; Open TerminalA and run the command &lt;code&gt;runscanner&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm5szyv0mn0lgx3tnv7b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwm5szyv0mn0lgx3tnv7b.png" alt="Image description" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From the assessment results, you can see that all security requirements are still met. However, Unwanted Callers #1, #2, and #3 are now blocked at the VPC endpoint's security group, which is closer to the source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/security/zero-trust-architectures-an-aws-perspective/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/security/zero-trust-architectures-an-aws-perspective/&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Hands-on Lab - Event Driven Architecture with Amazon API Gateway, Amazon EventBridge and AWS Lambda</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Tue, 11 Mar 2025 04:54:44 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/hands-on-lab-event-driven-architecture-with-amazon-api-gateway-amazon-eventbridge-and-aws-lambda-kn2</link>
      <guid>https://dev.to/gurudev_prasadteketi/hands-on-lab-event-driven-architecture-with-amazon-api-gateway-amazon-eventbridge-and-aws-lambda-kn2</guid>
      <description>&lt;h2&gt;
  
  
  Event-driven architecture
&lt;/h2&gt;

&lt;p&gt;Event-driven architecture is a design pattern in which events trigger and facilitate communication between decoupled services, making it a common choice for modern applications built with microservices. Serverless architecture, in turn, provides a model for building and running applications and services without provisioning or managing the underlying infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  Objective of the Lab
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Configure an HTTP API on API Gateway to redirect requests to EventBridge.
&lt;/li&gt;
&lt;li&gt;Create event bus rules that match incoming requests and route events to Lambda functions.
&lt;/li&gt;
&lt;li&gt;Process events with Lambda functions and send the processed events back to the event bus as new events.
&lt;/li&gt;
&lt;li&gt;Publish events to the event bus, triggering separate Lambda functions.
&lt;/li&gt;
&lt;li&gt;Lambda functions receive events and post them back to the client application via a WebSocket connection hosted on API Gateway.&lt;/li&gt;
&lt;/ul&gt;
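&lt;p&gt;The flow in these objectives amounts to each Lambda function reading the event detail, advancing its eventtype, and re-publishing the event. A pure-Python sketch of that progression (no AWS calls; the stage names mirror the lab's event types):&lt;/p&gt;

```python
# Each stage maps the current eventtype to the next one, mirroring the
# progression make_pizza, then cook_pizza, then deliver_pizza, then delivered.
NEXT_STAGE = {
    "make_pizza": "cook_pizza",
    "cook_pizza": "deliver_pizza",
    "deliver_pizza": "delivered",
}

def advance(detail):
    """Return a copy of the event detail advanced to its next stage."""
    item = dict(detail["item"])
    item["eventtype"] = NEXT_STAGE[item["eventtype"]]
    return {"item": item}

order = {"item": {"eventtype": "make_pizza", "order_id": "42"}}
after_make = advance(order)       # eventtype becomes "cook_pizza"
after_cook = advance(after_make)  # eventtype becomes "deliver_pizza"
```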

&lt;h2&gt;
  
  
  High Level Architecture Diagram
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famdo6cqrllhj4zq4og5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Famdo6cqrllhj4zq4og5l.png" alt="Image description" width="800" height="452"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implement five Lambda functions using the configurations and code below, and deploy each function&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Function name, enter: make_pizza&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Runtime, select: Python 3.12&lt;/li&gt;
&lt;li&gt;For Change default execution role, choose lab_lambda_make_pizza&lt;/li&gt;
&lt;li&gt;Environment variable: Key: EVENT_BUS, Value: lab_event_bus&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Function name, enter: cook_pizza&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Runtime, select: Python 3.12&lt;/li&gt;
&lt;li&gt;For Change default execution role, choose lab_lambda_cook_pizza&lt;/li&gt;
&lt;li&gt;Environment variable: Key: EVENT_BUS, Value: lab_event_bus&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Function name, enter: deliver_pizza&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Runtime, select: Python 3.12&lt;/li&gt;
&lt;li&gt;For Change default execution role, choose lab_lambda_deliver_pizza&lt;/li&gt;
&lt;li&gt;Environment variable: Key: EVENT_BUS, Value: lab_event_bus&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Function name, enter: websocket_connect&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Runtime, select: Python 3.12&lt;/li&gt;
&lt;li&gt;For Change default execution role, choose lab_lambda_websocket_connection&lt;/li&gt;
&lt;li&gt;Environment variable: Key: TABLENAME, Value: websocket_connections&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;Function name, enter: receive_events&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For Runtime, select: Python 3.12&lt;/li&gt;
&lt;li&gt;For Change default execution role, choose lab_lambda_receive_events&lt;/li&gt;
&lt;li&gt;Environment variable: Key: TABLENAME, Value: receive_events&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwice3684erosks0u7jl0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwice3684erosks0u7jl0.png" alt="Image description" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjftufe85d0g9yrg715t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjftufe85d0g9yrg715t.png" alt="Image description" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7el2dwrh9lh7u1tmahy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7el2dwrh9lh7u1tmahy7.png" alt="Image description" width="800" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifqwf902bfxgm5yq7kla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fifqwf902bfxgm5yq7kla.png" alt="Image description" width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sample Lambda function code for Make Pizza:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import os
from botocore.exceptions import ClientError

client = boto3.client('events')

def lambda_handler(event, context):
  try:
    detail = event["detail"]
    detail["item"]["eventtype"]="cook_pizza"
    response = client.put_events(
      Entries=[
        {
          'DetailType': 'eventtype',
          'Detail': json.dumps(detail),
          'EventBusName': os.environ.get('EVENT_BUS'),
          'Source':"make_pizza"
        },
      ]
    )
    print(response)
  except ClientError as err:
    print(err)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample Lambda function code for Cook Pizza:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import os
from botocore.exceptions import ClientError

client = boto3.client('events')

def lambda_handler(event, context):
  try:
    detail = event["detail"]
    detail["item"]["eventtype"]="deliver_pizza"
    response = client.put_events(
      Entries=[
        {
          'DetailType': 'eventtype',
          'Detail': json.dumps(detail),
          'EventBusName': os.environ.get('EVENT_BUS'),
          'Source':"cook_pizza"
        },
      ]
    )
    print(response)
  except ClientError as err:
    print(err)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample Lambda function code for Deliver Pizza:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import os
from botocore.exceptions import ClientError

client = boto3.client('events')

def lambda_handler(event, context):
  try:
    detail = event["detail"]
    detail["item"]["eventtype"]="delivered"
    response = client.put_events(
      Entries=[
        {
          'DetailType': 'eventtype',
          'Detail': json.dumps(detail),
          'EventBusName': os.environ.get('EVENT_BUS'),
          'Source':"deliver_pizza"
        },
      ]
    )
    print(response)
  except ClientError as err:
    print(err)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample Lambda function code for Websocket Connect:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import os
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ.get('TABLENAME'))

def lambda_handler(event, context):
  try:
    order_id=event['queryStringParameters']['order_id']
    response = table.put_item(
      Item={
      'order_id': order_id,
      'connection_id': event["requestContext"]["connectionId"]
      })

    return {
      "statusCode": 200,
      "headers": {
      "Content-Type": "application/json"
    }
    }
  except ClientError as err:
    print(err)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Sample Lambda function code for Receive Events:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import os
from botocore.exceptions import ClientError

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ.get('TABLENAME'))
management = boto3.client('apigatewaymanagementapi', endpoint_url=os.getenv('APIGW_ENDPOINT'))

def lambda_handler(event, context):
  try:
    response = table.get_item(
      Key={
      'order_id': event["detail"]['item']['order_id']
      }
    )
    management.post_to_connection(
      Data=json.dumps(event["detail"]),
      ConnectionId=response["Item"]["connection_id"]
    )
  except ClientError as err:
    print(err)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Step 2&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Configure EventBridge&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create each rule with the settings listed below, then select Next.&lt;/li&gt;
&lt;li&gt;On the Build event pattern screen, under the Creation method section, select Custom pattern (JSON editor) for the Method.&lt;/li&gt;
&lt;li&gt;Copy and paste the corresponding JSON object into the Event pattern editor.&lt;/li&gt;
&lt;li&gt;On the Select target(s) page, in the Target 1 section under Target types, choose AWS service.&lt;/li&gt;
&lt;li&gt;From the Select a target drop-down list, choose Lambda function.&lt;/li&gt;
&lt;li&gt;From the Function drop-down list, choose the function that corresponds to the rule (for example, make_pizza for lab_make_pizza_rule).&lt;/li&gt;
&lt;li&gt;Choose the corresponding role.&lt;/li&gt;
&lt;li&gt;Select Next, and select Next again on the Configure tags - optional screen.&lt;/li&gt;
&lt;li&gt;On the Review and create screen, select Create rule.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Configurations for each rule&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Name: lab_make_pizza_rule&lt;br&gt;
Event bus: lab_event_bus&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "detail": {
    "item": {
      "eventtype": ["make_pizza"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Name: lab_cook_pizza_rule&lt;br&gt;
Event bus: lab_event_bus&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "detail": {
    "item": {
      "eventtype": ["cook_pizza"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Name: lab_deliver_pizza_rule&lt;br&gt;
Event bus: lab_event_bus&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "detail": {
    "item": {
      "eventtype": ["deliver_pizza"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Name: lab_receive_events_rule&lt;br&gt;
Event bus: lab_event_bus&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "detail": {
    "item": {
      "eventtype": ["make_pizza","cook_pizza","deliver_pizza"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
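&lt;p&gt;All four patterns use EventBridge's exact-match semantics: a rule matches when, for every field in the pattern, the event's value appears in the pattern's list of allowed values. A minimal pure-Python approximation of that check, covering only the nested exact-match form used above (not the full EventBridge pattern language):&lt;/p&gt;

```python
def matches(pattern, event):
    """Return True if event satisfies the exact-match pattern.

    Pattern leaves are lists of allowed values; nested dicts recurse.
    This approximates EventBridge matching for the simple patterns above.
    """
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not isinstance(event[key], dict) or not matches(expected, event[key]):
                return False
        elif event[key] not in expected:  # leaf: list of allowed values
            return False
    return True

cook_rule = {"detail": {"item": {"eventtype": ["cook_pizza"]}}}
receive_rule = {"detail": {"item": {"eventtype": ["make_pizza", "cook_pizza", "deliver_pizza"]}}}

event = {"source": "make_pizza",
         "detail": {"item": {"eventtype": "cook_pizza", "order_id": "42"}}}
# Both cook_rule and receive_rule match this event, so cook_pizza runs
# and receive_events pushes the status update over the WebSocket.
```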



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rtca8rr76o50tmutmoq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0rtca8rr76o50tmutmoq.png" alt="Image description" width="800" height="289"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc7hqaw8xppya9kl00i1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbc7hqaw8xppya9kl00i1.png" alt="Image description" width="800" height="320"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcldwwals7osciwgoymj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpcldwwals7osciwgoymj.png" alt="Image description" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlu2e83vxblug73ih3ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlu2e83vxblug73ih3ad.png" alt="Image description" width="800" height="355"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Configure API Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;In the console, choose API Gateway.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Under Choose an API type, select Build in the &lt;strong&gt;HTTP API section&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;In the &lt;strong&gt;API Name&lt;/strong&gt; field, enter &lt;strong&gt;lab_http_api&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Next until you get to the Review and Create screen, then choose Create.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under the Develop section in the left-hand navigation pane, select Routes, then choose Create.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select &lt;strong&gt;POST&lt;/strong&gt; from the drop-down list, and select Create.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Under Develop, select Integrations, choose the POST route link, and then select Create and attach an integration&lt;/strong&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Integration target, configure the following options.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration type: Amazon EventBridge&lt;/strong&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration action: PutEvents&lt;/strong&gt; &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;EventBridge - PutEvents, configure the following options.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detail: $request.body&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detail type: eventtype&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Source: lab_http_api&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Invocation role, enter the APIExecutionRoleARN.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Expand Advanced settings and configure the remaining fields:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For Event bus name - optional, enter the event bus ARN you saved earlier.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Region - optional.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Create when you are finished.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From the left-hand navigation pane, select CORS and choose Configure in the top right corner.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Access-Control-Allow-Origin enter *.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Access-Control-Allow-Methods select, POST.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Access-Control-Allow-Headers, enter *.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Save.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;From the navigation menu on the left-hand side of the screen, navigate to the endpoint details screen by selecting API: lab_http_api. Copy and save the Invoke URL from the Stages for lab_http_api section. You need this URL later for testing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Now create the web socket API endpoint, which is used to send events back to the web application. Web socket connections are two-way persistent connections allowing bi-directional communication and are often used by chat applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;At the top of the left-hand navigation pane, select APIs, then select Create API.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under Choose an API type, select &lt;strong&gt;Build&lt;/strong&gt; in the &lt;strong&gt;WebSocket API&lt;/strong&gt; section.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the following fields and select Next.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;API Name field: lab_websocket_api&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Route selection expression: $request.body.action&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the Add routes page, choose Add &lt;strong&gt;$connect&lt;/strong&gt; route and select Next.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The websocket_connect function invokes when the application establishes the connection. This function saves the connection ID to a DynamoDB table.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Integration type: Lambda.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Lambda function websocket_connect.&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Next.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the Add Stages page select Next, followed by Create and deploy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select Stages from the left-hand navigation page and choose the Production stage.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy and save the WebSocket URL, as it is needed shortly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Copy the Connection URL up to, but not including, the @connections text.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Navigate back to the Lambda console and open the receive_events Lambda function.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the configuration tab select Environment variables, choose Edit and select Add environment variable.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Configure the variable with the following settings, and select Save when you are finished.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Key&lt;/strong&gt;: &lt;code&gt;APIGW_ENDPOINT&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Value&lt;/strong&gt;: the copied &lt;strong&gt;Connection URL&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testing the Event-Driven Architecture&lt;/strong&gt;&lt;/p&gt;
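&lt;p&gt;To exercise the flow end to end, POST an order to the HTTP API's Invoke URL; the integration maps the request body to the event's detail field. A standard-library sketch (the URL is a placeholder for the Invoke URL you copied, and the order fields are illustrative):&lt;/p&gt;

```python
import json
from urllib import request

# Placeholder: substitute the Invoke URL copied from the lab_http_api stage.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com"

def order_payload(order_id):
    # The body becomes the event's detail, so eventtype must match
    # the lab_make_pizza_rule pattern to start the pipeline.
    return {"item": {"eventtype": "make_pizza", "order_id": order_id}}

def place_order(order_id):
    body = json.dumps(order_payload(order_id)).encode()
    req = request.Request(API_URL, data=body,
                          headers={"Content-Type": "application/json"},
                          method="POST")
    with request.urlopen(req) as resp:
        return resp.status
```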

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4gv5u2dibtsiu2ts2wy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh4gv5u2dibtsiu2ts2wy.png" alt="Image description" width="800" height="413"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Build Machine Learning Models with Amazon SageMaker</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Sun, 02 Mar 2025 06:32:40 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/build-machine-learning-models-with-amazon-sagemaker-1n2k</link>
      <guid>https://dev.to/gurudev_prasadteketi/build-machine-learning-models-with-amazon-sagemaker-1n2k</guid>
      <description>&lt;h2&gt;
  
  
  Build Machine Learning Models with Amazon SageMaker
&lt;/h2&gt;

&lt;p&gt;Your data science team needs to implement machine learning models for its business needs. In this lab, you use Amazon SageMaker to set up a development environment and run a basic linear regression model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Navigate the SageMaker domain.&lt;/li&gt;
&lt;li&gt;Create a SageMaker space.&lt;/li&gt;
&lt;li&gt;Launch SageMaker Studio.&lt;/li&gt;
&lt;li&gt;Clone a repository.&lt;/li&gt;
&lt;li&gt;Run a linear regression model.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Task 1: Navigate SageMaker Domain
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;At the top of the AWS Management Console, in the search bar, search for and choose Amazon SageMaker AI.&lt;/li&gt;
&lt;li&gt;In the left navigation pane, under Admin configurations, choose Domains.&lt;/li&gt;
&lt;li&gt;Choose Create domain.&lt;/li&gt;
&lt;li&gt;Choose Set up for organizations.&lt;/li&gt;
&lt;li&gt;In the Domain details page, explore the following:

&lt;ul&gt;
&lt;li&gt;Domain settings&lt;/li&gt;
&lt;li&gt;User profiles&lt;/li&gt;
&lt;li&gt;Environment&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Note: A domain enables you to manage multiple users working in isolated environments.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjab3fs8t32iexzce9875.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjab3fs8t32iexzce9875.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefrgymp58kxe6a9tk8l7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fefrgymp58kxe6a9tk8l7.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmfbpqgbkdlcodw5vd2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmfbpqgbkdlcodw5vd2o.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 2: Create a SageMaker space
&lt;/h2&gt;

&lt;p&gt;Create a collaborative space for your development work. Spaces in SageMaker Studio enable data scientists to share resources and collaborate on machine learning projects.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the left navigation pane, under Applications and IDEs, choose Studio.

&lt;ul&gt;
&lt;li&gt;Choose Open studio.&lt;/li&gt;
&lt;li&gt;Wait for SageMaker Studio to launch.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Note: After launching SageMaker Studio, a welcome pop-up window may appear. Choose Skip Tour for now to proceed with the next steps.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Locate the Applications pane on the left side.&lt;/li&gt;
&lt;li&gt;From the Applications pane, choose JupyterLab.&lt;/li&gt;
&lt;li&gt;Choose + Create JupyterLab space.&lt;/li&gt;
&lt;li&gt;In the Create JupyterLab space window, configure:&lt;/li&gt;
&lt;li&gt;For Name, enter ml-regression-lab.&lt;/li&gt;
&lt;li&gt;Select Share with my domain.&lt;/li&gt;
&lt;li&gt;Choose Create space.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69wk69qslv4s3vimwi6b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F69wk69qslv4s3vimwi6b.png" alt="Image description" width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: Wait for the Status to change to Stopped, which typically takes 1-2 minutes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 3: Launch JupyterLab environment
&lt;/h2&gt;

&lt;p&gt;Start and access your JupyterLab environment. JupyterLab provides an interactive development environment for writing and executing machine learning code.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After the space creation completes, choose Run space.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: Wait for the Status to change to Running, which typically takes 1-2 minutes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose Open JupyterLab.&lt;/li&gt;
&lt;li&gt;A new browser tab opens to the JupyterLab workspaces interface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: The JupyterLab workspaces interface takes 1–2 minutes to load for the first time.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97bt2ssypy0ieiirfbzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F97bt2ssypy0ieiirfbzv.png" alt="Image description" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8basuazfnwavwc65ejym.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8basuazfnwavwc65ejym.png" alt="Image description" width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv63qbak2cfga62ya39t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvv63qbak2cfga62ya39t.png" alt="Image description" width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 4: Clone repository
&lt;/h2&gt;

&lt;p&gt;Clone the lab repository containing the machine learning notebook.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the left menu bar, choose the Git icon.&lt;/li&gt;
&lt;li&gt;Choose Clone a Repository.&lt;/li&gt;
&lt;li&gt;For Git repository URL, paste the URL of your code repository.&lt;/li&gt;
&lt;li&gt;Choose Clone.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccy4ydk8n6l369foe0u2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fccy4ydk8n6l369foe0u2.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Task 5: Run linear regression
&lt;/h2&gt;

&lt;p&gt;Implement a basic linear regression model using the provided notebook.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the left navigation pane, open the notebook that contains the linear regression code (extension .ipynb).&lt;/li&gt;
&lt;li&gt;In the Set up notebook environment window, choose Select.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Note: Wait for the kernel gateway to start, which typically takes 2-3 minutes.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Carefully advance through the notebook. Run each code cell and view its output by selecting within the cell and pressing Shift+Enter or choosing the Run button at the top of the page.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flt7hmvk2ef7qfm882b3z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flt7hmvk2ef7qfm882b3z.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2vxli3ta2gt8xkqs2bw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft2vxli3ta2gt8xkqs2bw.png" alt="Image description" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have successfully:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigated the SageMaker domain.&lt;/li&gt;
&lt;li&gt;Created a space for development.&lt;/li&gt;
&lt;li&gt;Launched SageMaker Studio.&lt;/li&gt;
&lt;li&gt;Cloned a repository.&lt;/li&gt;
&lt;li&gt;Ran a linear regression model.&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>AWS Database Migration Service Lab</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Fri, 28 Feb 2025 07:11:37 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/aws-database-migration-service-lab-288</link>
      <guid>https://dev.to/gurudev_prasadteketi/aws-database-migration-service-lab-288</guid>
      <description>&lt;p&gt;Migrate data from a MySQL database running on an Amazon EC2 instance to an Amazon Aurora RDS instance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Services used in this lab
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;AWS Database Migration Service&lt;/strong&gt; : AWS Database Migration Service helps you migrate databases to AWS easily and securely. With AWS Database Migration Service, the source database remains fully operational during the migration, minimizing downtime to applications that rely on the database. The AWS Database Migration Service can migrate your data to and from the most widely used commercial and open-source databases. The service supports homogenous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to Amazon Aurora or Microsoft SQL Server to MySQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Aurora&lt;/strong&gt;: Amazon Aurora is a fully managed, MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better performance than MySQL with the security, availability, and reliability of a commercial database at one tenth the cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Elastic Compute Cloud (Amazon EC2)&lt;/strong&gt;: Amazon EC2 is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon RDS&lt;/strong&gt;: Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business. Amazon RDS provides you with six familiar database engines to choose from, including Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL and MariaDB.&lt;/p&gt;

&lt;p&gt;Additional services and tools&lt;br&gt;
&lt;strong&gt;MySQL Workbench&lt;/strong&gt;: MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. MySQL Workbench provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, backup, and much more. MySQL Workbench is available on Windows, Linux and Mac OS X.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration Steps
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Step 1: Connect to your Amazon EC2 instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Open the AWS Systems Manager console.&lt;/li&gt;
&lt;li&gt;In the navigation pane at the left of the page, under  Node Tools, choose Fleet Manager.&lt;/li&gt;
&lt;li&gt;Select the instance on which MySQL is running.&lt;/li&gt;
&lt;li&gt;From the Node actions  drop-down list, choose Connect and then choose Connect with Remote Desktop.&lt;/li&gt;
&lt;li&gt;For the preferred authentication type, choose Key pair.&lt;/li&gt;
&lt;li&gt;For Key pair content, choose the option to browse your local machine for the key pair file.&lt;/li&gt;
&lt;li&gt;Select Choose file and upload the PEM key associated with your instance from your local directory.&lt;/li&gt;
&lt;li&gt;Select Connect.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93xapkwubnjkdzgtyugw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F93xapkwubnjkdzgtyugw.png" alt="Image description" width="542" height="1202"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0za0pii86a7xj96gm2al.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0za0pii86a7xj96gm2al.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Configure and connect to your source MySQL database
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;On your remote desktop, open (double-click) mysql-connector-net-8.4.0.msi and complete the installation.&lt;/li&gt;
&lt;li&gt;On your remote desktop, open (double-click) mysql-installer-community-8.0.37.0.&lt;/li&gt;
&lt;li&gt;On the Choosing a Setup Type page, select Custom.&lt;/li&gt;
&lt;li&gt;On the Select Products page, expand and select each of the following products one at a time, then choose the right arrow to move it to the right pane.&lt;/li&gt;
&lt;li&gt;MySQL Servers: MySQL Server 8.0.37 - X64&lt;/li&gt;
&lt;li&gt;Applications: MySQL Workbench - 8.0.36 - X64&lt;/li&gt;
&lt;li&gt;On the Installation page, choose Execute.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This installs the MySQL server and MySQL Workbench.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqv47kbsusf2lm5ynmvh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxqv47kbsusf2lm5ynmvh.png" alt="Image description" width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3qw2x1w35j0qvhwtor2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3qw2x1w35j0qvhwtor2.png" alt="Image description" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continue until you get to the Accounts and Roles page.&lt;/li&gt;
&lt;li&gt;On the Accounts and Roles page, configure:

&lt;ul&gt;
&lt;li&gt;MySQL Root Password&lt;/li&gt;
&lt;li&gt;Repeat Password&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Choose Add User , then configure:

&lt;ul&gt;
&lt;li&gt;Username: admin&lt;/li&gt;
&lt;li&gt;Password: &lt;/li&gt;
&lt;li&gt;Confirm Password:
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo5chhqzbbta6w3zsfvo5.png" alt="Image description" width="800" height="575"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6qff2ssydwa26h85zubl.png" alt="Image description" width="800" height="573"&gt;
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Continue until the Apply Configuration page, then select Execute.&lt;br&gt;
This configures and starts your source MySQL server.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3: Connect and configure your source MySQL server
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Choose the Local Instance MySQL80 box to connect.&lt;/li&gt;
&lt;li&gt;On the Please enter password for the following service popup, configure:

&lt;ul&gt;
&lt;li&gt;Password: &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Select  Save password in vault
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhe7abq6yt1fr3ti9qpsg.png" alt="Image description" width="800" height="386"&gt;
&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;In the Query 1 window, enter:&lt;br&gt;
&lt;code&gt;CREATE DATABASE mydb;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvpkghgxyogzcn2g7ew5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvpkghgxyogzcn2g7ew5.png" alt="Image description" width="800" height="476"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Import data into your database by choosing Server &amp;gt; Data Import.&lt;/li&gt;
&lt;li&gt;In the Import from Disk window:&lt;/li&gt;
&lt;li&gt;Choose the ellipsis &lt;/li&gt;
&lt;li&gt;Browse to and choose the dumps folder if it is not already selected.&lt;/li&gt;
&lt;li&gt;In the Import Progress window:&lt;/li&gt;
&lt;li&gt;Choose Start Import.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9hrh7fxx8dspq6gva7u5.png" alt="Image description" width="800" height="401"&gt;
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh0teb9v6a5h33jjif75.png" alt="Image description" width="800" height="518"&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The data is imported into your database.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Verify your data was imported
&lt;/h2&gt;

&lt;p&gt;In the left navigation pane, choose the refresh icon next to SCHEMAS. You should see your mydb database.&lt;br&gt;
At the top of the screen, choose the Query 1 tab, then:&lt;br&gt;
Delete the existing query.&lt;br&gt;
Run the following query:&lt;br&gt;
&lt;code&gt;SELECT * FROM mydb.employee;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbaz85q7y2zxkud6uczl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwbaz85q7y2zxkud6uczl.png" alt="Image description" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 5: Use MySQL Workbench to connect to your RDS instance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Open your EC2 instance in a different browser tab, and open your cluster endpoint in another tab.&lt;/li&gt;
&lt;li&gt;In your EC2 instance terminal, run the following commands:&lt;/li&gt;
&lt;li&gt;&lt;code&gt;cd C:\Users\Administrator\Desktop\&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;echo "labstack-c75a29ae-b79e-45ef-be37-37a-auroracluster-qxcm3vw430qn.cluster-cjug6kxfhyno.ap-southeast-2.rds.amazonaws.com" &amp;gt; ClusterEndpoint.txt&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Return to the Fleet Manager - Remote Desktop browser tab and, on your remote desktop, open the ClusterEndpoint.txt file that you created in the previous step.&lt;/li&gt;
&lt;li&gt;Go to MySQL Workbench and, at the top of the screen, choose the Home button.&lt;/li&gt;
&lt;li&gt;Create a new MySQL connection by choosing the plus button.&lt;/li&gt;
&lt;li&gt;In the Setup New Connection window, configure the following:

&lt;ul&gt;
&lt;li&gt;Connection Name: Aurora&lt;/li&gt;
&lt;li&gt;Hostname: Delete 127.0.0.1, then copy the cluster endpoint from the ClusterEndpoint.txt file that you opened earlier and paste it here.&lt;/li&gt;
&lt;li&gt;Username: admin&lt;/li&gt;
&lt;li&gt;Choose Store in Vault…&lt;/li&gt;
&lt;li&gt;Password: paste the DBPassword value &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Choose Test Connection.&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq7sdwaxdwv0b3l11gjc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq7sdwaxdwv0b3l11gjc.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;br&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fql7mcwtm3dz6o8cdiliy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fql7mcwtm3dz6o8cdiliy.png" alt="Image description" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Complete setup and the window returns you to MySQL Workbench. You should now see two connections.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;In the MySQL Workbench menu, choose Aurora to connect to the database.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;The SQL Editor window opens, showing a successful connection to the database.&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Execute the following query to verify that no data exists in your Aurora instance.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM mydb.employee;&lt;/code&gt;&lt;br&gt;
It should return an error indicating that the table does not exist.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsmpppjwq2t5mtjqmxov.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvsmpppjwq2t5mtjqmxov.png" alt="Image description" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6: Migrate your source MySQL database to your Aurora instance using AWS Database Migration Service
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Create your replication instance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The first step in migrating data using AWS Database Migration Service is to create a replication instance. An AWS DMS replication instance runs on an Amazon Elastic Compute Cloud (Amazon EC2) instance. A replication instance provides high availability and failover support using a Multi-AZ deployment.&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS DMS uses a replication instance that connects to the source data store, reads the source data, and formats the data for consumption by the target data store. A replication instance also loads the data into the target data store. Most of this processing happens in memory. However, large transactions might require some buffering on disk. Cached transactions and log files are also written to disk.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the AWS Management Console, go to  Database Migration Service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation pane, choose Replication instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Create replication instance then configure:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Name: replicationInstance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Description - optional: replicationInstance&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Instance class: dms.t3.micro&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;High Availability: Dev or test workload (Single-AZ)&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Virtual private cloud (VPC) for IPv4: Lab-VPC&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Publicly accessible: Deselect&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose Create replication instance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Wait for the status of your replication instance to display Available.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2dium6n9r45sa05mz0h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa2dium6n9r45sa05mz0h.png" alt="Image description" width="800" height="353"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6pgzurpbaqx98l0zfwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6pgzurpbaqx98l0zfwb.png" alt="Image description" width="800" height="512"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tz2x3q8c6gmal85vthw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tz2x3q8c6gmal85vthw.png" alt="Image description" width="800" height="545"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljf1rwq92javgw6twjm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fljf1rwq92javgw6twjm6.png" alt="Image description" width="800" height="499"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
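&lt;p&gt;The console choices above map directly onto the DMS API. As a hedged sketch (the identifier is the lab's; everything else mirrors the settings listed above), these are the parameters the equivalent boto3 call would take. The call itself is left commented out because it requires AWS credentials and the lab's VPC.&lt;/p&gt;

```python
# Parameters for dms.create_replication_instance, mirroring the console
# choices above (name, instance class, Single-AZ, not publicly accessible).
replication_params = {
    "ReplicationInstanceIdentifier": "replicationInstance",
    "ReplicationInstanceClass": "dms.t3.micro",
    "MultiAZ": False,             # "Dev or test workload (Single-AZ)"
    "PubliclyAccessible": False,  # "Publicly accessible" deselected
}

# With credentials configured, this would be passed to boto3, e.g.:
#   import boto3
#   dms = boto3.client("dms")
#   dms.create_replication_instance(**replication_params)
```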

&lt;h2&gt;
  
  
  Step 7: Create your source endpoint
&lt;/h2&gt;

&lt;p&gt;An endpoint provides connection, data store type, and location information about your data store. AWS Database Migration Service uses this information to connect to a data store and migrate data from a source endpoint to a target endpoint. You can specify additional connection attributes for an endpoint by using extra connection attributes. These attributes can control logging, file size, and other parameters; for more information about extra connection attributes, see the documentation section for your data store.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the left navigation pane, choose Endpoints.&lt;/li&gt;
&lt;li&gt;Choose Create endpoint then configure:

&lt;ul&gt;
&lt;li&gt;Source endpoint&lt;/li&gt;
&lt;li&gt;Endpoint identifier: MySQL&lt;/li&gt;
&lt;li&gt;Source engine: MySQL&lt;/li&gt;
&lt;li&gt;Access to endpoint database:  Provide access information manually&lt;/li&gt;
&lt;li&gt;Server name: enter the WindowsPrivateIP value&lt;/li&gt;
&lt;li&gt;Port: 3306&lt;/li&gt;
&lt;li&gt;User name: admin&lt;/li&gt;
&lt;li&gt;Password: copy and paste the DBPassword &lt;/li&gt;
&lt;li&gt;Expand  Test endpoint connection (optional), then configure:&lt;/li&gt;
&lt;li&gt;VPC: Lab-VPC&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Choose Run test
Once your test is successful, choose Create endpoint.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrf3m3tft1882bjbb6e9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnrf3m3tft1882bjbb6e9.png" alt="Image description" width="800" height="325"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7yvdv2qic8nsubwqd3r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy7yvdv2qic8nsubwqd3r.png" alt="Image description" width="800" height="513"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5wvn0210y356frvvh4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5wvn0210y356frvvh4j.png" alt="Image description" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;
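&lt;p&gt;The same source endpoint can be defined through the DMS API. The sketch below mirrors the console fields above; the server name and password are placeholders for the lab's WindowsPrivateIP and DBPassword values, and the actual boto3 calls are commented out since they need AWS credentials.&lt;/p&gt;

```python
# Parameters for dms.create_endpoint matching the console fields above.
# ServerName and Password are placeholders, not real lab values.
source_endpoint = {
    "EndpointIdentifier": "mysql",
    "EndpointType": "source",
    "EngineName": "mysql",
    "ServerName": "WINDOWS_PRIVATE_IP",  # placeholder for WindowsPrivateIP
    "Port": 3306,
    "Username": "admin",
    "Password": "DB_PASSWORD",           # placeholder for DBPassword
}

# With credentials configured:
#   dms.create_endpoint(**source_endpoint)
# and the "Run test" step corresponds to dms.test_connection(...).
```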

&lt;h2&gt;
  
  
  Step 8: Create your target endpoint to your Aurora instance
&lt;/h2&gt;

&lt;p&gt;At the top of the screen, choose Create endpoint , then configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select  Target endpoint&lt;/li&gt;
&lt;li&gt;Check  Select RDS DB instance&lt;/li&gt;
&lt;li&gt;For RDS Instance, select the RDS instance that appears.&lt;/li&gt;
&lt;li&gt;In the Endpoint configuration section, configure:

&lt;ul&gt;
&lt;li&gt;Endpoint identifier: aurora&lt;/li&gt;
&lt;li&gt;Access to endpoint database:  Provide access information manually&lt;/li&gt;
&lt;li&gt;User name: admin&lt;/li&gt;
&lt;li&gt;Password: paste the DBPassword value&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Expand  Test endpoint connection, then configure:&lt;/li&gt;

&lt;li&gt;VPC: Lab-VPC&lt;/li&gt;

&lt;li&gt;Choose Run test&lt;/li&gt;

&lt;li&gt;Once your test is successful, choose Create endpoint.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy7i75iobit33z9xdixz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiy7i75iobit33z9xdixz.png" alt="Image description" width="800" height="492"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 9: Create a database migration task
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;An AWS Database Migration Service (AWS DMS)&lt;/strong&gt; task is where all the work happens. You use tasks to migrate data from the source endpoint to the target endpoint, and the task processing is done on the replication instance. You specify what tables and schemas to use for your migration and any special processing, such as logging requirements, control table data, and error handling.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;In the left navigation pane, choose Database migration tasks.&lt;/li&gt;
&lt;li&gt;Choose Create Database migration task then configure:

&lt;ul&gt;
&lt;li&gt;Task identifier: MySQL-Aurora&lt;/li&gt;
&lt;li&gt;Replication instance: Select your replication instance&lt;/li&gt;
&lt;li&gt;Source database endpoint: mysql&lt;/li&gt;
&lt;li&gt;Target database endpoint: aurora&lt;/li&gt;
&lt;li&gt;Migration type: Migrate&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;Under Table mappings, choose Add new selection rule and then configure:

&lt;ul&gt;
&lt;li&gt;Schema: Enter a schema&lt;/li&gt;
&lt;li&gt;Source name: mydb&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;In the Premigration assessment section, uncheck Turn on premigration assessment.&lt;/li&gt;

&lt;li&gt;Choose Create database migration task.&lt;/li&gt;

&lt;li&gt;Wait for the Status of your task to change from running to  Load complete.&lt;/li&gt;

&lt;li&gt;Choose your mysql-aurora task.&lt;/li&gt;

&lt;li&gt;Choose the Table statistics tab.
Note: This tab shows the table statistics for your database migration task. You should see that 3 tables were loaded.&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlpajbh7uwqbdg736jz9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjlpajbh7uwqbdg736jz9.png" alt="Image description" width="800" height="527"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1a9pcxghk5re0ika187t.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1a9pcxghk5re0ika187t.png" alt="Image description" width="800" height="495"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0z3mr8a5z5jfjyquvdwz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0z3mr8a5z5jfjyquvdwz.png" alt="Image description" width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;
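&lt;p&gt;The selection rule entered in the console corresponds to a DMS table-mapping JSON document. A sketch of the equivalent document for the mydb schema is below; the rule name is an arbitrary label.&lt;/p&gt;

```python
import json

# DMS table-mapping document equivalent to the console selection rule
# above: include every table in the "mydb" schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-mydb",  # arbitrary label
            "object-locator": {"schema-name": "mydb", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

# dms.create_replication_task accepts this as a JSON string in its
# TableMappings parameter:
table_mappings_json = json.dumps(table_mappings)
```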

&lt;h2&gt;
  
  
  Step 10: Verification of Migration
&lt;/h2&gt;

&lt;p&gt;Return to the Fleet Manager - Remote Desktop browser tab.&lt;br&gt;
In MySQL Workbench, on the Aurora connection tab, execute:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;SELECT * FROM mydb.employee;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The query above should return data, confirming that AWS Database Migration Service successfully migrated your data from the source MySQL server to the target Aurora instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpghljhibx9mqtg2kp280.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpghljhibx9mqtg2kp280.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;You have successfully:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Connected to a pre-created Amazon EC2 instance.&lt;/li&gt;
&lt;li&gt;Configured MySQL Server as your source database for migration.&lt;/li&gt;
&lt;li&gt;Connected to a pre-created Amazon Aurora instance.&lt;/li&gt;
&lt;li&gt;Migrated data from your MySQL server to your Aurora instance.&lt;/li&gt;
&lt;li&gt;Verified that your migration was successful.&lt;/li&gt;
&lt;/ol&gt;

</description>
    </item>
    <item>
      <title>Securing data in transit and data at rest in AWS</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Fri, 28 Feb 2025 05:21:52 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/securing-data-in-transit-and-data-at-rest-in-aws-19hi</link>
      <guid>https://dev.to/gurudev_prasadteketi/securing-data-in-transit-and-data-at-rest-in-aws-19hi</guid>
      <description>&lt;p&gt;Securing data in transit and data at rest in AWS requires a combination of encryption, access controls, and AWS native security services. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Securing Data in Transit
&lt;/h2&gt;

&lt;p&gt;Data in transit refers to data actively moving across networks, such as between clients and AWS services or between AWS resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use TLS (Transport Layer Security) Encryption&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Enforce HTTPS (TLS 1.2 or 1.3) for API endpoints and web applications.&lt;/li&gt;
&lt;li&gt;Use AWS Certificate Manager (ACM) to manage SSL/TLS certificates.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Encrypt Data in Transit Between AWS Services&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;AWS services like S3, RDS, and DynamoDB automatically use TLS for data transmission.&lt;/li&gt;
&lt;li&gt;Use AWS PrivateLink to securely communicate between VPCs and AWS services.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Implement Mutual TLS (mTLS)&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Use ACM and AWS IoT for mutual authentication.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Use Secure Communication Channels&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;VPNs: AWS Site-to-Site VPN and AWS Client VPN for secure access.&lt;/li&gt;
&lt;li&gt;AWS Direct Connect with MACsec (Media Access Control Security) for encrypted dedicated connections.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Enable Encryption in Transit for Databases&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;RDS, Aurora, Redshift, and DynamoDB support TLS encryption.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Use AWS Shield and AWS WAF&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Protect against DDoS and application-layer attacks on encrypted endpoints.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
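&lt;p&gt;On the client side, enforcing the "TLS 1.2 or 1.3" floor recommended above takes only a few lines with Python's standard ssl module:&lt;/p&gt;

```python
import ssl

# Refuse anything older than TLS 1.2 for outbound connections.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() also enables certificate validation and
# hostname checking, which a server presenting an ACM-issued
# certificate will satisfy.
```

&lt;p&gt;The same context can then be passed to http.client, urllib, or any library that accepts an SSLContext.&lt;/p&gt;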

&lt;h2&gt;
  
  
  AWS Services for Data in Transit Security
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Certificate Manager (ACM)&lt;/strong&gt; - Manages SSL/TLS certificates for HTTPS &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS PrivateLink&lt;/strong&gt; - Secure communication between VPCs and AWS services &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS VPN &amp;amp; AWS Direct Connect&lt;/strong&gt; - Secure network connections to AWS &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Shield &amp;amp; AWS WAF&lt;/strong&gt; - Protects against DDoS attacks &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Key Management Service (KMS)&lt;/strong&gt; - Encrypts data in transit and at rest &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Secrets Manager&lt;/strong&gt; - Stores and manages database credentials and API keys securely &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Securing Data at Rest
&lt;/h2&gt;

&lt;p&gt;Data at rest refers to stored data, such as files in S3, databases, and snapshots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Use AWS-Managed Encryption Services&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Enable server-side encryption (SSE) for S3, RDS, DynamoDB, EBS, and Redshift.&lt;/li&gt;
&lt;li&gt;Use AWS Key Management Service (KMS) to manage encryption keys.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; &lt;strong&gt;Enforce Encryption at Rest Across All AWS Storage Services&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Amazon S3: SSE-S3 (AES-256), SSE-KMS, or SSE-C.&lt;/li&gt;
&lt;li&gt;Amazon EBS: Encrypt EBS volumes and snapshots using KMS.&lt;/li&gt;
&lt;li&gt;Amazon RDS: Enable encryption when creating a database instance.&lt;/li&gt;
&lt;li&gt;Amazon DynamoDB: Default encryption with AWS KMS.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; &lt;strong&gt;Limit Access to Encrypted Data&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Implement IAM policies and bucket policies to control access.&lt;/li&gt;
&lt;li&gt;Use AWS Identity and Access Management (IAM) for least privilege access.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; &lt;strong&gt;Monitor and Audit Data Access&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Use AWS CloudTrail for logging and monitoring API calls.&lt;/li&gt;
&lt;li&gt;Enable AWS Config to track configuration changes.&lt;/li&gt;
&lt;li&gt;Use Amazon Macie to detect sensitive data in S3.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt; &lt;strong&gt;Backup and Secure Snapshots&lt;/strong&gt;:

&lt;ul&gt;
&lt;li&gt;Encrypt backups and snapshots (S3, RDS, EBS).&lt;/li&gt;
&lt;li&gt;Use AWS Backup to automate encrypted backups.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;
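&lt;p&gt;The at-rest practices above can be sketched as a default-encryption rule for an S3 bucket. The dictionary shape follows the S3 &lt;code&gt;put_bucket_encryption&lt;/code&gt; API; the KMS key ARN is a placeholder, and the boto3 call itself is shown only in a comment.&lt;/p&gt;

```python
import json


def default_encryption_config(kms_key_arn: str) -> dict:
    """Server-side encryption configuration for an S3 bucket.

    Shape follows the put_bucket_encryption API; the key ARN
    is a placeholder for illustration.
    """
    return {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                },
                # S3 Bucket Keys cut the number of KMS requests (and cost).
                "BucketKeyEnabled": True,
            }
        ]
    }


config = default_encryption_config(
    "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
)
print(json.dumps(config, indent=2))
# Would be applied with:
# s3.put_bucket_encryption(
#     Bucket="example-bucket",
#     ServerSideEncryptionConfiguration=config,
# )
```

&lt;p&gt;Once set, every new object is encrypted with the chosen KMS key by default, so individual uploads no longer need to pass encryption headers.&lt;/p&gt;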

&lt;h2&gt;
  
  
  AWS Services for Data at Rest Security
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Key Management Service (KMS) - Manages encryption keys for AWS services &lt;/li&gt;
&lt;li&gt;AWS Secrets Manager - Securely stores and rotates secrets &lt;/li&gt;
&lt;li&gt;AWS CloudTrail - Logs API activity for security auditing &lt;/li&gt;
&lt;li&gt;Amazon Macie - Detects sensitive data in S3 &lt;/li&gt;
&lt;li&gt;AWS Config - Tracks resource configurations and compliance &lt;/li&gt;
&lt;li&gt;AWS Backup - Centralized backup management with encryption &lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Use AWS KMS for all encryption needs.&lt;/li&gt;
&lt;li&gt;Enforce IAM least privilege principles to prevent unauthorized access.&lt;/li&gt;
&lt;li&gt;Automate security monitoring with AWS Security Hub, CloudWatch, and GuardDuty.&lt;/li&gt;
&lt;li&gt;Follow AWS Well-Architected Framework security best practices.&lt;/li&gt;
&lt;/ul&gt;

</description>
    </item>
    <item>
      <title>Amazon Aurora vs. Amazon RDS</title>
      <dc:creator>Gurudev Prasad Teketi</dc:creator>
      <pubDate>Fri, 28 Feb 2025 04:58:28 +0000</pubDate>
      <link>https://dev.to/gurudev_prasadteketi/amazon-aurora-vs-amazon-rds-5h80</link>
      <guid>https://dev.to/gurudev_prasadteketi/amazon-aurora-vs-amazon-rds-5h80</guid>
      <description>&lt;h2&gt;
  
  
  Difference between Amazon Aurora and Amazon RDS
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon Aurora&lt;/strong&gt;: A fully managed relational database service designed for high performance and scalability. It is compatible with MySQL and PostgreSQL but offers better performance, availability, and fault tolerance.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Amazon RDS (Relational Database Service)&lt;/strong&gt;: A managed database service that supports multiple database engines (MySQL, PostgreSQL, MariaDB, Oracle, SQL Server, and Amazon Aurora) with automated maintenance, backups, and scaling.  &lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Differences
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Performance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Aurora: Offers up to &lt;strong&gt;5x MySQL&lt;/strong&gt; and &lt;strong&gt;3x PostgreSQL&lt;/strong&gt; performance due to its optimized architecture.&lt;/p&gt;

&lt;p&gt;RDS: Provides native performance of the chosen engine (e.g., MySQL, PostgreSQL, etc.) without enhancements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Storage &amp;amp; Scaling&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Aurora: Auto-scales storage from &lt;strong&gt;10GB up to 128TB&lt;/strong&gt; per database instance.&lt;br&gt;
Supports &lt;strong&gt;15 read replicas&lt;/strong&gt; with sub-10ms replication lag.&lt;/p&gt;

&lt;p&gt;RDS: Manual storage scaling up to &lt;strong&gt;64TB (varies by engine)&lt;/strong&gt;.&lt;br&gt;
Supports &lt;strong&gt;5 read replicas&lt;/strong&gt; with higher replication lag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. High Availability &amp;amp; Failover&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Aurora: Replicates data across &lt;strong&gt;6 copies in 3 Availability Zones&lt;/strong&gt; (AZs).&lt;br&gt;
Automated failover in ~30 seconds.&lt;/p&gt;

&lt;p&gt;RDS: Uses &lt;strong&gt;Single-AZ or Multi-AZ deployment&lt;/strong&gt; (2 copies in 2 AZs).&lt;br&gt;
Failover time is 60-120 seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Replication&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Aurora: &lt;strong&gt;15 read replicas&lt;/strong&gt; with fast replication.&lt;br&gt;
Supports cross-region replication.&lt;/p&gt;

&lt;p&gt;RDS: &lt;strong&gt;5 read replicas&lt;/strong&gt; with slower replication.&lt;br&gt;
Cross-region replication only for MySQL and PostgreSQL.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Pricing &amp;amp; Cost Efficiency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Aurora: More expensive than RDS due to enhanced features.&lt;br&gt;
&lt;strong&gt;Pay-as-you-go model&lt;/strong&gt; with storage auto-scaling.&lt;/p&gt;

&lt;p&gt;RDS: Cheaper for small-to-medium workloads. Fixed storage allocation, requiring manual scaling.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Backup &amp;amp; Restore&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Aurora: &lt;strong&gt;Continuous, incremental backups to S3&lt;/strong&gt;, with point-in-time recovery.&lt;/p&gt;

&lt;p&gt;RDS: &lt;strong&gt;Daily snapshots and transaction logs&lt;/strong&gt; for point-in-time recovery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Use Case Suitability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Aurora: Best for high-performance, high-availability applications that require scalability and fast replication.&lt;/p&gt;

&lt;p&gt;RDS: Suitable for standard workloads that don’t require Aurora’s high throughput or storage flexibility. &lt;/p&gt;
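&lt;p&gt;The provisioning difference shows up directly in the API. Below is a minimal sketch contrasting the parameter sets for boto3's &lt;code&gt;create_db_cluster&lt;/code&gt; (Aurora) and &lt;code&gt;create_db_instance&lt;/code&gt; (classic RDS); identifiers, instance class, and sizes are placeholders, and the calls themselves appear only in comments.&lt;/p&gt;

```python
# Aurora cluster: no AllocatedStorage parameter, because storage
# auto-scales up to the service limit.
# Would be created with: rds.create_db_cluster(**aurora_cluster)
aurora_cluster = {
    "DBClusterIdentifier": "demo-aurora",       # placeholder name
    "Engine": "aurora-mysql",
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,           # store password in Secrets Manager
    "StorageEncrypted": True,                   # at-rest encryption via KMS
}

# Classic RDS instance: storage must be pre-provisioned in GiB and
# scaled manually later; Multi-AZ adds a standby in a second AZ.
# Would be created with: rds.create_db_instance(**rds_instance)
rds_instance = {
    "DBInstanceIdentifier": "demo-mysql",       # placeholder name
    "Engine": "mysql",
    "DBInstanceClass": "db.t3.medium",          # placeholder size
    "AllocatedStorage": 100,                    # fixed allocation (GiB)
    "MultiAZ": True,
    "MasterUsername": "admin",
    "ManageMasterUserPassword": True,
    "StorageEncrypted": True,
}

# Keys that classic RDS needs but an Aurora cluster omits:
print(sorted(set(rds_instance) - set(aurora_cluster)))
```

&lt;p&gt;The key contrast: an Aurora cluster definition carries no &lt;code&gt;AllocatedStorage&lt;/code&gt; at all, while an RDS instance must pre-provision it and scale manually.&lt;/p&gt;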

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foryyx6khv1tfdisli4hi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foryyx6khv1tfdisli4hi.png" alt="Image description" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
