<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Anirban Das</title>
    <description>The latest articles on DEV Community by Anirban Das (@dasanirban834).</description>
    <link>https://dev.to/dasanirban834</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F717196%2F52eddbdf-5241-414a-a31b-3a87b8004a18.jpeg</url>
      <title>DEV Community: Anirban Das</title>
      <link>https://dev.to/dasanirban834</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dasanirban834"/>
    <language>en</language>
    <item>
      <title>Building a Production-Ready RAG Chatbot with AWS Bedrock, LangChain, and Terraform</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 22 Feb 2026 12:46:23 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-production-ready-rag-chatbot-with-aws-bedrock-langchain-and-terraform-381k</link>
      <guid>https://dev.to/aws-builders/building-a-production-ready-rag-chatbot-with-aws-bedrock-langchain-and-terraform-381k</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;In the era of generative AI, chatbots have evolved from simple rule-based systems to intelligent assistants capable of understanding context, retrieving relevant information, and providing accurate responses. This project showcases a production-grade implementation of a dual-mode chatbot system that combines the power of Large Language Models (LLMs) with Retrieval-Augmented Generation (RAG) capabilities.&lt;/p&gt;

&lt;p&gt;The system addresses a common challenge in enterprise AI applications: how to provide both general conversational AI and domain-specific knowledge retrieval in a single, unified platform. By leveraging AWS Bedrock's foundation models, LangChain's orchestration framework, and OpenSearch's vector database, we've built a solution that is not only intelligent but also scalable, maintainable, and production-ready.&lt;/p&gt;

&lt;p&gt;What sets this project apart is its automatic categorization feature—users don't need to manually select document categories. The LLM intelligently analyzes each query and routes it to the appropriate knowledge base, creating a seamless user experience. Combined with conversation memory, interactive feedback mechanisms, and a complete CI/CD pipeline, this project demonstrates enterprise-grade AI application development.&lt;/p&gt;

&lt;p&gt;Whether you're building a customer support bot, an internal knowledge assistant, or a document Q&amp;amp;A system, this architecture provides a solid foundation that can be adapted to your specific needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Project Overview&lt;/li&gt;
&lt;li&gt;Architecture&lt;/li&gt;
&lt;li&gt;Project Structure&lt;/li&gt;
&lt;li&gt;Detailed Component Analysis&lt;/li&gt;
&lt;li&gt;Deployment Pipeline&lt;/li&gt;
&lt;li&gt;Key Features&lt;/li&gt;
&lt;li&gt;Setup and Installation&lt;/li&gt;
&lt;li&gt;Conclusion&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Project Overview
&lt;/h2&gt;

&lt;p&gt;This project implements a sophisticated dual-mode chatbot system that combines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;General Chatbot&lt;/strong&gt;: Direct interaction with AWS Bedrock foundation models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG Agent&lt;/strong&gt;: Intelligent document-based Q&amp;amp;A with automatic categorization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system is production-ready: it is containerized with Docker, provisioned with Terraform infrastructure as code, and deployed automatically to AWS ECS Fargate through a GitLab CI/CD pipeline.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technology Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Frontend&lt;/strong&gt;: Streamlit (Python web framework)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM Provider&lt;/strong&gt;: AWS Bedrock (Claude 3, Cohere Command R+)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Orchestration&lt;/strong&gt;: LangChain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Vector Database&lt;/strong&gt;: OpenSearch&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Storage&lt;/strong&gt;: AWS S3&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure&lt;/strong&gt;: Terraform&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Container&lt;/strong&gt;: Docker&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CI/CD&lt;/strong&gt;: GitLab CI&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compute&lt;/strong&gt;: AWS ECS Fargate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Load Balancer&lt;/strong&gt;: AWS Application Load Balancer&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────────┐
│                         User Interface                          │
│                    (Streamlit Multi-Page App)                   │
└────────────────┬────────────────────────────────────────────────┘
                 │
        ┌────────┴────────┐
        │                 │
┌───────▼──────┐  ┌──────▼────────┐
│   Chatbot    │  │   RAG Agent   │
│   (Direct)   │  │  (Document)   │
└───────┬──────┘  └──────┬────────┘
        │                 │
        │         ┌───────┴────────┐
        │         │                │
        │    ┌────▼─────┐   ┌─────▼──────┐
        │    │    S3    │   │ OpenSearch │
        │    │Documents │   │   Vector   │
        │    └──────────┘   │   Store    │
        │                   └────────────┘
        │
        └─────────┬─────────┘
                  │
          ┌───────▼────────┐
          │  AWS Bedrock   │
          │  Foundation    │
          │    Models      │
          └────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployment Architecture
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GitLab CI/CD → Docker Build → ECR → ECS Fargate → ALB → Users
                                ↓
                          CloudWatch Logs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Project Structure
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;build-llm-chatbot-using-langchain/
│
├── Chatbot/                    # General chatbot module
│   ├── chatbot.py             # Main chatbot interface
│   ├── bedrock_model.py       # Bedrock integration &amp;amp; logic
│   ├── app_feature.py         # UI components &amp;amp; styling
│
├── RAGAgent/                   # RAG agent module
│   └── agent.py               # RAG implementation
│
├── Terraform/                  # Infrastructure as Code
│   ├── provider.tf            # AWS provider &amp;amp; backend config
│   ├── ecr.tf                 # ECR repository
│   ├── ecs.tf                 # ECS cluster &amp;amp; service
│   ├── alb.tf                 # Application Load Balancer
│   ├── iam.tf                 # IAM roles &amp;amp; policies
│   ├── data.tf                # Data sources
│   ├── var.tf                 # Variable definitions
│   └── terraform.tfvars       # Variable values
│
├── navigation.py               # Multi-page navigation
├── config.toml                 # Streamlit theme config
├── requirements.txt            # Python dependencies
├── Dockerfile                  # Container definition
├── .gitlab-ci.yml             # CI/CD pipeline
└── README.md                   # Documentation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Detailed Component Analysis
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Chatbot Module (&lt;code&gt;Chatbot/&lt;/code&gt;)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;code&gt;chatbot.py&lt;/code&gt; - Main Interface
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Entry point for the general chatbot interface&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Components&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Page configuration
&lt;/span&gt;&lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_page_config&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page_title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Chatbot&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;page_icon&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;img.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;layout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wide&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Model selection
&lt;/span&gt;&lt;span class="n"&gt;model_list&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic.claude-3-sonnet-20240229-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;anthropic.claude-3-haiku-20240307-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cohere.command-r-plus-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cohere.command-r-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Sidebar configuration
&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;Model&lt;/span&gt; &lt;span class="n"&gt;selector&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;Temperature&lt;/span&gt; &lt;span class="nf"&gt;slider &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mf"&gt;1.0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;Max&lt;/span&gt; &lt;span class="n"&gt;tokens&lt;/span&gt; &lt;span class="nf"&gt;slider &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;S3&lt;/span&gt; &lt;span class="n"&gt;bucket&lt;/span&gt; &lt;span class="nb"&gt;input&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="n"&gt;based&lt;/span&gt; &lt;span class="n"&gt;answers&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;New&lt;/span&gt; &lt;span class="n"&gt;message&lt;/span&gt; &lt;span class="n"&gt;button&lt;/span&gt;
&lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;Chat&lt;/span&gt; &lt;span class="n"&gt;history&lt;/span&gt; &lt;span class="n"&gt;display&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-model support with dropdown selection&lt;/li&gt;
&lt;li&gt;Adjustable temperature for response creativity&lt;/li&gt;
&lt;li&gt;Token limit control&lt;/li&gt;
&lt;li&gt;S3 integration for document-based responses&lt;/li&gt;
&lt;li&gt;Session management&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;code&gt;bedrock_model.py&lt;/code&gt; - Core Logic
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Handles AWS Bedrock integration and conversation flow&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Functions&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;LLM Initialization&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatBedrockConverse&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;temperature&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Conversation Memory&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;chat_history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;InMemoryChatMessageHistory&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ConversationBufferMemory&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;memory_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chat_history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;chat_memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;chat_history&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;return_messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
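&lt;p&gt;If the LangChain classes are unfamiliar, this dependency-free sketch shows the pattern the buffer memory implements (an illustration only, not LangChain's actual code): every exchange is appended to an in-memory list, and the whole list is replayed into the next prompt.&lt;/p&gt;

```python
# Minimal stand-in for the conversation-memory pattern used above.
# Hypothetical class; LangChain's ConversationBufferMemory does the
# same job with more features (message objects, trimming, keys).
class BufferMemory:
    def __init__(self):
        self.messages = []  # ordered (role, content) pairs

    def save_context(self, user_input, ai_output):
        # Record one full exchange (human turn + AI turn).
        self.messages.append(("human", user_input))
        self.messages.append(("ai", ai_output))

    def load_memory_variables(self):
        # Return everything seen so far, like return_messages=True.
        return {"chat_history": list(self.messages)}


memory = BufferMemory()
memory.save_context("What is AWS Bedrock?", "A managed foundation-model service.")
history = memory.load_memory_variables()["chat_history"]
```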



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Message Display with Feedback&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Like (👍), Dislike (👎), Love (❤️), Smile (😊) reactions&lt;/li&gt;
&lt;li&gt;Response regeneration (🔄)&lt;/li&gt;
&lt;li&gt;Copy to clipboard functionality&lt;/li&gt;
&lt;li&gt;Feedback state persistence&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  &lt;code&gt;app_feature.py&lt;/code&gt; - UI Components
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Provides reusable UI components and styling&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Components&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Typing Indicator&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;typing_indicator&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# Animated "Bot is typing..." with dots
&lt;/span&gt;    &lt;span class="c1"&gt;# CSS animation for smooth UX
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Auto-scroll&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;autoscroll&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="c1"&gt;# JavaScript to scroll to latest message
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Custom CSS&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;Dark theme styling&lt;/li&gt;
&lt;li&gt;Button transparency&lt;/li&gt;
&lt;li&gt;Hover effects&lt;/li&gt;
&lt;li&gt;Animation keyframes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  2. RAG Agent Module (&lt;code&gt;RAGAgent/&lt;/code&gt;)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;code&gt;agent.py&lt;/code&gt; - Complete RAG Implementation
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Document-based Q&amp;amp;A with vector search and automatic categorization&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Configuration&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;AWS_REGION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;S3_BUCKET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rag-agent-knowledge-base-98770&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;OPENSEARCH_HOST&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://search-mydemanricsearchdomain-...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;OPENSEARCH_INDEX&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rag-index&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;EMBEDDING_MODEL_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;amazon.titan-embed-text-v2:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;CATEGORIES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Technical&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Healthcare&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agriculture&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Travelling&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Gadgets&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Music&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Cooking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Key Functions&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automatic Categorization&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;categorize_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Classify this question into ONE category from: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;CATEGORIES&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
Question: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;
Return ONLY the category name.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;CATEGORIES&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="n"&gt;CATEGORIES&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
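&lt;p&gt;The guard on the last line matters: if the model answers with anything outside CATEGORIES (extra words, a hallucinated label), the query falls back to the first category instead of breaking retrieval. Here is that behavior exercised with a stubbed LLM (the stub classes are hypothetical, standing in for a Bedrock client):&lt;/p&gt;

```python
# Demonstrates categorize_prompt's validation fallback using a stub LLM.
CATEGORIES = (
    "Technical", "Healthcare", "Agriculture",
    "Travelling", "Gadgets", "Music", "Cooking",
)

class StubResponse:
    def __init__(self, content):
        self.content = content

class StubLLM:
    """Hypothetical stand-in that always replies with a fixed string."""
    def __init__(self, reply):
        self.reply = reply
    def invoke(self, prompt):
        return StubResponse(self.reply)

def categorize_prompt(user_input, llm):
    prompt = f"""Classify this question into ONE category from: {', '.join(CATEGORIES)}
Question: {user_input}
Return ONLY the category name."""
    category = llm.invoke(prompt).content.strip()
    return category if category in CATEGORIES else CATEGORIES[0]

clean = categorize_prompt("How do I season a wok?", StubLLM("Cooking"))
noisy = categorize_prompt("How do I season a wok?", StubLLM("Category: Cooking"))
# clean routes to "Cooking"; noisy falls back to "Technical"
```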



&lt;p&gt;&lt;strong&gt;Vector Store Builder&lt;/strong&gt; (Cached):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@st.cache_resource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;show_spinner&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;🔍 Indexing documents...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;build_vectorstore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;selected_category&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;OpenSearchVectorSearch&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="c1"&gt;# Load documents from S3
&lt;/span&gt;    &lt;span class="n"&gt;loader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;S3DirectoryLoader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;bucket&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;S3_BUCKET&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prefix&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;selected_category&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;documents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;loader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;load&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Split into chunks
&lt;/span&gt;    &lt;span class="n"&gt;splitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RecursiveCharacterTextSplitter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;chunk_overlap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;splits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;splitter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Create embeddings
&lt;/span&gt;    &lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockEmbeddings&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;EMBEDDING_MODEL_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;AWS_REGION&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Store in OpenSearch
&lt;/span&gt;    &lt;span class="n"&gt;vectorstore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenSearchVectorSearch&lt;/span&gt;&lt;span class="p"&gt;(...)&lt;/span&gt;
    &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;splits&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;vectorstore&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
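&lt;p&gt;To see what chunk_size=1000 and chunk_overlap=200 actually mean, here is a toy fixed-width splitter. RecursiveCharacterTextSplitter is smarter (it prefers paragraph and sentence boundaries), but the overlap idea is the same: consecutive chunks share a 200-character tail so no passage is cut off mid-thought at a chunk boundary.&lt;/p&gt;

```python
# Toy illustration of chunking with overlap; not LangChain's algorithm.
def split_fixed(text, chunk_size, chunk_overlap):
    step = chunk_size - chunk_overlap  # advance 800 chars per chunk
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

text = "x" * 2500
chunks = split_fixed(text, chunk_size=1000, chunk_overlap=200)
# Three chunks covering 0-1000, 800-1800, and 1600-2500
```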



&lt;p&gt;&lt;strong&gt;RAG Prompt Template&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;rag_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;ChatPromptTemplate&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_messages&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful assistant. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Answer using the provided context and chat history when available. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;If the answer is not in the context, use your own knowledge.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;human&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Chat History:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;{chat_history}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Context:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;{context}&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Question:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;{question}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Document Retrieval &amp;amp; Response&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Auto-categorize
&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;categorize_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Build vector store
&lt;/span&gt;&lt;span class="n"&gt;vectorstore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;build_vectorstore&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;category&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_retriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;search_type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;similarity&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;search_kwargs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;k&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Retrieve relevant documents
&lt;/span&gt;&lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Build chat history
&lt;/span&gt;&lt;span class="n"&gt;chat_history&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;capitalize&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; 
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;session_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;agent_messages&lt;/span&gt;&lt;span class="p"&gt;[:&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Generate response
&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;rag_prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;chat_history&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;chat_history&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;context&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;user_input&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic category detection (no manual selection)&lt;/li&gt;
&lt;li&gt;Document upload to S3 with category assignment&lt;/li&gt;
&lt;li&gt;Typing indicators during processing&lt;/li&gt;
&lt;li&gt;Feedback buttons (like, dislike, love)&lt;/li&gt;
&lt;li&gt;Response regeneration&lt;/li&gt;
&lt;li&gt;Conversation memory&lt;/li&gt;
&lt;li&gt;Hybrid knowledge (documents + LLM training)&lt;/li&gt;
&lt;/ul&gt;
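&lt;p&gt;The article does not show the category-detection code itself, so here is a minimal, purely illustrative sketch of how "automatic category detection" could work with a keyword heuristic. The &lt;code&gt;CATEGORY_KEYWORDS&lt;/code&gt; map and &lt;code&gt;detect_category&lt;/code&gt; function are assumptions for illustration, not the project's actual implementation:&lt;/p&gt;

```python
# Hypothetical sketch: keyword-based category detection. The category names,
# keywords, and function below are illustrative assumptions -- the article's
# real implementation may use an LLM or embedding similarity instead.
CATEGORY_KEYWORDS = {
    "hr": ["leave", "payroll", "benefits"],
    "it": ["vpn", "password", "laptop"],
    "finance": ["invoice", "expense", "budget"],
}

def detect_category(question: str, default: str = "general") -> str:
    """Return the category whose keywords appear most often in the question."""
    words = [w.strip("?,.!") for w in question.lower().split()]
    scores = {
        cat: sum(word in keywords for word in words)
        for cat, keywords in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Fall back to a default category when nothing matches.
    return best if scores[best] > 0 else default
```

&lt;p&gt;With this approach the user never selects a category manually; the detected label can then drive both retrieval filtering and the S3 prefix used when uploading documents.&lt;/p&gt;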

&lt;h3&gt;
  
  
  3. Navigation (&lt;code&gt;navigation.py&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Multi-page application router&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;streamlit&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;sys&lt;/span&gt;

&lt;span class="c1"&gt;# Add module paths
&lt;/span&gt;&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./Chatbot&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;sys&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;./RAGAgent&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define pages
&lt;/span&gt;&lt;span class="n"&gt;pages&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Resources&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Chatbot/chatbot.py&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;ChatBot&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Page&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;RAGAgent/agent.py&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;RAGAgent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Run navigation
&lt;/span&gt;&lt;span class="n"&gt;pg&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;st&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;navigation&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pages&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;position&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;top&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;pg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Top navigation bar&lt;/li&gt;
&lt;li&gt;Separate session states for each page&lt;/li&gt;
&lt;li&gt;Dynamic module loading&lt;/li&gt;
&lt;/ul&gt;
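&lt;p&gt;The "separate session states" point works because Streamlit shares a single &lt;code&gt;st.session_state&lt;/code&gt; across all pages of a multi-page app, so each page keeps its history under its own key (the agent page above uses &lt;code&gt;agent_messages&lt;/code&gt;). A minimal sketch of that namespacing pattern, using a plain dict to stand in for &lt;code&gt;st.session_state&lt;/code&gt; (the &lt;code&gt;chatbot_messages&lt;/code&gt; key is an illustrative assumption):&lt;/p&gt;

```python
# Plain-dict stand-in for st.session_state, to show the per-page key pattern.
# Key names other than "agent_messages" are illustrative assumptions.
session_state: dict = {}

def get_history(page_key: str) -> list:
    """Return (and lazily create) the message list for one page."""
    return session_state.setdefault(page_key, [])

# Each page appends only to its own list, so histories never mix.
get_history("chatbot_messages").append({"role": "user", "content": "hi"})
get_history("agent_messages").append({"role": "user", "content": "what is RAG?"})
```

&lt;p&gt;Because the two pages never share a key, clearing or regenerating one conversation leaves the other untouched.&lt;/p&gt;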




&lt;h3&gt;
  
  
  4. Configuration (&lt;code&gt;config.toml&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;: Streamlit theme customization&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight toml"&gt;&lt;code&gt;&lt;span class="c"&gt;# .streamlit/config.toml&lt;/span&gt;
&lt;span class="nn"&gt;[theme]&lt;/span&gt;
&lt;span class="py"&gt;base&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"dark"&lt;/span&gt;
&lt;span class="py"&gt;font&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"serif"&lt;/span&gt;
&lt;span class="py"&gt;baseFontSize&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;
&lt;span class="py"&gt;primaryColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"forestGreen"&lt;/span&gt;
&lt;span class="py"&gt;backgroundColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#141415"&lt;/span&gt;
&lt;span class="py"&gt;codeBackgroundColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#1e2026"&lt;/span&gt; &lt;span class="c"&gt;# Near-black navy&lt;/span&gt;
&lt;span class="py"&gt;textColor&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"#74e6f0"&lt;/span&gt;
&lt;span class="py"&gt;baseRadius&lt;/span&gt;&lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="s"&gt;"full"&lt;/span&gt;

&lt;span class="nn"&gt;[theme.sidebar]&lt;/span&gt;
&lt;span class="py"&gt;backgroundColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#0F172A"&lt;/span&gt;   &lt;span class="c"&gt;# Deep Navy&lt;/span&gt;
&lt;span class="py"&gt;secondaryBackgroundColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#1E293B"&lt;/span&gt;  &lt;span class="c"&gt;# Slate Dark&lt;/span&gt;
&lt;span class="py"&gt;primaryColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#0795ed"&lt;/span&gt;      &lt;span class="c"&gt;# Neon Sky Blue&lt;/span&gt;
&lt;span class="py"&gt;textColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#f5f2f4"&lt;/span&gt;        &lt;span class="c"&gt;# Soft white (easy on eyes)&lt;/span&gt;
&lt;span class="py"&gt;codeTextColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#994780"&lt;/span&gt;        &lt;span class="c"&gt;# Soft light gray&lt;/span&gt;
&lt;span class="py"&gt;codeBackgroundColor&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"#020617"&lt;/span&gt; &lt;span class="c"&gt;# Near-black navy&lt;/span&gt;
&lt;span class="py"&gt;baseRadius&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"50px"&lt;/span&gt;
&lt;span class="py"&gt;buttonRadius&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s"&gt;"100px"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Customizations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dark theme with navy sidebar&lt;/li&gt;
&lt;li&gt;Custom color palette&lt;/li&gt;
&lt;li&gt;Rounded buttons and borders&lt;/li&gt;
&lt;li&gt;Serif font for readability&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Infrastructure (&lt;code&gt;Terraform/&lt;/code&gt;)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  &lt;code&gt;provider.tf&lt;/code&gt; - AWS Configuration
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;terraform&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;required_providers&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;aws&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;source&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hashicorp/aws"&lt;/span&gt;
      &lt;span class="nx"&gt;version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"6.17.0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;backend&lt;/span&gt; &lt;span class="s2"&gt;"s3"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;bucket&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"terraform0806"&lt;/span&gt;
    &lt;span class="nx"&gt;key&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"TerraformStateFiles1"&lt;/span&gt;
    &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;provider&lt;/span&gt; &lt;span class="s2"&gt;"aws"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;region&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"us-east-1"&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;code&gt;ecr.tf&lt;/code&gt; - Container Registry
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecr_repository"&lt;/span&gt; &lt;span class="s2"&gt;"aws-ecr"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"streamlit-chatbot"&lt;/span&gt;

  &lt;span class="nx"&gt;image_scanning_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;scan_on_push&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;custom_tags&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;code&gt;ecs.tf&lt;/code&gt; - Container Orchestration
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Components&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;ECS Cluster&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_cluster"&lt;/span&gt; &lt;span class="s2"&gt;"aws-ecs-cluster"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_details&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"Name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;execute_command_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;kms_key_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_kms_key&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;kms&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
      &lt;span class="nx"&gt;logging&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_details&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"logging"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
      &lt;span class="nx"&gt;log_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;cloud_watch_encryption_enabled&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
        &lt;span class="nx"&gt;cloud_watch_log_group_name&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_cloudwatch_log_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;log-group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;strong&gt;Task Definition&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_task_definition"&lt;/span&gt; &lt;span class="s2"&gt;"taskdef"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;family&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_def&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"family"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

  &lt;span class="nx"&gt;container_definitions&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;([{&lt;/span&gt;
    &lt;span class="nx"&gt;name&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_def&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"cont_name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;image&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${aws_ecr_repository.aws-ecr.repository_url}:v3"&lt;/span&gt;
    &lt;span class="nx"&gt;portMappings&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="nx"&gt;containerPort&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_def&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"containerport"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="p"&gt;}]&lt;/span&gt;
    &lt;span class="nx"&gt;cpu&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_def&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"cpu_allocations"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;memory&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_def&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"mem_allocations"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}])&lt;/span&gt;

  &lt;span class="nx"&gt;requires_compatibilities&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;network_mode&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"awsvpc"&lt;/span&gt;
  &lt;span class="nx"&gt;memory&lt;/span&gt;                   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2048"&lt;/span&gt;
  &lt;span class="nx"&gt;cpu&lt;/span&gt;                      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"1024"&lt;/span&gt;
  &lt;span class="nx"&gt;execution_role_arn&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecsTaskExecutionRole&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;ECS Service&lt;/strong&gt;:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_ecs_service"&lt;/span&gt; &lt;span class="s2"&gt;"streamlit"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"service-chatbot"&lt;/span&gt;
  &lt;span class="nx"&gt;cluster&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ecs_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws-ecs-cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;task_definition&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_ecs_task_definition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;taskdef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="nx"&gt;desired_count&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_count&lt;/span&gt;
  &lt;span class="nx"&gt;launch_type&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"FARGATE"&lt;/span&gt;

  &lt;span class="nx"&gt;load_balancer&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;target_group_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb_target_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this_tg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
    &lt;span class="nx"&gt;container_name&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_def&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"cont_name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;container_port&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ecs_task_def&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"containerport"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;network_configuration&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;assign_public_ip&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;subnets&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_subnet_1a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_subnet_1b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="nx"&gt;security_groups&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;streamlit_app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;code&gt;alb.tf&lt;/code&gt; - Load Balancer
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb"&lt;/span&gt; &lt;span class="s2"&gt;"this_alb"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ALB_conf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;load_balancer_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"application"&lt;/span&gt;
  &lt;span class="nx"&gt;ip_address_type&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ipv4"&lt;/span&gt;
  &lt;span class="nx"&gt;internal&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="nx"&gt;security_groups&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;ext_alb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;subnets&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_subnet_1a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_subnet&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;web_subnet_1b&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb_target_group"&lt;/span&gt; &lt;span class="s2"&gt;"this_tg"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;TG_conf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;port&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;8501&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;
  &lt;span class="nx"&gt;vpc_id&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;aws_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this_vpc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;target_type&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ip"&lt;/span&gt;

  &lt;span class="nx"&gt;health_check&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;enabled&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;healthy_threshold&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;
    &lt;span class="nx"&gt;interval&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;
    &lt;span class="nx"&gt;path&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"/"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lb_listener"&lt;/span&gt; &lt;span class="s2"&gt;"this_alb_lis"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;load_balancer_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this_alb&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="nx"&gt;port&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;80&lt;/span&gt;
  &lt;span class="nx"&gt;protocol&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"HTTP"&lt;/span&gt;

  &lt;span class="nx"&gt;default_action&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;type&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"forward"&lt;/span&gt;
    &lt;span class="nx"&gt;target_group_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_lb_target_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;this_tg&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;arn&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  &lt;code&gt;iam.tf&lt;/code&gt; - Permissions
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_iam_role"&lt;/span&gt; &lt;span class="s2"&gt;"ecsTaskExecutionRole"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ecsTaskExecutionRole"&lt;/span&gt;

  &lt;span class="nx"&gt;assume_role_policy&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
    &lt;span class="nx"&gt;Version&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"2012-10-17"&lt;/span&gt;
    &lt;span class="nx"&gt;Statement&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;
      &lt;span class="nx"&gt;Action&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"sts:AssumeRole"&lt;/span&gt;
      &lt;span class="nx"&gt;Effect&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"Allow"&lt;/span&gt;
      &lt;span class="nx"&gt;Principal&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;Service&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ecs-tasks.amazonaws.com"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}]&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Attach policies for:&lt;/span&gt;
&lt;span class="c1"&gt;# - ECR access&lt;/span&gt;
&lt;span class="c1"&gt;# - CloudWatch Logs&lt;/span&gt;
&lt;span class="c1"&gt;# - Bedrock access&lt;/span&gt;
&lt;span class="c1"&gt;# - S3 access&lt;/span&gt;
&lt;span class="c1"&gt;# - OpenSearch access&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  6. Docker Configuration (&lt;code&gt;Dockerfile&lt;/code&gt;)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight docker"&gt;&lt;code&gt;&lt;span class="k"&gt;FROM&lt;/span&gt;&lt;span class="s"&gt; python:3.13-slim&lt;/span&gt;

&lt;span class="k"&gt;WORKDIR&lt;/span&gt;&lt;span class="s"&gt; /app&lt;/span&gt;

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; requirements.txt .&lt;/span&gt;
&lt;span class="k"&gt;RUN &lt;/span&gt;pip3 &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;--no-cache-dir&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get update &lt;span class="nt"&gt;-y&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; libxcb1 libx11-6 libxext6 libxrender1 libgl1 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; libglib2.0-0 &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="se"&gt;\
&lt;/span&gt;    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; /root/.cache/pip

&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; Chatbot/ ./Chatbot/&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; RAGAgent/ ./RAGAgent/&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; navigation.py ./navigation.py&lt;/span&gt;
&lt;span class="k"&gt;COPY&lt;/span&gt;&lt;span class="s"&gt; config.toml /root/.streamlit/config.toml&lt;/span&gt;

&lt;span class="k"&gt;EXPOSE&lt;/span&gt;&lt;span class="s"&gt; 8501&lt;/span&gt;
&lt;span class="k"&gt;CMD&lt;/span&gt;&lt;span class="s"&gt; ["streamlit", "run", "navigation.py", "--server.port=8501", "--server.address=0.0.0.0"]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Optimizations&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slim base image (cuts roughly 500 MB versus the full Python image)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;--no-cache-dir&lt;/code&gt; on pip install&lt;/li&gt;
&lt;li&gt;pip cache removed after install&lt;/li&gt;
&lt;li&gt;Combined RUN commands (fewer layers)&lt;/li&gt;
&lt;li&gt;Multi-stage build skipped (simple app with no compile step)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  7. CI/CD Pipeline (&lt;code&gt;.gitlab-ci.yml&lt;/code&gt;)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Stages&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Image_Build&lt;/li&gt;
&lt;li&gt;Resources_Build&lt;/li&gt;
&lt;li&gt;Delete_Cache&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Stage 1: Image Build
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;default&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;anirban&lt;/span&gt;

&lt;span class="na"&gt;variables&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;DOCKER_DRIVER&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;overlay2&lt;/span&gt;
  &lt;span class="na"&gt;DOCKER_TLS_CERTDIR&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;"&lt;/span&gt;
  &lt;span class="na"&gt;URL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;&amp;lt;account-id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/&lt;/span&gt;
  &lt;span class="na"&gt;REPO&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;streamlit-chatbot&lt;/span&gt;
  &lt;span class="na"&gt;TAG&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v3&lt;/span&gt;

&lt;span class="na"&gt;stages&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Image_Build&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Resources_Build&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Delete_Cache&lt;/span&gt;

&lt;span class="na"&gt;Image Build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Image_Build&lt;/span&gt;
  &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;docker:latest&lt;/span&gt;
  &lt;span class="na"&gt;services&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;docker:dind&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo "~~~~~~~~~~~~~~~~~~~~~~~~Build ECR Repo and Push the Docker Image ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform -chdir=Terraform init&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform -chdir=Terraform plan -target=aws_ecr_repository.aws-ecr&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform -chdir=Terraform apply -target=aws_ecr_repository.aws-ecr -auto-approve&lt;/span&gt;

    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;echo '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Validate if the docker image exists ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;if ! sudo docker inspect $URL$REPO:$TAG --format '{{ json .}}' | jq '.RepoTags[0]' | xargs; then&lt;/span&gt;
        &lt;span class="s"&gt;echo "Docker image not found."&lt;/span&gt;
        &lt;span class="s"&gt;echo "~~~~~~~~~~~~~~~~~~~~~~~~Building Docker Image~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"&lt;/span&gt;
        &lt;span class="s"&gt;sudo docker build -t $URL$REPO:$TAG .&lt;/span&gt;
        &lt;span class="s"&gt;sleep 60&lt;/span&gt;
        &lt;span class="s"&gt;echo "~~~~~~~~~~~~~~~~~~~~~~~~Logging in to AWS ECR~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"&lt;/span&gt;
        &lt;span class="s"&gt;sudo aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $URL&lt;/span&gt;
        &lt;span class="s"&gt;echo "~~~~~~~~~~~~~~~~~~~~~~~~Pushing image to AWS ECR~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"&lt;/span&gt;
        &lt;span class="s"&gt;sudo docker push $URL$REPO:$TAG&lt;/span&gt;
      &lt;span class="s"&gt;else&lt;/span&gt;
        &lt;span class="s"&gt;echo "~~~~~~~~~~~~~~~~~~~~~~~~Docker image already exists~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"&lt;/span&gt;
      &lt;span class="s"&gt;fi&lt;/span&gt;
  &lt;span class="na"&gt;artifacts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;paths&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Terraform/.terraform/&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Terraform/terraform.tfstate*&lt;/span&gt;
      &lt;span class="na"&gt;expire_in&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;1 hour&lt;/span&gt;

  &lt;span class="na"&gt;except&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;changes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;README.md&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Stage 2: Resource Build
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;Resource Build&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Resources_Build&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform -chdir=Terraform init&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform -chdir=Terraform plan&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;terraform -chdir=Terraform apply -auto-approve&lt;/span&gt;
  &lt;span class="na"&gt;dependencies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Image Build&lt;/span&gt;
  &lt;span class="na"&gt;except&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;changes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;README.md&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Stage 3: Cleanup
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;Delete Cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;stage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Delete_Cache&lt;/span&gt;
  &lt;span class="na"&gt;script&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sudo docker image rm $(sudo docker inspect $URL$REPO:$TAG --format '{{ json .}}' | jq '.RepoTags[0]' | xargs)&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;sudo docker builder prune -a -f&lt;/span&gt;
  &lt;span class="na"&gt;except&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;changes&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;README.md&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated ECR repository creation&lt;/li&gt;
&lt;li&gt;Conditional image building (only if not exists)&lt;/li&gt;
&lt;li&gt;Terraform state management&lt;/li&gt;
&lt;li&gt;Artifact passing between stages&lt;/li&gt;
&lt;li&gt;Docker cache cleanup&lt;/li&gt;
&lt;/ul&gt;
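The pipeline's image-existence check shells out to `docker inspect`, which only sees images cached on the runner. An alternative (not what the pipeline above uses) is to ask ECR directly via boto3; a minimal sketch, where the repository and tag values mirror the pipeline variables:

```python
def image_exists(ecr_client, repo, tag):
    """Return True if repo:tag has already been pushed to ECR.

    ecr_client is a boto3 ECR client, e.g. boto3.client("ecr", region_name="us-east-1").
    A missing repository would raise RepositoryNotFoundException, which is left to
    propagate here since the pipeline creates the repo first with Terraform.
    """
    try:
        ecr_client.describe_images(
            repositoryName=repo,
            imageIds=[{"imageTag": tag}],
        )
        return True
    except ecr_client.exceptions.ImageNotFoundException:
        return False
```

Querying ECR instead of the local daemon means the check still works on a fresh runner with an empty Docker cache.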

&lt;h2&gt;
  
  
  Deployment Pipeline
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Complete Flow
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Developer pushes code to GitLab
   ↓
2. GitLab CI triggers pipeline
   ↓
3. Terraform creates ECR repository
   ↓
4. Docker builds image from Dockerfile
   ↓
5. Image pushed to ECR
   ↓
6. Terraform provisions:
   - ECS Cluster
   - Task Definition
   - ECS Service
   - Application Load Balancer
   - Target Groups
   - Security Groups
   - IAM Roles
   - CloudWatch Log Groups
   ↓
7. ECS pulls image from ECR
   ↓
8. Fargate launches containers
   ↓
9. ALB routes traffic to containers
   ↓
10. Application accessible via ALB DNS
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Dual Chat Modes
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Chatbot&lt;/strong&gt;: Direct LLM interaction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG Agent&lt;/strong&gt;: Document-based Q&amp;amp;A&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Automatic Categorization
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;LLM analyzes user prompt&lt;/li&gt;
&lt;li&gt;Determines category automatically&lt;/li&gt;
&lt;li&gt;Routes to correct S3 folder&lt;/li&gt;
&lt;li&gt;No manual category selection needed&lt;/li&gt;
&lt;/ul&gt;
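Concretely, the routing step boils down to normalizing the LLM's one-word answer to a known category and falling back when it returns something unexpected. A sketch with illustrative category names (not the project's actual list):

```python
KNOWN_CATEGORIES = {"hr", "finance", "engineering", "legal"}

def route_category(llm_reply, default="general"):
    """Map a raw LLM reply such as ' Finance.' to an S3 key prefix."""
    # LLMs often add whitespace, punctuation, or capitalisation; strip it all.
    token = llm_reply.strip().strip(".").lower()
    category = token if token in KNOWN_CATEGORIES else default
    return "documents/" + category + "/"
```

The fallback prefix guarantees every upload lands somewhere retrievable even when the classifier answer is off-list.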

&lt;h3&gt;
  
  
  3. Conversation Memory
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Separate session states for each mode&lt;/li&gt;
&lt;li&gt;Chat history included in prompts&lt;/li&gt;
&lt;li&gt;Follow-up questions work naturally&lt;/li&gt;
&lt;li&gt;Context maintained across messages&lt;/li&gt;
&lt;/ul&gt;
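One way to keep the two modes' histories separate while bounding prompt size is a per-mode dict of turns, trimmed to a recent window before each call. A minimal sketch (the 10-turn window is an assumption, not the project's actual setting):

```python
def record_and_window(histories, mode, user_msg, window=10):
    """Append the user turn to this mode's history and return the recent window."""
    history = histories.setdefault(mode, [])
    history.append({"role": "user", "content": user_msg})
    # Only the most recent turns go into the prompt, so token usage stays bounded
    # and the two modes never see each other's context.
    return history[-window:]
```

In Streamlit, each session would hold one such `histories` dict in `st.session_state`, keyed by mode.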

&lt;h3&gt;
  
  
  4. Interactive Feedback
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Like, dislike, love reactions&lt;/li&gt;
&lt;li&gt;Response regeneration&lt;/li&gt;
&lt;li&gt;Feedback state persistence&lt;/li&gt;
&lt;li&gt;Visual feedback indicators&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Typing Indicators
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Animated "Bot is typing..."&lt;/li&gt;
&lt;li&gt;Shows during LLM processing&lt;/li&gt;
&lt;li&gt;Improves perceived performance&lt;/li&gt;
&lt;li&gt;Better user experience&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. Multi-Model Support
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Claude 3 Sonnet (balanced)&lt;/li&gt;
&lt;li&gt;Claude 3 Haiku (fast)&lt;/li&gt;
&lt;li&gt;Cohere Command R+ (powerful)&lt;/li&gt;
&lt;li&gt;Cohere Command R (efficient)&lt;/li&gt;
&lt;/ul&gt;
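On Bedrock each of these choices maps to a model ID; keeping the mapping in one place lets the UI show friendly names while the backend resolves the ID. The IDs below are the publicly documented Bedrock identifiers at the time of writing and may differ from this project's configuration:

```python
MODEL_IDS = {
    "Claude 3 Sonnet": "anthropic.claude-3-sonnet-20240229-v1:0",
    "Claude 3 Haiku": "anthropic.claude-3-haiku-20240307-v1:0",
    "Cohere Command R+": "cohere.command-r-plus-v1:0",
    "Cohere Command R": "cohere.command-r-v1:0",
}

def resolve_model(name, default="Claude 3 Haiku"):
    """Fall back to the fast model if the UI passes an unknown name."""
    return MODEL_IDS.get(name, MODEL_IDS[default])
```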

&lt;h3&gt;
  
  
  7. Document Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Upload PDFs, DOCX, TXT, images&lt;/li&gt;
&lt;li&gt;Automatic category assignment&lt;/li&gt;
&lt;li&gt;S3 storage with folder structure&lt;/li&gt;
&lt;li&gt;Vector indexing in OpenSearch&lt;/li&gt;
&lt;/ul&gt;
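Before vector indexing, each uploaded document is split into overlapping chunks; a minimal character-based splitter illustrates the idea (1000/200 are common defaults, not necessarily the sizes this project uses):

```python
def chunk_text(text, size=1000, overlap=200):
    """Split text into overlapping chunks ready for embedding."""
    step = size - overlap  # each chunk starts `step` chars after the previous one
    chunks = []
    for start in range(0, max(len(text), 1), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
    return chunks
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either neighbouring chunk.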

&lt;h3&gt;
  
  
  8. Production-Ready Infrastructure
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Containerized with Docker&lt;/li&gt;
&lt;li&gt;Orchestrated with ECS Fargate&lt;/li&gt;
&lt;li&gt;Load balanced with ALB&lt;/li&gt;
&lt;li&gt;Auto-scaling capable&lt;/li&gt;
&lt;li&gt;CloudWatch logging&lt;/li&gt;
&lt;li&gt;KMS encryption&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  9. CI/CD Automation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Automated builds&lt;/li&gt;
&lt;li&gt;Infrastructure as code&lt;/li&gt;
&lt;li&gt;State management&lt;/li&gt;
&lt;li&gt;Conditional deployments&lt;/li&gt;
&lt;li&gt;Cache cleanup&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Setup and Installation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Prerequisites
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# AWS CLI&lt;/span&gt;
aws &lt;span class="nt"&gt;--version&lt;/span&gt;

&lt;span class="c"&gt;# Terraform&lt;/span&gt;
terraform &lt;span class="nt"&gt;--version&lt;/span&gt;

&lt;span class="c"&gt;# Docker&lt;/span&gt;
docker &lt;span class="nt"&gt;--version&lt;/span&gt;

&lt;span class="c"&gt;# Python 3.13+&lt;/span&gt;
python &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  OpenSearch Setup
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create domain via AWS Console or CLI&lt;/span&gt;
aws opensearch create-domain &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--domain-name&lt;/span&gt; mydemanricsearchdomain &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--engine-version&lt;/span&gt; OpenSearch_2.11 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--cluster-config&lt;/span&gt; &lt;span class="nv"&gt;InstanceType&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;t3.small.search,InstanceCount&lt;span class="o"&gt;=&lt;/span&gt;1 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--ebs-options&lt;/span&gt; &lt;span class="nv"&gt;EBSEnabled&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;,VolumeType&lt;span class="o"&gt;=&lt;/span&gt;gp3,VolumeSize&lt;span class="o"&gt;=&lt;/span&gt;10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;This project demonstrates a complete production-ready AI chatbot system with:&lt;/p&gt;

&lt;p&gt;✅ &lt;strong&gt;Intelligent RAG&lt;/strong&gt;: Automatic categorization and document retrieval&lt;br&gt;
✅ &lt;strong&gt;Modern UI&lt;/strong&gt;: Interactive feedback, typing indicators, multi-page navigation&lt;br&gt;
✅ &lt;strong&gt;Scalable Infrastructure&lt;/strong&gt;: ECS Fargate, ALB, auto-scaling&lt;br&gt;
✅ &lt;strong&gt;DevOps Best Practices&lt;/strong&gt;: IaC, CI/CD, containerization&lt;br&gt;
✅ &lt;strong&gt;AWS Integration&lt;/strong&gt;: Bedrock, S3, OpenSearch, ECR, ECS&lt;br&gt;
✅ &lt;strong&gt;Conversation Memory&lt;/strong&gt;: Context-aware responses&lt;br&gt;
✅ &lt;strong&gt;Multi-Model Support&lt;/strong&gt;: Flexible LLM selection&lt;/p&gt;

&lt;p&gt;The architecture is modular, maintainable, and ready for enterprise deployment.&lt;/p&gt;


&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project represents a comprehensive solution for building intelligent, production-ready chatbot systems that combine the best of both worlds: the conversational capabilities of foundation models and the accuracy of retrieval-augmented generation.&lt;/p&gt;

&lt;h3&gt;
  
  
  What We've Accomplished
&lt;/h3&gt;

&lt;p&gt;We've built a complete end-to-end system that includes:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Intelligent Dual-Mode Architecture&lt;/strong&gt;: Users can choose between direct LLM interaction for general queries or RAG-based responses for document-specific questions, all within a single unified interface.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automatic Categorization&lt;/strong&gt;: The system eliminates user friction by automatically detecting the category of each query using LLM analysis, routing requests to the appropriate knowledge base without manual intervention.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Production-Grade Infrastructure&lt;/strong&gt;: With Docker containerization, Terraform infrastructure as code, ECS Fargate orchestration, and Application Load Balancer distribution, the system is ready for enterprise deployment with high availability and scalability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Complete DevOps Pipeline&lt;/strong&gt;: The GitLab CI/CD pipeline automates the entire deployment process from code commit to production deployment, including conditional builds, infrastructure provisioning, and cleanup.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Enhanced User Experience&lt;/strong&gt;: Features like typing indicators, interactive feedback buttons, response regeneration, and conversation memory create an engaging and intuitive user interface.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Link: &lt;a href="http://alb-chatbot-872330638.us-east-1.elb.amazonaws.com/" rel="noopener noreferrer"&gt;http://alb-chatbot-872330638.us-east-1.elb.amazonaws.com/&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Technical Achievements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Separation of Concerns&lt;/strong&gt;: Modular architecture with distinct components for chatbot, RAG agent, navigation, and infrastructure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversation Context&lt;/strong&gt;: Separate session states maintain conversation history without context bleeding between modes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optimized Performance&lt;/strong&gt;: Caching strategies, efficient document chunking, and slim Docker images reduce latency and costs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Security Best Practices&lt;/strong&gt;: KMS encryption, IAM roles with least privilege, VPC networking, and secure credential management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Observability&lt;/strong&gt;: CloudWatch logging, health checks, and monitoring capabilities for production operations&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Real-World Applications
&lt;/h3&gt;

&lt;p&gt;This architecture can be adapted for various use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Customer Support&lt;/strong&gt;: Automated responses with access to product documentation and knowledge bases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Internal Knowledge Management&lt;/strong&gt;: Employee self-service for HR policies, technical documentation, and procedures&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare Information&lt;/strong&gt;: Patient education with access to medical literature and treatment guidelines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legal Document Analysis&lt;/strong&gt;: Contract review and legal research with citation tracking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Educational Tutoring&lt;/strong&gt;: Subject-specific assistance with access to textbooks and learning materials&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Lessons Learned
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automatic categorization significantly improves UX&lt;/strong&gt;: Users shouldn't need to understand how documents are organized&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conversation memory is essential&lt;/strong&gt;: Follow-up questions are natural in human conversation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hybrid knowledge works best&lt;/strong&gt;: Combining document retrieval with LLM training provides comprehensive answers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure as Code is non-negotiable&lt;/strong&gt;: Terraform enables reproducible, version-controlled deployments&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Feedback mechanisms drive improvement&lt;/strong&gt;: User reactions provide valuable data for model refinement&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Performance Considerations
&lt;/h3&gt;

&lt;p&gt;In production deployments, we've observed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Response Time&lt;/strong&gt;: 2-5 seconds for RAG queries (including retrieval and generation)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throughput&lt;/strong&gt;: Handles 100+ concurrent users with 2 Fargate tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost Efficiency&lt;/strong&gt;: ~$150/month for moderate usage (ECS, OpenSearch, Bedrock API calls)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accuracy&lt;/strong&gt;: 85%+ user satisfaction based on feedback button analytics&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Future Roadmap
&lt;/h3&gt;

&lt;p&gt;While the current implementation is production-ready, several enhancements could further improve the system:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Short-term&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-language support for global deployments&lt;/li&gt;
&lt;li&gt;Advanced analytics dashboard for usage patterns and feedback analysis&lt;/li&gt;
&lt;li&gt;Citation tracking to show which documents informed each response&lt;/li&gt;
&lt;li&gt;A/B testing framework for prompt optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Medium-term&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Voice input/output integration for accessibility&lt;/li&gt;
&lt;li&gt;Slack and Microsoft Teams integration for enterprise communication platforms&lt;/li&gt;
&lt;li&gt;Custom model fine-tuning on domain-specific data&lt;/li&gt;
&lt;li&gt;Automated document summarization and indexing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Long-term&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-modal support (images, videos, audio)&lt;/li&gt;
&lt;li&gt;Federated learning across multiple knowledge bases&lt;/li&gt;
&lt;li&gt;Real-time collaborative features&lt;/li&gt;
&lt;li&gt;Advanced reasoning capabilities with chain-of-thought prompting&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Building production-ready AI applications requires more than just connecting to an LLM API. It demands careful consideration of user experience, system architecture, infrastructure scalability, security, observability, and operational excellence. This project demonstrates that with the right tools and architecture patterns, it's possible to create sophisticated AI systems that are both powerful and maintainable.&lt;/p&gt;

&lt;p&gt;The combination of AWS Bedrock's managed foundation models, LangChain's flexible orchestration, OpenSearch's vector search capabilities, and modern DevOps practices creates a robust foundation for enterprise AI applications. The automatic categorization feature, in particular, showcases how thoughtful design can transform complex systems into intuitive user experiences.&lt;/p&gt;

&lt;p&gt;Whether you're a developer looking to build your first AI application, an architect designing enterprise systems, or a DevOps engineer implementing CI/CD for ML workloads, this project provides practical patterns and best practices that can be applied to your own initiatives.&lt;/p&gt;

&lt;p&gt;The future of AI applications lies not just in the models themselves, but in how we architect, deploy, and operate them at scale. This project is a step in that direction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Get Started
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/dasanirban834/build-llm-chatbot-using-langchain.git
&lt;span class="nb"&gt;cd &lt;/span&gt;build-llm-chatbot-using-langchain
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt
streamlit run navigation.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Connect &amp;amp; Contribute
&lt;/h3&gt;

&lt;p&gt;Questions? Suggestions? Contributions are welcome! Feel free to open issues or submit pull requests.&lt;/p&gt;

&lt;p&gt;Regards,&lt;br&gt;
Anirban Das&lt;/p&gt;

</description>
      <category>aws</category>
      <category>llm</category>
      <category>rag</category>
      <category>terraform</category>
    </item>
    <item>
      <title>Develop a LLM Chatbot Using Streamlit+Bedrock+Langchain</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 04 Jan 2026 06:43:48 +0000</pubDate>
      <link>https://dev.to/aws-builders/develop-a-llm-chatbot-using-streamlitbedrocklangchain-1hlc</link>
      <guid>https://dev.to/aws-builders/develop-a-llm-chatbot-using-streamlitbedrocklangchain-1hlc</guid>
      <description>&lt;h2&gt;
  
  
  ✨Introduction:
&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) have made it incredibly easy to build intelligent chatbots for internal tools, customer support, and personal productivity apps. In this blog, we’ll walk through how to build a production-ready LLM chatbot using Streamlit for UI, Amazon Bedrock for model inference, and LangChain for orchestration.&lt;/p&gt;

&lt;h2&gt;
  
  
  ❓What is an LLM?
&lt;/h2&gt;

&lt;p&gt;A Large Language Model (LLM) is an AI model trained on massive text datasets to understand and generate natural language (and, in multimodal variants, images and other media). An LLM-powered application can converse the way one person talks to another, interpreting both the literal content of a message and the intent and emotion behind it. LLMs serve as the backbone of modern AI applications such as chatbots, virtual assistants, content generators, and intelligent search systems, bridging the gap between human intent and machine intelligence to make interactions more natural, contextual, and meaningful.&lt;/p&gt;

&lt;h2&gt;
  
  
  ❓What is LangChain and Why Use It?
&lt;/h2&gt;

&lt;p&gt;LangChain is an open-source framework for building applications powered by LLMs. It provides a standard interface for connecting to LLM providers. Each provider exposes its own API with its own request format, so switching from one model to another would normally require changing the backend API integration as well, which is rarely acceptable in a real-world system. LangChain solves this with a uniform abstraction: you supply a minimal, provider-agnostic configuration, and the framework adapts the underlying API calls when you switch models.&lt;/p&gt;

&lt;p&gt;At its core, LangChain enables seamless integration between LLMs and external systems such as databases, APIs, file systems, and cloud services. This allows applications to go beyond simple question-answering and perform complex reasoning, decision-making, and multi-step workflows. LangChain also supports multiple LLM providers, including OpenAI, AWS Bedrock, Azure, and open-source models, making it flexible and cloud-agnostic. This allows developers to switch models or providers without rewriting the entire application.&lt;/p&gt;

&lt;h2&gt;
  
  
  📌Key Features of LangChain
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Prompt Templates – Create dynamic and reusable prompts for consistent LLM responses&lt;/li&gt;
&lt;li&gt;Chains – Combine multiple LLM calls and logic into a single workflow&lt;/li&gt;
&lt;li&gt;Memory – Maintain conversation context across interactions&lt;/li&gt;
&lt;li&gt;Agents – Enable LLMs to decide which tools or actions to use dynamically&lt;/li&gt;
&lt;li&gt;Retrievers &amp;amp; Vector Stores – Connect LLMs with private or enterprise data for accurate responses&lt;/li&gt;
&lt;/ul&gt;
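The prompt-template feature is easiest to see stripped of the framework: a reusable template with named slots, filled per request. LangChain's `ChatPromptTemplate` wraps this same pattern with message roles and validation; the sketch below is plain Python for illustration:

```python
# A reusable prompt with named slots, analogous to a LangChain prompt template.
TEMPLATE = (
    "You are a helpful assistant.\n"
    "Conversation so far:\n"
    "{history}\n"
    "User question: {question}\n"
)

def render_prompt(history, question):
    """Fill the reusable template for a single LLM call."""
    return TEMPLATE.format(history=history, question=question)
```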

&lt;h2&gt;
  
  
  🎯Objective:
&lt;/h2&gt;

&lt;p&gt;In this blog, we'll build a Streamlit UI application that uses LangChain to connect to the AWS Bedrock service and leverage its LLMs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;User → Streamlit UI → LangChain → Amazon Bedrock → LLM Response → UI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧠Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd12i10evx6mkjidl5oz6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd12i10evx6mkjidl5oz6.png" alt=" " width="800" height="375"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧰Components Involved
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AWS Bedrock&lt;/li&gt;
&lt;li&gt;Langchain&lt;/li&gt;
&lt;li&gt;Streamlit UI&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🛠️Prerequisites:
&lt;/h2&gt;

&lt;p&gt;Ensure the prerequisites below are in place:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS account with Amazon Bedrock access enabled&lt;/li&gt;
&lt;li&gt;Install the following Python packages:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;boto3
langchain_classic
langchain_community
langchain_aws
langchain_core
streamlit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🧩Application Components
&lt;/h2&gt;

&lt;p&gt;1️⃣ Sidebar Configuration of UI&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st

def typing_indicator():
    return st.markdown("""
    &amp;lt;div class="typing"&amp;gt;
        &amp;lt;span&amp;gt;🤖 Bot is typing&amp;lt;/span&amp;gt;
        &amp;lt;div class="dot"&amp;gt;&amp;lt;/div&amp;gt;
        &amp;lt;div class="dot"&amp;gt;&amp;lt;/div&amp;gt;
        &amp;lt;div class="dot"&amp;gt;&amp;lt;/div&amp;gt;
    &amp;lt;/div&amp;gt;
    """, unsafe_allow_html=True)

def autoscroll():
    st.markdown("""
    &amp;lt;script&amp;gt;
    var chatBox = window.parent.document.querySelector('.main');
    chatBox.scrollTo({ top: chatBox.scrollHeight, behavior: 'smooth' });
    &amp;lt;/script&amp;gt;
    """, unsafe_allow_html=True)

def typing_css():
    st.markdown("""
    &amp;lt;style&amp;gt;
    .typing {
        display: flex;
        align-items: center;
        gap: 6px;
        color: #ccc;
        font-size: 15px;
        font-style: italic;
        opacity: 0.9;
        margin: 8px 0;
    }
    .dot {
        height: 6px;
        width: 6px;
        background: #ccc;
        border-radius: 50%;
        animation: blink 1.4s infinite both;
    }
    .dot:nth-child(2) { animation-delay: .2s; }
    .dot:nth-child(3) { animation-delay: .4s; }
    @keyframes blink {
        0% { opacity: .2; }
        20% { opacity: 1; }
        100% { opacity: .2; }
    }

    /* Remove red background from buttons with stronger selectors */
    div[data-testid="column"] .stButton &amp;gt; button,
    .stButton &amp;gt; button,
    button[kind="secondary"] {
        background-color: transparent !important;
        background: transparent !important;
        border: 1px solid rgba(255, 255, 255, 0.2) !important;
        color: inherit !important;
        box-shadow: none !important;
    }

    div[data-testid="column"] .stButton &amp;gt; button:hover,
    .stButton &amp;gt; button:hover,
    button[kind="secondary"]:hover {
        background-color: rgba(255, 255, 255, 0.1) !important;
        background: rgba(255, 255, 255, 0.1) !important;
        border: 1px solid rgba(255, 255, 255, 0.3) !important;
    }

    div[data-testid="column"] .stButton &amp;gt; button:focus,
    .stButton &amp;gt; button:focus,
    button[kind="secondary"]:focus {
        background-color: transparent !important;
        background: transparent !important;
        border: 1px solid rgba(255, 255, 255, 0.2) !important;
        box-shadow: none !important;
    }
    &amp;lt;/style&amp;gt;
    """, unsafe_allow_html=True)

def apply_sidebar():
    st.markdown("""
    &amp;lt;style&amp;gt;

    /* Sidebar container */
    [data-testid="stSidebar"] {
        background: linear-gradient(180deg, #141414, #1d1d1d);
        padding: 2rem 1.2rem;
        border-right: 1px solid #333;
        animation: fadeIn 0.8s ease-out;
    }

    /* Fade-in animation */
    @keyframes fadeIn {
        0% { opacity: 0; transform: translateX(-20px); }
        100% { opacity: 1; transform: translateX(0); }
    }

    /* Section headers */
    [data-testid="stSidebar"] h1, 
    [data-testid="stSidebar"] h2, 
    [data-testid="stSidebar"] h3 {
        color: #fff !important;
        letter-spacing: .3px;
        animation: slideIn 0.6s ease-in;
    }

    @keyframes slideIn {
        0% { opacity: 0; transform: translateY(-10px); }
        100% { opacity: 1; transform: translateY(0); }
    }

    /* Slider animation + glow */
    .stSlider input:focus + div .thumb {
        box-shadow: 0 0 12px #ff3e3e;
        transition: 0.3s;
    }

    /* Hover animation on dropdown */
    .stSelectbox &amp;gt; div &amp;gt; div:hover {
        transform: scale(1.02);
        transition: 0.25s ease-in-out;
    }

    /* Animated button style */
    .stButton button {
        background: #e50914;
        color: white;
        padding: .6rem 1.2rem;
        border-radius: 8px;
        border: none;
        transition: .25s;
    }
    .stButton button:hover {
        transform: translateY(-2px);
        background: #ff1b2d;
        box-shadow: 0 3px 10px rgba(255,0,0,0.4);
    }

    &amp;lt;/style&amp;gt;
    """, unsafe_allow_html=True)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;2️⃣ Application Logic&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
import streamlit as st
from langchain_aws import ChatBedrockConverse
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_classic.memory import ConversationBufferMemory
from app_feature import typing_css, typing_indicator, autoscroll


def bedrock_model_logic(model_id: str, region: str, user_input: str, max_tokens: int, temperature: float):

    # Apply typing CSS
    typing_css()

    ## Define bedrock client using the region passed in from the UI
    bedrock_client = boto3.client(
        "bedrock-runtime",
        region_name=region
    )

    ## Conversation memory (defined here, but not yet wired into the chain below)
    chat_history = InMemoryChatMessageHistory()
    memory = ConversationBufferMemory(
        memory_key="chat_history",
        chat_memory=chat_history,
        return_messages=True,
        ai_prefix="\n\nAssistant",
        human_prefix="\n\nHuman"
    )

    ## Define the prompt template
    messages = ChatPromptTemplate.from_messages(
        [
            ("system", "Hey Human!! I am Alps. Welcome to my place 😊"),
            ("human", "{user_input}"),
        ]
    )
    ## Connect to Bedrock Model
    llm = ChatBedrockConverse(
        client=bedrock_client,
        model_id=model_id,
        max_tokens=max_tokens,
        temperature=temperature
    )

    if "messages" not in st.session_state:
        st.session_state.messages = []

    # Display existing messages with regenerate option for assistant messages
    for i, message in enumerate(st.session_state.messages):
        with st.chat_message(message["role"]):
            if message["role"] == "user":
                col1, col2 = st.columns([9, 1])
                with col1:
                    st.markdown(message["content"])
                with col2:
                    if st.button("⧉", key=f"copy_user_{i}", help="Copy message"):
                        st.write(f'&amp;lt;script&amp;gt;navigator.clipboard.writeText(`{message["content"]}`);&amp;lt;/script&amp;gt;', unsafe_allow_html=True)
            else:
                st.markdown(message["content"])
            if message["role"] == "assistant":
                col1, col2, col3, col4, col5, col6 = st.columns([1, 1, 1, 1, 1, 5])

                # Get current feedback state
                current_feedback = st.session_state.get(f"feedback_{i}", None)

                with col1:
                    like_style = "✅👍" if current_feedback == "liked" else "👍"
                    if st.button(like_style, key=f"like_{i}", help="Good"):
                        st.session_state[f"feedback_{i}"] = "liked"
                        st.rerun()
                with col2:
                    dislike_style = "✅👎" if current_feedback == "disliked" else "👎"
                    if st.button(dislike_style, key=f"dislike_{i}", help="Poor"):
                        st.session_state[f"feedback_{i}"] = "disliked"
                        st.rerun()
                with col3:
                    love_style = "✅❤️" if current_feedback == "loved" else "❤️"
                    if st.button(love_style, key=f"love_{i}", help="Love"):
                        st.session_state[f"feedback_{i}"] = "loved"
                        st.rerun()
                with col4:
                    smile_style = "✅😊" if current_feedback == "smiled" else "😊"
                    if st.button(smile_style, key=f"smile_{i}", help="Nice"):
                        st.session_state[f"feedback_{i}"] = "smiled"
                        st.rerun()
                with col5:
                    if st.button("🔄", key=f"regenerate_{i}", help="Regenerate"):
                        # Find the corresponding user message
                        if i &amp;gt; 0 and st.session_state.messages[i-1]["role"] == "user":
                            user_prompt = st.session_state.messages[i-1]["content"]
                            # Show typing indicator while regenerating
                            typing_placeholder = st.empty()
                            with typing_placeholder:
                                typing_indicator()
                            # Generate new response
                            output_parser = StrOutputParser()
                            chain = messages|llm|output_parser
                            new_response = chain.invoke({"user_input": user_prompt})
                            # Clear typing indicator
                            typing_placeholder.empty()
                            # Update the message
                            st.session_state.messages[i]["content"] = new_response
                            autoscroll()  # Auto-scroll after regeneration
                            st.rerun()

    if user_input:
        with st.chat_message("user"):
            col1, col2 = st.columns([9, 1])
            with col1:
                st.markdown(user_input)
            with col2:
                if st.button("⧉", key="copy_user_new", help="Copy"):
                    st.write(f'&amp;lt;script&amp;gt;navigator.clipboard.writeText(`{user_input}`);&amp;lt;/script&amp;gt;', unsafe_allow_html=True)
        st.session_state.messages.append({"role": "user", "content": user_input})

        # Show typing indicator while generating response
        typing_placeholder = st.empty()
        with typing_placeholder:
            typing_indicator()

        output_parser = StrOutputParser()
        chain = messages|llm|output_parser
        response = chain.invoke({"user_input": user_input})

        # Clear typing indicator
        typing_placeholder.empty()
        with st.chat_message("assistant"):
            st.markdown(response)
            autoscroll()  # Auto-scroll after new message
            # Add feedback emojis for new response
            col1, col2, col3, col4, col5, col6 = st.columns([1, 1, 1, 1, 1, 5])

            # Get current feedback state for new message
            new_msg_index = len(st.session_state.messages)
            current_feedback = st.session_state.get(f"feedback_{new_msg_index}", None)

            with col1:
                like_style = "✅👍" if current_feedback == "liked" else "👍"
                if st.button(like_style, key="like_new", help="Good response"):
                    st.session_state[f"feedback_{new_msg_index}"] = "liked"
                    st.rerun()
            with col2:
                dislike_style = "✅👎" if current_feedback == "disliked" else "👎"
                if st.button(dislike_style, key="dislike_new", help="Poor response"):
                    st.session_state[f"feedback_{new_msg_index}"] = "disliked"
                    st.rerun()
            with col3:
                love_style = "✅❤️" if current_feedback == "loved" else "❤️"
                if st.button(love_style, key="love_new", help="Love this response"):
                    st.session_state[f"feedback_{new_msg_index}"] = "loved"
                    st.rerun()
            with col4:
                smile_style = "✅😊" if current_feedback == "smiled" else "😊"
                if st.button(smile_style, key="smile_new", help="Nice response"):
                    st.session_state[f"feedback_{new_msg_index}"] = "smiled"
                    st.rerun()
            with col5:
                if st.button("🔄", key="regenerate_new", help="Regenerate response"):
                    # Show typing indicator while regenerating
                    typing_placeholder = st.empty()
                    with typing_placeholder:
                        typing_indicator()
                    new_response = chain.invoke({"user_input": user_input})
                    # Clear typing indicator
                    typing_placeholder.empty()
                    st.session_state.messages.append({"role": "assistant", "content": new_response})
                    autoscroll()  # Auto-scroll after regeneration
                    st.rerun()
        st.session_state.messages.append({"role": "assistant", "content": response})
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
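Stripped of the Streamlit calls, the per-message bookkeeping above (append user and assistant turns, record feedback by message index, regenerate a reply in place) can be sketched in plain Python. `ChatLog` is a hypothetical stand-in for `st.session_state`, not part of the app:

```python
# Minimal sketch of the session-state bookkeeping used by the chatbot UI.
# ChatLog is a hypothetical helper; the real app stores messages in
# st.session_state.messages and feedback under st.session_state[f"feedback_{i}"].

class ChatLog:
    def __init__(self):
        self.messages = []   # list of {"role": ..., "content": ...}
        self.feedback = {}   # message index -> "liked" / "disliked" / ...

    def add_turn(self, user_text, assistant_text):
        self.messages.append({"role": "user", "content": user_text})
        self.messages.append({"role": "assistant", "content": assistant_text})

    def record_feedback(self, index, value):
        if self.messages[index]["role"] != "assistant":
            raise ValueError("feedback is only tracked for assistant messages")
        self.feedback[index] = value

    def regenerate(self, index, new_text):
        # Mirrors the 🔄 button: replace the assistant reply in place,
        # leaving the preceding user prompt untouched.
        self.messages[index]["content"] = new_text


log = ChatLog()
log.add_turn("Hi Alps!", "Hello! How can I help?")
log.record_feedback(1, "liked")
log.regenerate(1, "Hi there! What can I do for you?")
```

The key design point is that feedback and regeneration are keyed by message index, which is why the Streamlit code calls `st.rerun()` after every state change: the widget keys (`like_{i}`, `regenerate_{i}`) must stay aligned with the message list.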



&lt;p&gt;3️⃣ Streamlit UI Configuration&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import streamlit as st
from bedrock_model import bedrock_model_logic
from app_feature import apply_sidebar


## Set page configuration
st.set_page_config(page_title="Chatbot", page_icon="img.png", layout="wide")

def app():

    ## Sidebar Settings:
    apply_sidebar()

    ## Title
    st.title(":rainbow[🦜Langchain ChatBot🦜]")

    ## List of models
    model_list = [
        "anthropic.claude-3-sonnet-20240229-v1:0",
        "anthropic.claude-3-haiku-20240307-v1:0",
        "cohere.command-r-plus-v1:0",
        "cohere.command-r-v1:0"
    ]
    ## Type User Prompt
    user_input = st.chat_input("Ask something")

    ## Define Streamlit Properties
    with st.sidebar:
        st.title('Settings')
        model_id = st.selectbox("### 📈 Select Model", model_list)
        temperature = st.slider("### 🔥 Temperature", min_value=0.0, max_value=1.0, value=0.7, step=0.1, help="Higher = more creative output | Lower = more factual")
        max_tokens = st.slider("### 🧩 Max Tokens", min_value=100, max_value=2048, value=1024, step=100)

        if st.button("New Message", type="primary"):
            st.session_state.messages = []
            st.rerun()

        st.divider()

        # Display user prompts
        st.title("Chat History")
        if "messages" in st.session_state:
            user_prompts = [msg["content"] for msg in st.session_state.messages if msg["role"] == "user"]
            if user_prompts:
                for i, prompt in enumerate(user_prompts, 1):
                    st.write(prompt)
            else:
                st.write("No prompts yet")
        else:
            st.write("No prompts yet")
    region = "us-east-1"
    bedrock_model_logic(model_id, region, user_input, max_tokens, temperature)

app()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  🚀Deployment Configuration
&lt;/h2&gt;

&lt;p&gt;In this project, we containerized the application with Docker and deployed it to Amazon ECS using the Fargate launch type. Two ECS tasks run behind an Application Load Balancer: the ALB listens on port 80 and forwards traffic to the containers on port 8501.&lt;br&gt;
The deployment creates the following resources - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Elastic Container Registry (ECR)&lt;/li&gt;
&lt;li&gt;Elastic Container Service&lt;/li&gt;
&lt;li&gt;Application Load Balancer&lt;/li&gt;
&lt;/ul&gt;
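Once Terraform has applied, it is worth verifying that the ECS service has reached steady state before hitting the ALB URL. A hedged boto3 sketch: the cluster name ("Chatbot") and service name ("service-chatbot") come from the Terraform configuration below, while the helper function itself is hypothetical:

```python
# Hypothetical post-deploy check: confirm the ECS service is running as many
# tasks as desired. Cluster and service names match the Terraform in this post.

def service_at_steady_state(ecs_client, cluster="Chatbot", service="service-chatbot"):
    """Return True once runningCount has caught up with desiredCount."""
    resp = ecs_client.describe_services(cluster=cluster, services=[service])
    svc = resp["services"][0]
    return svc["runningCount"] == svc["desiredCount"]

# Example usage (requires AWS credentials and boto3):
#   import boto3
#   ecs = boto3.client("ecs", region_name="us-east-1")
#   service_at_steady_state(ecs)
```

Injecting the client as a parameter keeps the helper testable without AWS access; with two Fargate tasks behind the ALB, `desiredCount` should be 2 here.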

&lt;p&gt;&lt;strong&gt;Dockerfile&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.13-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY Chatbot/ ./Chatbot/

EXPOSE 8501

CMD ["streamlit", "run", "Chatbot/chatbot.py", "--server.port=8501", "--server.address=0.0.0.0"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;var.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Variables of ALB~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

variable "TG_conf" {
  type = object({
    name              = string
    port              = string
    protocol          = string
    target_type       = string
    enabled           = bool
    healthy_threshold = string
    interval          = string
    path              = string
  })
}

variable "ALB_conf" {
  type = object({
    name               = string
    internal           = bool
    load_balancer_type = string
    ip_address_type    = string
  })
}

variable "Listener_conf" {
  type = map(object({
    port     = string
    protocol = string
    type     = string
    priority = number
  }))
}

variable "alb_tags" {
  description = "provides the tags for ALB"
  type = object({
    Environment = string
    Email       = string
    Type        = string
    Owner       = string
  })
  default = {
    Email       = "dasanirban9019@gmail.com"
    Environment = "Dev"
    Owner       = "Anirban Das"
    Type        = "External"
  }
}

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Variables of ECR~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

variable "ecr_repo" {
  description = "Name of repository"
  default     = "streamlit-chatbot"
}

variable "ecr_tags" {
  type = map(any)
  default = {
    "AppName" = "StreamlitApp"
    "Env"     = "Dev"
  }
}

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Variables of ECS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

variable "region" {
  type    = string
  default = "us-east-1"
}

variable "ecs_role" {
  description = "ecs roles"
  default     = "ecsTaskExecutionRole"
}

variable "ecs_details" {
  description = "details of ECS cluster"
  type = object({
    Name                           = string
    logging                        = string
    cloud_watch_encryption_enabled = bool
  })
}

variable "ecs_task_def" {
  description = "defines the configurations of task definition"
  type = object({
    family                   = string
    cont_name                = string
    cpu                      = number
    cpu_allocations          = number
    mem_allocations          = number
    memory                   = number
    essential                = bool
    logdriver                = string
    containerport            = number
    networkmode              = string
    requires_compatibilities = list(string)

  })
}


variable "cw_log_grp" {
  description = "defines the log group in cloudwatch"
  type        = string
  default     = ""
}

variable "kms_key" {
  description = "defines the kms key"
  type = object({
    description             = string
    deletion_window_in_days = number
  })
}

variable "custom_tags" {
  description = "defines common tags"
  type        = map(string)
  default = {
    AppName = "StreamlitApp"
    Env     = "Dev"
  }
}

variable "ecs_task_count" {
  description = "ecs task count"
  type        = number
  default     = 2
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;terraform.tfvars&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Terraform/terraform.tfvars of ALB~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

TG_conf = {
  enabled           = true
  healthy_threshold = "2"
  interval          = "30"
  name              = "ChatbotTG"
  port              = "8501"
  protocol          = "HTTP"
  target_type       = "ip"
  path              = "/"
}

ALB_conf = {
  internal           = false
  ip_address_type    = "ipv4"
  load_balancer_type = "application"
  name               = "ALB-Chatbot"
}

Listener_conf = {
  "1" = {
    port     = "80"
    priority = 100
    protocol = "HTTP"
    type     = "forward"
  }
}

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Terraform/terraform.tfvars of ECS~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

ecs_details = {
  Name                           = "Chatbot"
  logging                        = "OVERRIDE"
  cloud_watch_encryption_enabled = true
}

ecs_task_def = {
  family                   = "custom-task-definition-chatbot"
  cont_name                = "streamlit-chatbot"
  cpu                      = 1024
  cpu_allocations          = 800
  memory                   = 3072
  mem_allocations          = 2000
  essential                = true
  logdriver                = "awslogs"
  containerport            = 8501
  networkmode              = "awsvpc"
  requires_compatibilities = ["FARGATE", ]
}


cw_log_grp = "cloudwatch-log-group-ecs-cluster-chatbot"

kms_key = {
  description             = "log group encryption"
  deletion_window_in_days = 7
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;data.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# vpc details :

data "aws_vpc" "this_vpc" {
  state = "available"
  filter {
    name   = "tag:Name"
    values = ["custom-vpc"]
  }
}
# subnets details :

data "aws_subnet" "web_subnet_1a" {
  vpc_id = data.aws_vpc.this_vpc.id
  filter {
    name   = "tag:Name"
    values = ["weblayer-pub1-1a"]
  }
}

data "aws_subnet" "web_subnet_1b" {
  vpc_id = data.aws_vpc.this_vpc.id
  filter {
    name   = "tag:Name"
    values = ["weblayer-pub2-1b"]
  }
}

# ALB security group details :
data "aws_security_group" "ext_alb" {
  filter {
    name   = "tag:Name"
    values = ["ALBSG"]
  }
}

data "aws_security_group" "streamlit_app" {
  filter {
    name   = "tag:Name"
    values = ["StreamlitAppSG"]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;iam.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_role" "ecsTaskExecutionRole" {
  name               = var.ecs_role
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

locals {
  policy_arn = [
    "arn:aws:iam::aws:policy/AdministratorAccess",
    "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role",
    "arn:aws:iam::669122243705:policy/CustomPolicyECS"
  ]
}
resource "aws_iam_role_policy_attachment" "ecsTaskExecutionRole_policy" {
  count      = length(local.policy_arn)
  role       = aws_iam_role.ecsTaskExecutionRole.name
  policy_arn = element(local.policy_arn, count.index)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ecr.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~AWS ECR Repository~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

resource "aws_ecr_repository" "aws-ecr" {
  name = var.ecr_repo
  tags = var.ecr_tags
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ecs.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~AWS ECS Cluster~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

resource "aws_ecs_cluster" "aws-ecs-cluster" {
  name = var.ecs_details["Name"]
  configuration {
    execute_command_configuration {
      kms_key_id = aws_kms_key.kms.arn
      logging    = var.ecs_details["logging"]
      log_configuration {
        cloud_watch_encryption_enabled = true
        cloud_watch_log_group_name     = aws_cloudwatch_log_group.log-group.name
      }
    }
  }
  tags = var.custom_tags
}

resource "aws_ecs_task_definition" "taskdef" {
  family = var.ecs_task_def["family"]
  container_definitions = jsonencode([
    {
      "name" : var.ecs_task_def["cont_name"],
      "image" : "${aws_ecr_repository.aws-ecr.repository_url}:v1",
      "entrypoint" : [],
      # essential, containerPort, cpu and memory must keep their native
      # types; wrapping them in "${...}" turns them into strings, which
      # the ECS API rejects.
      "essential" : var.ecs_task_def["essential"],
      "logConfiguration" : {
        "logDriver" : var.ecs_task_def["logdriver"],
        "options" : {
          "awslogs-group" : aws_cloudwatch_log_group.log-group.id,
          "awslogs-region" : var.region,
          "awslogs-stream-prefix" : "app-dev"
        }
      },
      "portMappings" : [
        {
          "containerPort" : var.ecs_task_def["containerport"]
        }
      ],
      "cpu" : var.ecs_task_def["cpu_allocations"],
      "memory" : var.ecs_task_def["mem_allocations"]
    }
  ])

  requires_compatibilities = var.ecs_task_def["requires_compatibilities"]
  network_mode             = var.ecs_task_def["networkmode"]
  memory                   = var.ecs_task_def["memory"]
  cpu                      = var.ecs_task_def["cpu"]
  execution_role_arn       = aws_iam_role.ecsTaskExecutionRole.arn
  task_role_arn            = aws_iam_role.ecsTaskExecutionRole.arn
  tags = var.custom_tags
}



#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~AWS CloudWatch Log Group~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

resource "aws_cloudwatch_log_group" "log-group" {
  name = var.cw_log_grp
  tags = var.custom_tags
}

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~AWS KMS Key~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#

resource "aws_kms_key" "kms" {
  description             = var.kms_key["description"]
  deletion_window_in_days = var.kms_key["deletion_window_in_days"]
  tags                    = var.custom_tags
}

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ECS Service~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

resource "aws_ecs_service" "streamlit" {
  name            = "service-chatbot"
  cluster         = aws_ecs_cluster.aws-ecs-cluster.id
  task_definition = aws_ecs_task_definition.taskdef.arn
  desired_count   = var.ecs_task_count
  launch_type     = "FARGATE"

  load_balancer {
    target_group_arn = aws_lb_target_group.this_tg.arn
    container_name   = var.ecs_task_def["cont_name"]
    container_port   = var.ecs_task_def["containerport"]
  }

  network_configuration {
    assign_public_ip = true
    subnets = [data.aws_subnet.web_subnet_1a.id, data.aws_subnet.web_subnet_1b.id]
    security_groups = [data.aws_security_group.streamlit_app.id]
  }

}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;alb.tf&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_lb_target_group" "this_tg" {
  name     = var.TG_conf["name"]
  port     = var.TG_conf["port"]
  protocol = var.TG_conf["protocol"]
  vpc_id   = data.aws_vpc.this_vpc.id
  health_check {
    enabled           = var.TG_conf["enabled"]
    healthy_threshold = var.TG_conf["healthy_threshold"]
    interval          = var.TG_conf["interval"]
    path              = var.TG_conf["path"]
  }
  target_type = var.TG_conf["target_type"]
  tags = {
    Attached_ALB_dns = aws_lb.this_alb.dns_name
  }
}


resource "aws_lb" "this_alb" {
  name               = var.ALB_conf["name"]
  load_balancer_type = var.ALB_conf["load_balancer_type"]
  ip_address_type    = var.ALB_conf["ip_address_type"]
  internal           = var.ALB_conf["internal"]
  security_groups    = [data.aws_security_group.ext_alb.id]
  subnets            = [data.aws_subnet.web_subnet_1a.id, data.aws_subnet.web_subnet_1b.id]
  tags               = merge(var.alb_tags)
}

resource "aws_lb_listener" "this_alb_lis" {
  for_each          = var.Listener_conf
  load_balancer_arn = aws_lb.this_alb.arn
  port              = each.value["port"]
  protocol          = each.value["protocol"]
  default_action {
    type             = each.value["type"]
    target_group_arn = aws_lb_target_group.this_tg.arn
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;.gitlab-ci.yml&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;default:
  tags:
    - anirban

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  URL: &amp;lt;account-number&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/
  REPO: streamlit-chatbot
  TAG: v1

stages:
  - Image_Build
  - Resources_Build

Image Build:
  stage: Image_Build
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo "~~~~~~~~~~~~~~~~~~~~~~~~Build ECR Repo and Push the Docker Image ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
    - terraform -chdir=Terraform init
    - terraform -chdir=Terraform plan -target=aws_ecr_repository.aws-ecr
    - terraform -chdir=Terraform apply -target=aws_ecr_repository.aws-ecr -auto-approve

    - echo '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Validate if the docker image exists ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'
    - |
      if ! sudo docker images | awk '{print $1}' | grep $URL$REPO; then
        echo "Docker image not found."
        echo "~~~~~~~~~~~~~~~~~~~~~~~~Building Docker Image~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
        sudo docker build -t streamlit-chatbot-app:latest .
        sleep 60
        echo "~~~~~~~~~~~~~~~~~~~~~~~~Logging in to AWS ECR~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
        aws ecr get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin $URL
        echo "~~~~~~~~~~~~~~~~~~~~~~~~Pushing image to AWS ECR~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
        sudo docker tag streamlit-chatbot-app:latest $URL$REPO:$TAG
        sudo docker push $URL$REPO:$TAG
      else
        echo "~~~~~~~~~~~~~~~~~~~~~~~~Docker image already exists~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
      fi
  artifacts:
      paths:
        - Terraform/.terraform/
        - Terraform/terraform.tfstate*
      expire_in: 1 hour

  except:
    changes:
      - README.md


Resource Build:
  stage: Resources_Build
  script:
    - terraform -chdir=Terraform init
    - terraform -chdir=Terraform plan
    - terraform -chdir=Terraform apply -auto-approve
  dependencies:
    - Image Build
  except:
    changes:
      - README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ECS Service:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts5wqn9fsosb7wu9rjgi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fts5wqn9fsosb7wu9rjgi.png" alt=" " width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application Load Balancer&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv65ebm8r0vz2ddv3l8ad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv65ebm8r0vz2ddv3l8ad.png" alt=" " width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamlit Application UI&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bcmosoenb4xxuwawtg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9bcmosoenb4xxuwawtg5.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application URL:&lt;/strong&gt; &lt;a href="http://alb-chatbot-733072597.us-east-1.elb.amazonaws.com/" rel="noopener noreferrer"&gt;http://alb-chatbot-733072597.us-east-1.elb.amazonaws.com/&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Repository Link:&lt;/strong&gt; &lt;a href="https://github.com/dasanirban834/build-llm-chatbot-using-langchain.git" rel="noopener noreferrer"&gt;https://github.com/dasanirban834/build-llm-chatbot-using-langchain.git&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🏁Conclusion
&lt;/h2&gt;

&lt;p&gt;Building an LLM-powered chatbot no longer requires complex infrastructure or deep ML expertise. By combining Streamlit for a clean and interactive UI, Amazon Bedrock for secure and scalable access to foundation models, and LangChain for prompt orchestration and memory management, you can rapidly develop a powerful, enterprise-ready conversational AI application.&lt;/p&gt;

&lt;p&gt;This architecture strikes a perfect balance between simplicity and extensibility. You can start with a basic chatbot in minutes and gradually evolve it into a sophisticated assistant by adding features like persistent memory, RAG with enterprise documents, authentication, analytics, and multi-model support—all while staying within the AWS ecosystem.&lt;/p&gt;

&lt;p&gt;Whether you are building an internal productivity tool, a customer-facing assistant, or experimenting with GenAI use cases, this approach provides a strong foundation that is both future-proof and production-friendly.&lt;/p&gt;

&lt;p&gt;With the right prompts, thoughtful UX, and responsible model usage, your chatbot can become more than just a demo—it can be a real business enabler.&lt;/p&gt;

&lt;p&gt;Happy building and exploring the power of Generative AI! 🚀&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>langchain</category>
      <category>aws</category>
    </item>
    <item>
      <title>Build a Smart Snake Game Using Amazon Q CLI</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sat, 21 Jun 2025 09:13:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/build-a-smart-snake-game-using-amazon-q-cli-po7</link>
      <guid>https://dev.to/aws-builders/build-a-smart-snake-game-using-amazon-q-cli-po7</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In the world of programming, few projects are as nostalgic and fun to build as the classic Snake game. Whether you're a beginner looking to sharpen your logic or an experienced developer revisiting the retro charm of early gaming, Snake is a timeless coding challenge.&lt;/p&gt;

&lt;p&gt;In this blog, we’ll walk you through how we reimagined this game using Amazon Q CLI—a powerful AI-powered command line interface that helps accelerate development through conversational and code-driven assistance. By combining the intuitive support of Amazon Q with core programming principles, we built an interactive and fully functional Snake Game from the ground up.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting Up an Environment:
&lt;/h2&gt;

&lt;p&gt;Setting up the environment simply means installing the Amazon Q CLI package on the host. For this project, we provisioned a dedicated Ubuntu server and installed the Amazon Q CLI package with the following steps - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Download amazon-q cli package for ubuntu
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wget https://desktop-release.q.us-east-1.amazonaws.com/latest/amazon-q.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Install the package
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo apt-get install -f
sudo dpkg -i amazon-q.deb
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Launch the amazon-q CLI terminal. On first launch, it will ask you to authenticate using a Builder ID or IAM Identity Center.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Add the parameters below to the sshd_config file at &lt;strong&gt;/etc/ssh/sshd_config&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;AcceptEnv Q_SET_PARENT
AllowStreamLocalForwarding yes
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Restart the sshd service
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;systemctl restart sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Disconnect from the SSH session and reconnect.&lt;/li&gt;
&lt;li&gt;After logging back into the server, type "q chat" to start a chat session.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why I Chose This Game &amp;amp; What Are Its Features?
&lt;/h2&gt;

&lt;p&gt;This is the classic Snake game that almost everyone played on a mobile phone once upon a time; today it feels nostalgic to think about it and experience it again. Adding some new features on top of the original can make for an amazing user experience and a game a little more exciting than before.&lt;br&gt;
Here are the additional features we included to bring back the excitement that has been lost today - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The fundamental mechanic remains the same as in the old days: the snake chases a rat, and the rat keeps moving around trying to stay safe.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every time the snake catches the rat, it earns &lt;strong&gt;20 points&lt;/strong&gt;. Occasionally, the player receives questions about Python or AWS that can boost the earnings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Every correct answer to a Python question adds &lt;strong&gt;100 points&lt;/strong&gt; to the current score, whereas every correct answer to an AWS question grants poison that the snake can throw at the rat to hunt it directly rather than chasing it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The snake grows longer as it eats more rats, so we have to be a little careful that the snake's body segments do not cross each other, which would end the game. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For newly joined players, there is an auto-chase option that shows first-time players how the game is played. Once a player is confident about the steps to follow, the "Disable Auto-Chase" option lets them experience playing on their own.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
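&lt;p&gt;The reward rules above can be sketched in a few lines of Python (a simplified model with hypothetical names; the actual game implements this logic in &lt;code&gt;script.js&lt;/code&gt;):&lt;/p&gt;

```python
# Simplified model of the game's reward rules (hypothetical names).
RAT_POINTS = 20           # points for catching the rat
PYTHON_QUIZ_POINTS = 100  # points for a correct Python answer

class GameState:
    def __init__(self):
        self.score = 0
        self.poison = 0  # earned via correct AWS answers

    def catch_rat(self):
        self.score += RAT_POINTS

    def answer_python_question(self, correct):
        if correct:
            self.score += PYTHON_QUIZ_POINTS

    def answer_aws_question(self, correct):
        if correct:
            self.poison += 1  # poison can be thrown at the rat

state = GameState()
state.catch_rat()
state.answer_python_question(correct=True)
state.answer_aws_question(correct=True)
print(state.score, state.poison)  # 120 1
```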
&lt;h2&gt;
  
  
  Prompt Management:
&lt;/h2&gt;

&lt;p&gt;Prompt selection is a critical step in getting your expectations reflected in reality. Amazon Q works from the prompts the user provides, so the response will be much closer to expectations if the prompts are given in a correct and detailed manner. But how can we give better prompts -&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;First, provide complete details of the objective to be developed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Second, give the details of every feature you want to include. Explain how each feature should work in the application you're going to build, and elaborate on any limitations or rules to be applied.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use adjectives to highlight particular keywords. For example, if you want to describe the attractiveness of an application's UI/UX, those adjectives play an important role in helping Amazon Q understand the application's specifications.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The following prompt was given while developing the game -&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgcsu509lsdrk4ri0xqi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffgcsu509lsdrk4ri0xqi.png" alt="Detailed prompts" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  How Amazon Q Comprehended the User Prompts and Converted Them into Code:
&lt;/h2&gt;

&lt;p&gt;Based on the prompts given by the user, Amazon Q interpreted them line by line, understood the requirement, and created a set of configuration files implementing the game logic and a UI to serve it over the internet.&lt;br&gt;
It created the following files for the gaming application interface - &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dockerfile&lt;/li&gt;
&lt;li&gt;README.md&lt;/li&gt;
&lt;li&gt;docker-compose.yml&lt;/li&gt;
&lt;li&gt;script.js&lt;/li&gt;
&lt;li&gt;snake-game.html&lt;/li&gt;
&lt;li&gt;style.css&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After creating the configuration files, we instructed Amazon Q to build a Docker image of the application and create a container with an 80:8000 port mapping, which is why it created "Dockerfile" and "docker-compose.yml".&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; Build docker image using the script files it generated and run a container that accepts the traffic at 8000 port at container interface from 80 port of EC2 instance.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Snake Game Project Explanation

This directory contains a modern implementation of the smartsnake game with educational elements focused on Python programming and AWS knowledge. Here's a
breakdown of the key files:

## 1. script.js
This is the main JavaScript file that contains all the game logic:

• **Game Mechanics**: Implements the smart snake game with a moving rat (instead of static food)
• **Auto-Chase Feature**: The snake can automatically chase the rat using a pathfinding algorithm
• **Educational Challenges**: Periodically presents Python and AWS questions to the player
• **Reward System**:
  • Python questions give 100 points and make the snake longer
  • AWS questions give "poison" that can be used to slow down the rat
• **Game Controls**: Handles keyboard input, button clicks, and game state management

The script includes various functions for:
• Initializing and drawing the game
• Moving the snake and rat
• Collision detection
• Challenge generation and validation
• Score tracking and high score persistence

## 2. snake-game.html
The HTML structure of the game that includes:
• Canvas element for rendering the game
• Score display and control buttons
• Modal for educational challenges
• Game over screen
• Instructions section

## 3. style.css
Provides the visual styling for the game with:
• Retro gaming aesthetic using "Press Start 2P" font
• Animated gradient background
• Neon-style elements with glow effects
• Responsive design for different screen sizes
• Styling for game elements, modals, and buttons

## 4. Dockerfile
Configures a Docker container using:
• Nginx Alpine as the base image
• Copies the game files to the Nginx web server directory
• Configures Nginx to listen on port 8000
• Exposes the necessary port for web access

## 5. docker-compose.yml
Simplifies deployment with Docker Compose by:
• Building the container from the Dockerfile
• Mapping port 80 on the host to port 8000 in the container
• Setting the container to restart automatically unless stopped manually

## 6. README.md
Provides comprehensive documentation about:
• Game features and mechanics
• Educational content
• Technologies used
• Project structure
• Installation and running instructions
• Future enhancement ideas

## How to Run the Game

You can run the game using Docker:
bash
docker-compose up -d


Then access the game by opening a web browser and navigating to http://localhost.

Alternatively, you can simply open the snake-game.html file directly in a web browser without Docker.

The game combines entertainment with educational content, making it a fun way to learn about Python programming concepts and AWS services while playing a
classic game.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now the game is ready to play. Let's have a look at how it looks and the vibe it creates 😊 &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvuiwuqj3a1kko6szu5z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvvuiwuqj3a1kko6szu5z.png" alt="Smart Snake Game" width="800" height="818"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q8tasiw7szwbnb6o563.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8q8tasiw7szwbnb6o563.png" alt="Moving snake towards the rat" width="800" height="805"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ny7au0zwcpossnf6nrl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ny7au0zwcpossnf6nrl.png" alt="Python quiz to be asked to earn 100 points" width="800" height="829"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;p&gt;Our weekend project taught us several valuable lessons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AI coding assistants can build amazing things within a few minutes, almost like magic.&lt;/li&gt;
&lt;li&gt;Tools like AI coding assistants are helpful not only for coders; they have also become a dream platform for non-coders who have long dreamed of building such applications but couldn't due to a lack of web development skills.&lt;/li&gt;
&lt;li&gt;From a business point of view, this is a really cost-effective solution to introduce.&lt;/li&gt;
&lt;li&gt;For building this sort of game in Python, Pygame is a crucial package.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;It is a great experience to have such a game ready within a few minutes, and it is all the more satisfying when it works as you expected. Amazon Q CLI is not just for developing applications: it can also build infrastructure using Terraform, write other configuration as required, keep everything in a single place, and package it properly for deployment without any headache. This tool opens up plenty of opportunities to attempt; the real need is to bring up amazing ideas that you can hand to Amazon Q to get implemented.&lt;/p&gt;

&lt;p&gt;The approach I took to develop this game can be applied with any other tool or service; the only thing to focus on is the approach itself. Altogether, this is truly a game-changing era of AI, where we are not bound to dream only, but can apply those dreams in the real world as well.&lt;/p&gt;

&lt;p&gt;I have published the files for this game on GitHub. Please have a look and share feedback 😊&lt;br&gt;
GitHub: &lt;a href="https://github.com/dasanirban834/game-development-using-amazonQ" rel="noopener noreferrer"&gt;https://github.com/dasanirban834/game-development-using-amazonQ&lt;/a&gt;&lt;br&gt;
Game Link: &lt;a href="http://alb-1907765677.us-east-1.elb.amazonaws.com/" rel="noopener noreferrer"&gt;http://alb-1907765677.us-east-1.elb.amazonaws.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amazonqcli</category>
      <category>gamechallenge</category>
      <category>awscommunity</category>
    </item>
    <item>
      <title>Document Translation Service using Streamlit &amp; AWS Translator</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 08 Dec 2024 17:38:34 +0000</pubDate>
      <link>https://dev.to/aws-builders/document-translation-service-using-streamlit-aws-translator-497a</link>
      <guid>https://dev.to/aws-builders/document-translation-service-using-streamlit-aws-translator-497a</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;DocuTranslator is a document translation system built on AWS and developed with the Streamlit application framework. The application allows end users to upload documents and translate them into their preferred language. It provides the flexibility to translate into as many languages as users want, which really helps them understand the content in a comfortable way.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background:
&lt;/h2&gt;

&lt;p&gt;The intent of this project is to provide a user-friendly, simple application interface that makes the translation process as simple as users expect. With this system, nobody has to go into the AWS Translate service to translate documents; instead, end users can access the application endpoint directly and get their requirements fulfilled.&lt;/p&gt;

&lt;h2&gt;
  
  
  High Level Architecture Diagram:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhbpevsdrhynbeqcgdbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhbpevsdrhynbeqcgdbo.png" alt="High Level Architecture" width="800" height="329"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does This Work:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;End users access the application through an application load balancer.&lt;/li&gt;
&lt;li&gt;Once the application interface opens, the user uploads the files to be translated and selects the language to translate into.&lt;/li&gt;
&lt;li&gt;After submitting these details, the file is uploaded to the specified source S3 bucket, which triggers a Lambda function that connects to the AWS Translate service.&lt;/li&gt;
&lt;li&gt;Once the translated document is ready, it is uploaded to the destination S3 bucket.&lt;/li&gt;
&lt;li&gt;After that, the end user can download the translated document from the Streamlit application portal.&lt;/li&gt;
&lt;/ul&gt;
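&lt;p&gt;The third step above relies on an S3 event notification invoking the Lambda function. A minimal handler skeleton (a sketch with hypothetical names; the actual Lambda code appears later in this post) shows how the event carries the triggering bucket and object key:&lt;/p&gt;

```python
# Minimal sketch of an S3-triggered Lambda handler (hypothetical names).
# The S3 event notification carries the bucket and object key that
# triggered the invocation.
def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # ...download the object, translate it, upload to the destination bucket...
    return {"bucket": bucket, "key": key}

# Example event shape (abbreviated to the fields used above):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "source-bucket"},
                "object": {"key": "my-document.pdf"}}}
    ]
}
print(lambda_handler(sample_event, None))  # {'bucket': 'source-bucket', 'key': 'my-document.pdf'}
```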

&lt;h2&gt;
  
  
  Technical Architecture:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgixxzkb3r2al39i8brh6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgixxzkb3r2al39i8brh6.png" alt="Technical Architecture" width="800" height="753"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The architecture above highlights the following key points -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The application code has been containerized and stored in an ECR repository.&lt;/li&gt;
&lt;li&gt;As per the design above, an ECS cluster is set up that instantiates two tasks, each pulling the application image from the ECR repository.&lt;/li&gt;
&lt;li&gt;Both tasks run with EC2 as the launch type. The EC2 instances are launched in private subnets in the us-east-1a and us-east-1b availability zones.&lt;/li&gt;
&lt;li&gt;An EFS file system is created to share the application code between the two underlying EC2 instances, with mount targets in the two availability zones (us-east-1a and us-east-1b).&lt;/li&gt;
&lt;li&gt;Two public subnets are configured in front of the private subnets, and a NAT gateway is set up in the public subnet in the us-east-1a availability zone.&lt;/li&gt;
&lt;li&gt;An application load balancer is configured in front of the private subnets; it accepts traffic at port 80 of the application load balancer security group (ALB SG) and distributes it across the backend instances.&lt;/li&gt;
&lt;li&gt;The two EC2 instances are registered in two different target groups, sharing the same EC2 security group (Streamlit_SG), which accepts traffic on port 16347 from the application load balancer.&lt;/li&gt;
&lt;li&gt;A port mapping is configured between port 16347 on the EC2 instances and port 8501 in the ECS container. Traffic hitting port 16347 of the EC2 security group is redirected to port 8501 at the container level.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How Is the Data Stored?
&lt;/h2&gt;

&lt;p&gt;Here, we have used an EFS share to serve the same application files to both underlying EC2 instances. We created a mountpoint /&lt;strong&gt;streamlit_appfiles&lt;/strong&gt; inside the EC2 instances and mounted the EFS share there; this approach shares the same content across the two servers. Next, our intent was to replicate the same application content into the container working directory, which is /&lt;strong&gt;streamlit&lt;/strong&gt;. For that we used a bind mount, so that whatever changes are made to the application code at the EC2 level are replicated to the container as well. We also need to prevent replication in the other direction: if anyone mistakenly changes code from inside the container, it should not propagate back to the EC2 host. Hence, the working directory inside the container is mounted as a read-only filesystem.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1oiu8n4p4kj40sxrh3a9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1oiu8n4p4kj40sxrh3a9.png" alt="Image description" width="800" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  ECS Container Configuration and Volume:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Underlying EC2 Configuration:&lt;/strong&gt;&lt;br&gt;
Instance Type: t2.medium&lt;br&gt;
Network type: Private Subnet&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container Configuration:&lt;/strong&gt;&lt;br&gt;
Image:&lt;br&gt;
Network Mode: Default&lt;br&gt;
Host Port: 16347&lt;br&gt;
Container Port: 8501&lt;br&gt;
Task CPU: 2 vCPU (2048 units)&lt;br&gt;
Task Memory: 2.5 GB (2560 MiB)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn9xqgx46n1ga1cptfdn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffn9xqgx46n1ga1cptfdn.png" alt="Image description" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Volume Configuration:&lt;/strong&gt;&lt;br&gt;
Volume Name: streamlit-volume&lt;br&gt;
Source Path: /streamlit_appfiles&lt;br&gt;
Container Path: /streamlit&lt;br&gt;
Read Only Filesystem: YES&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslpw5glsoiff3fv6serj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fslpw5glsoiff3fv6serj.png" alt="Image description" width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task Definition Reference:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:&amp;lt;account-id&amp;gt;:task-definition/Streamlit_TDF-1:5",
    "containerDefinitions": [
        {
            "name": "streamlit",
            "image": "&amp;lt;account-id&amp;gt;.dkr.ecr.us-east-1.amazonaws.com/anirban:latest",
            "cpu": 0,
            "portMappings": [
                {
                    "name": "streamlit-8501-tcp",
                    "containerPort": 8501,
                    "hostPort": 16347,
                    "protocol": "tcp",
                    "appProtocol": "http"
                }
            ],
            "essential": true,
            "environment": [],
            "environmentFiles": [],
            "mountPoints": [
                {
                    "sourceVolume": "streamlit-volume",
                    "containerPath": "/streamlit",
                    "readOnly": true
                }
            ],
            "volumesFrom": [],
            "ulimits": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/Streamlit_TDF-1",
                    "mode": "non-blocking",
                    "awslogs-create-group": "true",
                    "max-buffer-size": "25m",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                },
                "secretOptions": []
            },
            "systemControls": []
        }
    ],
    "family": "Streamlit_TDF-1",
    "taskRoleArn": "arn:aws:iam::&amp;lt;account-id&amp;gt;:role/ecsTaskExecutionRole",
    "executionRoleArn": "arn:aws:iam::&amp;lt;account-id&amp;gt;:role/ecsTaskExecutionRole",
    "revision": 5,
    "volumes": [
        {
            "name": "streamlit-volume",
            "host": {
                "sourcePath": "/streamlit_appfiles"
            }
        }
    ],
    "status": "ACTIVE",
    "requiresAttributes": [
        {
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "name": "com.amazonaws.ecs.capability.ecr-auth"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.28"
        },
        {
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "name": "ecs.capability.execution-role-ecr-pull"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
        }
    ],
    "placementConstraints": [],
    "compatibilities": [
        "EC2"
    ],
    "requiresCompatibilities": [
        "EC2"
    ],
    "cpu": "2048",
    "memory": "2560",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    },
    "registeredAt": "2024-11-09T05:59:47.534Z",
    "registeredBy": "arn:aws:iam::&amp;lt;account-id&amp;gt;:root",
    "tags": []
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklt9zlmi7eu0budtghif.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklt9zlmi7eu0budtghif.png" alt="Image description" width="800" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Developing Application Code and Creating Docker Image:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;app.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st
import boto3
import os
import time
from pathlib import Path

s3 = boto3.client('s3', region_name='us-east-1')
tran = boto3.client('translate', region_name='us-east-1')
lam = boto3.client('lambda', region_name='us-east-1')


# Function to list S3 buckets
def listbuckets():
    list_bucket = s3.list_buckets()
    bucket_name = tuple([it["Name"] for it in list_bucket["Buckets"]])
    return bucket_name

# Upload object to S3 bucket
def upload_to_s3bucket(file_path, selected_bucket, file_name):
    s3.upload_file(file_path, selected_bucket, file_name)

def list_language():
    response = tran.list_languages()
    list_of_langs = [i["LanguageName"] for i in response["Languages"]]
    return list_of_langs

def wait_for_s3obj(dest_selected_bucket, file_name):
    # Poll until the translated file appears in the destination bucket
    while True:
        try:
            get_obj = s3.get_object(Bucket=dest_selected_bucket, Key=f'Translated-{file_name}.txt')
            return 'true' if get_obj['Body'] else 'false'
        except s3.exceptions.ClientError as e:
            if e.response['Error']['Code'] in ("404", "NoSuchKey"):
                print(f"File '{file_name}' not found. Checking again in 3 seconds...")
                time.sleep(3)
            else:
                raise  # Surface unexpected errors instead of looping forever

def download(dest_selected_bucket, file_name, file_path):
    s3.download_file(dest_selected_bucket, f'Translated-{file_name}.txt', f'{file_path}/download/Translated-{file_name}.txt')
    with open(f"{file_path}/download/Translated-{file_name}.txt", "r") as file:
        st.download_button(
            label="Download",
            data=file,
            file_name=f"Translated-{file_name}.txt"
        )

def streamlit_application():
    # Give a header
    st.header("Document Translator", divider=True)
    # Widgets to upload a file
    uploaded_files = st.file_uploader("Choose a PDF file", accept_multiple_files=True, type="pdf")
    # # upload a file
    file_name = uploaded_files[0].name.replace(' ', '_') if uploaded_files else None
    # Folder path
    file_path = '/tmp'
    # Select the bucket from drop down
    selected_bucket = st.selectbox("Choose the S3 Bucket to upload file :", listbuckets())
    dest_selected_bucket = st.selectbox("Choose the S3 Bucket to download file :", listbuckets())
    selected_language = st.selectbox("Choose the Language :", list_language())
    # Create a button
    click = st.button("Upload", type="primary")
    if click:
        if not file_name:
            st.error("Please choose a file to upload", icon="🚨")
            return
        with open(f'{file_path}/{file_name}', mode='wb') as w:
            w.write(uploaded_files[0].getvalue())
        # Set the selected language in the lambda function's environment variables
        lam.update_function_configuration(FunctionName='TriggerFunctionFromS3', Environment={'Variables': {'UserInputLanguage': selected_language, 'DestinationBucket': dest_selected_bucket, 'TranslatedFileName': file_name}})
        # Upload the file to the S3 bucket
        upload_to_s3bucket(f'{file_path}/{file_name}', selected_bucket, file_name)
        if s3.get_object(Bucket=selected_bucket, Key=file_name):
            st.success("File uploaded successfully", icon="✅")
            output = wait_for_s3obj(dest_selected_bucket, file_name)
            if output:
                download(dest_selected_bucket, file_name, file_path)
        else:
            st.error("File upload failed", icon="🚨")


streamlit_application()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
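&lt;p&gt;The polling pattern used in &lt;code&gt;wait_for_s3obj&lt;/code&gt; can be generalized into a small helper that gives up after a bounded number of attempts, so the UI cannot wait forever if the translated file never appears (a sketch with hypothetical names, not part of the deployed application):&lt;/p&gt;

```python
import time

def poll_until(check, interval=3, max_attempts=20):
    """Call `check` up to `max_attempts` times, sleeping `interval`
    seconds between tries. Returns the first truthy result, or None
    if every attempt came back falsy."""
    for _ in range(max_attempts):
        result = check()
        if result:
            return result
        time.sleep(interval)
    return None

# Example: a check that succeeds on the third attempt.
attempts = {"n": 0}
def fake_check():
    attempts["n"] += 1
    return "ready" if attempts["n"] >= 3 else None

print(poll_until(fake_check, interval=0, max_attempts=5))  # ready
```

In the real application, `check` would be a closure that calls `s3.get_object` and returns None on a missing key, keeping the retry policy in one place.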



&lt;p&gt;&lt;strong&gt;about.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st

## Write the description of application
st.header("About")
about = '''
Welcome to the File Uploader Application!

This application is designed to make uploading PDF documents simple and efficient. With just a few clicks, users can upload their documents securely to an Amazon S3 bucket for storage. Here’s a quick overview
of what this app does:

**Key Features:**
- **Easy Upload:** Users can quickly upload PDF documents by selecting the file and clicking the 'Upload' button.
- **Seamless Integration with AWS S3:** Once the document is uploaded, it is stored securely in a designated S3 bucket, ensuring reliable and scalable cloud storage.
- **User-Friendly Interface:** Built using Streamlit, the interface is clean, intuitive, and accessible to all users, making the uploading process straightforward.

**How it Works:**
1. **Select a PDF Document:** Users can browse and select any PDF document from their local system.
2. **Upload the Document:** Clicking the ‘Upload’ button triggers the process of securely uploading the selected document to an AWS S3 bucket.
3. **Success Notification:** After a successful upload, users will receive a confirmation message that their document has been stored in the cloud.
This application offers a streamlined way to store documents on the cloud, reducing the hassle of manual file management. Whether you're an individual or a business, this tool helps you organize and store your
 files with ease and security.
You can further customize this page by adding technical details, usage guidelines, or security measures as per your application's specifications.'''

st.markdown(about)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;navigation.py&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import streamlit as st

pg = st.navigation([
    st.Page("app.py", title="DocuTranslator", icon="📂"),
    st.Page("about.py", title="About", icon="🔥")
], position="sidebar")

pg.run()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Dockerfile:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM python:3.9-slim
WORKDIR /streamlit
COPY requirements.txt /streamlit/requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir /tmp/download
COPY . /streamlit
EXPOSE 8501
CMD ["streamlit", "run", "navigation.py", "--server.port=8501", "--server.headless=true"]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Dockerfile builds an image packaging all the application configuration files above; the image was then pushed to an ECR repository. Docker Hub can also be used to store the image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Load Balancing
&lt;/h2&gt;

&lt;p&gt;In this architecture, the application instances are created in private subnets, and a load balancer is placed in front of them to reduce the incoming traffic load on the private EC2 instances.&lt;br&gt;
Since there are two underlying EC2 hosts available to run containers, load balancing is configured across both hosts to distribute the incoming traffic. Two different target groups are created, each holding one EC2 instance, with 50% weightage each.&lt;/p&gt;

&lt;p&gt;The load balancer accepts incoming traffic on port 80 and forwards it to the backend EC2 instances on port 16347, which in turn map to the corresponding ECS containers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgq1giijv9vpt6zj2cer.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkgq1giijv9vpt6zj2cer.png" alt="Image description" width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwc5pb1wjicy9fhzzkrz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjwc5pb1wjicy9fhzzkrz.png" alt="Image description" width="800" height="303"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Lambda Function:
&lt;/h2&gt;

&lt;p&gt;A Lambda function takes the source bucket as input, downloads the PDF file from it, extracts the contents, translates them from the detected source language into the user-provided target language, and uploads the resulting text file to the destination S3 bucket.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import os
import datetime
import sys
from io import BytesIO
sys.path.append('./module')
import PyPDF2
from fpdf import FPDF

s3 = boto3.client('s3')
tran = boto3.client('translate', region_name='us-east-1')

def download_pdf_from_s3(sourcebucket, objectname):
    # Download PDF file from S3 bucket
    pdf_object = s3.get_object(Bucket=sourcebucket, Key=objectname)
    return pdf_object['Body'].read()

def extract_text_from_pdf(pdf_bytes):
    # Extract text from the PDF file using PyPDF2
    pdf_reader = PyPDF2.PdfReader(BytesIO(pdf_bytes))
    text = ''
    for page in pdf_reader.pages:
        extracted_text = page.extract_text()
        if extracted_text:
            text += extracted_text
    return text

def translate_text(pdf_text, TargetLang):
    # Translate text using AWS Translate
    TargetLangCode = ''
    response = tran.list_languages()
    for i in response['Languages']:
        if i['LanguageName'] == TargetLang:
            TargetLangCode = i['LanguageCode']
    result = tran.translate_text(
        Text=pdf_text,
        SourceLanguageCode='auto',
        TargetLanguageCode=TargetLangCode
    )
    return result['TranslatedText']

def create_text_file_and_upload_to_s3(translated_text, DestinationBucket, objectname):
    # Write the translated text to /tmp (the only writable path in Lambda), then upload it
    with open('/tmp/translated_text.txt', 'w', encoding='utf-8') as f:
        f.write(translated_text)
    s3.upload_file('/tmp/translated_text.txt', DestinationBucket, f'Translated-{objectname}.txt')

def lambda_handler(event, context):
    eventtime = event['Records'][0]['eventTime']
    sourcebucket = event['Records'][0]['s3']['bucket']['name']
    objectname = event['Records'][0]['s3']['object']['key']
    print(objectname)
    DestinationBucket = os.environ['DestinationBucket']
    TranslatedFileName = os.environ['TranslatedFileName']
    TargetLang = os.environ['UserInputLanguage']

    pdf_bytes = download_pdf_from_s3(sourcebucket, objectname)
    pdf_text  = extract_text_from_pdf(pdf_bytes)
    translated_text = translate_text(pdf_text, TargetLang)
    create_text_file_and_upload_to_s3(translated_text, DestinationBucket, objectname)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
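One caveat with the Lambda above: it sends the whole extracted text to Translate in a single call, and the synchronous TranslateText API rejects documents above a size limit (10,000 bytes of UTF-8 at the time of writing). A large PDF would therefore need to be split first. A minimal, self-contained sketch of such a chunker — the helper name and the 9,500-byte safety margin are our own choices, not part of the original code:

```python
def chunk_text(text, max_bytes=9500):
    """Split text into pieces whose UTF-8 encoding stays within max_bytes,
    breaking on line boundaries (a single oversize line is kept whole)."""
    chunks = []
    current = ""
    for line in text.splitlines(keepends=True):
        candidate = current + line
        if len(candidate.encode("utf-8")) > max_bytes and current:
            # Adding this line would exceed the budget: flush and start over
            chunks.append(current)
            current = line
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk could then be passed through `translate_text` and the results joined back together before uploading.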



&lt;h2&gt;
  
  
  Application Testing:
&lt;/h2&gt;

&lt;p&gt;Open the application load balancer URL "ALB-747339710.us-east-1.elb.amazonaws.com" to reach the web application. Browse to any PDF file and keep both the source bucket &lt;strong&gt;"fileuploadbucket-hwirio984092jjs"&lt;/strong&gt; and the destination bucket &lt;strong&gt;"translatedfileuploadbucket-kh939809kjkfjsekfl"&lt;/strong&gt; as they are, because the Lambda code hard-codes the target bucket mentioned above. Choose the language the document should be translated into and click Upload. The application then starts polling the destination S3 bucket to see whether the translated file has been uploaded. As soon as it finds the file, a new "Download" option appears for downloading it from the destination S3 bucket.&lt;/p&gt;
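The polling step the application performs can be sketched as a small helper. To keep it testable without AWS access, the S3 existence check is passed in as a callable — in the real app it might wrap `s3.head_object` and treat a ClientError as "not there yet". The function name and defaults here are hypothetical, not taken from the application code:

```python
import time

def wait_for_object(object_exists, key, timeout=60, interval=2, sleep=time.sleep):
    """Poll until object_exists(key) returns True or timeout seconds elapse.
    Returns True when the object appears, False on timeout."""
    waited = 0
    while True:
        if object_exists(key):
            return True
        if waited >= timeout:
            return False
        sleep(interval)
        waited += interval
```

In the Streamlit app, a True result would reveal the "Download" button; a False result could surface a timeout message instead of polling forever.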

&lt;p&gt;Application Link: &lt;a href="http://alb-747339710.us-east-1.elb.amazonaws.com/" rel="noopener noreferrer"&gt;http://alb-747339710.us-east-1.elb.amazonaws.com/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50fbbnd2jkwkr4ykf0b7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F50fbbnd2jkwkr4ykf0b7.png" alt="Image description" width="800" height="344"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Actual Content:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Title: "The Whispering Shadows"
It was a stormy evening when Mara first noticed the odd presence in her new home. The house had
always felt a bit… off, but she couldn't quite put her finger on it. A quiet suburban street, a two-story
Victorian house, tucked away behind a picket fence. Everything seemed perfect when she first
moved in, but soon after the boxes were unpacked, the strange occurrences began.
The house creaked, as old houses do, but it wasn’t just the settling noises. There were whispers—
soft, barely audible murmurs that seemed to come from the walls themselves. At first, Mara
thought it was the wind, the house shifting and groaning in the rain, but the more she listened, the
clearer the words became.
“You’re not alone…”
Mara told herself it was just her imagination playing tricks on her. But then there were the shadows.
At first, they were subtle—a fleeting figure in the corner of her eye, a dark shape that vanished when
she turned her head. But the more she spent time in the house, the more persistent they became.
One night, as she lay in bed, the whispers were louder. She sat up, heart racing, staring into the dark
corners of her room. The shadows shifted, elongated, and crawled toward her. A shape materialized
at the foot of her bed—a tall, skeletal figure, its face obscured by a tattered hood. Its hands reached
toward her, fingers long and twisted.
Frozen in terror, Mara tried to scream, but no sound came out. The figure came closer, its breath
cold against her skin.
"Come with me," it whispered, its voice echoing inside her mind.
Mara tried to move, but her body wouldn’t obey. Her limbs were like lead, heavy and immovable.
The figure’s hand was inches from her when a sudden crash of thunder jolted her awake.
Gasping for breath, she sat up in her bed, drenched in sweat. Her room was empty. The storm raged
outside, the wind howling through the trees. She couldn’t tell if it had all been a dream, but the
sensation of something watching her lingered.
Over the next few days, the incidents escalated. Objects in the house would shift on their own,
doors would creak open in the dead of night, and the whispers became more insistent, urging her to
come closer, to listen.
One evening, after finding a peculiar note slipped under her door—“They are watching. You can
never leave.” —Mara finally decided to investigate the history of the house. The local library didn’t
have much on it, but a few old records revealed something chilling.
The house had been abandoned for decades before Mara moved in. Before that, it had been the
home of the Wren family, a reclusive couple with a dark secret. It was said that they had practiced
ancient rites, attempting to commune with entities from another world. The Wren family vanished
without a trace, and no one had ever heard from them again.
Mara couldn’t shake the feeling that something was waiting for her. It wasn’t until she uncovered a
hidden basement beneath the house that she realized how true that was.
The door to the basement was sealed tight, covered in dust and cobwebs, but with trembling
hands, Mara managed to pry it open. The stairs creaked as she descended into the darkness below.
The air was thick, stale with the scent of old earth and decay. As she reached the bottom, the
whispers returned, louder this time, all around her.
“You shouldn’t have come.”
Her flashlight flickered, casting long, distorted shadows across the stone walls. At the far end of the
room, something gleamed—a series of symbols carved into the floor, worn and cracked with age.
The markings formed a circle, and in its center, there was an altar.
Before she could turn to leave, the temperature dropped sharply, and a figure appeared in front of
her. It was the same tall, hooded figure from her dreams. Its eyes glowed with an otherworldly light,
and it beckoned her forward.
"You’re too late," it whispered, its voice like a thousand voices, each one colder than the last.
Mara screamed, but no sound came. The shadows closed in around her, the walls pressing in as if
the house itself were alive, hungry. And as the last of her breath left her lungs, she felt herself pulled
into the darkness, her body dissolving into the very shadows that had tormented her.
The house on Willow Street still stands, weathered by time and neglect. Some say the whispers are
still there, waiting for the next soul foolish enough to enter.
And if you listen closely, you might hear a faint voice calling out—"Come with me."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Translated Content (in Canadian French)&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Titre : « The Whispering Shadows » 

C'était une soirée orageuse lorsque Mara remarqua pour la première fois la présence étrange dans sa nouvelle maison. La maison avait 
je me sentais toujours un peu décontente, mais elle ne pouvait pas tout à fait mettre le doigt dessus. Une rue de banlieue tranquille, deux étages 
Maison victorienne, cachée derrière une palissade. Eve, tout semblait parfait quand elle a commencé. 
ils ont emménagé, mais peu de temps après le déballage des cartons, des événements étranges ont commencé. 
La maison a crié, tout comme les vieilles maisons, mais il n'y avait pas que les bruits de sédimentation. Il y avait des chuchotements —
des murmures doux et à peine audibles qui semblaient provenir des murs eux-mêmes. Au début, Mara 
croyait que c'était le vent, la maison qui bougeait et gémissait sous la pluie, mais plus elle écoutait, plus 
les mots sont devenus plus clairs. 
« Vous n'êtes pas seul... » 
Mara s'est dit que c'était juste son imagination qui lui faisait des tours. Mais ensuite, il y avait les ombres. 
Au début, ils étaient subtils — une silhouette fugace dans le coin de l'œil, une forme sombre qui a disparu lorsque 
elle tourna la tête. Mais plus elle passait de temps à la maison, plus ils devenaient persistants. 
Une nuit, alors qu'elle était couchée dans son lit, les chuchotements étaient plus forts. Elle s'est assise, le cœur battant, fixant le noir 
les coins de sa chambre. Les ombres se déplaçaient, s'allongeaient et rampaient vers elle. Une forme matérialisée 
au pied de son lit — une grande silhouette squelettique, son visage obscurci par une capuche en lambeaux. Ses mains sont parvenues 
vers elle, les doigts longs et tordus. 
Fellisée dans la terreur, Mara essaya de crier, mais aucun son ne sortit. La silhouette s'approchait, son souffle 
froid sur sa peau. 
« Viens avec moi », murmura-t-il, sa voix résonnant dans son esprit. 
Mara essaya de bouger, mais son corps n'obéit pas. Ses membres étaient comme du plomb, lourds et immobiles. 
La main de la silhouette était à des centimètres d'elle lorsqu'un coup de tonnerre soudain l'a éveillée. 
À bout de souffle, elle s'assoit dans son lit, trempée de sueur. Sa chambre était vide. La tempête a fait rage 
dehors, le vent hurle à travers les arbres. Elle ne pouvait pas dire si tout avait été un rêve, mais 
la sensation de quelque chose qui la regardait s'attardait. 
Au cours des jours qui ont suivi, les incidents se sont intensifiés. Les objets de la maison se déplaceraient d'eux-mêmes, 
les portes s'ouvriraient au milieu de la nuit, et les murmures devenaient plus insistants, la poussant à 
approchez, écoutez. 
Un soir, après avoir trouvé une note étrange glissa sous sa porte : « Ils regardent. Vous pouvez 
ne partez jamais. —Mara a finalement décidé d'enquêter sur l'histoire de la maison. La bibliothèque locale ne l'a pas fait 
j'ai beaucoup de choses à ce sujet, mais quelques vieux dossiers ont révélé quelque chose de très choquant. 
La maison avait été abandonnée pendant des décennies avant que Mara n'emménage. Avant cela, il s'agissait de 
maison de la famille Wren, un couple reclus avec un sombre secret. On a dit qu'ils avaient pratiqué 
rites anciens, essayant de faire la commune avec des entités d'un autre monde. La famille Wren disparaît. 
sans laisser de trace, et personne n'en avait plus jamais entendu parler. Mara ne pouvait pas ébranler le sentiment que quelque chose l'attendait. Ce n'est que lorsqu'elle a découvert un 
sous le sous-sol caché sous la maison, elle s'est rendu compte à quel point c'était vrai. 
La porte du sous-sol était scellée hermétiquement, couverte de poussière et de toiles d'araignées, mais avec des tremblements 
mains, Mara a réussi à l'ouvrir. Les escaliers ont grincé alors qu'elle descendait dans l'obscurité en contrebas. 
L'air était épais, vidé avec l'odeur de la vieille terre et de la décomposition. Alors qu'il a atteint le bas de la page, le 
des chuchotements revinrent, plus forts cette fois, tout autour d'elle. 
« Vous n'auriez pas dû venir. 
Sa lampe de poche vacillait, projetant de longues ombres déformées sur les murs de pierre. À la fin de la 
la chambre, quelque chose luisait — une série de symboles gravés dans le sol, usés et fissurés avec l'âge. 
Les marques formaient un cercle, et en son centre se trouvait un autel. 
Avant qu'elle ne puisse se tourner pour partir, la température a chuté brusquement et une silhouette est apparue devant 
elle. C'était la même grande silhouette à capuchon de ses rêves. Ses yeux brillaient d'une lumière d'un autre monde, 
et cela l'a fait passer à l'avant. 
« Tu es trop tard », murmura-t-il, sa voix comme mille voix, toutes plus froides les unes que les autres. 
Mara a crié, mais aucun son n'est venu. Les ombres se refermaient autour d'elle, les murs se pressant comme si 
la maison elle-même était vivante, affamée. Et alors que le dernier souffle quittait ses poumons, elle s'est sentie tirée 
dans l'obscurité, son corps se dissout dans l'ombre même qui l'avait tourmentée. 

La maison de la rue Willow est toujours debout, altérée par le temps et la négligence. Certains disent que les chuchotements sont 
toujours là, attendant que l'âme suivante soit assez stupide pour entrer. 
Et si vous écoutez attentivement, vous pourriez entendre une voix faible crier : « Venez avec moi ».  

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;This article has shown how document translation can be made as easy as a few clicks: an end user simply selects the required options and gets the desired output within seconds, without thinking about configuration. For now the application has a single feature, translating a PDF document, but we plan to extend it into a multi-functional application with more interesting features.&lt;/p&gt;

</description>
      <category>python</category>
      <category>ai</category>
      <category>devops</category>
      <category>aws</category>
    </item>
    <item>
      <title>Docker: Installation on Amazon Linux 2 or Amazon Linux 2023</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 08 Sep 2024 03:40:36 +0000</pubDate>
      <link>https://dev.to/dasanirban834/docker-installation-on-amazon-linux-2-or-amazon-linux-2023-5afa</link>
      <guid>https://dev.to/dasanirban834/docker-installation-on-amazon-linux-2-or-amazon-linux-2023-5afa</guid>
      <description>&lt;p&gt;Docker supports multiple platforms to install. It supports Windows, Linux and MAC through Docker Desktop. For detailed instructions, please check -&lt;/p&gt;

&lt;p&gt;Linux: &lt;a href="https://docs.docker.com/desktop/install/linux-install/" rel="noopener noreferrer"&gt;https://docs.docker.com/desktop/install/linux-install/&lt;/a&gt;&lt;br&gt;
macOS: &lt;a href="https://docs.docker.com/desktop/install/mac-install/" rel="noopener noreferrer"&gt;https://docs.docker.com/desktop/install/mac-install/&lt;/a&gt;&lt;br&gt;
Windows: &lt;a href="https://docs.docker.com/desktop/install/windows-install/" rel="noopener noreferrer"&gt;https://docs.docker.com/desktop/install/windows-install/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker supports multiple linux distributions -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CentOS&lt;/li&gt;
&lt;li&gt;Debian&lt;/li&gt;
&lt;li&gt;Fedora&lt;/li&gt;
&lt;li&gt;RHEL&lt;/li&gt;
&lt;li&gt;Ubuntu&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Installation :
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Before installation, please make sure the system packages are up to date -
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum update -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Install the Docker Community Edition package -
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo yum install docker -y
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Once installed, check the service status -
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@docker ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Start the docker service -
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@docker ~]# systemctl start docker
[root@docker ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2024-09-01 15:46:41 UTC; 2s ago
     Docs: https://docs.docker.com
  Process: 5963 ExecStartPre=/usr/libexec/docker/docker-setup-runtimes.sh (code=exited, status=0/SUCCESS)
  Process: 5962 ExecStartPre=/bin/mkdir -p /run/docker (code=exited, status=0/SUCCESS)
 Main PID: 5966 (dockerd)
    Tasks: 7
   Memory: 27.5M
   CGroup: /system.slice/docker.service
           └─5966 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-ulimit nofile=32768:65536

Sep 01 15:46:40 docker systemd[1]: Starting Docker Application Container Engine...
Sep 01 15:46:40 docker dockerd[5966]: time="2024-09-01T15:46:40.754619713Z" level=info msg="Starting up"
Sep 01 15:46:40 docker dockerd[5966]: time="2024-09-01T15:46:40.810097673Z" level=info msg="Loading containers: start."
Sep 01 15:46:40 docker dockerd[5966]: time="2024-09-01T15:46:40.979307711Z" level=info msg="Loading containers: done."
Sep 01 15:46:40 docker dockerd[5966]: time="2024-09-01T15:46:40.990747588Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
Sep 01 15:46:40 docker dockerd[5966]: time="2024-09-01T15:46:40.991061355Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
Sep 01 15:46:40 docker dockerd[5966]: time="2024-09-01T15:46:40.991263475Z" level=info msg="Docker daemon" commit=b08a51f containerd-snapshotter=...on=25.0.6
Sep 01 15:46:40 docker dockerd[5966]: time="2024-09-01T15:46:40.991518728Z" level=info msg="Daemon has completed initialization"
Sep 01 15:46:41 docker dockerd[5966]: time="2024-09-01T15:46:41.020791488Z" level=info msg="API listen on /run/docker.sock"
Sep 01 15:46:41 docker systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ul&gt;
&lt;li&gt;Enable the docker service -
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@docker ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;For installation on other distros, see the link below -&lt;/p&gt;


&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
      &lt;div class="c-embed__cover"&gt;
        &lt;a href="https://docs.docker.com/engine/install/" class="c-link s:max-w-50 align-middle" rel="noopener noreferrer"&gt;
          &lt;img alt="" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Fimages%2Fthumbnail.webp" height="auto" class="m-0"&gt;
        &lt;/a&gt;
      &lt;/div&gt;
    &lt;div class="c-embed__body"&gt;
      &lt;h2 class="fs-xl lh-tight"&gt;
        &lt;a href="https://docs.docker.com/engine/install/" rel="noopener noreferrer" class="c-link"&gt;
          Install | Docker Docs

        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;p class="truncate-at-3"&gt;
          Learn how to choose the best method for you to install Docker Engine. This client-server application is available on Linux, Mac, Windows, and as a static binary.
        &lt;/p&gt;
      &lt;div class="color-secondary fs-s flex items-center"&gt;
          &lt;img alt="favicon" class="c-embed__favicon m-0 mr-2 radius-0" src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdocs.docker.com%2Ffavicons%2Fdocs%402x.ico"&gt;
        docs.docker.com
      &lt;/div&gt;
    &lt;/div&gt;
&lt;/div&gt;


</description>
      <category>devops</category>
      <category>docker</category>
    </item>
    <item>
      <title>Docker : What &amp; Why Docker ?</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 08 Sep 2024 03:39:50 +0000</pubDate>
      <link>https://dev.to/dasanirban834/docker-what-why-docker--4mgd</link>
      <guid>https://dev.to/dasanirban834/docker-what-why-docker--4mgd</guid>
      <description>&lt;h2&gt;
  
  
  What is Docker ?
&lt;/h2&gt;

&lt;p&gt;Docker is an open-source platform for developing and packaging applications. It is lightweight, which makes it a widely adopted platform for developing, testing, shipping and packaging applications. With Docker, an application is managed in isolation inside a small unit called a container, the smallest unit in the Docker world.&lt;br&gt;
Docker can run multiple containers simultaneously in isolation, so each container runs independently without interference. A container provides a consistent environment in which all prerequisites are installed as part of the image, so there is no need to worry about installing basic utilities and packages.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concept of Docker :
&lt;/h2&gt;

&lt;p&gt;With a traditional virtual machine, a hypervisor is installed on top of the host operating system of the physical hardware and virtualizes the underlying compute resources. A guest operating system on top of the hypervisor is required to build each virtual machine, and that guest OS consumes compute resources such as CPU and memory, drawn from the physical hardware through the hypervisor. Because the hypervisor virtualizes compute resources at the hardware level, this approach is known as hardware virtualization.&lt;/p&gt;

&lt;p&gt;With containers, however, a container platform is installed in place of the hypervisor on top of the host operating system. It virtualizes the operating system instead of the hardware, and through this OS virtualization the containers gain access to the underlying CPU, memory and disk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjj964ajchx8vs3nodi99.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjj964ajchx8vs3nodi99.png" alt="Image description" width="800" height="481"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Architecture :
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ca7bj8qmr6z6l8u98ix.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ca7bj8qmr6z6l8u98ix.png" alt="Image description" width="602" height="548"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are below key components -&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Docker Daemon&lt;/li&gt;
&lt;li&gt;Docker Client&lt;/li&gt;
&lt;li&gt;Docker Host&lt;/li&gt;
&lt;li&gt;Docker Registry&lt;/li&gt;
&lt;li&gt;Docker Desktop&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Docker Daemon :
&lt;/h2&gt;

&lt;p&gt;The Docker daemon is a background process that runs on the Docker host and listens for Docker API requests from the user, such as pull, push and build. It is responsible for managing all Docker objects: images, containers, networks, volumes and so on. Once it receives API requests, it executes the corresponding operations. A few of its functions are listed below -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;As part of container management, the daemon creates, manages and deletes containers based on user requests. Whenever a container is started, restarted or stopped through a Docker client (CLI or GUI), the client asks the daemon to perform the action on the user's behalf.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It pulls images from a registry and pushes locally customized images to a registry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The daemon uses the host operating system's kernel to operate on containers. Nothing extra is added to keep its services running; it takes advantage of OS virtualization to obtain compute resources from the host operating system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Docker Client :
&lt;/h2&gt;

&lt;p&gt;The Docker client is the means by which a user interacts with the Docker daemon. The client can be either a CLI utility or a GUI; through either of them, the user connects to the daemon and issues instructions. Once the daemon receives the instructions, it carries them out.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Host :
&lt;/h2&gt;

&lt;p&gt;The Docker host is the physical or virtual machine that runs Docker and allocates the compute resources containers need to function properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Registries :
&lt;/h2&gt;

&lt;p&gt;A Docker registry is a location where Docker images are stored. There are two types: public and private. Public registries are open to everyone, so users can pull images without authentication, while a private registry requires authentication to pull or push.&lt;/p&gt;

&lt;h2&gt;
  
  
  Docker Desktop :
&lt;/h2&gt;

&lt;p&gt;Docker Desktop is the GUI version of the application, which can be installed on Windows, Linux and macOS. It lets users manage containers, images and other objects from a console.&lt;/p&gt;

</description>
      <category>docker</category>
      <category>devops</category>
    </item>
    <item>
      <title>Automatic Troubleshooting &amp; ITSM System using EventBridge and Lambda</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Wed, 21 Aug 2024 18:25:01 +0000</pubDate>
      <link>https://dev.to/aws-builders/automatic-troubleshooting-itsm-system-using-eventbridge-and-lambda-nd</link>
      <guid>https://dev.to/aws-builders/automatic-troubleshooting-itsm-system-using-eventbridge-and-lambda-nd</guid>
      <description>&lt;h2&gt;
  
  
  Introduction :
&lt;/h2&gt;

&lt;p&gt;Folks, in IT operations it is a very common task to monitor server metrics such as CPU, memory and disk or filesystem utilization. When any of these metrics turns critical, a dedicated person has to log in to the server and perform some basic troubleshooting to find the initial cause of the utilization, and has to repeat this every time the same alert fires — which is tedious and not productive at all. As a workaround, we can build a system that reacts when an alarm is triggered and acts on the affected instance by executing a few basic troubleshooting commands. To summarize the problem statement and expectations -&lt;/p&gt;

&lt;h2&gt;
  
  
  Problem Statement :
&lt;/h2&gt;

&lt;p&gt;Develop a system which will fulfill below expectations -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each EC2 instance should be monitored by CloudWatch.&lt;/li&gt;
&lt;li&gt;Once an alarm is triggered, something has to log in to the affected EC2 instance and run some basic troubleshooting commands.&lt;/li&gt;
&lt;li&gt;Then, create a JIRA issue to document the incident and add the command output in the comment section.&lt;/li&gt;
&lt;li&gt;Then, send an automatic email providing all the alarm details and the JIRA issue details.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Diagram :
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobutwmhk3t0w1zcqjsbd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fobutwmhk3t0w1zcqjsbd.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites :
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;EC2 Instances&lt;/li&gt;
&lt;li&gt;CloudWatch Alarms&lt;/li&gt;
&lt;li&gt;EventBridge Rule&lt;/li&gt;
&lt;li&gt;Lambda Function&lt;/li&gt;
&lt;li&gt;JIRA Account&lt;/li&gt;
&lt;li&gt;Simple Notification Service&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Implementation Steps :
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;A. CloudWatch Agent Installation and Configuration Setup :&lt;/strong&gt;&lt;br&gt;
Open the Systems Manager console and click on "Documents".&lt;br&gt;
Search for the "AWS-ConfigureAWSPackage" document and execute it with the required details.&lt;br&gt;
Package Name = AmazonCloudWatchAgent&lt;br&gt;
After installation, the CloudWatch agent needs to be configured according to the configuration file. For this, execute the AmazonCloudWatch-ManageAgent document, and make sure the JSON CloudWatch config file is stored as an SSM parameter.&lt;br&gt;
Once the metrics are reporting to the CloudWatch console, create alarms for CPU and memory utilization.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;B. Setup EventBridge Rule :&lt;/strong&gt;&lt;br&gt;
To track the alarm state changes, here, we have customized pattern a little to track alarm state changes from OK to ALARM only, not reverse one. Then, add this rule to a lambda function as a trigger.&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "source": ["aws.cloudwatch"],
  "detail-type": ["CloudWatch Alarm State Change"],
  "detail": {
    "state": {
      "value": ["ALARM"]
    },
    "previousState": {
      "value": ["OK"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
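&lt;p&gt;&lt;em&gt;If you prefer to create this rule programmatically rather than in the console, the same pattern can be registered with Boto3. The sketch below is a minimal illustration, not the exact setup used here - the rule name and Lambda ARN are placeholders, and the Lambda function additionally needs a resource-based permission allowing events.amazonaws.com to invoke it.&lt;/em&gt;&lt;/p&gt;

```python
import json

# The same OK -> ALARM pattern as the console rule above.
ALARM_PATTERN = {
    "source": ["aws.cloudwatch"],
    "detail-type": ["CloudWatch Alarm State Change"],
    "detail": {
        "state": {"value": ["ALARM"]},
        "previousState": {"value": ["OK"]},
    },
}

def create_alarm_rule(lambda_arn, rule_name="cw-alarm-ok-to-alarm"):
    """Register the EventBridge rule and point it at the Lambda function."""
    import boto3  # imported here so the pattern above can be reused without boto3
    events = boto3.client("events")
    events.put_rule(Name=rule_name,
                    EventPattern=json.dumps(ALARM_PATTERN),
                    State="ENABLED")
    events.put_targets(Rule=rule_name,
                       Targets=[{"Id": "1", "Arn": lambda_arn}])
```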



&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;C. Create a Lambda Function for Sending Email and Logging an Incident in JIRA :&lt;/strong&gt;
This Lambda function handles multiple activities. It is triggered by the EventBridge rule, and an SNS topic is added as the notification destination using the AWS SDK (Boto3). When the rule fires, EventBridge sends the JSON event content to Lambda, from which the function extracts the details it needs to process each case.
For now we handle two types of alarms: (i) CPU utilization and (ii) memory utilization. When either alarm changes state from OK to ALARM, EventBridge triggers the Lambda function to perform the tasks shown in the code below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Lambda Prerequisites :&lt;/strong&gt;&lt;br&gt;
We need to import the modules below for the code to work -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;os&lt;/li&gt;
&lt;li&gt;sys&lt;/li&gt;
&lt;li&gt;json&lt;/li&gt;
&lt;li&gt;boto3&lt;/li&gt;
&lt;li&gt;time&lt;/li&gt;
&lt;li&gt;requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;Of the modules above, all except 'requests' are available in the Lambda runtime by default. Importing 'requests' directly is not supported in Lambda, so first install the requests module into a folder on your local machine (laptop) by executing the command below -&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install requests -t &amp;lt;directory path&amp;gt; --no-user
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;_&lt;em&gt;After that, this will be downloaded in the folder from where you are executing above command or where you want to store the module source codes, here I hope lambda code is being prepared in your local machine. If yes, then create a zip file of that entire lambda source codes with module. After that, upload the zip file to lambda function.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;So, here we handle the two scenarios below -&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. CPU Utilization -&lt;/strong&gt; If the CPU utilization alarm is triggered, the Lambda function fetches the affected instance, logs in, and lists the top five CPU-consuming processes. It then creates a JIRA issue and adds the process details as a comment. Simultaneously, it sends an email containing the alarm details, the JIRA issue details, and the process output.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Memory Utilization -&lt;/strong&gt; The same approach as above, but with the processes sorted by memory usage.&lt;/p&gt;

&lt;p&gt;Now, let me restate the tasks Lambda is supposed to perform -&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Login to Instance&lt;/li&gt;
&lt;li&gt;Perform Basic Troubleshooting Steps.&lt;/li&gt;
&lt;li&gt;Create a JIRA Issue&lt;/li&gt;
&lt;li&gt;Send Email to Recipient with all Details&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Scenario 1: When the alarm state changes from OK to ALARM
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;First Set (Define the cpu and memory function) :&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;################# Importing Required Modules ################
############################################################
import json
import boto3
import time
import os
import sys
sys.path.append('./python')   ## This will add requests module along with all dependencies into this script
import requests
from requests.auth import HTTPBasicAuth

################## Calling AWS Services ###################
###########################################################
ssm = boto3.client('ssm')
sns_client = boto3.client('sns')
ec2 = boto3.client('ec2')

################## Defining Blank Variable ################
###########################################################
cpu_process_op = ''
mem_process_op = ''
issueid = ''
issuekey = ''
issuelink = ''

################# Function for CPU Utilization ################
###############################################################
def cpu_utilization(instanceid, metric_name, previous_state, current_state):
    global cpu_process_op
    if previous_state == 'OK' and current_state == 'ALARM':
        command = 'ps -eo user,pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head -5'
        print(f'Impacted Instance ID is : {instanceid}, Metric Name: {metric_name}')
        # Start a session
        print(f'Starting session to {instanceid}')
        response = ssm.send_command(InstanceIds = [instanceid], DocumentName="AWS-RunShellScript", Parameters={'commands': [command]})
        command_id = response['Command']['CommandId']
        print(f'Command ID: {command_id}')
        # Retrieve the command output
        time.sleep(4)
        output = ssm.get_command_invocation(CommandId=command_id, InstanceId=instanceid)
        print('Please find below output -\n', output['StandardOutputContent'])
        cpu_process_op = output['StandardOutputContent']
    else:
        print('None')

################# Function for Memory Utilization ################
############################################################### 
def mem_utilization(instanceid, metric_name, previous_state, current_state):
    global mem_process_op
    if previous_state == 'OK' and current_state == 'ALARM':
        command = 'ps -eo user,pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -5'
        print(f'Impacted Instance ID is : {instanceid}, Metric Name: {metric_name}')
        # Start a session
        print(f'Starting session to {instanceid}')
        response = ssm.send_command(InstanceIds = [instanceid], DocumentName="AWS-RunShellScript", Parameters={'commands': [command]})
        command_id = response['Command']['CommandId']
        print(f'Command ID: {command_id}')
        # Retrieve the command output
        time.sleep(4)
        output = ssm.get_command_invocation(CommandId=command_id, InstanceId=instanceid)
        print('Please find below output -\n', output['StandardOutputContent'])
        mem_process_op = output['StandardOutputContent']
    else:
        print('None')
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
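&lt;p&gt;&lt;em&gt;One fragile spot in both functions above is the fixed time.sleep(4): on a slow instance the command may still be running, or the invocation may not be registered yet, when get_command_invocation is called. A small polling helper like the sketch below (the timeout and interval values are illustrative) makes the wait more robust; the time.sleep(4) and get_command_invocation pair could then be replaced with a single call to wait_for_command(ssm, command_id, instanceid).&lt;/em&gt;&lt;/p&gt;

```python
import time

def wait_for_command(ssm_client, command_id, instance_id,
                     timeout=60, interval=2):
    """Poll get_command_invocation until the command leaves the
    Pending/InProgress states, or raise if the timeout expires."""
    deadline = time.time() + timeout
    while True:
        if time.time() > deadline:
            raise TimeoutError(f'Command {command_id} did not finish in {timeout}s')
        try:
            inv = ssm_client.get_command_invocation(
                CommandId=command_id, InstanceId=instance_id)
        except ssm_client.exceptions.InvocationDoesNotExist:
            time.sleep(interval)   # invocation not registered yet, retry
            continue
        if inv['Status'] not in ('Pending', 'InProgress', 'Delayed'):
            return inv             # Success, Failed, TimedOut, Cancelled...
        time.sleep(interval)
```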



&lt;p&gt;Second Set (Create JIRA Issue) :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;################## Create JIRA Issue ################
#####################################################
def create_issues(instanceid, metric_name, account, timestamp, region, current_state, previous_state, cpu_process_op, mem_process_op, metric_val):
    ## Create Issue ##
    url = 'https://&amp;lt;your-user-name&amp;gt;.atlassian.net/rest/api/2/issue'
    username = os.environ['username']
    api_token = os.environ['token']
    project = 'AnirbanSpace'
    issue_type = 'Incident'
    assignee = os.environ['username']
    summ_metric  = '%CPU Utilization' if 'CPU' in metric_name else '%Memory Utilization' if 'mem' in metric_name else '%Filesystem Utilization' if metric_name == 'disk_used_percent' else None
    summary = f'Client | {account} | {instanceid} | {summ_metric} | Metric Value: {metric_val}'
    description = f'Client: Company\nAccount: {account}\nRegion: {region}\nInstanceID = {instanceid}\nTimestamp = {timestamp}\nCurrent State: {current_state}\nPrevious State = {previous_state}\nMetric Value = {metric_val}'

    issue_data = {
        "fields": {
            "project": {
                "key": "SCRUM"
            },
            "summary": summary,
            "description": description,
            "issuetype": {
                "name": issue_type
            },
            "assignee": {
                "name": assignee
            }
        }
    }
    data = json.dumps(issue_data)
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json"
    }
    auth = HTTPBasicAuth(username, api_token)
    response = requests.post(url, headers=headers, auth=auth, data=data)
    global issueid
    global issuekey
    global issuelink
    issueid = response.json().get('id')
    issuekey = response.json().get('key')
    issuelink = response.json().get('self')

    ################ Add Comment To Above Created JIRA Issue ###################
    output = cpu_process_op if metric_name == 'CPUUtilization' else mem_process_op if metric_name == 'mem_used_percent' else None
    comment_api_url = f"{url}/{issuekey}/comment"
    add_comment = requests.post(comment_api_url, headers=headers, auth=auth, data=json.dumps({"body": output}))

    ## Check the response
    if response.status_code == 201:
        print("Issue created successfully. Issue key:", response.json().get('key'))
    else:
        print(f"Failed to create issue. Status code: {response.status_code}, Response: {response.text}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
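&lt;p&gt;&lt;em&gt;To make the issue payload easier to test in isolation, the field-building logic from create_issues can be pulled out into a pure function. This is just a sketch of the same payload the code above sends; the project key and issue type mirror the values used there. Note that on Jira Cloud the username-based assignee field has been deprecated in favour of accountId, so depending on your site you may need an accountId value instead.&lt;/em&gt;&lt;/p&gt;

```python
def build_issue_payload(summary, description,
                        project_key="SCRUM", issue_type="Incident",
                        assignee=None):
    """Return the Jira REST API v2 issue-creation payload as a dict,
    ready to be serialized with json.dumps and POSTed."""
    fields = {
        "project": {"key": project_key},
        "summary": summary,
        "description": description,
        "issuetype": {"name": issue_type},
    }
    if assignee:
        fields["assignee"] = {"name": assignee}
    return {"fields": fields}
```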



&lt;p&gt;Third Set (Send an Email) :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;################## Send An Email ################
#################################################
def send_email(instanceid, metric_name, account, region, timestamp, current_state, current_reason, previous_state, previous_reason, cpu_process_op, mem_process_op, metric_val, issueid, issuekey, issuelink):
    ### Define a dictionary of custom input ###
    metric_list = {'mem_used_percent': 'Memory', 'disk_used_percent': 'Disk', 'CPUUtilization': 'CPU'}

    ### Conditions ###
    if previous_state == 'OK' and current_state == 'ALARM' and metric_name in list(metric_list.keys()):
        metric_msg = metric_list[metric_name]
        output = cpu_process_op if metric_name == 'CPUUtilization' else mem_process_op if metric_name == 'mem_used_percent' else None
        print('This is output', output)
        email_body = f"Hi Team, \n\nPlease be informed that {metric_msg} utilization is high for the instanceid {instanceid}. Please find below more information \n\nAlarm Details:\nMetricName = {metric_name}, \nAccount = {account}, \nTimestamp = {timestamp}, \nRegion = {region}, \nInstanceID = {instanceid}, \nCurrentState = {current_state}, \nReason = {current_reason}, \nMetricValue = {metric_val}, \nThreshold = 80.00 \n\nProcessOutput: \n{output}\nIncident Details:\nIssueID = {issueid}, \nIssueKey = {issuekey}, \nLink = {issuelink}\n\nRegards,\nAnirban Das,\nGlobal Cloud Operations Team"
        res = sns_client.publish(
            TopicArn = os.environ['snsarn'],
            Subject = f'High {metric_msg} Utilization Alert : {instanceid}',
            Message = str(email_body)
            )
        print('Mail has been sent') if res else print('Email not sent')
    else:
        email_body = str(0)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Fourth Set (Calling Lambda Handler Function) :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;################## Lambda Handler Function ################
###########################################################
def lambda_handler(event, context):
    instanceid = event['detail']['configuration']['metrics'][0]['metricStat']['metric']['dimensions']['InstanceId']
    metric_name = event['detail']['configuration']['metrics'][0]['metricStat']['metric']['name']
    account = event['account']
    timestamp = event['time']
    region = event['region']
    current_state = event['detail']['state']['value']
    current_reason = event['detail']['state']['reason']
    previous_state = event['detail']['previousState']['value']
    previous_reason = event['detail']['previousState']['reason']
    metric_val = json.loads(event['detail']['state']['reasonData'])['evaluatedDatapoints'][0]['value']
    ##### function calling #####
    if metric_name == 'CPUUtilization':
        cpu_utilization(instanceid, metric_name, previous_state, current_state)
        create_issues(instanceid, metric_name, account, timestamp, region, current_state, previous_state, cpu_process_op, mem_process_op, metric_val)
        send_email(instanceid, metric_name, account, region, timestamp, current_state, current_reason, previous_state, previous_reason, cpu_process_op, mem_process_op, metric_val, issueid, issuekey, issuelink)
    elif metric_name == 'mem_used_percent':
        mem_utilization(instanceid, metric_name, previous_state, current_state)
        create_issues(instanceid, metric_name, account, timestamp, region, current_state, previous_state, cpu_process_op, mem_process_op, metric_val)
        send_email(instanceid, metric_name, account, region, timestamp, current_state, current_reason, previous_state, previous_reason, cpu_process_op, mem_process_op, metric_val, issueid, issuekey, issuelink)
    else:
        pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
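&lt;p&gt;&lt;em&gt;To exercise the handler without waiting for a real alarm, you can feed it a trimmed-down alarm-state-change test event. The sketch below contains only the fields lambda_handler reads; the account, instance ID, timestamp, and metric value are made up. Pasting this JSON into the Lambda console's test tab and invoking the function walks the full CPU path.&lt;/em&gt;&lt;/p&gt;

```python
import json

# Minimal alarm-state-change event with just the fields lambda_handler reads.
sample_event = {
    "account": "123456789012",
    "time": "2024-06-01T10:00:00Z",
    "region": "us-east-1",
    "detail": {
        "configuration": {
            "metrics": [{
                "metricStat": {"metric": {
                    "name": "CPUUtilization",
                    "dimensions": {"InstanceId": "i-0abcd1234"}
                }}
            }]
        },
        "state": {
            "value": "ALARM",
            "reason": "Threshold Crossed",
            # reasonData arrives as a JSON *string* inside the event
            "reasonData": json.dumps({"evaluatedDatapoints": [{"value": 91.2}]}),
        },
        "previousState": {"value": "OK", "reason": "OK"},
    },
}
```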



&lt;p&gt;&lt;strong&gt;Alarm Email Screenshot :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faul5o8o4gm3vqjs00ppc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faul5o8o4gm3vqjs00ppc.png" alt=" " width="800" height="475"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Note: In an ideal scenario the threshold is 80%, but for testing I changed it to 10%. Please see the Reason field.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Alarm JIRA Issue :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9721s2mxnabx3grz0jj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9721s2mxnabx3grz0jj.png" alt=" " width="800" height="791"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scenario 2: When the alarm state changes from OK to INSUFFICIENT_DATA
&lt;/h2&gt;

&lt;p&gt;In this scenario, if CPU or memory utilization metric data stops being captured for a server, the alarm state changes from OK to INSUFFICIENT_DATA. This can happen in two ways - a.) the server is in a stopped state, or b.) the CloudWatch agent is not running or has died.&lt;br&gt;
In the script below, when a CPU or memory utilization alarm goes to INSUFFICIENT_DATA, Lambda first checks whether the instance is running. If it is, Lambda logs in and checks the CloudWatch agent status. It then creates a JIRA issue, posts the agent status as a comment on the issue, and finally sends an email with the alarm details and the agent status.&lt;/p&gt;

&lt;p&gt;Full Code :&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;################# Importing Required Modules ################
############################################################
import json
import boto3
import time
import os
import sys
sys.path.append('./python')   ## This will add requests module along with all dependencies into this script
import requests
from requests.auth import HTTPBasicAuth

################## Calling AWS Services ###################
###########################################################
ssm = boto3.client('ssm')
sns_client = boto3.client('sns')
ec2 = boto3.client('ec2')

################## Defining Blank Variable ################
###########################################################
cpu_process_op = ''
mem_process_op = ''
issueid = ''
issuekey = ''
issuelink = ''

################# Function for CPU Utilization ################
###############################################################
def cpu_utilization(instanceid, metric_name, previous_state, current_state):
    global cpu_process_op
    if previous_state == 'OK' and current_state == 'INSUFFICIENT_DATA':
        ec2_status = ec2.describe_instance_status(InstanceIds=[instanceid,])['InstanceStatuses'][0]['InstanceState']['Name']
        if ec2_status == 'running':
            command = 'systemctl status amazon-cloudwatch-agent;sleep 3;systemctl restart amazon-cloudwatch-agent'
            print(f'Impacted Instance ID is : {instanceid}, Metric Name: {metric_name}')
            # Start a session
            print(f'Starting session to {instanceid}')
            response = ssm.send_command(InstanceIds = [instanceid], DocumentName="AWS-RunShellScript", Parameters={'commands': [command]})
            command_id = response['Command']['CommandId']
            print(f'Command ID: {command_id}')
            # Retrieve the command output
            time.sleep(4)
            output = ssm.get_command_invocation(CommandId=command_id, InstanceId=instanceid)
            print('Please find below output -\n', output['StandardOutputContent'])
            cpu_process_op = output['StandardOutputContent']
        else:
            cpu_process_op = f'Instance current status is {ec2_status}. Not able to reach out!!'
            print(f'Instance current status is {ec2_status}. Not able to reach out!!')
    else:
        print('None')

################# Function for Memory Utilization ################
############################################################### 
def mem_utilization(instanceid, metric_name, previous_state, current_state):
    global mem_process_op
    if previous_state == 'OK' and current_state == 'INSUFFICIENT_DATA':
        ec2_status = ec2.describe_instance_status(InstanceIds=[instanceid,])['InstanceStatuses'][0]['InstanceState']['Name']
        if ec2_status == 'running':
            command = 'systemctl status amazon-cloudwatch-agent'
            print(f'Impacted Instance ID is : {instanceid}, Metric Name: {metric_name}')
            # Start a session
            print(f'Starting session to {instanceid}')
            response = ssm.send_command(InstanceIds = [instanceid], DocumentName="AWS-RunShellScript", Parameters={'commands': [command]})
            command_id = response['Command']['CommandId']
            print(f'Command ID: {command_id}')
            # Retrieve the command output
            time.sleep(4)
            output = ssm.get_command_invocation(CommandId=command_id, InstanceId=instanceid)
            print('Please find below output -\n', output['StandardOutputContent'])
            mem_process_op = output['StandardOutputContent']
            print(mem_process_op)
        else:
            mem_process_op = f'Instance current status is {ec2_status}. Not able to reach out!!'
            print(f'Instance current status is {ec2_status}. Not able to reach out!!')     
    else:
        print('None')

################## Create JIRA Issue ################
#####################################################
def create_issues(instanceid, metric_name, account, timestamp, region, current_state, previous_state, cpu_process_op, mem_process_op, metric_val):
    ## Create Issue ##
    url = 'https://&amp;lt;your-user-name&amp;gt;.atlassian.net/rest/api/2/issue'
    username = os.environ['username']
    api_token = os.environ['token']
    project = 'AnirbanSpace'
    issue_type = 'Incident'
    assignee = os.environ['username']
    summ_metric  = '%CPU Utilization' if 'CPU' in metric_name else '%Memory Utilization' if 'mem' in metric_name else '%Filesystem Utilization' if metric_name == 'disk_used_percent' else None
    summary = f'Client | {account} | {instanceid} | {summ_metric} | Metric Value: {metric_val}'
    description = f'Client: Company\nAccount: {account}\nRegion: {region}\nInstanceID = {instanceid}\nTimestamp = {timestamp}\nCurrent State: {current_state}\nPrevious State = {previous_state}\nMetric Value = {metric_val}'

    issue_data = {
        "fields": {
            "project": {
                "key": "SCRUM"
            },
            "summary": summary,
            "description": description,
            "issuetype": {
                "name": issue_type
            },
            "assignee": {
                "name": assignee
            }
        }
    }
    data = json.dumps(issue_data)
    headers = {
        "Accept": "application/json",
        "Content-Type": "application/json"
    }
    auth = HTTPBasicAuth(username, api_token)
    response = requests.post(url, headers=headers, auth=auth, data=data)
    global issueid
    global issuekey
    global issuelink
    issueid = response.json().get('id')
    issuekey = response.json().get('key')
    issuelink = response.json().get('self')

    ################ Add Comment To Above Created JIRA Issue ###################
    output = cpu_process_op if metric_name == 'CPUUtilization' else mem_process_op if metric_name == 'mem_used_percent' else None
    comment_api_url = f"{url}/{issuekey}/comment"
    add_comment = requests.post(comment_api_url, headers=headers, auth=auth, data=json.dumps({"body": output}))

    ## Check the response
    if response.status_code == 201:
        print("Issue created successfully. Issue key:", response.json().get('key'))
    else:
        print(f"Failed to create issue. Status code: {response.status_code}, Response: {response.text}")

################## Send An Email ################
#################################################
def send_email(instanceid, metric_name, account, region, timestamp, current_state, current_reason, previous_state, previous_reason, cpu_process_op, mem_process_op, metric_val, issueid, issuekey, issuelink):
    ### Define a dictionary of custom input ###
    metric_list = {'mem_used_percent': 'Memory', 'disk_used_percent': 'Disk', 'CPUUtilization': 'CPU'}

    ### Conditions ###
    if previous_state == 'OK' and current_state == 'INSUFFICIENT_DATA' and metric_name in list(metric_list.keys()):
        metric_msg = metric_list[metric_name]
        output = cpu_process_op if metric_name == 'CPUUtilization' else mem_process_op if metric_name == 'mem_used_percent' else None
        email_body = f"Hi Team, \n\nPlease be informed that the {metric_msg} utilization alarm state has been changed to {current_state} for the instanceid {instanceid}. Please find below more information \n\nAlarm Details:\nMetricName = {metric_name}, \nAccount = {account}, \nTimestamp = {timestamp}, \nRegion = {region}, \nInstanceID = {instanceid}, \nCurrentState = {current_state}, \nReason = {current_reason}, \nMetricValue = {metric_val}, \nThreshold = 80.00 \n\nProcessOutput = \n{output}\nIncident Details:\nIssueID = {issueid}, \nIssueKey = {issuekey}, \nLink = {issuelink}\n\nRegards,\nAnirban Das,\nGlobal Cloud Operations Team"
        res = sns_client.publish(
            TopicArn = os.environ['snsarn'],
            Subject = f'Insufficient {metric_msg} Utilization Alarm : {instanceid}',
            Message = str(email_body)
        )
        print('Mail has been sent') if res else print('Email not sent')
    else:
        email_body = str(0)

################## Lambda Handler Function ################
###########################################################
def lambda_handler(event, context):
    instanceid = event['detail']['configuration']['metrics'][0]['metricStat']['metric']['dimensions']['InstanceId']
    metric_name = event['detail']['configuration']['metrics'][0]['metricStat']['metric']['name']
    account = event['account']
    timestamp = event['time']
    region = event['region']
    current_state = event['detail']['state']['value']
    current_reason = event['detail']['state']['reason']
    previous_state = event['detail']['previousState']['value']
    previous_reason = event['detail']['previousState']['reason']
    metric_val = 'NA'
    ##### function calling #####
    if metric_name == 'CPUUtilization':
        cpu_utilization(instanceid, metric_name, previous_state, current_state)
        create_issues(instanceid, metric_name, account, timestamp, region, current_state, previous_state, cpu_process_op, mem_process_op, metric_val)
        send_email(instanceid, metric_name, account, region, timestamp, current_state, current_reason, previous_state, previous_reason, cpu_process_op, mem_process_op, metric_val, issueid, issuekey, issuelink)
    elif metric_name == 'mem_used_percent':
        mem_utilization(instanceid, metric_name, previous_state, current_state)
        create_issues(instanceid, metric_name, account, timestamp, region, current_state, previous_state, cpu_process_op, mem_process_op, metric_val)
        send_email(instanceid, metric_name, account, region, timestamp, current_state, current_reason, previous_state, previous_reason, cpu_process_op, mem_process_op, metric_val, issueid, issuekey, issuelink)
    else:
        pass
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Insufficient Data Email Screenshot :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8acsc9610l5gf28ert6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8acsc9610l5gf28ert6.png" alt=" " width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Insufficient data JIRA Issue :&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug6qcdqg43p943xbcwea.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fug6qcdqg43p943xbcwea.png" alt=" " width="800" height="685"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion :
&lt;/h2&gt;

&lt;p&gt;In this article, we tested scenarios for both CPU and memory utilization, but the same auto-incident and auto-email functionality can be configured for many other metrics, which will significantly reduce the effort spent on monitoring and creating incidents. This solution presents one initial approach; there are certainly other ways to achieve the same goal. Please like and comment if you enjoyed this article or have any suggestions, so that we can incorporate them in upcoming articles. 🙂🙂&lt;/p&gt;

&lt;p&gt;Thanks!!&lt;br&gt;
Anirban Das&lt;/p&gt;

</description>
      <category>aws</category>
      <category>eventdriven</category>
      <category>devops</category>
      <category>python</category>
    </item>
    <item>
      <title>Automatic Golden Image Generation using CI/CD</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 23 Jun 2024 07:15:56 +0000</pubDate>
      <link>https://dev.to/aws-builders/automatic-golden-image-generation-using-cicd-12e2</link>
      <guid>https://dev.to/aws-builders/automatic-golden-image-generation-using-cicd-12e2</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Everyone, in every organization, security and compliance guardrails are put in place to keep things aligned with client expectations and agreements. There are many types of guardrails and compliance parameters, and golden image creation is one of them. Before diving deep, let's understand what a golden image is.&lt;br&gt;
A golden image is basically a base image with all the required supporting packages installed - agent packages, software and utility packages, vulnerability-scanning agents, and any other packages approved by the client. When you build a golden image for the first time, you have to make sure all the required tools are installed and running correctly on the server (Windows/Linux) to support the environment, and that everything is aligned with the approved SOE parameters document. In addition, the OS needs to be updated with the latest patches for the current month. Once all of this is done, take a snapshot of that instance and treat it as the base image, known as the golden image. This image is then used for all future server build activity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Diagram:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrfiupnx303w29cxffly.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyrfiupnx303w29cxffly.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Prerequisites:
&lt;/h2&gt;

&lt;p&gt;GitLab&lt;br&gt;
Terraform&lt;br&gt;
Ansible(optional)&lt;br&gt;
AWS Cloud Platform&lt;/p&gt;

&lt;h2&gt;
  
  
  Guidelines:
&lt;/h2&gt;

&lt;p&gt;In this project, I planned to build a golden image for the first time, since I didn't have any existing image - so we are starting from scratch. Let me first list the planned action items for this project -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build an AWS EC2 instance using Terraform.&lt;/li&gt;
&lt;li&gt;Provision the EC2 instance using Ansible.&lt;/li&gt;
&lt;li&gt;Create a CI/CD pipeline to run the sequence of activities.&lt;/li&gt;
&lt;li&gt;Once provisioning is complete, take an AMI of the instance.&lt;/li&gt;
&lt;li&gt;Lastly, terminate the instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;As this is done for the first time, Ansible is required because no OS hardening parameters have been implemented yet. Once the instance has been provisioned with the latest patches and all security standards, and the image has been created, Ansible will not be required for next month's activity because the OS hardening parameters will already be baked into last month's image.&lt;/em&gt;&lt;/p&gt;
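&lt;p&gt;&lt;em&gt;The action items above map naturally onto pipeline stages. The outline below is a hypothetical .gitlab-ci.yml sketch, not the exact pipeline used in this project - the stage names, scripts, playbook name, and the INSTANCE_ID variable are placeholders to show the shape of the flow.&lt;/em&gt;&lt;/p&gt;

```yaml
stages: [build, provision, image, cleanup]

build_instance:
  stage: build
  script:
    - terraform init
    - terraform apply -auto-approve

provision_instance:
  stage: provision
  script:
    - ansible-playbook -i inventory harden.yml   # patching + OS hardening

create_ami:
  stage: image
  script:
    - aws ec2 create-image --instance-id "$INSTANCE_ID" --name "Golden-Image_$(date +%F)"

terminate_instance:
  stage: cleanup
  script:
    - terraform destroy -auto-approve
```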

&lt;h2&gt;
  
  
  Build an Instance using Terraform
&lt;/h2&gt;

&lt;p&gt;I have taken a sample base image (not last month's golden image, since none existed) as a reference, fetched this image using a Terraform data source, and created a new EC2 instance from it.&lt;/p&gt;

&lt;p&gt;var.tf&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instance_type" {
  description = "ec2 instance type"
  type        = string
  default     = "t2.micro"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;data.tf:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## fetch AMI ID ##
data "aws_ami" "ami_id" {
  most_recent = true
  filter {
    name   = "tag:Name"
    values = ["Golden-Image_2024-06-13"]
  }
}

## Fetch SG and Keypair ##
data "aws_key_pair" "keypair" {
  key_name           = "keypair3705"
  include_public_key = true
}

data "aws_security_group" "sg" {
  filter {
    name   = "tag:Name"
    values = ["management-sg"]
  }
}

## Fetch IAM role ##
data "aws_iam_role" "instance_role" {
  name = "CustomEC2AdminAccess"
}

## Fetch networking details ##
data "aws_vpc" "vpc" {
  filter {
    name   = "tag:Name"
    values = ["custom-vpc"]
  }
}

data "aws_subnet" "subnet" {
  filter {
    name   = "tag:Name"
    values = ["management-subnet"]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;instance.tf&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_iam_instance_profile" "test_profile" {
  name = "InstanceProfile"
  role = data.aws_iam_role.instance_role.name
}

resource "aws_instance" "ec2" {
  ami                         = data.aws_ami.ami_id.id
  instance_type               = var.instance_type
  associate_public_ip_address = true
  availability_zone           = "us-east-1a"
  key_name                    = data.aws_key_pair.keypair.key_name
  security_groups             = [data.aws_security_group.sg.id, ]
  iam_instance_profile        = aws_iam_instance_profile.test_profile.name
  subnet_id                   = data.aws_subnet.subnet.id
  user_data                   = file("userdata.sh")

  root_block_device {
    volume_size = 15
    volume_type = "gp2"
  }
  tags = {
    "Name" = "GoldenImageVM"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;output.tf&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;output "ami_id" {
  value = {
    id               = data.aws_ami.ami_id.image_id
    arn              = data.aws_ami.ami_id.arn
    image_loc        = data.aws_ami.ami_id.image_location
    state            = data.aws_ami.ami_id.state
    creation_date    = data.aws_ami.ami_id.creation_date
    image_type       = data.aws_ami.ami_id.image_type
    platform         = data.aws_ami.ami_id.platform
    owner            = data.aws_ami.ami_id.owner_id
    root_device_name = data.aws_ami.ami_id.root_device_name
    root_device_type = data.aws_ami.ami_id.root_device_type
  }
}

output "ec2_details" {
  value = {
    arn         = aws_instance.ec2.arn
    id          = aws_instance.ec2.id
    private_dns = aws_instance.ec2.private_dns
    private_ip  = aws_instance.ec2.private_ip
    public_dns  = aws_instance.ec2.public_dns
    public_ip   = aws_instance.ec2.public_ip

  }
}

output "key_id" {
  value = {
    id          = data.aws_key_pair.keypair.id
    fingerprint = data.aws_key_pair.keypair.fingerprint
  }
}

output "sg_id" {
  value = data.aws_security_group.sg.id
}

output "role_arn" {
  value = {
    arn = data.aws_iam_role.instance_role.arn
    id  = data.aws_iam_role.instance_role.id
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;userdata.sh&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash
sudo yum install jq -y
##Fetching gitlab password from parameter store
GITLAB_PWD=`aws ssm get-parameter --name "gitlab-runner_password" --region 'us-east-1' | jq .Parameter.Value | xargs`

##Set the password for ec2-user
PASSWORD_HASH=$(openssl passwd -1 $GITLAB_PWD)
sudo usermod --password "$PASSWORD_HASH" ec2-user

## Create gitlab-runner user and set password
USER='gitlab-runner'
sudo useradd -m -u 1001 -p $(openssl passwd -1 $GITLAB_PWD) $USER

##Copy the Gitlab SSH Key to gitlab-runner server
sudo mkdir /home/$USER/.ssh
sudo chmod 700 /home/$USER/.ssh
Ansible_SSH_Key=`aws ssm get-parameter --name "Ansible-SSH-Key" --region 'us-east-1' | jq .Parameter.Value | xargs`
sudo echo $Ansible_SSH_Key &amp;gt; /home/$USER/.ssh/authorized_keys
sudo chown -R $USER:$USER /home/$USER/.ssh/
sudo chmod 600 /home/$USER/.ssh/authorized_keys
sudo echo "StrictHostKeyChecking no" &amp;gt;&amp;gt; /home/$USER/.ssh/config
sudo echo "$USER  ALL=(ALL) NOPASSWD  : ALL" &amp;gt; /etc/sudoers.d/00-$USER
sudo sed -i 's/^#PermitRootLogin.*/PermitRootLogin yes/; s/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd
sleep 40
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Here, we have used a shell script to install the prerequisites for Ansible, such as creating the user and granting sudo access.&lt;br&gt;
Provision EC2 instance using Ansible:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;em&gt;Before triggering the Ansible job in GitLab, please make sure you log in once from the GitLab runner to the server you built, because gitlab-runner is going to log in to the new server for Ansible provisioning, and the job will error out on that first connection if this isn't done.&lt;/em&gt;&lt;/p&gt;
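&lt;p&gt;One common way to satisfy that first-connection requirement without an interactive login is to pre-trust the new host's SSH key on the runner. This is only a sketch, under the assumption that the host-key prompt is what blocks the job; the IP below is illustrative.&lt;/p&gt;

```shell
# Hypothetical helper run on the GitLab runner: trust the new instance's host key
# so Ansible can SSH non-interactively. The IP is a placeholder; the pipeline
# would fetch the real one via `aws ec2 describe-instances`.
INSTANCE_PRIVATEIP="10.0.2.15"
mkdir -p "$HOME/.ssh"
# accept-new records the key on first connection instead of prompting
printf 'Host %s\n  StrictHostKeyChecking accept-new\n' "$INSTANCE_PRIVATEIP" | tee -a "$HOME/.ssh/config"
```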

&lt;p&gt;main.yml&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
- name: Set hostname
  hosts: server
  become: true
  gather_facts: false
  vars_files:
    - ../vars/variable.yml
  roles:
    - ../roles/hostnamectl

- name: Configure other services
  hosts: server
  become: true
  roles:
    - ../roles/ssh
    - ../roles/login_banner
    - ../roles/services
    - ../roles/timezone
    - ../roles/fs_integrity
    - ../roles/firewalld
    - ../roles/log_management
    - ../roles/rsyslog
    - ../roles/cron
    - ../roles/journald

- name: Start Prepatch
  hosts: server
  become: true
  roles:
    - ../roles/prepatch

- name: Start Patching
  hosts: server
  become: true
  roles:
    - ../roles/patch

- name: Start Postpatch
  hosts: server
  become: true
  roles:
    - ../roles/postpatch

- name: Reboot the server
  hosts: server
  become: true
  tasks:
    - reboot:
        msg: "Rebooting machine in 5 seconds"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h2&gt;


Prepare GitLab CI/CD Pipeline:
&lt;/h2&gt;



&lt;p&gt;There are five stages created for the entire deployment activity. It starts with validation, to make sure all required services are running as expected.&lt;/p&gt;

&lt;p&gt;If yes, it proceeds to build the resource (EC2) using Terraform. Here, I have used Terraform Cloud to make things more reliable and to store the state file in the managed backend provided by HashiCorp, but the Terraform CLI can be used without any issues.&lt;/p&gt;

&lt;p&gt;After a successful resource build, provisioning is performed to implement basic security standards and complete the OS hardening process using the Ansible CLI.&lt;/p&gt;

&lt;p&gt;At last, once provisioning with patching is completed, a pipeline job takes an AMI using AWS CLI commands, and a final job terminates the instance.&lt;br&gt;
Below are the required stages for this pipeline -&amp;gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Validation&lt;/li&gt;
&lt;li&gt;InstanceBuild&lt;/li&gt;
&lt;li&gt;InstancePatching&lt;/li&gt;
&lt;li&gt;TakeAMI&lt;/li&gt;
&lt;li&gt;Terminate&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;.gitlab-ci.yml&lt;/p&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;default:
  tags:
    - anirban

stages:
  - Validation
  - InstanceBuild
  - InstancePatching
  - TakeAMI
  - Terminate

job1:
  stage: Validation
  script:
    - sudo chmod +x check_version.sh
    - source check_version.sh
  except:
    changes:
      - README.md
  artifacts:
    when: on_success
    paths:
      - Validation_artifacts

job2:
  stage: InstanceBuild
  script:
    - sudo chmod +x BuildScript/1_Env.sh
    - source BuildScript/1_Env.sh
    - python3 BuildScript/2_CreateTFCWorkspace.py -vvv
  except:
    changes:
      - README.md
  artifacts:
    paths:
      - Validation_artifacts
      - content.tar.gz

job3:
  stage: InstancePatching
  script:
    - INSTANCE_PRIVATEIP=`aws ec2 describe-instances --filters "Name=tag:Name,Values=GoldenImageVM" --query Reservations[0].Instances[0].PrivateIpAddress | xargs`
    - echo -e "[server]\n$INSTANCE_PRIVATEIP" &amp;gt; ./Ansible/inventory
    - ansible-playbook ./Ansible/playbook/main.yml -i ./Ansible/inventory
    - sudo chmod +x BuildScript/7_Cleanup.sh
  when: manual
  except:
    changes:
      - README.md
  artifacts:
    when: on_success
    paths:
      - Validation_artifacts
      - ./Ansible/inventory

job4:
  stage: TakeAMI
  script:
    - echo '------------Fetching Instance ID------------'
    - INSTANCE_ID=`aws ec2 describe-instances --filters "Name=tag:Name,Values=GoldenImageVM" --query Reservations[0].Instances[0].InstanceId | xargs`
    - echo '----------Taking an Image of Instance-----------'
    - aws ec2 create-image --instance-id $INSTANCE_ID --name "GoldenImage" --description "Golden Image created on $(date -u +"%Y-%m-%dT%H:%M:%SZ")" --no-reboot --tag-specifications "ResourceType=image,Tags=[{Key=Name,Value=GoldenImage}]" "ResourceType=snapshot,Tags=[{Key=Name,Value=DiskSnaps}]"
  when: manual
  except:
    changes:
      - README.md

job5:
  stage: Terminate
  script:
    - echo '------------Fetching Instance ID------------'
    - INSTANCE_ID=`aws ec2 describe-instances --filters "Name=tag:Name,Values=GoldenImageVM" --query Reservations[0].Instances[0].InstanceId | xargs`
    - echo '--------------------Terminating the Instance--------------------'
    - aws ec2 terminate-instances --instance-ids $INSTANCE_ID
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
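&lt;p&gt;To make job3's first two script lines concrete, here is the inventory-building step in isolation, with the &lt;code&gt;describe-instances&lt;/code&gt; result mocked by a placeholder IP so the file format can be checked offline.&lt;/p&gt;

```shell
# Sketch of job3's inventory step: one [server] group containing the new
# instance's private IP. The IP is mocked here; the pipeline fetches the real
# one with `aws ec2 describe-instances`.
INSTANCE_PRIVATEIP="10.0.2.15"
printf '[server]\n%s\n' "$INSTANCE_PRIVATEIP" | tee inventory
```

&lt;p&gt;The resulting file is exactly what &lt;code&gt;ansible-playbook -i ./Ansible/inventory&lt;/code&gt; consumes.&lt;/p&gt;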
&lt;h2&gt;
  
  
  Validation:
&lt;/h2&gt;

&lt;p&gt;As per the below images, we can see the instance has been launched and provisioned successfully, and after that an AMI has been taken.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxy2t8mhr3wky1avgh3nf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxy2t8mhr3wky1avgh3nf.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz04xf96neweuifdflu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frz04xf96neweuifdflu8.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yc5qta1ko5sjoxfm8tb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5yc5qta1ko5sjoxfm8tb.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;So, we've reached the end of this blog. I hope it gives us all an idea of how a pipeline can be set up to build an image without any manual intervention. Note that in the pipeline I have followed a Continuous Delivery approach, hence a few stages are set to be triggered manually. One thing to highlight: do not set the Ansible stage (job3) in GitLab to run automatically - use the "when: manual" key to keep that stage manual. As I mentioned at the top, the Ansible stage requires the GitLab runner to log in to the newly built server; I could have added that login as a command in the pipeline, but I didn't, as I preferred to verify things by entering the server from the GitLab runner myself.&lt;/p&gt;

&lt;p&gt;Hopefully you have enjoyed this blog - please go through it and do the hands-on for sure🙂🙂. Let me know how you felt, what went well, and where I could have done a little better. All responses are welcome💗💗.&lt;/p&gt;

&lt;p&gt;For upcoming updates, please stay tuned and get in touch. In the meantime, let's dive into the GitHub repository below -&amp;gt; 👇👇&lt;br&gt;
&lt;a href="https://github.com/dasanirban834/automated-soe-image-pipeline" rel="noopener noreferrer"&gt;https://github.com/dasanirban834/automated-soe-image-pipeline&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thanks Much!!&lt;br&gt;
Anirban Das.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>gitlab</category>
      <category>terraform</category>
    </item>
    <item>
      <title>How to Set Up Runtime Monitoring ECS Cluster Using GuardDuty</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sat, 03 Feb 2024 14:19:30 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-set-up-runtime-monitoring-ecs-cluster-using-guardduty-3a23</link>
      <guid>https://dev.to/aws-builders/how-to-set-up-runtime-monitoring-ecs-cluster-using-guardduty-3a23</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh3nz01w1y3py5txq9z2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvh3nz01w1y3py5txq9z2.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon Web Services announced a new feature at re:Invent 2023 that enables GuardDuty to monitor and detect potential security activities at runtime in ECS clusters (Fargate and EC2) and EKS. GuardDuty uses ML algorithms and needs an agent installed on the container host to gather all the events occurring there. This agent can be installed manually (with the help of a Systems Manager document) on EC2, or automatically on Fargate.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Architecture:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6hai6dookzsmfocy6fy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz6hai6dookzsmfocy6fy.png" alt="Image description" width="800" height="424"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Implementation Steps:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Enable GuardDuty Runtime Monitoring&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create VPC Endpoint for GuardDuty Service&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create ECS Cluster with EC2 Host&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Set up ALB across EC2 hosts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Post Provision EC2 Instances and Install GD Agent&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Validation&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Enable GuardDuty Runtime Monitoring:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;For gathering runtime monitoring event details, it's recommended to enable runtime monitoring before EC2 instances are deployed.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open GuardDuty console &lt;a href="https://console.aws.amazon.com/guardduty/"&gt;https://console.aws.amazon.com/guardduty/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Enable GuardDuty service in your AWS account.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select "Runtime Monitoring" in the left navigation pane.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Under "Configuration" section, please enable "Runtime Monitoring" option.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
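&lt;p&gt;The same toggle can be scripted. The following is only a sketch: the detector ID is a placeholder, the command is printed rather than executed, and the exact feature payload should be verified against the GuardDuty API reference before use.&lt;/p&gt;

```shell
# Assumed CLI equivalent of the console steps (print-only; IDs are placeholders).
DETECTOR_ID="12abc34d567e8fa901bc2d34e56789f0"   # find yours via `aws guardduty list-detectors`
FEATURES='[{"Name":"RUNTIME_MONITORING","Status":"ENABLED"}]'
echo "aws guardduty update-detector --detector-id $DETECTOR_ID --features '$FEATURES'"
```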

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrs392222k940n2m312d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzrs392222k940n2m312d.png" alt="Image description" width="800" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Create VPC Endpoint for GuardDuty Service:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Before the agent is installed, a VPC endpoint for the GuardDuty service needs to be created in order to establish the connection between the agent and the GuardDuty console. Please follow the steps below -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open VPC console &lt;a href="https://console.aws.amazon.com/vpc/"&gt;https://console.aws.amazon.com/vpc/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Select "Endpoints" from left navigation bar.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create a new endpoint with the service name "com.amazonaws.&amp;lt;region&amp;gt;.guardduty-data".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Make sure "Enable DNS name" is checked under "Additional Settings".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose required subnets and security groups for endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Use below custom policy -&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "*",
            "Resource": "*",
            "Effect": "Allow",
            "Principal": "*"
        },
        {
            "Condition": {
                "StringNotEquals": {
                    "aws:PrincipalAccount": "&amp;lt;account-id&amp;gt;"
                }
            },
            "Action": "*",
            "Resource": "*",
            "Effect": "Deny",
            "Principal": "*"
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
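&lt;p&gt;The endpoint's service name follows the usual regional pattern. The sketch below just assembles it for us-east-1; the actual endpoint creation command is commented out since it needs real VPC, subnet, and security-group IDs (the PLACEHOLDER values are assumptions).&lt;/p&gt;

```shell
# Build the GuardDuty data-plane endpoint service name for a given region.
REGION="us-east-1"
SERVICE_NAME="com.amazonaws.${REGION}.guardduty-data"
echo "$SERVICE_NAME"
# aws ec2 create-vpc-endpoint --vpc-endpoint-type Interface \
#   --service-name "$SERVICE_NAME" --vpc-id vpc-PLACEHOLDER \
#   --subnet-ids subnet-PLACEHOLDER --security-group-ids sg-PLACEHOLDER \
#   --private-dns-enabled --policy-document file://endpoint-policy.json
```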



&lt;h2&gt;
  
  
  &lt;strong&gt;Create ECS Cluster with EC2 Host:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Here, we are creating an ECS cluster with underlying EC2 hosts. There are two capacity options - Fargate (serverless) or EC2 (IaaS). Fargate can be chosen when you want to avoid administrative effort and reduce operational headaches, but a cluster with EC2 hosts gives complete insight into activities happening at the host level, which helps with visibility and troubleshooting.&lt;br&gt;
&lt;em&gt;Note: During cluster creation, the ECS service will automatically build the underlying EC2 hosts, which require the Amazon Linux 2 kernel-5.10 OS version (if you are using an Amazon Linux 2 AMI). If the kernel version is &amp;lt; 5.10, the GuardDuty agent installation will fail with the error below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Error:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;install errors: error: Failed dependencies:
    kernel &amp;gt;= 5.4.0 is needed by amazon-guardduty-agent-1.0-0.x86_64
failed to run commands: exit status 1
Failed to install package; install status Failed
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here, we are creating the cluster with containers based on a basic httpd image stored in an ECR repository.&lt;/p&gt;

&lt;p&gt;Create a Cluster:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open ECS console &lt;a href="https://console.aws.amazon.com/ecs/"&gt;https://console.aws.amazon.com/ecs/&lt;/a&gt;

&lt;ul&gt;
&lt;li&gt;Click on Create Cluster.

&lt;ul&gt;
&lt;li&gt;Give Cluster name.&lt;/li&gt;
&lt;li&gt;Under "Infrastructure" section, choose below settings -

&lt;ul&gt;
&lt;li&gt;Click on checkbox: Amazon EC2 instances&lt;/li&gt;
&lt;li&gt;Auto Scaling group (ASG): Create new ASG&lt;/li&gt;
&lt;li&gt;Provisioning Model: On-Demand&lt;/li&gt;
&lt;li&gt;Operating system/Architecture: Amazon Linux 2 (kernel 5.10)&lt;/li&gt;
&lt;li&gt;EC2 Instance Type: t2.micro&lt;/li&gt;
&lt;li&gt;Desired Capacity:

&lt;ul&gt;
&lt;li&gt;Minimum: 2 Maximum: 4&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;SSH Keypair: Choose a keypair&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Under "Network settings for Amazon EC2 instances" -

&lt;ul&gt;
&lt;li&gt;Choose correct VPC, Subnets and Security Groups.&lt;/li&gt;
&lt;li&gt;Auto Assign Public IP: Turn On&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Submit the catalog.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Create a Task Definition:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;A task definition is a blueprint of the containers to be created; it takes a couple of inputs from the user, like CPU and memory requirements, task role, container image, and port details.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open "Task Definitions" from left navigation bar.&lt;/li&gt;
&lt;li&gt;Click on "Create new task definition"&lt;/li&gt;
&lt;li&gt;Under "Infrastructure requirements" section, choose below -

&lt;ul&gt;
&lt;li&gt;Launch Type: Amazon EC2 Instances&lt;/li&gt;
&lt;li&gt;Operating system/Architecture: Linux X86_64&lt;/li&gt;
&lt;li&gt;Network Mode: Default (later discussed why "default")&lt;/li&gt;
&lt;li&gt;Task Size:

&lt;ul&gt;
&lt;li&gt;CPU: 0.8 vCPU, Memory: 0.9 GB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Task Role: &lt;/li&gt;
&lt;li&gt;Under "Container" section, choose below -

&lt;ul&gt;
&lt;li&gt;Name: &lt;/li&gt;
&lt;li&gt;Image URI: &lt;/li&gt;
&lt;li&gt;Essential Container: YES&lt;/li&gt;
&lt;li&gt;Host Port: 80, Container Port: 80 (For simplicity, port 80 is taken)&lt;/li&gt;
&lt;li&gt;Resource Allocation Limits:

&lt;ul&gt;
&lt;li&gt;CPU: 0.5 vCPU, Memory Hard Limit: 0.7GB, Soft Limit: 0.5 GB&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;li&gt;Under "Logging" section, keep this as default&lt;/li&gt;
&lt;li&gt;Under HealthCheck section -

&lt;ul&gt;
&lt;li&gt;Command: CMD-SHELL,echo hello world&lt;/li&gt;
&lt;li&gt;Interval: 5 seconds&lt;/li&gt;
&lt;li&gt;Timeout: 5 seconds&lt;/li&gt;
&lt;li&gt;Start Period: 10 seconds&lt;/li&gt;
&lt;li&gt;Retries: 3
This will generate below JSON content of task definition -
&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:&amp;lt;account-id&amp;gt;:task-definition/mytaskdef1:2",
    "containerDefinitions": [
        {
            "name": "httpd",
            "image": "&amp;lt;image-uri&amp;gt;",
            "cpu": 512,
            "memory": 717,
            "memoryReservation": 512,
            "portMappings": [
                {
                    "name": "httpd-80-tcp",
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp",
                    "appProtocol": "http"
                }
            ],
            "essential": true,
            "environment": [],
            "environmentFiles": [],
            "mountPoints": [],
            "volumesFrom": [],
            "workingDirectory": "/",
            "ulimits": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/mytaskdef1",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                },
                "secretOptions": []
            },
            "healthCheck": {
                "command": [
                    "CMD-SHELL",
                    "echo hello world"
                ],
                "interval": 5,
                "timeout": 5,
                "retries": 3,
                "startPeriod": 10
            }
        }
    ],
    "family": "mytaskdef1",
    "taskRoleArn": "arn:aws:iam::&amp;lt;account-id&amp;gt;:role/ecsTaskExecutionRole",
    "executionRoleArn": "arn:aws:iam::&amp;lt;account-id&amp;gt;:role/ecsTaskExecutionRole",
    "revision": 2,
    "volumes": [],
    "status": "ACTIVE",
    "requiresAttributes": [
        {
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.17"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.21"
        },
        {
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "name": "ecs.capability.container-health-check"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.29"
        }
    ],
    "placementConstraints": [],
    "compatibilities": [
        "EC2"
    ],
    "requiresCompatibilities": [
        "EC2"
    ],
    "cpu": "819",
    "memory": "922",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    },
    "registeredAt": "2024-01-26T15:20:05.159Z",
    "registeredBy": "arn:aws:iam::&amp;lt;account-id&amp;gt;:root",
    "tags": []
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  &lt;strong&gt;Create Tasks:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Open the cluster and click on "Run Tasks" option.&lt;/li&gt;
&lt;li&gt;Under the Compute option, choose "Launch Type" as EC2.&lt;/li&gt;
&lt;li&gt;Under "Deployment Configuration" choose Application Type as "Task"&lt;/li&gt;
&lt;li&gt;Choose Task Definition which we created in last step.&lt;/li&gt;
&lt;li&gt;Choose Desired Tasks: 2&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flskrk66owj9k3udelwu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flskrk66owj9k3udelwu4.png" alt="Image description" width="800" height="96"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Set up ALB across EC2 hosts:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;We have set up an application load balancer which distributes the incoming traffic on port 80 across all the ECS tasks registered in the target group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsw5pa3uhmu1wpj9ax31u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsw5pa3uhmu1wpj9ax31u.png" alt="Image description" width="800" height="129"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The above snapshot shows that both target groups are in a healthy state, as a 200 success code is returned. The default website also comes up as expected.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6m2sevcuc4ip599ir5n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy6m2sevcuc4ip599ir5n.png" alt="Image description" width="640" height="203"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Post Provision EC2 Instances and Install GD Agent:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;So far, we haven't installed the GuardDuty agent on the EC2 instances, so the image below shows that both the instances and the cluster are unhealthy because of agent reporting issues - note the "Agent not reporting" message under the Issue section. Hence the agent needs to be installed manually; there are two different ways to do this -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Install GuardDuty Agent through System Managers Document.&lt;/li&gt;
&lt;li&gt;Install the agent manually by downloading the RPM scripts.
Before installing the agent, let's ensure that the kernel version is &amp;gt;= 5.10, as below -
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ecs-host-1b ~]# uname -r
5.10.205-195.804.amzn2.x86_64
[root@ecs-host-1b ~]#
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
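&lt;p&gt;The manual &lt;code&gt;uname -r&lt;/code&gt; check above can be scripted. This sketch parses the major and minor kernel numbers and applies the same 5.10 threshold; the sample string mirrors the output shown above, and on a real host you would substitute &lt;code&gt;$(uname -r)&lt;/code&gt;.&lt;/p&gt;

```shell
# Scripted form of the kernel prerequisite check for the GuardDuty agent.
KERNEL="5.10.205-195.804.amzn2.x86_64"   # substitute "$(uname -r)" on the host
MAJOR=${KERNEL%%.*}                      # "5"
REST=${KERNEL#*.}
MINOR=${REST%%.*}                        # "10"
if [ "$MAJOR" -gt 5 ] || { [ "$MAJOR" -eq 5 ] && [ "$MINOR" -ge 10 ]; }; then
  STATUS="kernel OK for GuardDuty agent"
else
  STATUS="kernel too old for GuardDuty agent"
fi
echo "$STATUS"   # prints "kernel OK for GuardDuty agent" for the sample string
```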



&lt;p&gt;Here, we have chosen to install the GuardDuty agent using Systems Manager -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the Systems Manager console and click the "Documents" option. Look for the "AmazonGuardDuty-ConfigureRuntimeMonitoringSsmPlugin" document.&lt;/li&gt;
&lt;li&gt;Provide the package name as "AmazonGuardDuty-RuntimeMonitoringSsmPlugin".&lt;/li&gt;
&lt;li&gt;Choose the instances where the agent needs to be installed and click Run.&lt;/li&gt;
&lt;li&gt;After successful installation, validate the agent status by running &lt;code&gt;sudo systemctl status amazon-guardduty-agent&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
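&lt;p&gt;If you prefer to script the Systems Manager step above, a minimal boto3 sketch could look like the following. The parameter schema of the GuardDuty document (an &lt;code&gt;Action&lt;/code&gt;/&lt;code&gt;Name&lt;/code&gt; pair, as in &lt;code&gt;AWS-ConfigureAWSPackage&lt;/code&gt;) and the instance ID are assumptions here, so verify them in the Documents console before running.&lt;/p&gt;

```python
# Sketch only: builds the SSM Run Command request that installs the
# GuardDuty agent. Parameter names are assumed to follow the
# AWS-ConfigureAWSPackage document schema.
DOCUMENT = "AmazonGuardDuty-ConfigureRuntimeMonitoringSsmPlugin"
PACKAGE = "AmazonGuardDuty-RuntimeMonitoringSsmPlugin"

def build_install_request(instance_ids):
    """Return kwargs for ssm.send_command targeting the given instances."""
    return {
        "DocumentName": DOCUMENT,
        "InstanceIds": list(instance_ids),
        "Parameters": {"Action": ["Install"], "Name": [PACKAGE]},
    }

if __name__ == "__main__":
    req = build_install_request(["i-0123456789abcdef0"])  # hypothetical ID
    # import boto3
    # ssm = boto3.client("ssm", region_name="us-east-1")
    # ssm.send_command(**req)  # requires AWS credentials
    print(req["DocumentName"])
```

&lt;p&gt;After the command completes, the same &lt;code&gt;systemctl status&lt;/code&gt; check still applies.&lt;/p&gt;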

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[root@ecs-host-1b ~]# sudo systemctl status amazon-guardduty-agent
● amazon-guardduty-agent.service - Amazon GuardDuty Agent
   Loaded: loaded (/usr/lib/systemd/system/amazon-guardduty-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2024-01-26 16:07:07 UTC; 4min 42s ago
 Main PID: 22504 (amazon-guarddut)
    Tasks: 14
   Memory: 114.6M (limit: 128.0M)
   CGroup: /system.slice/amazon-guardduty-agent.service
           └─22504 /opt/aws/amazon-guardduty-agent/bin/amazon-guardduty-agent --worker-threads 8
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now we can see that both EC2 instances and the ECS cluster are in a healthy state in the GuardDuty console, as below -&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F950jmr5gjtc3qszk6yig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F950jmr5gjtc3qszk6yig.png" alt="Image description" width="800" height="170"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Validation:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;After successful agent reporting, we'll see security findings being gathered in the GuardDuty console. To generate findings instantly, we have used the &lt;a href="https://github.com/awslabs/amazon-guardduty-tester/tree/master"&gt;https://github.com/awslabs/amazon-guardduty-tester/tree/master&lt;/a&gt; repository. After some time, we were able to see a couple of findings in the console, as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94rffbte23wb8jb50fr3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F94rffbte23wb8jb50fr3.png" alt="Image description" width="800" height="223"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hope this article helps you with the configuration as expected. Below are a few important links which you might need while setting this up -&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/how-runtime-monitoring-works-ec2.html"&gt;https://docs.aws.amazon.com/guardduty/latest/ug/how-runtime-monitoring-works-ec2.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/prereq-runtime-monitoring-ec2-support.html"&gt;https://docs.aws.amazon.com/guardduty/latest/ug/prereq-runtime-monitoring-ec2-support.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/managing-gdu-agent-ec2-manually.html"&gt;https://docs.aws.amazon.com/guardduty/latest/ug/managing-gdu-agent-ec2-manually.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring-agent-release-history.html"&gt;https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring-agent-release-history.html&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thanks for reading!! Let's connect in &lt;a href="https://www.linkedin.com/in/anirban-das-507816169/"&gt;https://www.linkedin.com/in/anirban-das-507816169/&lt;/a&gt;&lt;br&gt;
Happy Learning !! Cheers!!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>security</category>
      <category>community</category>
      <category>cloud</category>
    </item>
    <item>
      <title>How CloudWatch Network Monitor Performs Connectivity Test to EC2 Instances</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Mon, 01 Jan 2024 09:58:18 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-cloudwatch-network-monitor-performs-connectivity-test-to-ec2-instances-2c3m</link>
      <guid>https://dev.to/aws-builders/how-cloudwatch-network-monitor-performs-connectivity-test-to-ec2-instances-2c3m</guid>
      <description>&lt;h2&gt;
  
  
  &lt;strong&gt;Introduction:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Amazon Web Services recently released a CloudWatch feature, "Network Monitor", which performs connectivity tests between a source and a destination. This feature doesn't require any manual user intervention; it's a managed service by AWS which works smoothly without installing any agents. It works not only within the AWS environment but also between AWS and on-premises environments, to ensure there are no connectivity losses.&lt;/p&gt;

&lt;p&gt;Ref Link: &lt;a href="https://aws.amazon.com/about-aws/whats-new/2023/12/amazon-cloudwatch-network-monitor-generally-available/" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2023/12/amazon-cloudwatch-network-monitor-generally-available/&lt;/a&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Pattern:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4wj7sulmnizvrll5fel.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl4wj7sulmnizvrll5fel.gif" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pbstf6md9g0jdn8tvwm.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3pbstf6md9g0jdn8tvwm.jpg" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Solution Overview:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Follow the steps below to implement this -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open the CloudWatch console and click the "Network Monitor" option.&lt;/li&gt;
&lt;li&gt;Click "Create Monitor" and provide the basic required details as below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygeizln2fiwpd9se1ker.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fygeizln2fiwpd9se1ker.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Provide the subnet name as "Source" and give the "Destination IP" or IP ranges.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zpnpngvluxqsjnonekf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zpnpngvluxqsjnonekf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review the provided details and submit.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06cbjx1kqkoo52zts0gf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F06cbjx1kqkoo52zts0gf.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Once submitted, the monitoring resource will be created along with the provided probe information. It generally takes around 3-4 minutes to be set up completely and reach the "Active" state. After it has had some time to gather metrics, it will display as below.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg40wc1fmu2pottrtigef.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg40wc1fmu2pottrtigef.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Please make sure the resource status is "Healthy", which confirms a successful connectivity test.&lt;/li&gt;
&lt;/ul&gt;
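&lt;p&gt;For repeatable setups, the same monitor can be created with the SDK instead of the console. This is a hedged sketch: the boto3 client name &lt;code&gt;networkmonitor&lt;/code&gt; and the field names below (&lt;code&gt;monitorName&lt;/code&gt;, &lt;code&gt;aggregationPeriod&lt;/code&gt;, probes with &lt;code&gt;sourceArn&lt;/code&gt;/&lt;code&gt;destination&lt;/code&gt;) reflect my reading of the CloudWatch Network Monitor API, and the subnet ARN and IP are placeholders; check the current SDK reference before relying on them.&lt;/p&gt;

```python
# Sketch: build the CreateMonitor request used by CloudWatch Network
# Monitor. Field names are assumptions based on the service API.
def build_monitor_request(name, subnet_arn, dest_ip, port=443):
    return {
        "monitorName": name,
        "aggregationPeriod": 30,      # seconds, as used in this article
        "probes": [{
            "sourceArn": subnet_arn,  # the source subnet
            "destination": dest_ip,   # destination IP address
            "destinationPort": port,  # required for TCP probes
            "protocol": "TCP",        # or "ICMP"
        }],
    }

if __name__ == "__main__":
    req = build_monitor_request(
        "ec2-connectivity-monitor",                                # hypothetical
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc",   # hypothetical
        "10.0.2.25",
    )
    # import boto3
    # nm = boto3.client("networkmonitor")
    # nm.create_monitor(**req)  # requires AWS credentials
    print(req["probes"][0]["protocol"])
```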

&lt;p&gt;As we have set the aggregation time to 30s, the servers will be pinged at 30-second intervals, and the health status is reported from the time of creation.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Packet Loss:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;If the connectivity test fails or is interrupted due to some network glitch or failure, the packet loss section will show a value of "100%" with a graphical representation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2dx02a1l4gu64hdse9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv2dx02a1l4gu64hdse9y.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Round Trip Time(RTT):&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This defines the time traffic takes to travel from the source to the destination and back.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd1ruicnwgof162spen0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdd1ruicnwgof162spen0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Communication Protocols:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Only two protocols, TCP and ICMP, are supported by this feature for now. ICMP probes carry an echo request from the specified source address to the specified destination address; if the destination resource replies with an echo reply, the connectivity test is considered successful. RTT and packet loss are both calculated from metric information in both directions, from source to destination and vice versa.&lt;/p&gt;

&lt;p&gt;RTT = (time taken from source-to-destination) + (time taken from destination-to-source)&lt;br&gt;
Packet Loss = (% loss from source-to-destination) + (% loss from destination-to-source)&lt;/p&gt;
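&lt;p&gt;The two formulas can be sketched in a few lines of Python (illustrative numbers only; the cap at 100% is my assumption about how a fully failed probe is reported):&lt;/p&gt;

```python
# Worked example of the RTT and packet-loss formulas above.
def round_trip_time(src_to_dst_ms, dst_to_src_ms):
    # RTT = time source-to-destination + time destination-to-source
    return src_to_dst_ms + dst_to_src_ms

def packet_loss(pct_src_to_dst, pct_dst_to_src):
    # Combined loss across both directions, capped at 100% (assumption).
    return min(pct_src_to_dst + pct_dst_to_src, 100.0)

print(round_trip_time(12.5, 13.5))  # 26.0 (ms)
print(packet_loss(0.0, 0.0))        # 0.0, i.e. a healthy probe
```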

&lt;p&gt;Note: The port number for the corresponding protocol must be opened at the security group level of the instance. For ICMP, the port section takes "All", so no action is required apart from adding the ICMP rule; for TCP, the port needs to be taken care of.&lt;/p&gt;

&lt;p&gt;In the case of TCP, the probe sends TCP SYN packets from the specified source to the destination and expects a TCP SYN+ACK reply from the destination.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Supported Region:&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Asia Pacific (Hong Kong)    ap-east-1&lt;br&gt;
Asia Pacific (Mumbai)   ap-south-1&lt;br&gt;
Asia Pacific (Seoul)    ap-northeast-2&lt;br&gt;
Asia Pacific (Singapore)    ap-southeast-1&lt;br&gt;
Asia Pacific (Sydney)   ap-southeast-2&lt;br&gt;
Asia Pacific (Tokyo)    ap-northeast-1&lt;br&gt;
Canada West (Calgary)   ca-west-1&lt;br&gt;
Europe (Frankfurt)  eu-central-1&lt;br&gt;
Europe (Ireland)    eu-west-1&lt;br&gt;
Europe (London) eu-west-2&lt;br&gt;
Europe (Paris)  eu-west-3&lt;br&gt;
Europe (Stockholm)  eu-north-1&lt;br&gt;
Middle East (Bahrain)   me-south-1&lt;br&gt;
South America (São Paulo)  sa-east-1&lt;br&gt;
US East (N. Virginia)   us-east-1&lt;br&gt;
US East (Ohio)  us-east-2&lt;br&gt;
US West (N. California) us-west-1&lt;br&gt;
US West (Oregon)    us-west-2&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Create an Alarm:&lt;/strong&gt;
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Open CloudWatch metrics and choose the "AWS/NetworkMonitor" default namespace.&lt;/li&gt;
&lt;li&gt;Search with the correct probe ID.&lt;/li&gt;
&lt;li&gt;Select the metrics and click the "bell" icon to configure alarms.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmtou0n14umuksh6ztm0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmtou0n14umuksh6ztm0.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Keep the details as they are and set the condition that Packet Loss should not be greater than "0", as below -&lt;/li&gt;
&lt;/ul&gt;
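&lt;p&gt;The same alarm can also be defined through the &lt;code&gt;PutMetricAlarm&lt;/code&gt; API. In this sketch the metric name "PacketLoss" and the "Monitor"/"Probe" dimension names are assumptions; confirm them against the metric you selected in the console.&lt;/p&gt;

```python
# Sketch: build the PutMetricAlarm request for the packet-loss alarm.
# Metric and dimension names are assumptions; verify them in the console.
def build_alarm_request(monitor_name, probe_id):
    return {
        "AlarmName": "cw-network-packetloss-alarm-ec2",
        "Namespace": "AWS/NetworkMonitor",
        "MetricName": "PacketLoss",
        "Dimensions": [
            {"Name": "Monitor", "Value": monitor_name},
            {"Name": "Probe", "Value": probe_id},
        ],
        "Statistic": "Maximum",
        "Period": 30,              # matches the 30s aggregation period
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
    }

if __name__ == "__main__":
    req = build_alarm_request("ec2-connectivity-monitor", "probe-0abc")  # hypothetical IDs
    # import boto3
    # boto3.client("cloudwatch").put_metric_alarm(**req)  # needs credentials
    print(req["AlarmName"])
```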

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7awr3bbaajc3f8vrt5v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft7awr3bbaajc3f8vrt5v.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;As a trigger, we have created an EventBridge rule with a Lambda function, so that once the packet loss alarm is triggered, it sends an email with sufficient information using the SNS topic API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;EventBridge Rule Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Category: CloudWatch Alarm State Change&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Below is the schema of the event pattern.&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "source": ["aws.cloudwatch"],
  "detail-type": ["CloudWatch Alarm State Change"],
  "resources": ["arn:aws:cloudwatch:us-east-1::alarm:cw-network-packetloss-alarm-ec2"],
  "detail": {
    "state": {
      "value": ["ALARM"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
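&lt;p&gt;Equivalently, the rule can be created programmatically; &lt;code&gt;put_rule&lt;/code&gt; and &lt;code&gt;put_targets&lt;/code&gt; are the standard EventBridge APIs. The rule name here is hypothetical, and the alarm ARN from the pattern above can be added under &lt;code&gt;"resources"&lt;/code&gt; to scope the rule to a single alarm.&lt;/p&gt;

```python
import json

# Sketch: create the EventBridge rule for alarm state changes via boto3.
EVENT_PATTERN = {
    "source": ["aws.cloudwatch"],
    "detail-type": ["CloudWatch Alarm State Change"],
    "detail": {"state": {"value": ["ALARM"]}},
}

def build_rule_request(rule_name):
    return {
        "Name": rule_name,  # hypothetical rule name
        "EventPattern": json.dumps(EVENT_PATTERN),
        "State": "ENABLED",
    }

if __name__ == "__main__":
    req = build_rule_request("cw-network-packetloss-rule")
    # import boto3
    # events = boto3.client("events")
    # events.put_rule(**req)
    # events.put_targets(Rule=req["Name"],
    #                    Targets=[{"Id": "1", "Arn": lambda_function_arn}])
    print(req["Name"])
```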



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;- Integrate EventBridge rule with Lambda function.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9flu32a58udmtdz01mkc.png)

**Lambda Function Code:**

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import json
import boto3
import os

def lambda_handler(event, context):
    id = event['id']
    account = event['account']
    timestamps = event['time']
    region = event['region']
    alarmarn = event['resources'][0]
    alarmname = event['detail']['alarmName']
    alarmreason = event['detail']['state']['reason']
    previousState = event['detail']['previousState']['value']
    currentstate = event['detail']['state']['value']
    Monitorname = event['detail']['configuration']['metrics'][0]['metricStat']['metric']['dimensions']['Monitor']

    msg = f'Hi Team,\n\nPlease be informed that alarm \'{alarmname}\' in accountId {account} is in {currentstate} state. Please find below set of details to get more insights.\n\nAlarm Details:\n ID = {id},\nAccount = {account},\nTimestamp = {timestamps},\nRegion = {region},\nPreviousState = {previousState},\nCurrentState = {currentstate},\nAlarmARN = {alarmarn},\nAlarm_Reason = {alarmreason}\n\nThanks &amp;amp; Regards,\nAmazon Cloud Services'
    sns_client = boto3.client('sns')
    res = sns_client.publish(
        TopicArn = os.environ['snsarn'],
        Subject = 'Alarm: High Packet Loss Trigger Alert',
        Message = str(msg)
        )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
## **Testing:**

For simplicity, we have executed below command which changes the alarm status from OK to ALARM forcefully for short period of time.
As a result, alarm status of probe has been changed as below snap.

**Command**: _aws cloudwatch set-alarm-state --alarm-name "cw-network-packetloss-alarm-ec2" --state-value ALARM --state-reason "testing purposes"_

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ix6t5i37ii20o4ospid.png)

Once lambda function is triggered successfully, it interacts with SNS topic to send email messages to subscribed email address.


![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/moo5nesxi4ustahz6ok2.png)

## **Pricing:**

Please check below link to get pricing estimation on CloudWatch NetworkMonitor.

Link: https://aws.amazon.com/cloudwatch/pricing/ 

## **Conclusion:**

In this article we have seen how we can configure agentless network monitoring system within AWS network or hybrid network to ensure proper monitoring of packet loss and other metrices.
Hope this blog will help you to configure the things properly. Please let me know for more information or suggestion and follow me to get more on AWS.

Cheers!!
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>aws</category>
      <category>cloudwatch</category>
      <category>eventbridge</category>
      <category>lambda</category>
    </item>
    <item>
      <title>How to Get Custom Email Notification for EC2 State Changes Using EventBridge &amp; Lambda</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 24 Dec 2023 16:40:33 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-get-custom-email-notification-for-ec2-state-changes-3jno</link>
      <guid>https://dev.to/aws-builders/how-to-get-custom-email-notification-for-ec2-state-changes-3jno</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introduction:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In any sort of architecture, monitoring is an important part; without it, an architecture cannot be treated as well-architected. CloudWatch is there to provide the monitoring service, but apart from this, any kind of state change must be tracked closely so that the operations team can be aware of the change and act accordingly. Amazon EventBridge provides that level of triggering service, firing on different sorts of activities, and an SNS topic is there to get people notified. One of those use cases is EC2 state change.&lt;/p&gt;

&lt;p&gt;Suppose someone did a patching activity and rebooted after patching, or someone launched some new instances, or a server got rebooted due to a network glitch, but you're not notified as there is no alerting system configured. There is definitely a gap you need to fix; otherwise, servers would keep starting/rebooting/stopping and the team wouldn't be aware.&lt;/p&gt;

&lt;p&gt;To solve this, EventBridge comes into the picture with SNS topic integration. You might be thinking that SNS will send an alert to the recipients. Yes, that's correct, it will send an alert for sure, but that notification email will look clumsy, as you'll see a lot of information that isn't needed. Customers prefer neat, clear, "to-the-point" information for better clarity. The problem is that the SNS notification email cannot be modified according to the customer's wishes, which clearly means that scripting is needed to implement this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2rwjdoqfonc306da819.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2rwjdoqfonc306da819.gif" alt="Image description" width="529" height="302"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solutions Overview:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This blog consists of following steps:&lt;/p&gt;

&lt;p&gt;1. Create an EventBridge rule as below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs68rixkq0kj9195dghv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjs68rixkq0kj9195dghv.png" alt="Image description" width="602" height="923"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j3q2g0sl8o7yxek5feq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j3q2g0sl8o7yxek5feq.png" alt="Image description" width="604" height="932"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, here the EventBridge rule is checking the below API calls from CloudTrail -&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;RunInstances&lt;/li&gt;
&lt;li&gt;StopInstances&lt;/li&gt;
&lt;li&gt;StartInstances&lt;/li&gt;
&lt;li&gt;RebootInstances&lt;/li&gt;
&lt;/ul&gt;
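&lt;p&gt;For reference, a pattern matching those four API calls typically looks like the following sketch ("AWS API Call via CloudTrail" is the standard detail-type for CloudTrail-backed events); treat it as a starting point rather than the article's exact rule.&lt;/p&gt;

```python
import json

# Sketch of an EventBridge pattern for the four CloudTrail API calls
# listed above.
EVENT_PATTERN = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["RunInstances", "StopInstances",
                      "StartInstances", "RebootInstances"],
    },
}

print(json.dumps(EVENT_PATTERN, indent=2))
```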

&lt;ol&gt;
&lt;li&gt;Associate this rule with a Lambda function. Below is the Python code, which takes the event details in JSON as input and fetches the required details from the JSON content.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os
import json
import boto3

def lambda_handler(event, context):

    EventID = event['detail']['eventID']
    Account = event['account']
    Timestamp = event['time']
    Region = event['region']
    InstanceID = event['detail']['requestParameters']['instancesSet']['items']
    EventName = event['detail']['eventName']
    SourceIP = event['detail']['sourceIPAddress']
    InitiatedBy = event['detail']['userIdentity']['arn']

    msg_status = {'RunInstances': 'launched', 'StopInstances': 'stopped', 'StartInstances': 'started', 'TerminateInstances': 'terminated', 'RebootInstances': 'rebooted'}
    instance_id = []

    if len(InstanceID) &amp;gt; 1:
        for id in range(0, len(event['detail']['requestParameters']['instancesSet']['items'])):
            v = event['detail']['requestParameters']['instancesSet']['items'][id]['instanceId']
            instance_id.append(v)
        body = f'Hi Team, \n\nPlease be informed that multiple instances got {msg_status[EventName]} simultaneously.\n\nEventID = {EventID}, \nAccount = {Account}, \nTimestamp = {Timestamp}, \nRegion = {Region}, \nInstanceID = {instance_id}, \nEventName = {EventName}, \nSourceIP = {SourceIP}, \nInitiatedBy = {InitiatedBy} \n\nRegards,\nCloud Team'
        sns_client = boto3.client('sns')
        snsarn = os.environ['snsarn']
        res = sns_client.publish(
            TopicArn = snsarn,
            Subject = f'Alert-Multiple Instances are {msg_status[EventName]}',
            Message = str(body)
            )
    elif len(InstanceID) == 1:
        instance_id = InstanceID[0]['instanceId']
        body = f'Hi Team, \n\nThis is to inform you that EC2 instance with {instance_id} is {msg_status[EventName]}.Please find below information. \n\nEventID = {EventID}, \nAccount = {Account}, \nTimestamp = {Timestamp}, \nRegion = {Region}, \nInstanceID = {instance_id}, \nEventName = {EventName}, \nSourceIP = {SourceIP}, \nInitiatedBy = {InitiatedBy} \n\nRegards,\nCloud Team'

        sns_client = boto3.client('sns')
        snsarn = os.environ['snsarn']
        res = sns_client.publish(
            TopicArn = snsarn,
            Subject = f'Alert - {instance_id} is {msg_status[EventName]}',
            Message = str(body)
            )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;ol&gt;
&lt;li&gt;Here, the SNS topic ARN is stored in the Environment variables section of the Lambda function.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Note: Here, we didn't add the SNS topic in the Destination section, because the below unwanted behavior would occur if it were added there.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SNS will send the email as per the Python code, with the customizations you made. (Expected behavior)&lt;/li&gt;
&lt;li&gt;SNS would also send an email with the complete Lambda execution details like event ID, event timestamps, payload, etc., which are already part of the customized email, so there is no need for a separate email. Hence, adding the SNS topic as a destination has been omitted.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg91a7qvtiqg7qo85uumn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg91a7qvtiqg7qo85uumn.png" alt="Image description" width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Summary:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post guides you through configuring customized email notifications using EventBridge and Lambda with the help of Python scripting. The same approach can be used in other cases as well, like getting notified when an EC2 metric (CPU/memory/disk) is in the ALARM state; in that case, EventBridge is triggered once the alarm status switches from OK to ALARM. &lt;/p&gt;

&lt;p&gt;From a configuration-change perspective, this will also help a lot to identify details in depth with the AWS Config service. For more details, please check out &lt;/p&gt;
&lt;div class="crayons-card c-embed text-styles text-styles--secondary"&gt;
    &lt;a href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html" rel="noopener noreferrer"&gt;
      docs.aws.amazon.com
    &lt;/a&gt;
&lt;/div&gt;

&lt;p&gt;&lt;br&gt;&lt;br&gt;
Please follow me for more contents.&lt;br&gt;&lt;br&gt;
Thanks for Reading!!&lt;br&gt;&lt;br&gt;
Let's connect: &lt;a href="https://www.linkedin.com/in/anirban-das-507816169/"&gt;https://www.linkedin.com/in/anirban-das-507816169/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>python</category>
      <category>lambda</category>
      <category>eventbridge</category>
    </item>
    <item>
      <title>Get a Custom Weekly Inventory of EC2 Instances Using Lambda Function</title>
      <dc:creator>Anirban Das</dc:creator>
      <pubDate>Sun, 05 Nov 2023 02:29:28 +0000</pubDate>
      <link>https://dev.to/aws-builders/get-a-custom-weekly-inventory-of-ec2-instances-using-lambda-function-3kfb</link>
      <guid>https://dev.to/aws-builders/get-a-custom-weekly-inventory-of-ec2-instances-using-lambda-function-3kfb</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;While working on any AWS-based project, it's mandatory to have an inventory of EC2 instances on a weekly/monthly basis to get compact details on the instances and their attributes if the environment is very big. Also, from a compliance perspective, it helps the auditor understand the details very precisely, like when an instance was built, what the image version was or when that image was created, and whether the SSM agent is working fine or not; details like these can play a key role during a compliance assessment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rz67zarw7xpi4nw98w9.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rz67zarw7xpi4nw98w9.gif" alt="Image description" width="816" height="624"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Functionalities:
&lt;/h2&gt;

&lt;p&gt;This is a very straightforward solution to get a weekly EC2 instance inventory report. It uses EventBridge as a scheduler and a Lambda function as the trigger-based service which holds the actual Python script. Once the inventory file is generated, Lambda pushes it to an S3 bucket for long-term storage and in parallel sends an email notification using SES (Simple Email Service).&lt;/p&gt;
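&lt;p&gt;The weekly trigger described above can be sketched as a scheduled EventBridge rule; the cron expression (every Monday 06:00 UTC) and the rule name are illustrative choices, not taken from the article.&lt;/p&gt;

```python
# Sketch of the weekly EventBridge schedule that drives the inventory
# Lambda function. Rule name and schedule are illustrative.
def build_schedule_request(rule_name="weekly-ec2-inventory"):
    return {
        "Name": rule_name,
        "ScheduleExpression": "cron(0 6 ? * MON *)",  # Mondays, 06:00 UTC
        "State": "ENABLED",
    }

if __name__ == "__main__":
    req = build_schedule_request()
    # import boto3
    # boto3.client("events").put_rule(**req)  # then put_targets to the Lambda
    print(req["ScheduleExpression"])
```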

&lt;h2&gt;
  
  
  Let's Get Started:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Create a Lambda Function:&lt;/strong&gt;&lt;br&gt;
Create a Lambda function as below with the Python 3.11 runtime.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx5jnhw4xi0qgvbiaeze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcx5jnhw4xi0qgvbiaeze.png" alt="Image description" width="800" height="302"&gt;&lt;/a&gt;&lt;br&gt;
Before uploading the source code, just wanted to let you all know that Lambda doesn't bundle external Python community modules like xmltodict, openpyxl, etc. So here, while developing the code locally, we can install the Python modules in the project folder.&lt;br&gt;
Use &lt;code&gt;pip3 install openpyxl -t . --no-user&lt;/code&gt; to install Python modules in the same project directory. After that, compress the entire project directory. Below is the Python code to generate the inventory file.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import openpyxl
import time
import os
from botocore.exceptions import ClientError
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

client = boto3.client('ec2', region_name = 'us-east-1')
ssm_client = boto3.client('ssm', region_name = 'us-east-1')
s3 = boto3.client('s3', region_name = 'us-east-1')
ses = boto3.client('ses',region_name='us-east-1')

def instance_attributes():
    data = [['InstanceName', 'InstanceID', 'InstanceType', 'PrivateIP', 'Operating System', 'State', 'SGID','SubnetID','VPCID','AvailabilityZone','SSM', 'ImageID', 'ImageCreated', 'OSDisk', 'OSDiskID', 'OSDiskSize'],]
    name = ins_id = None  # guards the check below when no instances exist
    resp = client.describe_instances()
    for res in resp['Reservations']:
        for ins in res['Instances']:
            platform = ins['PlatformDetails']  # renamed so it doesn't shadow the os module
            priv_ip = ins['PrivateIpAddress']
            state = ins['State']['Name']
            subnetid = ins['SubnetId']
            vpcid = ins['VpcId']
            sgid = [sg['GroupId'] for sg in ins['SecurityGroups']][0]
            ins_id = ins['InstanceId']
            image_id = ins['ImageId']
            image_date = [img['CreationDate'] for img in client.describe_images(ImageIds=[image_id])['Images']]
            image_final_date = image_date[0] if image_date else 'Image may not be available'
            ins_ty = ins['InstanceType']
            monitoring_stat = ins['Monitoring']['State']
            launch_time = ins['LaunchTime']
            az = ins['Placement']['AvailabilityZone']
            ssm_agent_stat = ssm_client.get_connection_status(Target=ins_id)['Status']
            volname = [vol['DeviceName'] for vol in ins['BlockDeviceMappings']]
            volid = [vol['Ebs']['VolumeId'] for vol in ins['BlockDeviceMappings']]
            root_disk_name = volname[0]
            root_disk_id = volid[0]
            root_vol_size = [sz['Size'] for sz in client.describe_volumes(VolumeIds=[root_disk_id])['Volumes']][0]
            for tag_name in ins.get('Tags', []):
                if tag_name['Key'] == 'Name':
                    name = tag_name['Value']
                    data.append([name, ins_id, ins_ty, priv_ip, platform, state, sgid, subnetid, vpcid, az, ssm_agent_stat, image_id, image_final_date, root_disk_name, root_disk_id, root_vol_size])

    if name and ins_id:
        print('All the attributes are extracted from Instances')
        time.sleep(1)
        print('Creating an Inventory')
        workbook = openpyxl.Workbook()
        worksheet = workbook.active
        worksheet.title = 'Inventory'
        for row_data in data:
            worksheet.append(row_data)
        path = '/tmp/ec2-inventory.xlsx'
        workbook.save(path)
        print(f'Success!! Workbook created at {path} location', '\n')
        s3.upload_file(path, 'cf-templates-czmyfizwsx0a-us-east-1', 'ec2-inventory.xlsx')
    else:
        print('Fail!! No instance data found; check that instances exist and carry a Name tag', '\n')

def sendemail():
    SENDER = '&amp;lt;emailid registered in SES&amp;gt;'
    RECIPIENT = '&amp;lt;emailid registered in SES&amp;gt;'
    SUBJECT = "Weekly Instances Report"
    #CONFIGURATION_SET = "ConfigSet"
    ATTACHMENT = "/tmp/ec2-inventory.xlsx"
    BODY_TEXT = "Hello Team,\n\nPlease find the attached instance inventory report.\n\nRegards,\nAnirban Das\nCloud Operations Team"
    BODY_HTML = """\
    &amp;lt;html&amp;gt;
    &amp;lt;head&amp;gt;&amp;lt;/head&amp;gt;
    &amp;lt;body&amp;gt;
    &amp;lt;p&amp;gt;Hello Team&amp;lt;/p&amp;gt;
    &amp;lt;p&amp;gt;Please find the attached instance inventory report.&amp;lt;/p&amp;gt;
    &amp;lt;p&amp;gt;Regards,&amp;lt;/p&amp;gt;
    &amp;lt;p&amp;gt;Anirban Das&amp;lt;/p&amp;gt;
    &amp;lt;p&amp;gt;Cloud Operations Team&amp;lt;/p&amp;gt;
    &amp;lt;/body&amp;gt;
    &amp;lt;/html&amp;gt;
    """
    CHARSET = "utf-8"
    msg = MIMEMultipart('mixed')
    # Add subject, from and to lines.
    msg['Subject'] = SUBJECT 
    msg['From'] = SENDER 
    msg['To'] = RECIPIENT
    # Create a multipart/alternative child container.
    msg_body = MIMEMultipart('alternative')

    # Encode the text and HTML content and set the character encoding. This step is
    # necessary if you're sending a message with characters outside the ASCII range.
    textpart = MIMEText(BODY_TEXT.encode(CHARSET), 'plain', CHARSET)
    htmlpart = MIMEText(BODY_HTML.encode(CHARSET), 'html', CHARSET)

    # Add the text and HTML parts to the child container.
    msg_body.attach(textpart)
    msg_body.attach(htmlpart)

    # Define the attachment part and encode it using MIMEApplication.
    with open(ATTACHMENT, 'rb') as f:
        att = MIMEApplication(f.read())

    # Add a header to tell the email client to treat this part as an attachment,
    # and to give the attachment a name.
    att.add_header('Content-Disposition','attachment',filename=os.path.basename(ATTACHMENT))

    # Attach the multipart/alternative child container to the multipart/mixed
    # parent container.
    msg.attach(msg_body)

    # Add the attachment to the parent container.
    msg.attach(att)
    #print(msg)
    try:
        #Provide the contents of the email.
        response = ses.send_raw_email(
            Source=SENDER,
            Destinations=[
                RECIPIENT
            ],
            RawMessage={
                'Data':msg.as_string(),
            },
            #ConfigurationSetName=CONFIGURATION_SET
        )
    # Display an error if something goes wrong. 
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        print(f"Email sent! Message ID: {response['MessageId']}")

def lambda_handler(event, context):
    instance_attributes()
    sendemail()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
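&lt;p&gt;The sendemail() function above nests two MIME containers: a multipart/mixed parent (which carries the attachment) wrapping a multipart/alternative child (which carries the plain-text and HTML bodies). As a minimal standard-library sketch, with placeholder bodies and fake attachment bytes standing in for the real spreadsheet, the same layout looks like this:&lt;/p&gt;

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

# multipart/mixed parent: holds the alternative body plus the attachment.
msg = MIMEMultipart('mixed')

# multipart/alternative child: plain-text and HTML versions of the body.
body = MIMEMultipart('alternative')
body.attach(MIMEText('plain body', 'plain', 'utf-8'))
body.attach(MIMEText('html body', 'html', 'utf-8'))
msg.attach(body)

# Placeholder bytes stand in for the real xlsx file.
att = MIMEApplication(b'fake spreadsheet bytes')
att.add_header('Content-Disposition', 'attachment',
               filename='ec2-inventory.xlsx')
msg.attach(att)

print([part.get_content_type() for part in msg.get_payload()])
```

&lt;p&gt;Email clients render whichever child of multipart/alternative they support and treat the application part as a downloadable attachment.&lt;/p&gt;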



&lt;p&gt;Open the Lambda function you created and upload the zip file.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Create an EventBridge Rule:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open the AWS console and select the EventBridge service.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In the left navigation pane, you’ll see the “Rules” option; click on it.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the “Create Rule” option and fill in the “Name” and “Description” fields.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose the rule type as “Schedule”.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Click the “Continue in EventBridge Scheduler” option to proceed further.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejv70adsb0nv2fvp6yfa.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fejv70adsb0nv2fvp6yfa.png" alt="Image description" width="800" height="685"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Choose the occurrence as “Recurring Schedule” and select “Cron-based Schedule” as the type.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Here, the Lambda function is set up to trigger every SUNDAY at 12:00 PM (noon) UTC, so the cron expression is given as cron(00 12 ? * SUN *).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The flexible time window is left “Off”, which means the schedule fires exactly at the configured time.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
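&lt;p&gt;EventBridge evaluates cron expressions in UTC, and in cron(00 12 ? * SUN *) the first field is the minute and the second is the hour. A small illustrative helper (next_sunday_noon is a name made up for this sketch, not an AWS API) computes the same next fire time in plain Python:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def next_sunday_noon(now):
    # cron(00 12 ? * SUN *): minute 00, hour 12, every Sunday, in UTC.
    days_ahead = (6 - now.weekday()) % 7   # Monday=0 ... Sunday=6
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=12, minute=0, second=0, microsecond=0)
    if candidate > now:
        return candidate
    # Already past this Sunday's noon, so the rule fires next week.
    return candidate + timedelta(days=7)

now = datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc)   # a Wednesday
print(next_sunday_noon(now))
```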

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzknke5s8h80msob91jq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzknke5s8h80msob91jq.png" alt="Image description" width="720" height="790"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Choose the target as the Lambda function we created in step 1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Finally, submit the request.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0cy87v34fgdrp36gxd5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk0cy87v34fgdrp36gxd5.png" alt="Image description" width="720" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Create Verified Identities in SES:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Open the Simple Email Service console and click the “Verified Identities” option.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Choose “Email address” as the identity type and provide the email address.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Once created, you will get an email from AWS; click the confirmation link to approve the identity request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After that, verify the status of the identity. It must be “Verified”.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Testing:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As we have created the EventBridge rule with a particular cron schedule, it will be triggered as per that schedule. For ad hoc testing, we can use the “Test” option in Lambda.&lt;/p&gt;

&lt;p&gt;The Test option normally requires some JSON content, because Lambda’s handler takes an event and a context as input, so deleting the payload entirely would result in an error. That’s why we use an empty JSON document, i.e. {}, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w8vlevl2pbx62ia2uvy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1w8vlevl2pbx62ia2uvy.png" alt="Image description" width="720" height="316"&gt;&lt;/a&gt;&lt;/p&gt;
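&lt;p&gt;To see why the empty payload is enough, consider this minimal stand-in handler (illustrative only, not the inventory code): Lambda always invokes handler(event, context), so {} is simply the smallest valid event.&lt;/p&gt;

```python
# Illustrative stand-in: Lambda calls handler(event, context) on every
# invocation, so an empty dict is the smallest valid test event.
def lambda_handler(event, context):
    return {'statusCode': 200, 'received': event}

print(lambda_handler({}, None))
```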

&lt;p&gt;Below is the output you’ll get after testing through Lambda.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx9iyx8tgp71t684hub9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmx9iyx8tgp71t684hub9.png" alt="Image description" width="720" height="136"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is the kind of email you’ll receive with an attachment of EC2 inventory.&lt;/p&gt;

&lt;p&gt;Mail Snapshot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foilhs9psl0n6iigu6e21.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foilhs9psl0n6iigu6e21.png" alt="Image description" width="720" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Inventory Snapshot:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczlqd5yh1xkm3iee6f9l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fczlqd5yh1xkm3iee6f9l.png" alt="Image description" width="720" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;~~ Happy Ending ~~&lt;/p&gt;

&lt;p&gt;Hope this blog helps you understand how a good inventory can be generated for small as well as large environments. Here we have chosen a few attributes of EC2 instances and their related volumes, but there are many more attributes that can be included in the same inventory, depending on the requirement. 🙂🙂&lt;/p&gt;

&lt;p&gt;Cheers!&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
