<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hung____</title>
    <description>The latest articles on DEV Community by Hung____ (@hung____).</description>
    <link>https://dev.to/hung____</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1332820%2F8c937ae8-146a-486c-9601-f3d61cb5de45.png</url>
      <title>DEV Community: Hung____</title>
      <link>https://dev.to/hung____</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hung____"/>
    <language>en</language>
    <item>
      <title>Document Processing Using Amazon Bedrock Data Automation (BDA)</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Wed, 14 Jan 2026 08:53:37 +0000</pubDate>
      <link>https://dev.to/hung____/document-processing-using-amazon-bedrock-data-automation-bda-4oe5</link>
      <guid>https://dev.to/hung____/document-processing-using-amazon-bedrock-data-automation-bda-4oe5</guid>
      <description>&lt;p&gt;&lt;strong&gt;AWS Bedrock Data Automation&lt;/strong&gt; (BDA) is a cloud-based service designed to make it easier to get insights from unstructured data such as documents, images, video, and audio. .&lt;/p&gt;

&lt;p&gt;Here are some example use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Document processing&lt;/strong&gt;: BDA helps automate intelligent document processing (IDP) at scale without requiring complex steps such as document classification, data extraction, normalization, or validation. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Media analysis&lt;/strong&gt;: BDA enriches unstructured video content by generating scene-level summaries, detecting unsafe or explicit material, extracting on-screen text, and classifying content based on advertisements or brands.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generative AI assistants&lt;/strong&gt;: BDA improves retrieval-augmented generation (RAG)–based question-answering systems by supplying detailed, specific information extracted from documents, images, video, and audio.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this blog, I want to walk through the AWS BDA workshop.&lt;/p&gt;

&lt;p&gt;Here is the official workshop: &lt;a href="https://catalog.us-east-1.prod.workshops.aws/workshops/c64e3606-ab68-4521-81ea-b2eb36c993b9/en-US" rel="noopener noreferrer"&gt;https://catalog.us-east-1.prod.workshops.aws/workshops/c64e3606-ab68-4521-81ea-b2eb36c993b9/en-US&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here is my forked repo, which has the updated template and notebooks with all the results: &lt;a href="https://github.com/Hung-00/sample-document-processing-with-amazon-bedrock-data-automation" rel="noopener noreferrer"&gt;https://github.com/Hung-00/sample-document-processing-with-amazon-bedrock-data-automation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because the original template in the workshop has some problems when deployed (it references an outdated LLM), I updated my own template, which you can use instead of the original: &lt;a href="https://github.com/Hung-00/sample-document-processing-with-amazon-bedrock-data-automation/blob/main/bda.yaml" rel="noopener noreferrer"&gt;https://github.com/Hung-00/sample-document-processing-with-amazon-bedrock-data-automation/blob/main/bda.yaml&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You should also complete the workshop with the updated notebooks in my forked repo, since they include some changes to the LLM being used.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding the Core Concepts
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/bda-standard-output.html" rel="noopener noreferrer"&gt;BDA's standard output&lt;/a&gt; feature provides immediate value with least configuration. Simply send your file to BDA, and it returns commonly required information based on the data type:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Documents&lt;/strong&gt;: Page-level text extraction, element detection (tables, figures, charts), structural analysis with markdown formatting, and document summaries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Images&lt;/strong&gt;: Content moderation, text detection, and image summaries&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Video&lt;/strong&gt;: Scene summaries, transcripts, and content moderation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Audio&lt;/strong&gt;: Transcriptions and audio summaries&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What makes standard output powerful is its flexibility. You can configure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Response Granularity&lt;/strong&gt;: Choose from document, page, element, line, or word-level extraction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Text Formats&lt;/strong&gt;: Get results in plaintext, markdown, HTML, or CSV&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bounding Boxes&lt;/strong&gt;: Extract precise element locations on pages&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generative Fields&lt;/strong&gt;: Enable AI-generated summaries and descriptions for figures and charts&lt;/li&gt;
&lt;/ul&gt;
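&lt;p&gt;As a sketch of how those options fit together, here is how I would assemble a document standard-output configuration as a plain Python dict. The field names mirror the shape used when configuring a BDA project, but treat them as assumptions and verify against the current API reference:&lt;/p&gt;

```python
def build_standard_output_config(granularity_types, text_formats,
                                 bounding_boxes=True, generative_fields=True):
    """Assemble a document standard-output configuration dict.

    NOTE: key names here are an assumption based on the BDA project
    configuration shape; check the current API docs before relying on them.
    """
    def state(enabled):
        return "ENABLED" if enabled else "DISABLED"

    return {
        "document": {
            "extraction": {
                # Response granularity: DOCUMENT, PAGE, ELEMENT, LINE, or WORD
                "granularity": {"types": granularity_types},
                # Bounding boxes give precise element locations on each page
                "boundingBox": {"state": state(bounding_boxes)},
            },
            # AI-generated summaries/descriptions for figures and charts
            "generativeField": {"state": state(generative_fields)},
            # Text formats: PLAIN_TEXT, MARKDOWN, HTML, or CSV
            "outputFormat": {"textFormat": {"types": text_formats}},
        }
    }

config = build_standard_output_config(["PAGE", "ELEMENT"], ["MARKDOWN"])
```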

&lt;p&gt;And when you need specific information extracted from documents or images, &lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/bda-custom-output-idp.html" rel="noopener noreferrer"&gt;custom output with blueprints&lt;/a&gt; is your solution. A blueprint is essentially a schema that defines exactly what fields you want to extract, their data types, and validation rules.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Features of Blueprints:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Catalog Blueprints&lt;/strong&gt;: Pre-built blueprints for common documents like forms, paystubs, receipts, driver's licenses, bank statements, and medical insurance cards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Blueprints&lt;/strong&gt;: Define your own schemas with fields, groups, and tables&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automatic Matching&lt;/strong&gt;: When processing files with multiple document types, BDA automatically matches each document to the appropriate blueprint&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Normalization&lt;/strong&gt;: Apply natural language context for data validation and normalization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Explore the notebooks in the repository to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;11_getting_started_with_bda.ipynb&lt;/strong&gt;: Learn BDA basics and API workflow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;12_standard_output_extended.ipynb&lt;/strong&gt;: Deep dive into standard output configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;13_custom_outputs_and_blueprints.ipynb&lt;/strong&gt;: Master custom blueprints and projects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;21_mortgage_and_lending.ipynb&lt;/strong&gt;: Build a mortgage document processing solution&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;22_medical_claims_processing.ipynb&lt;/strong&gt;: Create an end-to-end claims processing workflow&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhb8uif5puu2zzbg3pq1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkhb8uif5puu2zzbg3pq1.png" alt=" " width="280" height="351"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will go through two real-world use cases from the workshop.&lt;/p&gt;

&lt;h3&gt;
  
  
  Mortgage and Lending: Accelerating Loan Processing
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku6x1r8qrm3xcp7yspyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fku6x1r8qrm3xcp7yspyt.png" alt=" " width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The mortgage industry handles massive volumes of documentation for each loan application. A typical lending package includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Identity verification documents (driver's licenses, passports)&lt;/li&gt;
&lt;li&gt;Financial documents (bank statements, W-2 forms, paystubs, checks)&lt;/li&gt;
&lt;li&gt;Property documents (homeowner insurance applications, appraisals)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Challenge&lt;/strong&gt;: Manual review of these documents is slow, expensive, and error-prone. Loan officers spend hours verifying information across multiple document types.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The BDA Solution&lt;/strong&gt;: By creating a project with multiple blueprints (both catalog and custom), BDA can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Automatically split&lt;/strong&gt; multi-page PDF packages into individual documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Classify&lt;/strong&gt; each document type (driver's license, bank statement, W-2, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Match&lt;/strong&gt; documents to the appropriate blueprint&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extract&lt;/strong&gt; structured data from each document&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Validate&lt;/strong&gt; information consistency across documents&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each document is processed with its specific blueprint, extracting exactly the fields needed for loan verification. Processing time drops from hours to minutes, with higher accuracy and consistency.&lt;/p&gt;

&lt;p&gt;Using this custom blueprint to process a Homeowner Insurance Form&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "$schema": "http://json-schema.org/draft-07/schema#",
    "description": "This blueprint will process a homeowners insurance application form",
    "class": "default",
    "type": "object",
    "properties": {
        "Insured Name": {
            "type": "string",
            "inferenceType": "explicit",
            "instruction": "Insured's Name"
        },
        "Insurance Company": {
            "type": "string",
            "inferenceType": "explicit",
            "instruction": "insurance company name"
        },
        "Insured Address": {
            "type": "string",
            "inferenceType": "explicit",
            "instruction": "the address of the insured property"
        },
        "Email Address": {
            "type": "string",
            "inferenceType": "explicit",
            "instruction": "the primary email address"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
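&lt;p&gt;To register a schema like this programmatically, I serialize it and pass it to the create_blueprint call on the bedrock-data-automation boto3 client. A minimal sketch; the parameter and response field names are my understanding of the API (verify them against the SDK reference), and the blueprint name is a placeholder:&lt;/p&gt;

```python
import json

def blueprint_payload(name, schema_dict):
    """Build the kwargs for create_blueprint on the
    'bedrock-data-automation' client (names assumed; check the SDK docs)."""
    return {
        "blueprintName": name,   # placeholder name, choose your own
        "type": "DOCUMENT",      # this blueprint targets documents
        "schema": json.dumps(schema_dict),
    }

schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "description": "This blueprint will process a homeowners insurance application form",
    "class": "default",
    "type": "object",
    "properties": {
        "Insured Name": {
            "type": "string",
            "inferenceType": "explicit",
            "instruction": "Insured's Name",
        },
    },
}

payload = blueprint_payload("homeowner-insurance-form", schema)

# import boto3
# bda_client = boto3.client("bedrock-data-automation")
# response = bda_client.create_blueprint(**payload)
# blueprint_arn = response["blueprint"]["blueprintArn"]
```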



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu834g9ir2l9hk5txyun6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu834g9ir2l9hk5txyun6.png" alt=" " width="800" height="1052"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Invoke data automation with code like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = run_client.invoke_data_automation_async(
    inputConfiguration={'s3Uri': f"s3://{bucket_name}/{object_name}"},
    outputConfiguration={'s3Uri': f"s3://{bucket_name}/{output_name}"},
    blueprints=[{'blueprintArn': blueprint_arn, 'stage': 'LIVE'}],
    dataAutomationProfileArn=dataAutomationProfileArn)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
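&lt;p&gt;The invocation is asynchronous: the call returns an invocation ARN and the job runs in the background, so you poll for its status before reading the results from S3. A small polling sketch; the get_data_automation_status call and the status values are my understanding of the runtime API, so double-check them in the SDK docs:&lt;/p&gt;

```python
import time

def wait_for_job(run_client, invocation_arn, delay=5, max_attempts=60):
    """Poll a BDA job until it leaves the in-progress states.

    `run_client` is the same 'bedrock-data-automation-runtime' boto3 client
    used for invoke_data_automation_async; status names are assumptions.
    """
    for _ in range(max_attempts):
        status = run_client.get_data_automation_status(
            invocationArn=invocation_arn)
        if status["status"] not in ("Created", "InProgress"):
            # e.g. Success, ServiceError, or ClientError
            return status
        time.sleep(delay)
    raise TimeoutError("BDA job did not finish in time")

# status = wait_for_job(run_client, response["invocationArn"])
```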



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsr2mi4eedqlpt1vg205y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsr2mi4eedqlpt1vg205y.png" alt=" " width="800" height="407"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A lending package is a single PDF file that contains multiple documents needed to apply for a loan, and BDA can handle that as well.&lt;/p&gt;

&lt;p&gt;When processing a 50-page lending package, BDA automatically detects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A driver's license on pages 1-2&lt;/li&gt;
&lt;li&gt;Bank statements on pages 3-15&lt;/li&gt;
&lt;li&gt;W-2 forms on pages 16-18&lt;/li&gt;
&lt;li&gt;Paystubs on pages 19-30&lt;/li&gt;
&lt;li&gt;A check image on page 31&lt;/li&gt;
&lt;li&gt;Insurance documents on pages 32-50&lt;/li&gt;
&lt;/ul&gt;
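&lt;p&gt;In the job output, this split shows up as one entry per detected document in the job metadata. Here is a sketch of how I walk that metadata to list the segments; the field names mirror the job_metadata.json shape I saw in the workshop output, so treat them as assumptions and adjust to your own results:&lt;/p&gt;

```python
def summarize_segments(job_metadata):
    """List (start_page, end_page, status) for each detected segment.

    Field names (output_metadata, segment_metadata, start_page_index, ...)
    are assumptions based on workshop output; adjust to your actual JSON.
    """
    rows = []
    for asset in job_metadata.get("output_metadata", []):
        for seg in asset.get("segment_metadata", []):
            rows.append((
                seg.get("start_page_index"),
                seg.get("end_page_index"),
                seg.get("custom_output_status"),
            ))
    return rows

# Example shape, matching the assumed field names above:
example = {
    "output_metadata": [{
        "segment_metadata": [
            {"start_page_index": 0, "end_page_index": 1,
             "custom_output_status": "MATCH"},
            {"start_page_index": 2, "end_page_index": 14,
             "custom_output_status": "MATCH"},
        ],
    }],
}
```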

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9r9or001rx9fwpymofm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9r9or001rx9fwpymofm.png" alt=" " width="800" height="529"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favi45aak8aua8crmfm9m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favi45aak8aua8crmfm9m.png" alt=" " width="800" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0t5pswu5ouho7e5a4uu4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0t5pswu5ouho7e5a4uu4.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Medical Claims Processing
&lt;/h3&gt;

&lt;p&gt;Healthcare organizations process millions of insurance claims annually. Each claim involves multiple documents, data validation, and policy verification.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzjxraiieddjjzrvwdri.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwzjxraiieddjjzrvwdri.png" alt=" " width="800" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The medical claims solution demonstrates BDA's power when integrated with Amazon Bedrock Agents and Knowledge Bases:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Document Ingestion&lt;/strong&gt;: Medical claim forms (CMS 1500) are submitted and stored in S3&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;BDA Processing&lt;/strong&gt;: A custom blueprint extracts all claim fields including patient information, provider details, diagnosis codes, procedure codes, and charges&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent Orchestration&lt;/strong&gt;: A Bedrock Agent receives the extracted data and orchestrates the verification workflow&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Action Groups&lt;/strong&gt;: The agent uses Lambda-backed action groups to:

&lt;ul&gt;
&lt;li&gt;Query member and patient information from Aurora PostgreSQL&lt;/li&gt;
&lt;li&gt;Validate coverage eligibility&lt;/li&gt;
&lt;li&gt;Check claim data consistency&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge Base Integration&lt;/strong&gt;: The agent queries a Bedrock Knowledge Base containing Evidence of Coverage (EoC) documents to verify:

&lt;ul&gt;
&lt;li&gt;Treatment coverage under the patient's plan&lt;/li&gt;
&lt;li&gt;Policy limits and exclusions&lt;/li&gt;
&lt;li&gt;Pay requirements&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Report Generation&lt;/strong&gt;: The agent generates a comprehensive verification report and stores it in S3&lt;/li&gt;
&lt;/ol&gt;
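&lt;p&gt;Step 3 is where the extracted fields are handed to the agent. A sketch of that hand-off with boto3; the agent IDs are placeholders, and reading the streamed completion follows my understanding of the bedrock-agent-runtime invoke_agent response, so verify it against the SDK docs:&lt;/p&gt;

```python
def build_claim_prompt(claim_fields):
    """Turn extracted claim fields into a verification request for the agent."""
    lines = [f"{name}: {value}" for name, value in claim_fields.items()]
    return ("Verify this medical claim against the member's coverage:\n"
            + "\n".join(lines))

# import boto3
# agents = boto3.client("bedrock-agent-runtime")
# response = agents.invoke_agent(
#     agentId="AGENT_ID",             # placeholder
#     agentAliasId="AGENT_ALIAS_ID",  # placeholder
#     sessionId="claim-001",
#     inputText=build_claim_prompt(extracted_claim_fields),
# )
# # The completion arrives as an event stream of chunks:
# report = "".join(
#     event["chunk"]["bytes"].decode()
#     for event in response["completion"] if "chunk" in event
# )
```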

&lt;p&gt;&lt;strong&gt;Key Innovation&lt;/strong&gt;: By combining BDA's extraction capabilities with Bedrock Agents' orchestration and Knowledge Bases' RAG capabilities, the solution provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High accuracy&lt;/strong&gt; in field extraction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated verification&lt;/strong&gt; against policy documents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistent decision-making&lt;/strong&gt; based on documented coverage rules&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complete audit trails&lt;/strong&gt; for compliance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ingest Evidence of Coverage documents directly into the Knowledge Base.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr9a15wqol9f65ua773y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frr9a15wqol9f65ua773y.png" alt=" " width="800" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From this health insurance claim form:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwvqnh8znilruim693v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frwwvqnh8znilruim693v.png" alt=" " width="707" height="914"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BDA can extract information based on the blueprint:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5cwevvx5ueeemiclbj8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr5cwevvx5ueeemiclbj8.png" alt=" " width="316" height="760"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now we invoke the AI agent for claim verification. Everything completes almost perfectly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjne7lwkafokhzk1qwsy7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjne7lwkafokhzk1qwsy7.png" alt=" " width="800" height="482"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;I think &lt;strong&gt;Amazon Bedrock Data Automation&lt;/strong&gt; represents an innovative shift in how organizations handle unstructured data. By combining powerful extraction capabilities with flexible configuration options and integration with other AWS services, BDA enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mortgage lenders&lt;/strong&gt; to process loan applications 10x faster&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare providers&lt;/strong&gt; to automate claims processing with agent-based verification&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Financial institutions&lt;/strong&gt; to extract insights from complex reports&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Legal teams&lt;/strong&gt; to process and analyze large document sets&lt;/li&gt;
&lt;li&gt;... and many more use cases.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I think this is amazing; go have a look at this service.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>ai</category>
      <category>data</category>
    </item>
    <item>
      <title>AWS DevOps Agent Demo: Investigating ALB Health Check Failures</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Fri, 12 Dec 2025 11:14:51 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-devops-agent-demo-investigating-alb-health-check-failures-2be6</link>
      <guid>https://dev.to/aws-builders/aws-devops-agent-demo-investigating-alb-health-check-failures-2be6</guid>
      <description>&lt;p&gt;&lt;strong&gt;AWS DevOps Agent&lt;/strong&gt; is a service that autonomously investigates incidents and identifies root causes. &lt;br&gt;
In this demo, I'll try to simulate a scenario where EC2 instances behind an Application Load Balancer start failing health checks, and watch AWS DevOps Agent diagnose the problem.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;In production environments, one of the most common incidents is ALB targets becoming unhealthy. This can happen for many reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application crashes&lt;/li&gt;
&lt;li&gt;Database connection failures&lt;/li&gt;
&lt;li&gt;Memory exhaustion&lt;/li&gt;
&lt;li&gt;Dependency timeouts&lt;/li&gt;
&lt;li&gt;Misconfigured health checks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this demo, I'll deploy a Flask web application behind an ALB and simulate a database connection failure that causes the health endpoint to return 503 errors. The ALB will mark the targets as unhealthy, trigger CloudWatch alarms, and AWS DevOps Agent will investigate the root cause.&lt;/p&gt;
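&lt;p&gt;To make the failure concrete, here is a minimal, dependency-free sketch of the health logic behind the demo app's endpoints. The endpoint paths match the curl commands used later; the Flask wiring is left in comments, and the exact demo implementation may differ:&lt;/p&gt;

```python
# Simulated database state shared by the endpoints below.
_db_healthy = {"ok": True}

def health_status():
    """Return (body, http_status) the way the /health endpoint would."""
    if _db_healthy["ok"]:
        return {"status": "healthy"}, 200
    # The ALB health check sees this 503 and marks the target unhealthy.
    return {"status": "unhealthy", "reason": "database connection failed"}, 503

def simulate_unhealthy():
    """Flip the simulated database into a failed state (/simulate/unhealthy)."""
    _db_healthy["ok"] = False
    return {"simulated": True}, 200

# from flask import Flask, jsonify
# app = Flask(__name__)
#
# @app.route("/health")
# def health():
#     body, code = health_status()
#     return jsonify(body), code
#
# @app.route("/simulate/unhealthy")
# def unhealthy():
#     body, code = simulate_unhealthy()
#     return jsonify(body), code
```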

&lt;p&gt;Here is the diagram: just a simple ALB, two instances, and alarms. I recommend deploying this stack to us-east-1 because, at the time of writing, the AWS DevOps Agent service is only available there.&lt;/p&gt;

&lt;p&gt;I have also included a Lambda function that automatically shuts down the instances after 2 hours, so there is no need to worry about unexpected costs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4bdsc118cpehnjlo2wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv4bdsc118cpehnjlo2wj.png" alt=" " width="507" height="403"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudFormation template: &lt;a href="https://gist.github.com/Hung-00/e53f4c980baf13d9bb8902fd36a79a6b" rel="noopener noreferrer"&gt;https://gist.github.com/Hung-00/e53f4c980baf13d9bb8902fd36a79a6b&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check the outputs of the stack after it has been created successfully.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fzs9gwn1m1xapiho5rd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6fzs9gwn1m1xapiho5rd.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go ahead and connect to the two instances. You can connect through Session Manager.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bwx8gf937i27bvge7kh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6bwx8gf937i27bvge7kh.png" alt=" " width="800" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Run this command to check the health status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost/health
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Server is healthy.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hp2yykbgnhvkdku66f6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0hp2yykbgnhvkdku66f6.png" alt=" " width="284" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwwx2s3svs9qx1c1r6ic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzwwx2s3svs9qx1c1r6ic.png" alt=" " width="800" height="253"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now use this command to interrupt both servers:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s http://localhost/simulate/unhealthy

# or

curl http://localhost/simulate/crash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra6cj7hu83uqpwt3q9gj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fra6cj7hu83uqpwt3q9gj.png" alt=" " width="800" height="52"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc1kw38ghu9iokdunk4e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvc1kw38ghu9iokdunk4e.png" alt=" " width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The alarm has triggered.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnthmwlq4ixnd3hrktc4f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnthmwlq4ixnd3hrktc4f.png" alt=" " width="800" height="421"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Now let's head to AWS DevOps Agent.
&lt;/h2&gt;

&lt;p&gt;You can learn how to &lt;a href="https://docs.aws.amazon.com/devopsagent/latest/userguide/getting-started-with-aws-devops-agent-creating-an-agent-space.html" rel="noopener noreferrer"&gt;create an Agent Space&lt;/a&gt;; the process is straightforward.&lt;/p&gt;

&lt;p&gt;Open &lt;strong&gt;Operator Access&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvu7w93u6w2inbhwetidz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvu7w93u6w2inbhwetidz.png" alt=" " width="800" height="325"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Have a look at your system in the &lt;strong&gt;DevOps Center&lt;/strong&gt; tab. You can see the stack's resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7azpjehkykp1zm803xu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7azpjehkykp1zm803xu.png" alt=" " width="800" height="402"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Switch to the &lt;strong&gt;Incident Response&lt;/strong&gt; tab and let's investigate the latest alarm.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbewfwtdytxt5avarbp8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhbewfwtdytxt5avarbp8.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3lcqqs63tsb85s5g9mn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw3lcqqs63tsb85s5g9mn.png" alt=" " width="662" height="836"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can watch the Investigation Progress. The agent shows its reasoning as it investigates:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7uhjs3bvnqbrr7wq3r9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc7uhjs3bvnqbrr7wq3r9.png" alt=" " width="663" height="840"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Interestingly, AWS DevOps Agent can actually detect that the user deliberately interrupted the server, as you can see below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4duaczs0sq54bwvco1sc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4duaczs0sq54bwvco1sc.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each investigation has its own chat session, so you can ask the agent about it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rmjcgcwxwzqz6xm90qp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4rmjcgcwxwzqz6xm90qp.png" alt=" " width="358" height="759"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to the &lt;strong&gt;Prevention&lt;/strong&gt; tab and run it. The agent will analyze your investigation history and recommend improvements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fforv1zj1kcpym056qv6k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fforv1zj1kcpym056qv6k.png" alt=" " width="800" height="593"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5y6ktl6e5l2geahylzl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5y6ktl6e5l2geahylzl.png" alt=" " width="800" height="906"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS DevOps Agent&lt;/strong&gt; cannot resolve incidents by itself. You need to fix the root cause and implement the recommendations on your own.&lt;/p&gt;

&lt;p&gt;Finally, run this command to restore healthy status:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -s http://localhost/simulate/healthy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Remember to delete the stack if you don't want to continue.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this demo, we saw how AWS DevOps Agent cuts down resolution time, finds root causes quickly, and suggests ways to prevent similar issues. &lt;/p&gt;

&lt;p&gt;The agent works best when it understands your full environment — AWS accounts, external tools, everything. Adding MCP servers for custom integrations could make it even more powerful.&lt;/p&gt;

&lt;p&gt;It's free during preview, with some usage limits. Security is administrator-controlled through IAM permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I think DevOps Agent is a tool, not a replacement. Engineers are still essential for implementing fixes, designing infrastructure improvements, and making critical decisions when rollbacks are needed.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was just a simple scenario to take a first look at &lt;strong&gt;AWS DevOps Agent&lt;/strong&gt;. I will try to simulate more scenarios in the future.&lt;/p&gt;

&lt;p&gt;Thank you.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>devops</category>
      <category>agents</category>
    </item>
    <item>
      <title>Kiroween 2025 - Haunting coding extension</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Fri, 05 Dec 2025 20:20:22 +0000</pubDate>
      <link>https://dev.to/hung____/kiroween-2025-haunting-coding-extension-2e8o</link>
      <guid>https://dev.to/hung____/kiroween-2025-haunting-coding-extension-2e8o</guid>
      <description>&lt;h1&gt;
  
  
  Haunting Extension 🩸👻🎃
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Inspiration
&lt;/h2&gt;

&lt;p&gt;Developers spend countless hours staring at code editors—why not make it fun? With Halloween spirit in mind, we wanted to transform the mundane coding experience into something memorable. What if errors didn't just show red squiggles, but actually &lt;em&gt;bled&lt;/em&gt;? What if deleting code felt like a dramatic horror movie moment? That spark of "what if coding was spooky?" became Blood Drip.&lt;/p&gt;

&lt;h2&gt;
  
  
  What it does
&lt;/h2&gt;

&lt;p&gt;Blood Drip is a VS Code extension that adds horror-themed visual effects to your coding environment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🩸 Blood Drip Animation&lt;/strong&gt;: Error lines drip blood (5 drops reducing to 1, then repeating)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;👻 Ghost Cursor&lt;/strong&gt;: Random Halloween emojis (👻🎃💀🦇🕷️🧛🧟) follow your cursor&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🪓 Code Killer&lt;/strong&gt;: Delete 3+ lines and get dramatic murder notifications ("MASSACRE! 20 lines slaughtered!")&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;💀 Spooky TODO Icons&lt;/strong&gt;: Skull, ghost, or tombstone icons replace boring TODO markers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🕯️ Candlelight Mode&lt;/strong&gt;: Lines away from cursor dim, creating a spotlight effect&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq6obkygwqufrkd2gu63.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiq6obkygwqufrkd2gu63.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnuxkksfwk4oqjcvipsd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhnuxkksfwk4oqjcvipsd.png" alt=" " width="733" height="63"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj15vyquz285qbsifsthh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj15vyquz285qbsifsthh.png" alt=" " width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How we built it
&lt;/h2&gt;

&lt;p&gt;Kiro is amazing. We used the &lt;strong&gt;VS Code Extension API&lt;/strong&gt; with JavaScript. The architecture follows a controller pattern:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;EffectManager&lt;/strong&gt;: Central coordinator that initializes and manages all effects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Individual Controllers&lt;/strong&gt;: BloodDripController, GhostCursorController, CodeKillerController, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Configuration Manager&lt;/strong&gt;: Handles user settings and real-time config changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Decoration API&lt;/strong&gt;: VS Code's system for adding visual elements to the editor&lt;/li&gt;
&lt;/ul&gt;
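&lt;p&gt;As a rough illustration of that controller pattern (sketched here in Python for brevity; the real extension is JavaScript on the VS Code Extension API, and these class and method names are only indicative):&lt;/p&gt;

```python
# Language-neutral sketch of the EffectManager/controller pattern.
# The actual extension is JavaScript; names here are illustrative only.
class EffectController:
    def __init__(self, name):
        self.name = name
        self.active = False

    def activate(self):
        self.active = True

    def dispose(self):
        # Release decorations/timers so the editor stays responsive.
        self.active = False


class EffectManager:
    """Central coordinator: initializes, toggles, and disposes effects."""
    def __init__(self, controllers):
        self.controllers = {c.name: c for c in controllers}

    def activate_all(self):
        for c in self.controllers.values():
            c.activate()

    def dispose_all(self):
        for c in self.controllers.values():
            c.dispose()


manager = EffectManager([EffectController("blood_drip"),
                         EffectController("ghost_cursor")])
manager.activate_all()
print(all(c.active for c in manager.controllers.values()))  # True
```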

&lt;h2&gt;
  
  
  Challenges we ran into
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Animation Performance&lt;/strong&gt;: Our first blood drip implementation created and disposed decoration types every frame—causing flickering and memory leaks. Solution: decoration pooling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;VS Code Decoration Limitations&lt;/strong&gt;: You can't directly animate decorations. We had to swap between pre-created decoration types to simulate animation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Emoji Consistency&lt;/strong&gt;: Emojis render differently across operating systems. We tested and selected widely-supported Halloween emojis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resource Management&lt;/strong&gt;: Extensions run in VS Code's process. Poor cleanup = sluggish editor. We implemented proper disposal patterns and pause-on-focus-loss.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Feature Conflicts&lt;/strong&gt;: Some effects (like fog and candlelight) competed visually. We had to make tough decisions about which features to keep.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
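&lt;p&gt;The decoration-pooling fix generalizes to a familiar pattern: pre-create a fixed set of reusable objects and recycle them instead of creating and disposing one per animation frame. A minimal, hypothetical sketch (not the extension's actual code):&lt;/p&gt;

```python
# Generic object-pool sketch of the decoration-pooling idea.
# Names are illustrative; the real pool holds VS Code decoration types.
class DecorationPool:
    def __init__(self, size, factory):
        # Pre-create all objects up front, once.
        self.free = [factory(i) for i in range(size)]
        self.in_use = []

    def acquire(self):
        # Reuse a pre-created object; never allocate per frame.
        deco = self.free.pop() if self.free else None
        if deco is not None:
            self.in_use.append(deco)
        return deco

    def release(self, deco):
        self.in_use.remove(deco)
        self.free.append(deco)


pool = DecorationPool(5, lambda i: f"decoration-{i}")
d = pool.acquire()
pool.release(d)
print(len(pool.free))  # 5
```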

&lt;h2&gt;
  
  
  Accomplishments that we're proud of
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Smooth Animations&lt;/strong&gt;: The blood drip effect runs at consistent frame rates without impacting editor performance&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code Killer Feature&lt;/strong&gt;: The escalating murder messages (eliminated → MURDER → CARNAGE → MASSACRE → GENOCIDE) add genuine fun to refactoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Random Ghost Cursor&lt;/strong&gt;: 12 different Halloween emojis keep every cursor movement surprising&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Zero-Config Fun&lt;/strong&gt;: Works immediately on install with sensible defaults&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full Customization&lt;/strong&gt;: Every feature can be toggled independently via VS Code settings&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What we learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;VS Code's Extension API is powerful but quirky&lt;/strong&gt;—understanding decoration types and their lifecycle is crucial&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance matters in extensions&lt;/strong&gt;—debouncing, pooling, and pausing are essential patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Users want control&lt;/strong&gt;—every feature needs an on/off switch&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple ideas can be technically complex&lt;/strong&gt;—"make blood drip" sounds easy until you're debugging animation frame timing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fun features drive engagement&lt;/strong&gt;—the Code Killer feature gets the most reactions&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  What's next for Haunting Extension
&lt;/h2&gt;

&lt;p&gt;Future features we're excited about:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;🕸️ Cobweb Corners&lt;/strong&gt;: Decorative cobwebs appear on stale/unchanged code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;⚡ Lightning Flash&lt;/strong&gt;: Brief screen flash when saving files with errors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;👁️ Watching Eyes&lt;/strong&gt;: Blinking eyes in the gutter on long functions (code smell indicator)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🦴 Skeleton Comments&lt;/strong&gt;: Bone emojis on commented-out dead code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🐀 Creepy Crawlies&lt;/strong&gt;: Spiders crawl across unused imports&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;🎭 Haunted Line Numbers&lt;/strong&gt;: Spooky symbols on Friday the 13th or Halloween&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Seasonal Themes&lt;/strong&gt;: Christmas horror mode, Valentine's bleeding hearts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Have a look: &lt;a href="https://github.com/Hung-00/kiroween-haunting-extension" rel="noopener noreferrer"&gt;https://github.com/Hung-00/kiroween-haunting-extension&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Thank you!&lt;/p&gt;

</description>
      <category>kiro</category>
      <category>aws</category>
    </item>
    <item>
      <title>Understanding Multi-Agent Patterns in Strands Agent: Graph, Swarm, and Workflow</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Fri, 21 Nov 2025 18:24:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/understanding-multi-agent-patterns-in-strands-agent-graph-swarm-and-workflow-4nb8</link>
      <guid>https://dev.to/aws-builders/understanding-multi-agent-patterns-in-strands-agent-graph-swarm-and-workflow-4nb8</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpou6sy259gvxesnc0gik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpou6sy259gvxesnc0gik.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Building complex, useful AI applications often requires more than a single agent. When you need multiple AI agents to collaborate, the question becomes: &lt;strong&gt;how should they work together?&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;Strands Agent offers three distinct orchestration patterns, each designed for different scenarios.&lt;/p&gt;

&lt;p&gt;In this post, we'll explore the &lt;strong&gt;Graph&lt;/strong&gt;, &lt;strong&gt;Swarm&lt;/strong&gt;, and &lt;strong&gt;Workflow&lt;/strong&gt; patterns through simple, practical examples using AWS Bedrock and Amazon Nova.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three Patterns at a Glance
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;Execution Flow&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Graph&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;LLM decides routing&lt;/td&gt;
&lt;td&gt;Conditional branching &amp;amp; decision trees&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Swarm&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Agents hand off autonomously&lt;/td&gt;
&lt;td&gt;Collaborative problem-solving&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Workflow&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Pre-defined DAG&lt;/td&gt;
&lt;td&gt;Repeatable processes with parallel tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The key difference? &lt;strong&gt;Who controls the flow&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Graph&lt;/strong&gt;: Developer defines the map, LLM chooses the path&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swarm&lt;/strong&gt;: Agents decide who to hand off to next&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow&lt;/strong&gt;: System executes a fixed dependency graph&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7uqrrusdsqxsk195dkdt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7uqrrusdsqxsk195dkdt.png" alt=" " width="800" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 1: Graph
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to use&lt;/strong&gt;: The Graph pattern is ideal when you have a structured process with conditional branches based on inputs. Specifically, use Graph when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need intelligent routing where the LLM evaluates context and decides the best path forward&lt;/li&gt;
&lt;li&gt;Your process has multiple possible paths but the correct path depends on input characteristics (complexity, urgency, user type, content category)&lt;/li&gt;
&lt;li&gt;You want cycles and loops for retry logic, escalation paths, or iterative refinement when initial attempts fail&lt;/li&gt;
&lt;li&gt;You need human-in-the-loop approval gates or decision points embedded in the flow&lt;/li&gt;
&lt;li&gt;You want to maintain control over possible outcomes while allowing AI flexibility within those boundaries&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example Architecture - Pizza Ordering System
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Order Taker
    ├─→ Simple Processor → Confirmer
    └─→ Custom Processor → Confirmer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The LLM at the &lt;code&gt;Order Taker&lt;/code&gt; node decides whether to route to the simple or custom processor based on order complexity.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Graph&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BedrockModel&lt;/span&gt;

&lt;span class="n"&gt;nova_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;amazon.nova-pro-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define agents
&lt;/span&gt;&lt;span class="n"&gt;order_taker&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;nova_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Route to &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;simple_processor&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; for standard orders,
                   &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;custom_processor&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; for special requests&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;take_order&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create graph
&lt;/span&gt;&lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Graph&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;order_taker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;order_taker&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;simple_processor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;simple_processor&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;custom_processor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;custom_processor&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define possible routing paths
&lt;/span&gt;&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;order_taker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;simple_processor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;order_taker&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;custom_processor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;I want a large pepperoni pizza&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
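&lt;p&gt;Conceptually, the routing decision at the &lt;code&gt;Order Taker&lt;/code&gt; node is a classification over the order text. A plain-Python toy of that same decision, with hypothetical keyword rules standing in for the LLM's judgment:&lt;/p&gt;

```python
# Toy stand-in for the LLM routing decision at the Order Taker node.
# The keyword rules are illustrative only; in Strands the model decides.
CUSTOM_KEYWORDS = ("gluten-free", "half-and-half", "extra", "no cheese", "allergy")


def route_order(order_text):
    """Return the name of the node the order should be routed to."""
    text = order_text.lower()
    if any(keyword in text for keyword in CUSTOM_KEYWORDS):
        return "custom_processor"
    return "simple_processor"


print(route_order("I want a large pepperoni pizza"))        # simple_processor
print(route_order("Gluten-free crust, no cheese, please"))  # custom_processor
```

&lt;p&gt;The Graph pattern keeps this decision inside the boundaries you defined with &lt;code&gt;add_edge&lt;/code&gt;: the model can only choose among the paths you declared.&lt;/p&gt;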



&lt;h3&gt;
  
  
  Real-World Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Customer support routing&lt;/li&gt;
&lt;li&gt;Loan approval workflows with risk assessment branches&lt;/li&gt;
&lt;li&gt;Multi-step troubleshooting systems&lt;/li&gt;
&lt;li&gt;Insurance claim processing&lt;/li&gt;
&lt;li&gt;Medical imaging analysis with specialist referral paths&lt;/li&gt;
&lt;li&gt;Content moderation: automatic approval for clearly safe content, flagging for review, and immediate blocking for violations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key advantage&lt;/strong&gt;: Combines structure with flexibility - you define the possible paths, but the AI makes intelligent routing decisions. This prevents unpredictable behavior while enabling sophisticated decision-making.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 2: Swarm
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to use&lt;/strong&gt;: The Swarm pattern fits problems that need dynamic collaboration between specialists. Use Swarm when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have distinct specializations where each agent brings unique expertise to the problem&lt;/li&gt;
&lt;li&gt;Tasks require iterative collaboration with multiple rounds of back-and-forth between agents&lt;/li&gt;
&lt;li&gt;Agents need to self-organize and determine when their contribution is complete&lt;/li&gt;
&lt;li&gt;The optimal sequence of work isn't always predictable upfront&lt;/li&gt;
&lt;li&gt;The problem is too complex for a single agent but doesn't fit a fixed pipeline&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example Architecture - Blog Post Creation
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Researcher → Writer → Editor
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each agent decides when its work is complete and hands off to the next agent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Swarm&lt;/span&gt;

&lt;span class="n"&gt;researcher&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;nova_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Research the topic, then hand off to &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;writer&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;research_topic&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;writer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;nova_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Write the draft, then hand off to &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;editor&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;write_draft&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create swarm
&lt;/span&gt;&lt;span class="n"&gt;swarm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Swarm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;researcher&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;researcher&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;writer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;writer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;editor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;editor&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_entry_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;researcher&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Create a blog post about AI multi-agent systems&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
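&lt;p&gt;Under the hood, a swarm run amounts to a hand-off loop: each agent produces output and names its successor until one of them finishes. A plain-Python toy, independent of Strands, with hypothetical stand-ins for the agents above:&lt;/p&gt;

```python
# Toy hand-off loop: each "agent" returns its output plus the name of
# the next agent, or None when the task is complete.
def researcher(task):
    return f"notes on {task}", "writer"


def writer(task):
    return f"draft: {task}", "editor"


def editor(task):
    return f"final: {task}", None


AGENTS = {"researcher": researcher, "writer": writer, "editor": editor}


def run_swarm(entry, task):
    current, result = entry, task
    while current is not None:
        result, current = AGENTS[current](result)
    return result


print(run_swarm("researcher", "multi-agent systems"))
```

&lt;p&gt;The difference in a real Swarm is that each agent decides its successor at runtime rather than hard-coding it, which is what makes the pattern adaptive.&lt;/p&gt;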



&lt;h3&gt;
  
  
  Real-World Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Code review systems where security, performance, and style specialists each contribute feedback&lt;/li&gt;
&lt;li&gt;Content creation pipelines&lt;/li&gt;
&lt;li&gt;Legal document review and drafting&lt;/li&gt;
&lt;li&gt;Sales process&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key advantage&lt;/strong&gt;: Natural collaboration that mimics human teams. Agents autonomously determine when to hand off based on task completion. The system can handle unexpected complexities as agents self-coordinate.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pattern 3: Workflow
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;When to use&lt;/strong&gt;: The Workflow pattern is perfect for repeatable processes with clear dependencies. Deploy Workflow when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You have a well-defined, repeatable process that doesn't change between executions&lt;/li&gt;
&lt;li&gt;You need parallel execution to maximize efficiency and reduce total runtime&lt;/li&gt;
&lt;li&gt;Process steps have clear inputs and outputs that flow between tasks&lt;/li&gt;
&lt;li&gt;You want predictable, deterministic behavior every single time&lt;/li&gt;
&lt;li&gt;You need audit trails and visibility into each step's execution&lt;/li&gt;
&lt;li&gt;Failure handling requires retry logic for specific tasks without restarting everything&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example Architecture - Email Campaign Pipeline
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Load Data → Segment Customers → ┌─→ VIP Emails  ──┐
                                ├─→ Regular Emails├─→ Schedule Campaign
                                └─→ New Emails ───┘
                                    (parallel)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The workflow executes tasks based on a dependency graph (DAG), running independent tasks in parallel.&lt;br&gt;
&lt;/p&gt;
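&lt;p&gt;To see how a scheduler can derive that order, here is a plain-Python toy (independent of Strands) that groups the email-campaign tasks into batches whose dependencies are already satisfied:&lt;/p&gt;

```python
# Toy scheduler: derive parallel execution batches from the workflow's
# dependency graph. Task names mirror the email-campaign example; the
# scheduling logic is an illustration, not the Strands implementation.
DEPENDENCIES = {
    "load": set(),
    "segment": {"load"},
    "vip_email": {"segment"},
    "regular_email": {"segment"},
    "schedule": {"vip_email", "regular_email"},
}


def execution_batches(deps):
    """Group tasks into batches whose members can run in parallel."""
    remaining, done, batches = dict(deps), set(), []
    while remaining:
        ready = sorted(t for t in remaining if deps[t].issubset(done))
        batches.append(ready)
        done.update(ready)
        for t in ready:
            remaining.pop(t)
    return batches


print(execution_batches(DEPENDENCIES))
# [['load'], ['segment'], ['regular_email', 'vip_email'], ['schedule']]
```

&lt;p&gt;The independent tasks (&lt;code&gt;vip_email&lt;/code&gt; and &lt;code&gt;regular_email&lt;/code&gt;) land in the same batch, which is exactly where a workflow engine can parallelize.&lt;/p&gt;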

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Workflow&lt;/span&gt;

&lt;span class="c1"&gt;# Create workflow
&lt;/span&gt;&lt;span class="n"&gt;workflow&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Workflow&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Add tasks
&lt;/span&gt;&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;load&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;load_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;segment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;segment_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vip_email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;vip_email_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;regular_email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;regular_email_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_task&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;schedule&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;schedule_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define dependencies
&lt;/span&gt;&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_dependency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;segment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;load&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_dependency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vip_email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;segment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_dependency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;regular_email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;segment&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_dependency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;schedule&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;vip_email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_dependency&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;schedule&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;regular_email&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;workflow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Create personalized email campaign&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Real-World Use Cases
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Data ETL pipelines&lt;/li&gt;
&lt;li&gt;Batch processing jobs&lt;/li&gt;
&lt;li&gt;Employee onboarding automation&lt;/li&gt;
&lt;li&gt;Report generation systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key advantage&lt;/strong&gt;: Deterministic execution with automatic parallelization. Perfect for predictable, repeatable processes where efficiency and reliability are most important.&lt;/p&gt;
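&lt;p&gt;The automatic parallelization falls out of the dependency graph: any task whose dependencies have all completed can run in the same batch. Here is a minimal stdlib sketch of that scheduling idea, applied to the email-campaign dependencies above (this illustrates the concept only, not the framework's actual engine):&lt;/p&gt;

```python
# Sketch: derive parallel execution batches from task dependencies.
# Illustrates the scheduling idea only; not Strands' actual implementation.

def execution_batches(dependencies: dict[str, set[str]]) -> list[list[str]]:
    """Group tasks into batches; all tasks in a batch can run in parallel."""
    remaining = dict(dependencies)
    done: set[str] = set()
    batches = []
    while remaining:
        # A task is ready once every dependency has finished
        ready = sorted(t for t, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("Cyclic dependency detected")
        batches.append(ready)
        done.update(ready)
        for t in ready:
            del remaining[t]
    return batches

# Dependencies from the email-campaign workflow above
deps = {
    "load": set(),
    "segment": {"load"},
    "vip_email": {"segment"},
    "regular_email": {"segment"},
    "schedule": {"vip_email", "regular_email"},
}
print(execution_batches(deps))
# → [['load'], ['segment'], ['regular_email', 'vip_email'], ['schedule']]
```

Note how the two email tasks land in the same batch: they both depend only on "segment", so nothing forces them to run sequentially.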

&lt;h2&gt;
  
  
  Pattern Combinations
&lt;/h2&gt;

&lt;p&gt;These patterns aren't mutually exclusive. You can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use a &lt;strong&gt;Workflow&lt;/strong&gt; as a tool within a &lt;strong&gt;Graph&lt;/strong&gt; node&lt;/li&gt;
&lt;li&gt;Have a &lt;strong&gt;Swarm&lt;/strong&gt; agent invoke a &lt;strong&gt;Workflow&lt;/strong&gt; for a specific task&lt;/li&gt;
&lt;li&gt;Embed a &lt;strong&gt;Graph&lt;/strong&gt; within a larger &lt;strong&gt;Workflow&lt;/strong&gt; pipeline&lt;/li&gt;
&lt;/ul&gt;
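&lt;p&gt;The first combination can be pictured with plain functions standing in for the framework objects (an illustration of the composition only, not real Strands code): a deterministic pipeline is wrapped as a single callable, and a graph node invokes it like any other tool.&lt;/p&gt;

```python
# Sketch: composing patterns by wrapping one orchestration as a callable
# that another invokes. Plain functions stand in for framework objects.

def etl_workflow(task: str) -> str:
    """Stand-in for a deterministic Workflow: fixed steps, fixed order."""
    data = f"loaded({task})"
    return f"transformed({data})"

def analysis_graph(task: str) -> str:
    """Stand-in for a Graph node that calls the workflow as a tool."""
    pipeline_result = etl_workflow(task)  # Workflow used inside a Graph node
    return f"analysis of {pipeline_result}"

print(analysis_graph("sales data"))
# → analysis of transformed(loaded(sales data))
```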

&lt;h2&gt;
  
  
  Shared State Across Multi-Agent Patterns
&lt;/h2&gt;

&lt;p&gt;Both &lt;strong&gt;Graph&lt;/strong&gt; and &lt;strong&gt;Swarm&lt;/strong&gt; patterns support passing shared state to all agents through the &lt;code&gt;invocation_state&lt;/code&gt; parameter. This enables sharing context and configuration across agents without exposing it to the LLM.&lt;/p&gt;

&lt;h3&gt;
  
  
  How Shared State Works
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;invocation_state&lt;/code&gt; is automatically propagated to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All agents in the pattern via their &lt;code&gt;**kwargs&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tools via &lt;code&gt;ToolContext&lt;/code&gt; when using &lt;code&gt;@tool(context=True)&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Tool-related hooks (BeforeToolCallEvent, AfterToolCallEvent)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example Usage
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Graph&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Swarm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ToolContext&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BedrockModel&lt;/span&gt;

&lt;span class="n"&gt;nova_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;amazon.nova-pro-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Define shared state with configuration and context
&lt;/span&gt;&lt;span class="n"&gt;shared_state&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user123&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sess456&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;debug_mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;secret_key_789&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;database_connection&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;db_connection_object&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;# Use with Graph pattern
&lt;/span&gt;&lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Graph&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyzer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;analyzer_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;processor&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;processor_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyze customer data&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;invocation_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;shared_state&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Use with Swarm pattern (same shared_state)
&lt;/span&gt;&lt;span class="n"&gt;swarm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Swarm&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;researcher&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;researcher_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;writer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;writer_agent&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_entry_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;researcher&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Create customer report&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;invocation_state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;shared_state&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Accessing Shared State in Tools
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;query_customer_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool_context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ToolContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Query customer database using shared configuration.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;# Access invocation_state from tool context
&lt;/span&gt;    &lt;span class="n"&gt;user_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tool_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invocation_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;debug_mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tool_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invocation_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;debug_mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;db_conn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tool_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invocation_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;database_connection&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;debug_mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Querying for user: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Use shared context for personalized queries
&lt;/span&gt;    &lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db_conn&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;user_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;results&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;send_notification&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool_context&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ToolContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Send notification using shared API key.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tool_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invocation_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;api_key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;session_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tool_context&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;invocation_state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;session_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Use API key from shared state
&lt;/span&gt;    &lt;span class="n"&gt;notification_service&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session_id&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Notification sent successfully&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Important Distinctions
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Shared State (&lt;code&gt;invocation_state&lt;/code&gt;):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Configuration and objects passed behind the scenes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not visible&lt;/strong&gt; to the LLM in prompts&lt;/li&gt;
&lt;li&gt;Used for: API keys, database connections, user context, debug flags&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pattern-Specific Data Flow:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data that the LLM should reason about&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visible&lt;/strong&gt; in conversation context&lt;/li&gt;
&lt;li&gt;Graph: Explicit state dictionary passed between agents&lt;/li&gt;
&lt;li&gt;Swarm: Shared conversation history and context from handoffs&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Best Practice:&lt;/strong&gt; Use &lt;code&gt;invocation_state&lt;/code&gt; for context and configuration that shouldn't appear in prompts, while using each pattern's specific data flow mechanisms for data the LLM should reason about.&lt;/p&gt;
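&lt;p&gt;The visibility split can be made concrete with a small stand-in for the real &lt;code&gt;ToolContext&lt;/code&gt; (hypothetical, for illustration only): the task string is all the LLM ever sees, while &lt;code&gt;invocation_state&lt;/code&gt; travels out of band to the tool.&lt;/p&gt;

```python
from dataclasses import dataclass

# SimpleToolContext is a stand-in for the framework's ToolContext,
# used here only to show the visibility split.

@dataclass
class SimpleToolContext:
    invocation_state: dict

def query_tool(query: str, tool_context: SimpleToolContext) -> str:
    # The LLM only produced `query`; credentials and identity come
    # from shared state that never appeared in any prompt.
    user = tool_context.invocation_state.get("user_id", "anonymous")
    return f"ran '{query}' as {user}"

ctx = SimpleToolContext(invocation_state={"user_id": "user123", "api_key": "secret"})
print(query_tool("top customers", ctx))
# → ran 'top customers' as user123
```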

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Graph&lt;/strong&gt; = Structured routing with AI-driven decisions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swarm&lt;/strong&gt; = Autonomous collaboration between specialists&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workflow&lt;/strong&gt; = Fixed dependencies with parallel execution&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Choose based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How &lt;strong&gt;predictable&lt;/strong&gt; your process is (Workflow &amp;gt; Graph &amp;gt; Swarm)&lt;/li&gt;
&lt;li&gt;How much &lt;strong&gt;AI autonomy&lt;/strong&gt; you want (Swarm &amp;gt; Graph &amp;gt; Workflow)&lt;/li&gt;
&lt;li&gt;Whether you need &lt;strong&gt;parallel execution&lt;/strong&gt; (Workflow best, Swarm/Graph sequential)&lt;/li&gt;
&lt;li&gt;Start simple, scale complexity&lt;/li&gt;
&lt;li&gt;Consider maintenance and debugging&lt;/li&gt;
&lt;li&gt;Performance considerations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The right pattern isn't about which is "best" in absolute terms—it's about matching your specific requirements. Many systems will use all three patterns in different parts of their architecture.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Agent built with Strand Agent framework and deployed to Bedrock Agentcore</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Mon, 29 Sep 2025 03:53:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-agent-built-with-strand-agent-framework-and-deployed-to-bedrock-agentcore-3h24</link>
      <guid>https://dev.to/aws-builders/aws-agent-built-with-strand-agent-framework-and-deployed-to-bedrock-agentcore-3h24</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclwr42rk2ibxdcae0rpy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fclwr42rk2ibxdcae0rpy.png" alt=" " width="800" height="549"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, I will show you how I built an AWS agent with the Strands Agents framework and deployed it to Bedrock AgentCore.&lt;/p&gt;

&lt;p&gt;First, you need to create an execution role for the AgentCore Runtime.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ECRImageAccess",
            "Effect": "Allow",
            "Action": [
                "ecr:BatchGetImage",
                "ecr:GetDownloadUrlForLayer"
            ],
            "Resource": [
                "arn:aws:ecr:us-east-1:123456789012:repository/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogStreams",
                "logs:CreateLogGroup"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:123456789012:log-group:/aws/bedrock-agentcore/runtimes/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:123456789012:log-group:*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:us-east-1:123456789012:log-group:/aws/bedrock-agentcore/runtimes/*:log-stream:*"
            ]
        },
        {
            "Sid": "ECRTokenAccess",
            "Effect": "Allow",
            "Action": [
                "ecr:GetAuthorizationToken"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "xray:PutTraceSegments",
                "xray:PutTelemetryRecords",
                "xray:GetSamplingRules",
                "xray:GetSamplingTargets"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Resource": "*",
            "Action": "cloudwatch:PutMetricData",
            "Condition": {
                "StringEquals": {
                    "cloudwatch:namespace": "bedrock-agentcore"
                }
            }
        },
        {
            "Sid": "GetAgentAccessToken",
            "Effect": "Allow",
            "Action": [
                "bedrock-agentcore:GetWorkloadAccessToken",
                "bedrock-agentcore:GetWorkloadAccessTokenForJWT",
                "bedrock-agentcore:GetWorkloadAccessTokenForUserId"
            ],
            "Resource": [
                "arn:aws:bedrock-agentcore:us-east-1:123456789012:workload-identity-directory/default",
                "arn:aws:bedrock-agentcore:us-east-1:123456789012:workload-identity-directory/default/workload-identity/agent*"
            ]
        },
        {
            "Sid": "BedrockModelInvocation",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream"
            ],
            "Resource": [
                "arn:aws:bedrock:*::foundation-model/*",
                "arn:aws:bedrock:us-east-1:123456789012:*"
            ]
        }
    ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The important part is &lt;strong&gt;"arn:aws:bedrock-agentcore:us-east-1:123456789012:workload-identity-directory/default/workload-identity/agent*"&lt;/strong&gt;. The &lt;em&gt;agent*&lt;/em&gt; wildcard means any agent whose name starts with &lt;strong&gt;agent&lt;/strong&gt; can simply reuse this policy, so you don't need to create a new policy or role for each new agent.&lt;/p&gt;

&lt;p&gt;You also need some additional policies so that the tools from the AWS MCP servers can access your resources and perform their tasks. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb3xpzvykwxdujemfbcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzb3xpzvykwxdujemfbcr.png" alt=" " width="307" height="281"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For this project, I want to create a supervisor agent that controls other AWS MCP agents using the Strands Agents framework, and deploy it to Bedrock AgentCore. &lt;/p&gt;

&lt;p&gt;The only LLM I use is &lt;strong&gt;us.anthropic.claude-3-7-sonnet-20250219-v1:0&lt;/strong&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bedrock_model = BedrockModel(
        model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        region_name="us-east-1",
    )
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here is the prompt I use for my orchestrator agent, it works pretty well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SUPERVISOR_AGENT_PROMPT = """
You are Orchestrator Agent, a sophisticated orchestrator designed to coordinate support across AWS services.
Your role is to analyze incoming queries and route them to the most appropriate specialized agent.

Available Specialized Agents:

SPECIALIZED SERVICES:
- AWS CloudTrail Assistant: Security auditing, compliance monitoring, API call tracking, and forensic analysis
- AWS CloudWatch Assistant: Monitoring, logging, observability, alarm management, and performance analysis
- AWS Cost Assistant: Cost analysis, spend reports, billing optimization, and usage tracking
- AWS Diagram Assistant: Architecture visualization, AWS diagram generation, and infrastructure mapping
- AWS Documentation Researcher: Search AWS documentation for technical information and best practices
- AWS DynamoDB Assistant: NoSQL database operations, data modeling, and performance optimization
- AWS IAM Assistant: Identity and access management, policy analysis, and security audits
- AWS Nova Canvas: AI image generation and creative visual content
- AWS Terraform Assistant: Infrastructure as code, Terraform best practices, and security compliance

Key Responsibilities:
- Accurately classify and route queries to appropriate specialized agents
- Maintain conversation context using memory for personalized responses
- Coordinate multi-step problems requiring multiple agents
- Provide cohesive responses when multiple agents are needed

Decision Protocol:
- Security audits/compliance/API tracking → AWS CloudTrail Assistant
- Monitoring/logging/observability/alarms → AWS CloudWatch Assistant
- Cost/billing/usage analysis → AWS Cost Assistant
- Architecture diagrams/visualization → AWS Diagram Assistant
- Documentation/research/best practices → AWS Documentation Researcher
- NoSQL/DynamoDB/data modeling → AWS DynamoDB Assistant
- IAM/security/permissions/policies → AWS IAM Assistant
- Image creation/AI art → AWS Nova Canvas
- Infrastructure/Terraform/IaC → AWS Terraform Assistant


Always leverage user context from memory to provide personalized assistance. Give just enough of an answer; do not overthink.
"""
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I have created 9 agents for 9 different AWS MCP servers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vlfueewhyozzmhtb7lr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vlfueewhyozzmhtb7lr.png" alt=" " width="271" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Here is the list of all AWS MCP servers: &lt;a href="https://github.com/awslabs/mcp/tree/main" rel="noopener noreferrer"&gt;https://github.com/awslabs/mcp/tree/main&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here is an example of an agent:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import os

from mcp import StdioServerParameters, stdio_client
from strands import Agent, tool
from strands.models import BedrockModel
from strands.tools.mcp import MCPClient
from strands_tools import file_write, think


@tool
def aws_cost_assistant(query: str) -&amp;gt; str:
    bedrock_model = BedrockModel(
        model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        region_name="us-east-1",
    )

    response = str()

    try:
        env = {}
        cost_mcp_server = MCPClient(
            lambda: stdio_client(
                StdioServerParameters(
                    command="uvx",
                    args=["awslabs.cost-explorer-mcp-server@latest"],
                    env=env,
                )
            )
        )

        with cost_mcp_server:

            tools = cost_mcp_server.list_tools_sync() + [think]

            cost_agent = Agent(
                model=bedrock_model,
                system_prompt="""You are an AWS account cost analyst. You can do the following tasks:
                - Amazon EC2 Spend Analysis: View detailed breakdowns of EC2 spending for the last day
                - Amazon Bedrock Spend Analysis: View breakdown by region, users and models over the last 30 days
                - Service Spend Reports: Analyze spending across all AWS services for the last 30 days
                - Detailed Cost Breakdown: Get granular cost data by day, region, service, and instance type
                - Interactive Interface: Use Claude to query your cost data through natural language
                """,
                tools=tools,
            )
            response = str(cost_agent(query))
            print("\n\n")

        if len(response) &amp;gt; 0:
            return response

        return "I apologize, but I couldn't properly analyze your question. Could you please rephrase or provide more context?"

    except Exception as e:
        return f"Error processing your query: {str(e)}"


if __name__ == "__main__":
    aws_cost_assistant("Get my cost usage for this month. I just need the total amount.")

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For every agent, I use the &lt;strong&gt;think&lt;/strong&gt; tool from Strands Agents so that the agent can reason through problems more thoroughly.&lt;/p&gt;

&lt;p&gt;At the end of every agent file, I added:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;if __name__ == "__main__":
    aws_cost_assistant("Get my usage of last 7 days per service")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I added this code so that I can test each agent independently just by running its file and check whether the agent is working:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;py .\aws_cost_assistant.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bbhsdwk357h0fqw99i1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2bbhsdwk357h0fqw99i1.png" alt=" " width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You don't have to invoke the agents through the orchestrator agent.&lt;/p&gt;

&lt;p&gt;I've also created a memory hook using AgentCore Memory. The memory hook provides:&lt;/p&gt;

&lt;h2&gt;
  
  
  OrchestratorMemoryHooks
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;User Context Retrieval: Automatically retrieves relevant context before processing queries&lt;/li&gt;
&lt;li&gt;Interaction Storage: Saves user-agent interactions for personalized responses&lt;/li&gt;
&lt;li&gt;Multi-Strategy Memory: Supports user preferences and semantic memory storage&lt;/li&gt;
&lt;li&gt;Session Management: Maintains context across conversation sessions&lt;/li&gt;
&lt;/ol&gt;
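&lt;p&gt;As a rough illustration of what the hook does (the class and method names below are illustrative, not the actual Strands/AgentCore hook API):&lt;/p&gt;

```python
# Illustrative sketch only: retrieve user context before each query and
# store the interaction afterwards. All names here are hypothetical.
class OrchestratorMemoryHooks:
    def __init__(self, memory_client, memory_id, actor_id, session_id):
        self.client = memory_client
        self.memory_id = memory_id
        self.actor_id = actor_id      # isolates memory per user
        self.session_id = session_id  # keeps context across turns

    def before_invocation(self, query):
        # Pull relevant memories (preferences plus semantic facts) for the query
        return self.client.retrieve(self.memory_id, self.actor_id, query)

    def after_invocation(self, query, response):
        # Persist the turn so future sessions can personalize responses
        self.client.store(
            self.memory_id, self.actor_id, self.session_id,
            [("USER", query), ("ASSISTANT", response)],
        )
```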

&lt;h2&gt;
  
  
  MemoryManager
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Resource Management: Creates and manages AgentCore Memory resources&lt;/li&gt;
&lt;li&gt;Strategy Configuration: Configures user preference and semantic memory strategies&lt;/li&gt;
&lt;li&gt;Actor-Specific Memory: Isolated memory namespaces for different users&lt;/li&gt;
&lt;li&gt;Automatic Expiry: 90-day event expiry for data management&lt;/li&gt;
&lt;/ol&gt;
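&lt;p&gt;The configuration the MemoryManager assembles can be sketched like this (the field names are hypothetical placeholders; check the AgentCore Memory SDK for the real parameters):&lt;/p&gt;

```python
def build_memory_config(actor_id):
    # Hypothetical configuration shape: two memory strategies (user
    # preferences and semantic facts) plus a 90-day event expiry.
    return {
        "name": f"orchestrator-memory-{actor_id}",
        "strategies": [
            {"type": "USER_PREFERENCE", "namespace": f"/users/{actor_id}/preferences"},
            {"type": "SEMANTIC", "namespace": f"/users/{actor_id}/facts"},
        ],
        "eventExpiryDays": 90,  # automatic expiry for data management
    }
```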

&lt;p&gt;Start Docker and run these commands to launch your agent on AgentCore Runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;agentcore configure -e agent.py --execution-role arn:aws:iam::123456789012:role/agentcore_role

agentcore launch
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4pa6b87tyki17m7irgh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx4pa6b87tyki17m7irgh.png" alt=" " width="656" height="815"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wr7ahpauxj4m51l7ryj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wr7ahpauxj4m51l7ryj.png" alt=" " width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I've prepared a simple chat app to visualize the agent. You can find it in the &lt;strong&gt;chat_app&lt;/strong&gt; folder and run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;streamlit run agent_chat_app.py --server.headless true
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5du4qt4l7sg3qkc8amkt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5du4qt4l7sg3qkc8amkt.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;View invocation code&lt;/strong&gt; on your agent console, get &lt;strong&gt;agentRuntimeArn&lt;/strong&gt; and &lt;strong&gt;runtimeSessionId&lt;/strong&gt;. Paste them in the settings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak041d8a7i1bqasrmobw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fak041d8a7i1bqasrmobw.png" alt=" " width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsccvdmz8b8n5stvt6f9k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsccvdmz8b8n5stvt6f9k.png" alt=" " width="655" height="431"&gt;&lt;/a&gt;&lt;/p&gt;
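&lt;p&gt;These two values are also what you need to call the runtime directly with boto3 instead of going through the chat app. Here is a sketch of the request shape (verify the field names against your SDK version, and note that the payload body depends on your agent's entrypoint):&lt;/p&gt;

```python
import json

def build_invoke_request(runtime_arn, session_id, prompt):
    # Parameters for the bedrock-agentcore InvokeAgentRuntime call; the
    # payload body shape ({"prompt": ...}) is an assumption about how the
    # agent entrypoint parses its input.
    return {
        "agentRuntimeArn": runtime_arn,
        "runtimeSessionId": session_id,
        "payload": json.dumps({"prompt": prompt}).encode("utf-8"),
    }

# client = boto3.client("bedrock-agentcore", region_name="us-east-1")
# response = client.invoke_agent_runtime(**build_invoke_request(arn, sid, "Hi"))
```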

&lt;p&gt;Now you can start working with the agent. Try asking this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Get my cost usage of this month. I just the number of money. 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent's response also includes the token usage and the tools used, and I visualize those as well:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9krojij1i9ouxdlvu9sz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9krojij1i9ouxdlvu9sz.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjxo70gzti2xok9sddvc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhjxo70gzti2xok9sddvc.png" alt=" " width="800" height="608"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5xuh25s3jqf40j5u3oo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl5xuh25s3jqf40j5u3oo.png" alt=" " width="800" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mku1or6045ivmsmqmcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mku1or6045ivmsmqmcr.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All of the above is created based on the formatted response from the agent. I've included everything I could.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vv3nqmq7f4ybhtobnsq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2vv3nqmq7f4ybhtobnsq.png" alt=" " width="799" height="714"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here is an example where I told my agent to create a DynamoDB table:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0pcj3ez3plonku19qlj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx0pcj3ez3plonku19qlj.png" alt=" " width="749" height="447"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix4q6z4hqz5b8b8lum4j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fix4q6z4hqz5b8b8lum4j.png" alt=" " width="800" height="553"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can also test the agent in the agent sandbox on the console.&lt;br&gt;
Here is an example with &lt;strong&gt;aws_diagram_assistant&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z0saze1pqwqmflvz9eo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6z0saze1pqwqmflvz9eo.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Here I asked about my chat history. As you can see, the agent can access the memory and everything works as expected:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu37ddl55tu7tm4te01q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiu37ddl55tu7tm4te01q.png" alt=" " width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That is all for my AWS agent built with the Strands Agents framework and deployed to Bedrock AgentCore.&lt;br&gt;
Have fun coding your agent!&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/Hung-00/aws-strand-agent-core" rel="noopener noreferrer"&gt;https://github.com/Hung-00/aws-strand-agent-core&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/what-is-bedrock-agentcore.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://strandsagents.com/latest/documentation/docs/" rel="noopener noreferrer"&gt;https://strandsagents.com/latest/documentation/docs/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/strands-agents/tools" rel="noopener noreferrer"&gt;https://github.com/strands-agents/tools&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://awslabs.github.io/mcp/" rel="noopener noreferrer"&gt;https://awslabs.github.io/mcp/&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>bedrock</category>
      <category>strandagent</category>
      <category>agentcore</category>
    </item>
    <item>
      <title>Create and manage inference profiles on Amazon Bedrock</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Sun, 17 Aug 2025 12:44:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-and-manage-inference-profiles-on-amazon-bedrock-3ba6</link>
      <guid>https://dev.to/aws-builders/create-and-manage-inference-profiles-on-amazon-bedrock-3ba6</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh1wguv41edeayz5fkj4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsh1wguv41edeayz5fkj4.png" alt=" " width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock Inference Profiles is a powerful feature that allows you to track costs and usage metrics when invoking foundation models on Bedrock. Think of them as custom aliases for your models that provide detailed insights into how your AI applications consume resources. In this tutorial, I'll show you how to manage inference profiles.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Are Inference Profiles?
&lt;/h2&gt;

&lt;p&gt;Inference profiles in Amazon Bedrock serve two main purposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost Tracking&lt;/strong&gt;: Monitor expenses per application, team, or project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Usage Analytics&lt;/strong&gt;: Track invocation patterns and resource consumption.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;There are two types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System-defined profiles&lt;/strong&gt;: Created by AWS for cross-region inference.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5u3a8ybmguyhbxen3lvw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5u3a8ybmguyhbxen3lvw.png" alt=" " width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Application profiles&lt;/strong&gt;: Custom profiles you create for your specific needs. You can't view application inference profiles in the Amazon Bedrock console, only through CLI or API.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo5o7h5ns2417s4igaip.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqo5o7h5ns2417s4igaip.png" alt=" " width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Each model supports different inference types. Here is the breakdown by &lt;strong&gt;inferenceTypesSupported&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;inferenceTypesSupported is ['ON_DEMAND'] or ['ON_DEMAND', 'PROVISIONED']&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Can invoke directly&lt;/li&gt;
&lt;li&gt;✅ Can create application inference profiles from these&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;inferenceTypesSupported: ['INFERENCE_PROFILE'] or ['PROVISIONED']&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;❌ Cannot invoke directly&lt;/li&gt;
&lt;li&gt;❌ Cannot create application inference profiles directly&lt;/li&gt;
&lt;li&gt;✅ Can only access through system-defined profiles&lt;/li&gt;
&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;If you want to create an inference profile, &lt;strong&gt;inferenceTypesSupported&lt;/strong&gt; must include &lt;strong&gt;ON_DEMAND&lt;/strong&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82o16r83125hc94oxla5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F82o16r83125hc94oxla5.png" alt=" " width="800" height="15"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use the function below to list all the models that support &lt;strong&gt;ON_DEMAND&lt;/strong&gt; in your selected region.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Remember that each region has different &lt;em&gt;inferenceTypesSupported&lt;/em&gt; values for each model.&lt;br&gt;
&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import boto3
import json
from botocore.exceptions import ClientError

def list_available_models():
    bedrock_client = boto3.client('bedrock', region_name='us-east-1')
    try:
        response = bedrock_client.list_foundation_models()
        models = response.get('modelSummaries', [])
        available_models = []
        for model in models:
            if 'ON_DEMAND' in model.get('inferenceTypesSupported', []):
                model_info = {
                    'modelId': model.get('modelId'),
                    'modelArn': model.get('modelArn'),
                    'modelName': model.get('modelName'),
                    'inferenceTypes': model.get('inferenceTypesSupported', [])
                }
                available_models.append(model_info)
        print(available_models)
        return available_models

    except ClientError as e:
        print(f"Error listing models: {e}")
        return []
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In &lt;strong&gt;us-east-1&lt;/strong&gt;, you can create a profile for almost every model from every provider, even Amazon Nova:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivmrabvxkppo4yk0q1t0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fivmrabvxkppo4yk0q1t0.png" alt=" " width="800" height="354"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But in &lt;strong&gt;ap-southeast-1&lt;/strong&gt;, you can only create profiles for 8 models from Anthropic and Cohere:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4uppo7f9popt3p8lqrb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu4uppo7f9popt3p8lqrb.png" alt=" " width="800" height="73"&gt;&lt;/a&gt;&lt;/p&gt;
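&lt;p&gt;To compare regions programmatically, you can apply the same &lt;strong&gt;ON_DEMAND&lt;/strong&gt; filter to the &lt;strong&gt;list_foundation_models&lt;/strong&gt; results from each region. A minimal sketch of the counting logic (the sample data below is made up for illustration):&lt;/p&gt;

```python
def count_on_demand(model_summaries):
    # Count models whose inferenceTypesSupported includes ON_DEMAND,
    # mirroring the filter used in list_available_models above.
    return sum(
        1
        for model in model_summaries
        if "ON_DEMAND" in model.get("inferenceTypesSupported", [])
    )

# Example with the shape returned by list_foundation_models():
sample = [
    {"modelId": "amazon.nova-pro-v1:0", "inferenceTypesSupported": ["ON_DEMAND"]},
    {"modelId": "some.provisioned-model", "inferenceTypesSupported": ["PROVISIONED"]},
]
print(count_on_demand(sample))  # prints 1
```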

&lt;p&gt;Currently, you can only create an inference profile through the Amazon Bedrock API. You can use this function to create one:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_inference_profile(
    region,
    profile_name,
    model_arn,
    model_name,
    tags=None,
    description=None
):
    bedrock_client= boto3.client('bedrock', region_name=region)

    try:
        params = {
            'inferenceProfileName': profile_name,
            'modelSource': {
                'copyFrom': model_arn
            }
        }

        if description:
            params['description'] = description

        if tags:
            params['tags'] = tags

        response = bedrock_client.create_inference_profile(**params)

        profile_info = {
            'inferenceProfileArn': response['inferenceProfileArn'],
            'inferenceProfileId': response.get('inferenceProfileId'),
            'status': response.get('status'),
            'profileName': profile_name,
            'region': region,
            'model': model_name,
            'modelArn': model_arn
        }

        print(f"Profile's ARN: {profile_info['inferenceProfileArn']}")

        return profile_info
    except ClientError as e:
        print(f"ErrorCode: {e.response['Error']['Code']}")
        print(f"Error: {e.response['Error']['Message']}")

        return None
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For example, here I'm creating a profile for Amazon Nova in us-east-1; remember to add your desired tags:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;profile = create_inference_profile(
    region="us-east-1",
    profile_name="Nova Pro Production",
    model_arn="arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0",
    model_name="Nova Pro",
    tags= [
        {
            'key': 'region',
            'value': 'us-east-1'
        },
        {
            'key': 'project',
            'value': 'my-project'
        },
        {
            'key': 'env',
            'value': 'production'
        },
    ],
    description="Nova Pro for my application running on production"
)

if profile:
    print(f"Inference Profile Created Successfully!")
    print(f"Name: {profile['profileName']}")
    print(f"ARN: {profile['inferenceProfileArn']}")
    print(f"Region: {profile['region']}")
    print(f"Model: {profile['model']}")
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Unfortunately, as of August 2025, you cannot view application inference profiles on the AWS console. Instead, you can list all your profiles with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws bedrock list-inference-profiles --region us-east-1 --type-equals APPLICATION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzo04q9qu7sj1dok5nnv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzo04q9qu7sj1dok5nnv.png" alt=" " width="800" height="291"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get more details and tags for your inference profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Get detailed information about a specific profile
aws bedrock get-inference-profile --region us-east-1 --inference-profile-identifier arn:aws:bedrock:us-east-1:xxxxxxx:application-inference-profile/xxxxxxx

# View tags for your profile
aws bedrock list-tags-for-resource --region us-east-1 --resource-arn arn:aws:bedrock:us-east-1:xxxxxxx:application-inference-profile/xxxxxxxx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir537m8ik7n3hxbxryyc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fir537m8ik7n3hxbxryyc.png" alt=" " width="800" height="185"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now let's try to invoke with the newly created inference profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

try:
    request_body = {
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "text": "What is the capital of Canada?"
                    }
                ]
            }
        ],
    }

    response = bedrock_runtime.invoke_model(
        modelId="arn:aws:bedrock:us-east-1:151182331915:application-inference-profile/xxxxxxx",
        body=json.dumps(request_body),
        contentType="application/json",
    )

    response_body = json.loads(response["body"].read())
    print(response_body)
except ClientError as e:
    print(e.response['Error']['Code'])
    print(e.response['Error']['Message'])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As you can see, it works:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b9e8jjnbtk45wbklk86.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b9e8jjnbtk45wbklk86.png" alt=" " width="800" height="341"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now all usage through this inference profile will carry your tags, so you can manage your application's costs more efficiently.&lt;/p&gt;
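&lt;p&gt;Once the tag is activated as a cost allocation tag in the Billing console, you can query costs per tag through Cost Explorer. A sketch of the request parameters for &lt;strong&gt;get_cost_and_usage&lt;/strong&gt; (the tag key/value here are the example ones used above):&lt;/p&gt;

```python
def build_tag_cost_query(tag_key, tag_value, start, end):
    # Parameters for Cost Explorer get_cost_and_usage, filtered by the
    # cost allocation tag attached to the inference profile.
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "Filter": {"Tags": {"Key": tag_key, "Values": [tag_value]}},
    }

# import boto3
# ce = boto3.client("ce", region_name="us-east-1")
# result = ce.get_cost_and_usage(
#     **build_tag_cost_query("project", "my-project", "2025-08-01", "2025-09-01")
# )
```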

&lt;p&gt;You can delete the profile with this command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws bedrock delete-inference-profile --inference-profile-identifier "arn:aws:bedrock:us-east-1:xxxxxx:application-inference-profile/xxxxxx"  --region us-east-1
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is also the &lt;a href="https://github.com/aws-samples/sample-bedrock-inference-profile-mgmt-tool" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Bedrock Inference Profile Management Tool&lt;/strong&gt;&lt;/a&gt; to help you further with managing inference profiles; have a look!&lt;/p&gt;

&lt;h2&gt;
  
  
  Create inference profile with AWS CDK
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;from aws_cdk import (
    Stack,
    custom_resources as cr,
    aws_iam as iam,
    CfnOutput,
    # aws_logs as logs,
)
from constructs import Construct


class BedrockInferenceProfileStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -&amp;gt; None:
        super().__init__(scope, construct_id, **kwargs)

        self.profile_name = "inference-profile"
        self.inference_profile = cr.AwsCustomResource(
            self,
            "BedrockInferenceProfile",
            on_create=cr.AwsSdkCall(
                service="Bedrock",
                action="createInferenceProfile",
                parameters={
                    "inferenceProfileName": self.profile_name,
                    "description": "Claude Inference Profile",
                    "modelSource": {
                        "copyFrom": "arn:aws:bedrock:ap-southeast-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
                    },
                },
                physical_resource_id=cr.PhysicalResourceId.of(self.profile_name),
            ),
            on_update=cr.AwsSdkCall(
                service="Bedrock",
                action="getInferenceProfile",
                parameters={"inferenceProfileIdentifier": self.profile_name},
                physical_resource_id=cr.PhysicalResourceId.of(self.profile_name),
            ),
            on_delete=cr.AwsSdkCall(
                service="Bedrock",
                action="deleteInferenceProfile",
                parameters={
                    "inferenceProfileIdentifier": self.profile_name
                },
            ),
            policy=cr.AwsCustomResourcePolicy.from_statements(
                [
                    iam.PolicyStatement(
                        actions=[
                            "bedrock:CreateInferenceProfile",
                            "bedrock:DeleteInferenceProfile",
                            "bedrock:GetInferenceProfile",
                        ],
                        resources=["*"],
                    )
                ]
            ),
        )

        self.profileARN = self.inference_profile.get_response_field(
            "inferenceProfileArn"
        )

        CfnOutput(
            self,
            "ProfileARN",
            value=self.profileARN,
            description="Inference Profile",
        )

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hope you can create your inference profile successfully!&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/track-allocate-and-manage-your-generative-ai-cost-and-usage-with-amazon-bedrock/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/machine-learning/track-allocate-and-manage-your-generative-ai-cost-and-usage-with-amazon-bedrock/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws-samples/sample-bedrock-inference-profile-mgmt-tool" rel="noopener noreferrer"&gt;https://github.com/aws-samples/sample-bedrock-inference-profile-mgmt-tool&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>bedrock</category>
      <category>genai</category>
      <category>nova</category>
    </item>
    <item>
      <title>Create training job for YOLO model on Amazon SageMaker with AWS Lambda</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Tue, 12 Aug 2025 18:12:57 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-training-job-for-yolo-model-on-amazon-sagemaker-with-aws-lambda-1n51</link>
      <guid>https://dev.to/aws-builders/create-training-job-for-yolo-model-on-amazon-sagemaker-with-aws-lambda-1n51</guid>
      <description>&lt;p&gt;In this blog, I will show you how to create a training job for YOLO11x model on Amazon SageMaker through a Lambda function, and then deploy it into an enpoint.&lt;/p&gt;

&lt;p&gt;I have prepared a repo that contains all the code I use; please have a look: &lt;br&gt;
&lt;a href="https://github.com/Hung-00/Amazon-SageMaker-YOLO-training-job" rel="noopener noreferrer"&gt;https://github.com/Hung-00/Amazon-SageMaker-YOLO-training-job&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi655cttok03ou30z8kty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi655cttok03ou30z8kty.png" alt=" " width="800" height="642"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  The process
&lt;/h2&gt;

&lt;p&gt;First, you need to have an image that contains all the packages and code files for training.&lt;/p&gt;

&lt;p&gt;Building the image from scratch can be tricky, so I have made this simple Dockerfile; have a look and give it a try.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM pytorch/pytorch:2.0.1-cuda11.7-cudnn8-runtime

# Set timezone to avoid interactive prompt
ENV DEBIAN_FRONTEND=noninteractive
ENV TZ=UTC

# Install system dependencies
RUN apt-get update &amp;amp;&amp;amp; apt-get install -y \
    git \
    python3-pip \
    libglib2.0-0 \
    libsm6 \
    libxext6 \
    libxrender-dev \
    libgomp1 \
    libgl1-mesa-glx \
    tzdata \
    &amp;amp;&amp;amp; rm -rf /var/lib/apt/lists/*

# Upgrade pip
RUN pip install --upgrade pip

# Create requirements.txt with pinned versions
RUN echo "numpy==1.24.4\n\
sagemaker-training\n\
ultralytics==8.3.170\n\
albumentationsx\n\
opencv-python-headless==4.9.0.80\n\
pillow==10.1.0\n\
pandas==2.0.3\n\
matplotlib==3.7.2\n\
seaborn==0.12.2\n\
tqdm==4.66.1\n\
pyyaml==6.0.1\n\
scipy==1.10.1" &amp;gt; /requirements.txt

# Install all dependencies at once
RUN pip install -r /requirements.txt

# Set up the working directory
WORKDIR /opt/ml/code

# Copy training script
COPY train.py /opt/ml/code/train.py
COPY code/inference.py /opt/ml/code/inference.py
COPY code/requirements.txt /opt/ml/code/requirements.txt
# Only for testing
COPY debug.py /opt/ml/code/debug.py 

# Set the entrypoint to the training script
ENV SAGEMAKER_PROGRAM train.py

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;blockquote&gt;
&lt;p&gt;Notice the version &lt;strong&gt;ultralytics==8.3.170&lt;/strong&gt;, the latest version as of July 2025; you may need to upgrade to a newer version to access YOLO12 or later.&lt;br&gt;
And &lt;strong&gt;albumentationsx&lt;/strong&gt; is an upgraded version of &lt;strong&gt;albumentations&lt;/strong&gt;; it gives you better augmentations when training your model.&lt;br&gt;
The steps that copy the code files are important because we are using &lt;strong&gt;sagemaker-training-toolkit&lt;/strong&gt;. Read more about it here: &lt;a href="https://github.com/aws/sagemaker-training-toolkit" rel="noopener noreferrer"&gt;https://github.com/aws/sagemaker-training-toolkit&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjou0utq49nk53osyqhu7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjou0utq49nk53osyqhu7.png" alt=" " width="214" height="188"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have also created a script &lt;strong&gt;upload_image_to_ECR.py&lt;/strong&gt; to build and upload the image straight to ECR; just make sure you have Docker running.&lt;/p&gt;

&lt;p&gt;The image will be around 3.8GB.&lt;/p&gt;
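&lt;p&gt;The exact script is in the repo; as a rough sketch (the account ID, region, and repo name below are placeholders), the build-and-push flow it automates looks like this:&lt;/p&gt;

```python
# Hypothetical sketch of a build-and-push flow like upload_image_to_ECR.py;
# the account ID, region, and repo name are placeholders -- see the repo for
# the real script.
import subprocess

def build_push_commands(account_id, region, repo, tag="latest"):
    """Return the image URI and the shell commands that build and push it to ECR."""
    registry = f"{account_id}.dkr.ecr.{region}.amazonaws.com"
    uri = f"{registry}/{repo}:{tag}"
    return uri, [
        # Authenticate Docker against your private ECR registry
        f"aws ecr get-login-password --region {region} "
        f"| docker login --username AWS --password-stdin {registry}",
        f"docker build -t {repo}:{tag} .",
        f"docker tag {repo}:{tag} {uri}",
        f"docker push {uri}",
    ]

uri, commands = build_push_commands("123456789012", "ap-southeast-1", "yolo11-training")
# With Docker running and AWS credentials configured, each step can be run with:
# for cmd in commands: subprocess.run(cmd, shell=True, check=True)
print(uri)
```
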

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv30prpsz0j8jb73rhx61.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv30prpsz0j8jb73rhx61.png" alt=" " width="800" height="216"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Or you can test locally first; here are the commands you can use to test the image on your machine&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker build -t yolo .

docker run --rm -it \
  --gpus all \
  -v $(pwd)/local_test/input/data:/opt/ml/input/data \
  -v $(pwd)/local_test/model:/opt/ml/model \
  -v $(pwd)/local_test/output:/opt/ml/output \
  -e SM_MODEL_DIR=/opt/ml/model \
  -e SM_CHANNEL_TRAIN=/opt/ml/input/data/train \
  -e SM_CHANNEL_VALIDATION=/opt/ml/input/data/validation \
  -e SM_OUTPUT_DATA_DIR=/opt/ml/output/data \
  yolo \
  /bin/bash
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your &lt;strong&gt;local_test&lt;/strong&gt; folder should look like this structure so that it can be mounted into the container; you have to prepare the training data yourself:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs07drfhs8tk8ihzu3j2h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs07drfhs8tk8ihzu3j2h.png" alt=" " width="548" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Also, the &lt;strong&gt;dataset.yaml&lt;/strong&gt; should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;names:
- class_0
- class_1
- class_2
- class_3
nc: 4
path: /opt/ml/input/data
train: train
val: validation
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test whether the installation succeeded&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python debug.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test result:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0t48sk77v8p7k7bdvib4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0t48sk77v8p7k7bdvib4.png" alt=" " width="768" height="828"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Start training in the container:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python train.py --epochs 1 --batch-size 2 --imgsz 640
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Training results:&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foz87dwb7xot4by5ovm22.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foz87dwb7xot4by5ovm22.png" alt=" " width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My &lt;strong&gt;train.py&lt;/strong&gt; will put &lt;strong&gt;model.pt&lt;/strong&gt; and other result images into 2 folders:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;/opt/ml/model/&lt;br&gt;
/opt/ml/output/data/&lt;br&gt;
Everything in these two directories gets uploaded to S3.&lt;br&gt;
This is where you should save your trained model artifacts and other results when training.&lt;br&gt;
You can read about this here: &lt;a href="https://nono.ma/sagemaker-model-dir-output-dir-and-output-data-dir-parameters" rel="noopener noreferrer"&gt;https://nono.ma/sagemaker-model-dir-output-dir-and-output-data-dir-parameters&lt;/a&gt;&lt;/p&gt;
&lt;/blockquote&gt;
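&lt;p&gt;As an illustration of the rule in the quote above, a training script can resolve these directories from the environment variables SageMaker injects and copy its artifacts there. This is only a hedged sketch, not the actual &lt;strong&gt;train.py&lt;/strong&gt; from the repo:&lt;/p&gt;

```python
# Minimal sketch: resolve the SageMaker output dirs and copy artifacts there.
# The real train.py is in the linked repo; file names here are illustrative.
import os
import shutil

def save_artifacts(best_weights, results_dir, model_dir=None, output_dir=None):
    """Copy trained weights and result images to the dirs SageMaker uploads to S3."""
    # SageMaker sets these env vars inside the training container.
    model_dir = model_dir or os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
    output_dir = output_dir or os.environ.get("SM_OUTPUT_DATA_DIR", "/opt/ml/output/data")
    os.makedirs(model_dir, exist_ok=True)
    os.makedirs(output_dir, exist_ok=True)
    shutil.copy(best_weights, os.path.join(model_dir, "model.pt"))
    for name in os.listdir(results_dir):
        shutil.copy(os.path.join(results_dir, name), output_dir)
    return model_dir, output_dir
```
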

&lt;p&gt;In &lt;strong&gt;model&lt;/strong&gt; folder:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmtfq6l1jwbe7n8tu7tn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbmtfq6l1jwbe7n8tu7tn.png" alt=" " width="209" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;output/data&lt;/strong&gt; folder:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjg835eysxlpmernk5ap.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffjg835eysxlpmernk5ap.png" alt=" " width="277" height="314"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After the training job finishes, the S3 bucket will contain these 2 archive files with all of the above:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhcnzkhuc12acldsnxvu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvhcnzkhuc12acldsnxvu.png" alt=" " width="800" height="60"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now that you know how the code inside the image works, let's create a training job with Lambda.&lt;/p&gt;

&lt;p&gt;Go to &lt;strong&gt;2_create training_job\trigger_training.py&lt;/strong&gt; in my GitHub repo, take that code, and deploy a new Lambda function with it.&lt;/p&gt;

&lt;p&gt;Go to IAM and create a role that SageMaker can assume, like below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhi8w4q7rdlvy6a0xkyq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxhi8w4q7rdlvy6a0xkyq.png" alt=" " width="378" height="622"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgg9ndn6silzirzs5f7cr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgg9ndn6silzirzs5f7cr.png" alt=" " width="501" height="652"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Set up &lt;strong&gt;SAGEMAKER_ROLE_ARN&lt;/strong&gt; with the role above and &lt;strong&gt;ECR_IMAGE_URI&lt;/strong&gt; with the URI of the latest image in your ECR repo, like &lt;code&gt;***********.dkr.ecr.ap-southeast-1.amazonaws.com/yolo11-training:latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Your S3 bucket that holds the data should look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;s3://your-data-bucket/
├── train/
│   ├── images/
│   │   ├── image1.jpg
│   │   ├── image2.jpg
│   │   └── ...
│   ├── labels/
│   │   ├── image1.txt
│   │   ├── image2.txt
│   │   └── ...
│   └── dataset.yaml
└── val/
    ├── images/
    │   └── ...
    └── labels/
        └── ...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Create a test event in the format below; replace the S3 paths with the correct S3 URIs and also give it an output bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    {
        "training_data_s3": "s3://your-bucket/path/to/train",
        "validation_data_s3": "s3://your-bucket/path/to/val",
        "output_s3": "s3://your-bucket/path/to/output",
        "instance_type": "ml.g4dn.xlarge",
        "hyperparameters": {
            "epochs": 200,
            "batch-size": 16,
            "learning-rate": 0.01,
            "imgsz": 640
        }
    }
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
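&lt;p&gt;For reference, here is a hedged sketch of how a trigger Lambda can turn this event into a &lt;code&gt;CreateTrainingJob&lt;/code&gt; call; the actual code lives in the repo, and some request fields below (volume size, max runtime) are illustrative defaults, not the repo's values:&lt;/p&gt;

```python
# Hypothetical sketch of a trigger Lambda; the real code is in the linked repo.
# VolumeSizeInGB and MaxRuntimeInSeconds below are illustrative defaults.
import os
import time

def build_training_job_request(event, role_arn, image_uri):
    """Translate the test event into CreateTrainingJob parameters."""
    job_name = "yolo11x-" + time.strftime("%Y%m%d-%H%M%S")
    hyperparams = event.get("hyperparameters", {})
    channels = [("train", "training_data_s3"), ("validation", "validation_data_s3")]
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        # SageMaker expects hyperparameter values as strings
        "HyperParameters": {k: str(v) for k, v in hyperparams.items()},
        "InputDataConfig": [
            {
                "ChannelName": name,
                "DataSource": {"S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": event[key],
                    "S3DataDistributionType": "FullyReplicated",
                }},
            }
            for name, key in channels
        ],
        "OutputDataConfig": {"S3OutputPath": event["output_s3"]},
        "ResourceConfig": {
            "InstanceType": event.get("instance_type", "ml.g4dn.xlarge"),
            "InstanceCount": 1,
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
    }

def lambda_handler(event, context):
    import boto3  # deferred so build_training_job_request stays testable locally
    request = build_training_job_request(
        event,
        role_arn=os.environ["SAGEMAKER_ROLE_ARN"],
        image_uri=os.environ["ECR_IMAGE_URI"],
    )
    boto3.client("sagemaker").create_training_job(**request)
    return {"training_job_name": request["TrainingJobName"]}
```
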



&lt;p&gt;Run the test event to create a training job; the job took around 35 minutes for 200 epochs with my settings:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27dq2z3xgy25at4lqfwb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27dq2z3xgy25at4lqfwb.png" alt=" " width="800" height="374"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The results will be created in your output bucket:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv9h7ln4fkl2ojiv30ek.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv9h7ln4fkl2ojiv30ek.png" alt=" " width="800" height="265"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Now go to &lt;strong&gt;3_create_endpoint\create_endpoint.py&lt;/strong&gt;, take the code, and deploy a Lambda function to create the endpoint.&lt;/p&gt;

&lt;p&gt;Create an IAM role for SageMaker to assume with the &lt;strong&gt;S3FullAccess&lt;/strong&gt; policy. Save it to the &lt;strong&gt;SAGEMAKER_ENPOINT_ROLE_ARN&lt;/strong&gt; environment variable.&lt;/p&gt;

&lt;p&gt;Create a test event and replace the path with your output bucket:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
        "bucket_and_train_folder": "s3://your-output-bucket/trained-model/yolo11x-20250807-093817",
        "instance_type": "ml.c5.xlarge"
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run the test event to deploy the endpoint; it may take up to 5 minutes.&lt;/p&gt;
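&lt;p&gt;As with the training trigger, the actual Lambda is in the repo; a hedged sketch of the three SageMaker calls it has to make is below. The endpoint name and the &lt;strong&gt;INFERENCE_IMAGE_URI&lt;/strong&gt; environment variable are placeholders I introduce for illustration, and the &lt;code&gt;output/model.tar.gz&lt;/code&gt; suffix follows the standard SageMaker training-job output layout:&lt;/p&gt;

```python
# Hypothetical sketch of an endpoint-creating Lambda; the real code is in the
# repo. Endpoint name and INFERENCE_IMAGE_URI are placeholders.
import os
import time

def build_endpoint_resources(model_data_s3, image_uri, role_arn,
                             instance_type="ml.c5.xlarge"):
    """Return the three API payloads SageMaker needs: model, config, endpoint."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    model_name = "yolo11x-model-" + stamp
    config_name = "yolo11x-config-" + stamp
    model = {
        "ModelName": model_name,
        "PrimaryContainer": {"Image": image_uri, "ModelDataUrl": model_data_s3},
        "ExecutionRoleArn": role_arn,
    }
    config = {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": 1,
        }],
    }
    endpoint = {"EndpointName": "your-yolo11x-endpoint",
                "EndpointConfigName": config_name}
    return model, config, endpoint

def lambda_handler(event, context):
    import boto3  # deferred so build_endpoint_resources stays testable locally
    sm = boto3.client("sagemaker")
    # SageMaker training jobs write the archive to output/model.tar.gz under the job prefix
    model_data = event["bucket_and_train_folder"].rstrip("/") + "/output/model.tar.gz"
    model, config, endpoint = build_endpoint_resources(
        model_data,
        image_uri=os.environ["INFERENCE_IMAGE_URI"],  # placeholder serving image
        role_arn=os.environ["SAGEMAKER_ENPOINT_ROLE_ARN"],
        instance_type=event.get("instance_type", "ml.c5.xlarge"),
    )
    sm.create_model(**model)
    sm.create_endpoint_config(**config)
    sm.create_endpoint(**endpoint)
    return {"endpoint_name": endpoint["EndpointName"]}
```
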

&lt;p&gt;You can test the endpoint with the simple code below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;!pip install opencv-python
import boto3, cv2, time, base64, json, os


infer_start_time = time.time()

# Read the image into a numpy array
orig_image = cv2.imread('images-test/a.jpg')

# Convert the array into JPEG
jpeg = cv2.imencode('.jpg', orig_image)[1]
# Serialize the JPEG using base64
payload = base64.b64encode(jpeg).decode('utf-8')

conf = 0.85
iou = 0.8
payload = f"{payload},{conf},{iou}"

runtime = boto3.client('runtime.sagemaker')
response = runtime.invoke_endpoint(EndpointName="your-yolo11x-endpoint", ContentType='text/csv', Body=payload)

response_body = response['Body'].read()
result = json.loads(response_body.decode('ascii'))

infer_end_time = time.time()

print(f"Inference Time = {infer_end_time - infer_start_time:0.4f} seconds")

print(result)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkfle5nyqkuz6sgjbuty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzkfle5nyqkuz6sgjbuty.png" alt=" " width="800" height="163"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I hope this document is helpful.&lt;/p&gt;

&lt;p&gt;Reference:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;a href="https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html" rel="noopener noreferrer"&gt;https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.html&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://nono.ma/sagemaker-model-dir-output-dir-and-output-data-dir-parameters" rel="noopener noreferrer"&gt;https://nono.ma/sagemaker-model-dir-output-dir-and-output-data-dir-parameters&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://stackoverflow.com/questions/69024005/how-to-use-sagemaker-estimator-for-model-training-and-saving" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/69024005/how-to-use-sagemaker-estimator-for-model-training-and-saving&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/aws/sagemaker-training-toolkit" rel="noopener noreferrer"&gt;https://github.com/aws/sagemaker-training-toolkit&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Hung-00/Amazon-SageMaker-YOLO-training-job" rel="noopener noreferrer"&gt;https://github.com/Hung-00/Amazon-SageMaker-YOLO-training-job&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>aws</category>
      <category>yolo</category>
      <category>lambda</category>
      <category>sagemaker</category>
    </item>
    <item>
      <title>Create open-cv Python layer for AWS Lambda</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Sun, 13 Jul 2025 16:34:31 +0000</pubDate>
      <link>https://dev.to/aws-builders/create-open-cv-python-layer-for-aws-lambda-hcj</link>
      <guid>https://dev.to/aws-builders/create-open-cv-python-layer-for-aws-lambda-hcj</guid>
      <description>&lt;p&gt;AWS Lambda allows you to run code without provisioning or managing servers. One common use case is to handle image processing tasks such as object detection, image transformation, and computer vision tasks with OpenCV.&lt;/p&gt;

&lt;p&gt;However, OpenCV is a large library, and packaging it with your Lambda function code can be tricky because Lambda has a size limit for deployment packages. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpon0nhtzdona7y8i0wyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpon0nhtzdona7y8i0wyr.png" alt=" " width="800" height="106"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A better approach is to use Lambda Layers, which enable you to reuse common libraries across multiple functions. I struggled a lot when trying to create a layer for &lt;code&gt;opencv-python&lt;/code&gt; so that I could use &lt;code&gt;cv2&lt;/code&gt; and &lt;code&gt;numpy&lt;/code&gt; in my code. There are some documents and repos out there, but they are all outdated.&lt;/p&gt;

&lt;p&gt;In this blog post, I will show you how to create an OpenCV layer for AWS Lambda in 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step by step
&lt;/h2&gt;

&lt;p&gt;First, let's make a folder. The important thing here is the Python version: you need to match the version of the Lambda function's runtime.&lt;br&gt;
Here I'm using Python 3.11:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p opencv-layer/python/lib/python3.11/site-packages/
cd opencv-layer
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, download all the packages. Remember to change the Python version;&lt;br&gt;
you need to change it in both places in the command below.&lt;/p&gt;

&lt;p&gt;Switch between &lt;code&gt;pip&lt;/code&gt; and &lt;code&gt;pip3&lt;/code&gt; based on your environment.&lt;/p&gt;

&lt;p&gt;The key part here is the &lt;code&gt;--platform manylinux2014_x86_64&lt;/code&gt; flag, which ensures compatibility with Lambda's Linux environment.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;opencv-python&lt;/code&gt; package is too big, exceeding the 150MB limit.&lt;br&gt;
Use &lt;code&gt;opencv-python-headless&lt;/code&gt; instead of the standard &lt;code&gt;opencv-python&lt;/code&gt; package: since Lambda doesn't need GUI components, the size of the layer is reduced a lot.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pip3 install --platform manylinux2014_x86_64 --implementation cp --python-version 3.11 --only-binary=:all: --upgrade --target python/lib/python3.11/site-packages/ opencv-python-headless
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then you will have a folder that looks like this (for this one I use Python 3.11):&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom5ty2hlxtfc9a8k67lu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fom5ty2hlxtfc9a8k67lu.png" alt=" " width="800" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Create a ZIP file of the layer contents.&lt;br&gt;
If you are on Windows, you can manually zip the &lt;code&gt;python&lt;/code&gt; folder.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;zip -r opencv-layer.zip python/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Publish the layer.&lt;br&gt;
You can also upload the zip file to S3 and create the layer manually with AWS console.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws lambda publish-layer-version \
    --layer-name opencv-layer \
    --description "OpenCV layer" \
    --zip-file fileb://opencv-layer.zip \
    --compatible-runtimes python3.11

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Finally, you just need to add the layer to the Lambda function you want. Make sure they use the same Python version.&lt;/p&gt;
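&lt;p&gt;If you prefer to script this step too, here is a minimal boto3 sketch (the function name and layer ARN are placeholders). Note that &lt;code&gt;update_function_configuration&lt;/code&gt; replaces the whole layer list, so any existing layers must be kept:&lt;/p&gt;

```python
# Sketch: attach the published layer to an existing function with boto3
# instead of the console. Function name and layer ARN are placeholders.
def merge_layers(existing_arns, new_arn):
    """Lambda replaces the whole layer list, so keep current layers and add the new one once."""
    return existing_arns if new_arn in existing_arns else existing_arns + [new_arn]

def attach_layer(function_name, layer_version_arn):
    import boto3  # deferred import so merge_layers stays testable locally
    client = boto3.client("lambda")
    current = client.get_function_configuration(FunctionName=function_name)
    layers = merge_layers([l["Arn"] for l in current.get("Layers", [])],
                          layer_version_arn)
    client.update_function_configuration(FunctionName=function_name, Layers=layers)
```
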

&lt;h2&gt;
  
  
  Test the layer
&lt;/h2&gt;

&lt;p&gt;You can use this simple code to make sure that the layer is working.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import cv2
import numpy

def lambda_handler(event, context):
    print(cv2.__version__)
    print(numpy.__version__)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I hope this blog is helpful! &lt;br&gt;
Thank you for reading!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>opencv</category>
      <category>lambda</category>
      <category>python</category>
    </item>
    <item>
      <title>AWS RDS Tutorial</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Wed, 22 May 2024 07:28:49 +0000</pubDate>
      <link>https://dev.to/hung____/aws-rds-tutorial-3aha</link>
      <guid>https://dev.to/hung____/aws-rds-tutorial-3aha</guid>
      <description>&lt;p&gt;Please have a look at this first: &lt;a href="https://dev.to/hungrushb/amazon-rds-create-database-deep-dive-2m8j"&gt;https://dev.to/hungrushb/amazon-rds-create-database-deep-dive-2m8j&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  I. Preparation
&lt;/h1&gt;

&lt;h3&gt;
  
  
  Create VPC
&lt;/h3&gt;

&lt;p&gt;1 . Create a simple VPC with name &lt;strong&gt;labRDS&lt;/strong&gt;. Keep everything as default and create.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeu3pjj1k3z8r6rjxoev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeu3pjj1k3z8r6rjxoev.png" alt="Image description" width="453" height="566"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeu3pjj1k3z8r6rjxoev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Faeu3pjj1k3z8r6rjxoev.png" alt="Image description" width="453" height="566"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F569a91uc7rgopvim5aul.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F569a91uc7rgopvim5aul.png" alt="Image description" width="456" height="858"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbzc6wdkwhyot6o3w8zo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbzc6wdkwhyot6o3w8zo.png" alt="Image description" width="800" height="196"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtmsu605dzhexo5qofl6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqtmsu605dzhexo5qofl6.png" alt="Image description" width="767" height="686"&gt;&lt;/a&gt;&lt;br&gt;
2 . Update &lt;strong&gt;Enable auto-assign public IPv4 address&lt;/strong&gt; for the 2 public subnets.&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshzuzvpq04d1qlm4wro1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fshzuzvpq04d1qlm4wro1.png" alt="Image description" width="800" height="168"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyay35lp5vwcwca6m1nc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffyay35lp5vwcwca6m1nc.png" alt="Image description" width="776" height="790"&gt;&lt;/a&gt;&lt;br&gt;
Repeat for public subnet 2.&lt;/p&gt;
&lt;h3&gt;
  
  
  Create EC2 security group
&lt;/h3&gt;

&lt;p&gt;3 . Head to &lt;strong&gt;EC2 console&lt;/strong&gt;, choose &lt;strong&gt;Security Groups&lt;/strong&gt;, and &lt;strong&gt;Create security group&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Choose your VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo2lt2ouf35fo0nxj99o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzo2lt2ouf35fo0nxj99o.png" alt="Image description" width="593" height="287"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Add inbound rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;HTTP (80): Select HTTP from the list or enter port 80.&lt;/li&gt;
&lt;li&gt;HTTPS (443): Select HTTPS from the list or enter port 443.&lt;/li&gt;
&lt;li&gt;Custom TCP Rule (5000): Select Custom TCP Rule and enter port 5000.&lt;/li&gt;
&lt;li&gt;SSH (22): Select SSH from the list or enter port 22.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The source for all rules is &lt;strong&gt;Anywhere IPv4&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufzcidv4trsx3d0d5axi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fufzcidv4trsx3d0d5axi.png" alt="Image description" width="800" height="222"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and create.&lt;/p&gt;
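&lt;p&gt;The same inbound rules can also be scripted with boto3; this is a minimal sketch of the console steps above, where the security group ID is a placeholder:&lt;/p&gt;

```python
# Sketch of the same inbound rules with boto3; the group ID is a placeholder.
def web_ingress_rules(ports=(80, 443, 5000, 22)):
    """Build IpPermissions for HTTP, HTTPS, the app port, and SSH, open to Anywhere IPv4."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # Anywhere IPv4
        }
        for port in ports
    ]

def open_ports(group_id):
    import boto3  # deferred import so web_ingress_rules stays testable locally
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(GroupId=group_id,
                                         IpPermissions=web_ingress_rules())
```
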
&lt;h3&gt;
  
  
  Create a Security Group for a DB Instance
&lt;/h3&gt;

&lt;p&gt;4 . Create another SG with name &lt;strong&gt;labRDS-DB-SG&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj7rentc31ickgj0ncsy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flj7rentc31ickgj0ncsy.png" alt="Image description" width="595" height="226"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose MySQL/Aurora and port 3306.&lt;/li&gt;
&lt;li&gt;For source, choose the EC2 SG we created in the last step.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2n83778lzixzhsu367ba.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2n83778lzixzhsu367ba.png" alt="Image description" width="800" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8k9zw0b0l1ylg27q24t9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8k9zw0b0l1ylg27q24t9.png" alt="Image description" width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and create.&lt;/p&gt;
&lt;h3&gt;
  
  
  Creating a DB Subnet Group
&lt;/h3&gt;

&lt;p&gt;5 . Go to &lt;strong&gt;RDS console&lt;/strong&gt;, create a new &lt;strong&gt;subnet group&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvpxxqc1ujlh0a4v2we3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvpxxqc1ujlh0a4v2we3.png" alt="Image description" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;6 . Enter the name &lt;strong&gt;labRDS-subnet-group&lt;/strong&gt; and choose the correct VPC.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklsmc5e3chrt0dquwlqy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fklsmc5e3chrt0dquwlqy.png" alt="Image description" width="721" height="347"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Select the two Availability Zones that contain the two private subnets we created earlier.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm3sp9pjl4bxoe76x12u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbm3sp9pjl4bxoe76x12u.png" alt="Image description" width="800" height="30"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uuwhsk5rqommhoebq5i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uuwhsk5rqommhoebq5i.png" alt="Image description" width="704" height="617"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and create.&lt;/p&gt;

&lt;p&gt;An &lt;strong&gt;Amazon Relational Database Service (RDS) subnet group&lt;/strong&gt; is a collection of subnets in an &lt;strong&gt;Amazon Virtual Private Cloud (VPC)&lt;/strong&gt; that tells RDS where it may place your DB instance. Because the group must span at least two Availability Zones, it increases the availability and reliability of the database instance. By choosing only private subnets, you also keep the instance off the public internet, with access controlled through its security group.&lt;/p&gt;
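&lt;p&gt;Equivalently, the subnet group can be sketched with the AWS CLI (the two subnet IDs are placeholders for your private subnets):&lt;/p&gt;

```shell
# Create the DB subnet group from the two private subnets.
# subnet-0aexample and subnet-0bexample are placeholder IDs.
aws rds create-db-subnet-group \
  --db-subnet-group-name labRDS-subnet-group \
  --db-subnet-group-description "Private subnets for the lab RDS instance" \
  --subnet-ids subnet-0aexample subnet-0bexample
```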
&lt;h1&gt;
  
  
  II. Create EC2 instance
&lt;/h1&gt;

&lt;p&gt;7 . Go to &lt;strong&gt;EC2 console&lt;/strong&gt; and launch a new instance.&lt;/p&gt;

&lt;p&gt;Enter name &lt;strong&gt;labRDS-server&lt;/strong&gt;. From the &lt;strong&gt;Amazon Machine Image (AMI)&lt;/strong&gt;, choose an HVM version of &lt;strong&gt;Amazon Linux 2023&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi12e3ermm56xywhhz3zm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi12e3ermm56xywhhz3zm.png" alt="Image description" width="731" height="807"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Under the &lt;strong&gt;Instance type&lt;/strong&gt; section, choose the &lt;strong&gt;t2.micro&lt;/strong&gt; instance type, which is pre-selected by default.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdm7yf3aezcjzenz7tyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdm7yf3aezcjzenz7tyt.png" alt="Image description" width="729" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;8 . Create a new key pair &lt;strong&gt;labRDS&lt;/strong&gt;, download the key and choose it from the options.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4mpfbuo7banzzkp10iy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw4mpfbuo7banzzkp10iy.png" alt="Image description" width="543" height="574"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05arrht1wy23kgnkb6xe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F05arrht1wy23kgnkb6xe.png" alt="Image description" width="711" height="214"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;9 . Edit &lt;strong&gt;Network settings&lt;/strong&gt;: choose the &lt;strong&gt;VPC&lt;/strong&gt; and &lt;strong&gt;Subnet&lt;/strong&gt;, enable &lt;strong&gt;Auto-assign public IP&lt;/strong&gt;, and select the &lt;strong&gt;security group&lt;/strong&gt; exactly as shown in the pictures below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70rkoloxd5ds3uc8bvyi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F70rkoloxd5ds3uc8bvyi.png" alt="Image description" width="703" height="610"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmepilswv5zegec96tyl7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmepilswv5zegec96tyl7.png" alt="Image description" width="361" height="797"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Check again and launch.&lt;/p&gt;

&lt;p&gt;10 . Connect to the EC2 instance with the downloaded key pair. You can use a client such as &lt;strong&gt;MobaXterm&lt;/strong&gt; or &lt;strong&gt;PuTTY&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqkes0jvnlqbksvruros.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqkes0jvnlqbksvruros.png" alt="Image description" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;
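&lt;p&gt;For example, from a terminal with OpenSSH, connecting looks roughly like the sketch below (EC2_PUBLIC_IP is a placeholder for your instance's public IPv4 address):&lt;/p&gt;

```shell
# SSH requires the private key to be readable only by you.
chmod 400 labRDS.pem
# Amazon Linux 2023 AMIs use the ec2-user login.
ssh -i labRDS.pem ec2-user@EC2_PUBLIC_IP
```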
&lt;h1&gt;
  
  
  III. Creating an RDS DB Instance
&lt;/h1&gt;
&lt;h3&gt;
  
  
  Install Git and NodeJS
&lt;/h3&gt;

&lt;p&gt;11 . First, update your system packages to make sure you’re using the latest versions.&lt;br&gt;
Search for the Git package.&lt;br&gt;
Install Git.&lt;br&gt;
Finally, check that Git was installed successfully.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo dnf update -y
sudo dnf search git
sudo dnf install git -y
git --version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcpiwfvk8t9r6nuxdy2q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frcpiwfvk8t9r6nuxdy2q.png" alt="Image description" width="437" height="47"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;12 . Install Node.js with the script below.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Color for formatting
GREEN='\033[0;32m'
NC='\033[0m' # Colorless

# Check if NVM is installed
if ! command -v nvm &amp;amp;&amp;gt; /dev/null; then
  # Step 1: Install nvm
  curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
  source ~/.nvm/nvm.sh
fi

# Verify nvm installation
nvm --version

# Install the LTS version of Node.js
nvm install --lts

# Use the installed LTS version
nvm use --lts

# Verify Node.js and npm installation
node -v
npm -v

# Step 4: Create package.json file (if it doesn't exist yet)
if [ ! -f package.json ]; then
  npm init -y
  echo -e "${GREEN}Created file package.json.${NC}"
fi

# Step 5: Install necessary npm packages
echo -e "Installing required npm packages..."
npm install express dotenv express-handlebars body-parser mysql

# Step 6: Install nodemon as a development dependency
echo -e "Installing nodemon as a development dependency..."
npm install --save-dev nodemon
npm install -g nodemon

# Step 7: Add an npm start script to package.json
# (npm set-script was removed in npm 9; npm pkg set is the current equivalent)
if ! grep -q '"start":' package.json; then
  npm pkg set scripts.start="node index.js" # replace index.js with your entry point file
  echo -e "${GREEN}Added npm start script to package.json.${NC}"
fi

echo -e "${GREEN}Installation completed. You can now start building and running your Node.js application using 'npm start'.${NC}"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0dh7a58pdt8wj6bf07c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff0dh7a58pdt8wj6bf07c.png" alt="Image description" width="800" height="643"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Create RDS DB Instance
&lt;/h3&gt;

&lt;p&gt;13 . Navigate to &lt;strong&gt;RDS console&lt;/strong&gt; to create a new database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6usj8ch2kmy0eo8uem9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp6usj8ch2kmy0eo8uem9.png" alt="Image description" width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;14 . Choose &lt;strong&gt;Standard&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetq77a2ft9l39ywvgvie.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fetq77a2ft9l39ywvgvie.png" alt="Image description" width="755" height="194"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;MySQL&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbsgjtyei0kdmdkzn2ww.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffbsgjtyei0kdmdkzn2ww.png" alt="Image description" width="681" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Templates&lt;/strong&gt;, choose &lt;strong&gt;Dev/Test&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh55o8085rotsne17nep.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkh55o8085rotsne17nep.png" alt="Image description" width="687" height="557"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Single DB instance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59f86olubr70dyd4cjgj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F59f86olubr70dyd4cjgj.png" alt="Image description" width="756" height="293"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enter &lt;strong&gt;database-labRDS&lt;/strong&gt; for the &lt;strong&gt;DB identifier&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Open &lt;strong&gt;Credential Settings&lt;/strong&gt;. If you want to specify a password, uncheck the Auto generate a password box if it’s already selected.&lt;/li&gt;
&lt;li&gt;Change the &lt;strong&gt;Master username&lt;/strong&gt; value if you want.&lt;/li&gt;
&lt;li&gt;Enter the same password in both &lt;strong&gt;Master password&lt;/strong&gt; and &lt;strong&gt;Confirm password&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7mvde9qsxnw6p8us92g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb7mvde9qsxnw6p8us92g.png" alt="Image description" width="755" height="699"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Instance configuration&lt;/strong&gt;, choose &lt;strong&gt;Burstable classes&lt;/strong&gt; and &lt;strong&gt;db.t3.micro&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7tlgvww3pt6mul8047p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz7tlgvww3pt6mul8047p.png" alt="Image description" width="621" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Storage&lt;/strong&gt;, choose &lt;strong&gt;General Purpose SSD (gp2)&lt;/strong&gt; and change &lt;strong&gt;Allocated storage&lt;/strong&gt; to &lt;strong&gt;20 GiB&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxds0h3mls2qqn6dcchx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxds0h3mls2qqn6dcchx.png" alt="Image description" width="751" height="732"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;Connectivity&lt;/strong&gt;, choose &lt;strong&gt;Connect to an EC2 compute resource&lt;/strong&gt;, then choose your server instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsm4rw6nx1w3eh57xmlg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnsm4rw6nx1w3eh57xmlg.png" alt="Image description" width="757" height="679"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;Additional VPC security group&lt;/strong&gt;, choose your DB security group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90bnpk1etsgnc9jeqj51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F90bnpk1etsgnc9jeqj51.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Keep the remaining settings at their defaults, as shown below.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftil1ze7dwc5qz7axefuz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftil1ze7dwc5qz7axefuz.png" alt="Image description" width="667" height="312"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scroll down and create.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqr4bdywtdxlcma6ue2o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpqr4bdywtdxlcma6ue2o.png" alt="Image description" width="693" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;View your connection details.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favvd6te32v3topltkpbl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Favvd6te32v3topltkpbl.png" alt="Image description" width="800" height="274"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv44rbnq8h9jj3jslgwnh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv44rbnq8h9jj3jslgwnh.png" alt="Image description" width="494" height="391"&gt;&lt;/a&gt;&lt;/p&gt;
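&lt;p&gt;The console walkthrough above roughly corresponds to a single AWS CLI call, sketched below (the security group ID and password are placeholders):&lt;/p&gt;

```shell
# One-shot sketch of steps 13-14: MySQL, db.t3.micro, 20 GiB gp2,
# single instance, placed in the lab subnet group and DB security group.
aws rds create-db-instance \
  --db-instance-identifier database-labrds \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --storage-type gp2 \
  --master-username admin \
  --master-user-password YOUR_PASSWORD \
  --db-subnet-group-name labRDS-subnet-group \
  --vpc-security-group-ids sg-0dbexample \
  --no-multi-az \
  --no-publicly-accessible
```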

&lt;p&gt;15 . Inspect your new database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b9z0397432x0xzuryls.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7b9z0397432x0xzuryls.png" alt="Image description" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk23sw6bslfbzstb6kfk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkk23sw6bslfbzstb6kfk.png" alt="Image description" width="800" height="618"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwqanzw8dj0gaurgqylu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flwqanzw8dj0gaurgqylu.png" alt="Image description" width="800" height="764"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumxh5d1b4fvkbao06j1j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumxh5d1b4fvkbao06j1j.png" alt="Image description" width="800" height="697"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Viewing Logs and Events on AWS RDS
&lt;/h3&gt;

&lt;p&gt;16 . Click on the &lt;strong&gt;Logs &amp;amp; events&lt;/strong&gt; tab. Here, you can view various logs such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Error log: Records errors that occur on the instance.&lt;/li&gt;
&lt;li&gt;General log: Records general activities on the instance.&lt;/li&gt;
&lt;li&gt;Slow query log: Records slow queries.&lt;/li&gt;
&lt;li&gt;Event log: Displays important events related to the instance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30c24rzicqgruwoqmvz1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30c24rzicqgruwoqmvz1.png" alt="Image description" width="800" height="735"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose one log and view it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjj706iblrh2vt3szwuvn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjj706iblrh2vt3szwuvn.png" alt="Image description" width="800" height="594"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Viewing Maintenance Information
&lt;/h3&gt;

&lt;p&gt;Here, you will see the maintenance schedule, including the windows in which the DB instance is automatically backed up and maintenance tasks are performed, along with the history of previous maintenance events.&lt;/p&gt;

&lt;p&gt;You can also view automatic and manual backups, and configure and manage backup settings.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq50mfqik2nbubwiqe48.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzq50mfqik2nbubwiqe48.png" alt="Image description" width="800" height="656"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  IV. Deploy the application
&lt;/h1&gt;

&lt;p&gt;17 . Clone this repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/AWS-First-Cloud-Journey/AWS-FCJ-Management
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjsmqrtgc6t26e8w9ei3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxjsmqrtgc6t26e8w9ei3.png" alt="Image description" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;18 . Install MySQL.&lt;/p&gt;

&lt;p&gt;First, copy and save your database endpoint.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojl6h62nmn3aj0bqizqc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojl6h62nmn3aj0bqizqc.png" alt="Image description" width="800" height="592"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Note: To execute this script, you need sudo permissions, and make sure you have provided the correct database information (RDS endpoint, database name, username, and password) before running it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/bin/bash

# Set variables for MySQL RPM and database information
MYSQL_RPM_URL="https://dev.mysql.com/get/mysql80-community-release-el9-1.noarch.rpm"
DB_HOST="replace this with your database endpoint"
DB_NAME="first_cloud_users"
DB_USER="admin"
DB_PASS="12341234"

# Check if MySQL Community repository RPM already exists
if [ ! -f mysql80-community-release-el9-1.noarch.rpm ]; then
  sudo wget $MYSQL_RPM_URL
fi

# Install MySQL Community repository
sudo dnf install -y mysql80-community-release-el9-1.noarch.rpm

# You need the public key of mysql to install the software.
sudo rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023

# Install MySQL server
sudo dnf install -y mysql-community-server

# Start MySQL server
sudo systemctl start mysqld

# Enable MySQL to start on boot
sudo systemctl enable mysqld

# Check MySQL version
mysql -V

# Create or update the .env file with database information
# (overwrite on the first line so reruns do not append duplicate entries)
echo "DB_HOST=$DB_HOST" &amp;gt; .env
echo "DB_NAME=$DB_NAME" &amp;gt;&amp;gt; .env
echo "DB_USER=$DB_USER" &amp;gt;&amp;gt; .env
echo "DB_PASS=$DB_PASS" &amp;gt;&amp;gt; .env

# Connect to MySQL
mysql -h $DB_HOST -P 3306 -u $DB_USER -p 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
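&lt;p&gt;For reference, the echo lines at the end of the script produce a &lt;strong&gt;.env&lt;/strong&gt; file like the sketch below, which the Node.js app later loads through dotenv (all values are placeholders; substitute your real RDS endpoint and credentials):&lt;/p&gt;

```shell
# Write a sample .env file; every value here is a placeholder.
printf 'DB_HOST=%s\nDB_NAME=%s\nDB_USER=%s\nDB_PASS=%s\n' \
  'database-labrds.example.ap-southeast-1.rds.amazonaws.com' \
  'first_cloud_users' \
  'admin' \
  '12341234' | tee .env
```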



&lt;p&gt;Then enter your password to log in.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4z10tpszghw82fs0djd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4z10tpszghw82fs0djd.png" alt="Image description" width="735" height="917"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE DATABASE IF NOT EXISTS first_cloud_users;

USE first_cloud_users;

CREATE TABLE `user`
(
    `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `first_name` VARCHAR(45) NOT NULL,
    `last_name` VARCHAR(45) NOT NULL,
    `email` VARCHAR(100) NOT NULL UNIQUE,
    `phone` VARCHAR(15) NOT NULL,
    `comments` TEXT NOT NULL,
    `status` ENUM('active', 'inactive') NOT NULL DEFAULT 'active'
) ENGINE = InnoDB;

INSERT INTO `user`
(`first_name`, `last_name`, `email`, `phone`, `comments`, `status`)
VALUES
('Amanda', 'Nunes', 'anunes@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Alexander', 'Volkanovski', 'avolkanovski@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Khabib', 'Nurmagomedov', 'knurmagomedov@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Kamaru', 'Usman', 'kusman@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Israel', 'Adesanya', 'iadesanya@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Henry', 'Cejudo', 'hcejudo@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Valentina', 'Shevchenko', 'vshevchenko@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Tyron', 'Woodley', 'twoodley@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Rose', 'Namajunas', 'rnamajunas@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Tony', 'Ferguson', 'tferguson@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Jorge', 'Masvidal', 'jmasvidal@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Nate', 'Diaz', 'ndiaz@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Conor', 'McGregor', 'cmcGregor@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Cris', 'Cyborg', 'ccyborg@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Tecia', 'Torres', 'ttorres@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Ronda', 'Rousey', 'rrousey@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Holly', 'Holm', 'hholm@ufc.com', '012345 678910', 'I love AWS FCJ', 'active'),
('Joanna', 'Jedrzejczyk', 'jjedrzejczyk@ufc.com', '012345 678910', 'I love AWS FCJ', 'active');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsmbe36y4mv0yqr2pj0i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwsmbe36y4mv0yqr2pj0i.png" alt="Image description" width="800" height="613"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SHOW DATABASES;
USE first_cloud_users;

SHOW TABLES;
DESCRIBE user;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ru00xqsb77v95sazc2p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ru00xqsb77v95sazc2p.png" alt="Image description" width="268" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8vmmnmid1qv7sihziq5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8vmmnmid1qv7sihziq5.png" alt="Image description" width="755" height="387"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;19 . Go to the application directory:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cd AWS-FCJ-Management/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once you are in the application directory, run the following command to start the application:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;npm start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnz9cim37t5zktkslv51.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbnz9cim37t5zktkslv51.png" alt="Image description" width="511" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Get your instance's public IPv4 address, then open it over HTTP on port 5000. You should see the app running:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gt82af83kwlm46l51dw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2gt82af83kwlm46l51dw.png" alt="Image description" width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3m35q3243nfzj6ohay8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy3m35q3243nfzj6ohay8.png" alt="Image description" width="800" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Try adding a new user and confirm it appears in the list.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsukzwh0d5xz2vx699doz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsukzwh0d5xz2vx699doz.png" alt="Image description" width="800" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feidmzzet2tqg1uytds1y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feidmzzet2tqg1uytds1y.png" alt="Image description" width="347" height="110"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  VI. Clean up
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;Terminate the EC2 instance&lt;/li&gt;
&lt;li&gt;Delete the DB instance, and release any Elastic IP addresses that are still allocated&lt;/li&gt;
&lt;li&gt;Delete the DB snapshots&lt;/li&gt;
&lt;li&gt;Delete the VPC&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ll47mp5ij7kohiyhgye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ll47mp5ij7kohiyhgye.png" alt="Image description" width="800" height="218"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Congratulations!&lt;/p&gt;

&lt;p&gt;20 . You can check the logs to see the difference:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrpbeddg93j35mcid372.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwrpbeddg93j35mcid372.png" alt="Image description" width="800" height="328"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjfsf32o7chxvtw0mfj7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsjfsf32o7chxvtw0mfj7.png" alt="Image description" width="761" height="338"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs0ac0ygoxfiymqf6j3h.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffs0ac0ygoxfiymqf6j3h.png" alt="Image description" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fept01pf181b0ljk20pic.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fept01pf181b0ljk20pic.png" alt="Image description" width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ns4awwtzppyt8iqwn2m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6ns4awwtzppyt8iqwn2m.png" alt="Image description" width="800" height="317"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlrlzba3brh1gx9tffjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmlrlzba3brh1gx9tffjt.png" alt="Image description" width="800" height="294"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  V. Create snapshot and restore
&lt;/h1&gt;

&lt;p&gt;Create a snapshot.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff94ntrpi2q39mq0rwf7l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff94ntrpi2q39mq0rwf7l.png" alt="Image description" width="786" height="560"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Wait for it to be &lt;strong&gt;Available&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawknalg8kuinvylz7x6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fawknalg8kuinvylz7x6c.png" alt="Image description" width="800" height="123"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose &lt;strong&gt;Restore&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob0br5wpsyjflup0iz18.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fob0br5wpsyjflup0iz18.png" alt="Image description" width="800" height="139"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Enter &lt;strong&gt;labRDS-restore&lt;/strong&gt;. Also remember to choose &lt;strong&gt;Burstable classes&lt;/strong&gt; and &lt;strong&gt;db.t3.micro&lt;/strong&gt;. Then &lt;strong&gt;Restore DB instance&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzip7jzyzb56wu00ip6pd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzip7jzyzb56wu00ip6pd.png" alt="Image description" width="760" height="672"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can then return to step &lt;strong&gt;18&lt;/strong&gt; to set up a new database connection to &lt;strong&gt;labRDS-restore&lt;/strong&gt;. Re-run the app and you will see it fetch data normally again.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Amazon RDS - A closer look at creating a database</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Tue, 21 May 2024 07:19:01 +0000</pubDate>
      <link>https://dev.to/hung____/amazon-rds-create-database-deep-dive-2m8j</link>
      <guid>https://dev.to/hung____/amazon-rds-create-database-deep-dive-2m8j</guid>
      <description>&lt;h3&gt;
  
  
  Amazon Relational Database Service (RDS)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Amazon Relational Database Service (RDS)&lt;/strong&gt; is a managed database service that lets you run relational database systems in the cloud. RDS takes care of setting up the database system, performing backups, ensuring high availability, and patching the database software and the underlying operating system. RDS also makes it easy to recover from database failures, restore data, and scale your databases to achieve the level of performance and availability that your application requires.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon RDS&lt;/strong&gt; was first released on 22 October 2009, supporting &lt;strong&gt;MySQL&lt;/strong&gt; databases. This was followed by support for &lt;strong&gt;Oracle Database&lt;/strong&gt; in June 2011, &lt;strong&gt;Microsoft SQL Server&lt;/strong&gt; in May 2012, &lt;strong&gt;PostgreSQL&lt;/strong&gt; in November 2013, and &lt;strong&gt;MariaDB&lt;/strong&gt; (a fork of MySQL) in October 2015. An additional 80 features were added during 2017.&lt;/p&gt;

&lt;p&gt;In November 2014, AWS announced &lt;strong&gt;Amazon Aurora&lt;/strong&gt;, a MySQL-compatible database offering enhanced high availability and performance, and in October 2017 a &lt;strong&gt;PostgreSQL-compatible&lt;/strong&gt; database offering was launched.&lt;/p&gt;

&lt;p&gt;In March 2019, AWS announced support for &lt;strong&gt;PostgreSQL&lt;/strong&gt; 11 in RDS, five months after its official release.&lt;/p&gt;

&lt;p&gt;To deploy a database using RDS, you start by configuring a database instance, which is an isolated database environment. A database instance exists in a virtual private cloud (VPC) that you specify, but unlike an EC2 instance, AWS fully manages database instances. You can’t establish an SSH session to them, and they don’t show up under your EC2 instances.&lt;/p&gt;


&lt;h3&gt;
  
  
  Database Engines
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rtg6nla13jj0bgkpbev.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2rtg6nla13jj0bgkpbev.png" alt="Image description" width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A database engine is simply the software that stores, organizes, and retrieves data in a database. Each database instance runs only one database engine. RDS offers the following seven database engines to choose from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Amazon Aurora&lt;/strong&gt; Aurora is Amazon’s drop-in binary replacement for &lt;strong&gt;MySQL&lt;/strong&gt; and &lt;strong&gt;PostgreSQL&lt;/strong&gt;. Aurora offers better write performance than both by using a virtualized storage layer that reduces the number of writes to the underlying storage. It provides two editions:

&lt;ul&gt;
&lt;li&gt;MySQL compatible&lt;/li&gt;
&lt;li&gt;PostgreSQL compatible &lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Depending on the edition you choose, Aurora is compatible with PostgreSQL or MySQL import and export tools and snapshots. Aurora is designed to let you seamlessly migrate from an existing deployment that uses either of those two open source databases. For MySQL-compatible editions, Aurora supports only the InnoDB storage engine. Also, the Aurora Backtrack feature for MySQL lets you, within a matter of seconds, restore your database to any point in time within the last 72 hours. In addition, the Amazon Aurora Serverless feature can automatically scale your database up and down on demand. You pay compute costs only for when the database is active, potentially saving you large sums of money in the long run.&lt;br&gt;
    - Automatic allocation of storage space in 10 GB increments, up to 64 TB&lt;br&gt;
    - Fivefold performance increase over the vanilla MySQL version&lt;br&gt;
    - Automatic six-way replication across Availability Zones to improve availability and fault tolerance&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MySQL&lt;/strong&gt; MySQL is designed for OLTP applications such as blogs and e-commerce. RDS offers the latest MySQL Community Edition versions. MySQL offers two storage engines, MyISAM and InnoDB, but you should use the latter for compatibility with RDS-managed automatic backups.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MariaDB&lt;/strong&gt; MariaDB is a drop-in binary replacement for MySQL. It was created due to concerns about MySQL’s future after Oracle acquired the company that developed it. MariaDB supports the XtraDB and InnoDB storage engines, but AWS recommends using the latter for maximum compatibility with RDS.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;PostgreSQL&lt;/strong&gt; PostgreSQL advertises itself as the most Oracle-compatible open source database. It is a good choice when you have in-house applications that were developed for Oracle but want to keep costs down.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Oracle&lt;/strong&gt; Oracle is one of the most widely deployed relational database management systems. Some applications expressly require an Oracle database. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Microsoft SQL Server&lt;/strong&gt; RDS offers multiple Microsoft SQL Server versions, ranging from 2012 SP4 GDR to the present. For the edition, you can choose Express, Web, Standard, or Enterprise. The variety of flavors makes it possible to migrate an existing SQL Server database from an on-premises deployment to RDS without having to perform any database upgrades.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IBM Db2&lt;/strong&gt; RDS offers multiple IBM Db2 versions, primarily 11.5.9.0. For the edition, you can choose Standard or Advanced. Amazon RDS for Db2 supports most of the features and capabilities of the IBM Db2 database. Some features might have limited support or restricted privileges.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Licensing Considerations
&lt;/h3&gt;

&lt;p&gt;RDS provides two models for licensing the database engine software you run. The license included model covers the cost of the license in the pricing for an RDS instance. The bring your own license (BYOL) model requires you to obtain a license for the database engine you run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;License Included&lt;/strong&gt; MariaDB and MySQL use the GNU General Public License (GPL) v2.0, and PostgreSQL uses the PostgreSQL License, all of which allow for free use of the respective software. All versions and editions of Microsoft SQL Server that you run on RDS include a license, as does Oracle Database Standard Edition Two (SE2).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bring Your Own License&lt;/strong&gt; Only the Oracle database engine supports this licensing model. The following Oracle Database editions allow you to bring your own license:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enterprise Edition (EE)&lt;/li&gt;
&lt;li&gt;Standard Edition Two (SE2)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Database Instance Classes
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvo8hgw567nwl6y7zr6u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcvo8hgw567nwl6y7zr6u.png" alt="Image description" width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When launching a database instance, you must decide how much processing power, memory, network bandwidth, and disk throughput it needs. RDS offers a variety of database instance classes to meet the diverse performance needs of different databases. If you get it wrong or if your needs change, you can switch your instance to a different class. RDS divides database instance classes into the following three types.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard&lt;/strong&gt; Standard instance classes meet the needs of most databases. The latest-generation instance class is db.m6i, which provides up to:

&lt;ul&gt;
&lt;li&gt;512 GB of memory&lt;/li&gt;
&lt;li&gt;128 vCPU&lt;/li&gt;
&lt;li&gt;40 Gbps network bandwidth&lt;/li&gt;
&lt;li&gt;50,000 Mbps (6,250 MBps) disk throughput&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Memory Optimized&lt;/strong&gt; Memory-optimized instance classes are for databases that have hefty performance requirements. Providing more memory to a database allows it to store more data in memory, which can result in faster query times. The largest memory-optimized instance class is db.x1e, and it provides up to:

&lt;ul&gt;
&lt;li&gt;3,904 GB of memory&lt;/li&gt;
&lt;li&gt;128 vCPU&lt;/li&gt;
&lt;li&gt;25 Gbps network bandwidth&lt;/li&gt;
&lt;li&gt;14,000 Mbps (1,750 MBps) disk throughput&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Database instances use EBS storage. Both the standard and memory-optimized instance class types are EBS optimized, meaning they provide dedicated bandwidth for transfers to and from EBS storage.&lt;/p&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Burstable Performance&lt;/strong&gt; Burstable performance instances are for development, test, and other nonproduction databases. The latest burstable performance instance class available is db.t4g, and it gives you up to:

&lt;ul&gt;
&lt;li&gt;32 GB of memory&lt;/li&gt;
&lt;li&gt;8 vCPU&lt;/li&gt;
&lt;li&gt;5 Gbps network bandwidth&lt;/li&gt;
&lt;li&gt;2,048 Mbps (256 MBps) disk throughput&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The db.t3, db.m5, and db.r5 classes are based on the AWS Nitro System, which accounts for significantly improved performance over older-generation instance classes. Note that disk reads and writes count against the maximum disk throughput on these instance classes.&lt;/p&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Storage
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Understanding Input/Output Operations per Second&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;IOPS (Input/output operations per second) is a performance indicator that measures the speed and efficiency of a storage device based on the number of read/write operations it can complete within a second. It is also a standard performance benchmark for storage systems, such as hard disk drives (HDD), flash drives, and solid-state drives (SSD).&lt;/p&gt;

&lt;p&gt;AWS measures storage performance in input/output operations per second (IOPS). An input/output (I/O) operation is either a read from or a write to storage. All things being equal, the more IOPS you can achieve, the faster your database can store and retrieve data. RDS allocates you a number of IOPS depending on the type of storage you select, and you can’t exceed this threshold. The speed of your database storage is limited by the number of IOPS allocated to it. The amount of data you can transfer in a single I/O operation depends on the page size that the database engine uses.&lt;/p&gt;

&lt;p&gt;Example: &lt;/p&gt;

&lt;p&gt;MySQL and MariaDB have a page size of 16 KB. Hence, writing 16 KB of data to disk would constitute one I/O operation. Oracle, PostgreSQL, Microsoft SQL Server, and IBM Db2 use a page size of 8 KB. Writing 16 KB of data using one of those database engines would consume two I/O operations. The larger the page size, the more data you can transfer in a single I/O operation.&lt;/p&gt;

&lt;p&gt;Assuming a 16 KB page size, suppose your database needed to read 102,400 KB (100 MB) of data every second. To achieve this level of performance, your database would have to be able to read 6,400 16 KB pages every second. Because each page read counts as one I/O operation, your storage and instance class would need to be able to sustain 6,400 IOPS. Notice the inverse relationship between IOPS and page size: the larger your page size, the fewer IOPS you need to achieve the same level of throughput.&lt;/p&gt;

&lt;p&gt;Things get interesting when you move beyond a 32 KB page size. If your database engine writes more than 32 KB in a single I/O operation, AWS counts that as more than one I/O operation. For example, reading or writing a 64 KB page would count as two I/O operations. A 128 KB page would count as four I/O operations.&lt;/p&gt;
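&lt;p&gt;The arithmetic above can be sketched in a few lines of Python. This is an illustration of the counting rules described here (one I/O per page, with pages larger than 32 KB counted as multiple I/Os), not an official AWS calculator:&lt;/p&gt;

```python
import math

def io_ops_per_page(page_kb: int) -> int:
    """RDS counts each I/O as up to 32 KB; larger pages count as multiple I/Os
    (a 64 KB page is 2 operations, a 128 KB page is 4)."""
    return max(1, math.ceil(page_kb / 32))

def required_iops(throughput_kb_per_sec: int, page_kb: int) -> int:
    """IOPS needed to sustain a target throughput at a given page size."""
    pages_per_sec = throughput_kb_per_sec / page_kb
    return math.ceil(pages_per_sec * io_ops_per_page(page_kb))

# 100 MB/s with MySQL's 16 KB pages -> 6,400 IOPS, matching the worked example
print(required_iops(102_400, 16))   # 6400
# The same throughput with a hypothetical 64 KB page: 1,600 pages/s x 2 I/Os each
print(required_iops(102_400, 64))   # 3200
```

&lt;p&gt;Note the inverse relationship in the output: quadrupling the page size from 16 KB to 64 KB halves (rather than quarters) the required IOPS, because pages beyond 32 KB are billed as multiple operations.&lt;/p&gt;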

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS.Realized" rel="noopener noreferrer"&gt;Amazon RDS DB instance storage&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The number of IOPS you can achieve depends on the type of storage you select. RDS offers the following three different types of storage.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiu9ahgsunaz71y2dntz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feiu9ahgsunaz71y2dntz.png" alt="Image description" width="800" height="396"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;General Purpose SSD&lt;/strong&gt; – &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#Concepts.Storage.GeneralSSD" rel="noopener noreferrer"&gt;General Purpose SSD&lt;/a&gt; volumes offer cost-effective storage that is ideal for a broad range of workloads running on medium-sized DB instances. General Purpose storage is best suited for development and testing environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Provisioned IOPS SSD&lt;/strong&gt; – &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#USER_PIOPS" rel="noopener noreferrer"&gt;Provisioned IOPS storage&lt;/a&gt; is designed to meet the needs of I/O-intensive workloads, particularly database workloads, that require low I/O latency and consistent I/O throughput. Provisioned IOPS storage is best suited for production environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Magnetic&lt;/strong&gt; – Amazon RDS also supports magnetic storage for backward compatibility. We recommend that you use General Purpose SSD or Provisioned IOPS SSD for any new storage needs. The maximum amount of storage allowed for DB instances on magnetic storage is less than that of the other storage types. For more information, see &lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Storage.html#CHAP_Storage.Magnetic" rel="noopener noreferrer"&gt;Magnetic storage&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Storage autoscaling
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1dm6pssu5l7vsts3r59.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv1dm6pssu5l7vsts3r59.png" alt="Image description" width="660" height="285"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If your workload is unpredictable, you can enable storage autoscaling for an Amazon RDS DB instance. &lt;/p&gt;
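&lt;p&gt;In practice, autoscaling is enabled by setting a maximum storage threshold on the instance. A minimal sketch of the request parameters for boto3's &lt;code&gt;modify_db_instance&lt;/code&gt; follows; the instance name &lt;code&gt;labRDS&lt;/code&gt; and the 1,000 GiB ceiling are illustrative values, and the call itself is commented out so the sketch stays self-contained:&lt;/p&gt;

```python
# Illustrative parameters for enabling RDS storage autoscaling.
# "labRDS" and the 1,000 GiB ceiling are example values, not from the article.
params = {
    "DBInstanceIdentifier": "labRDS",
    "MaxAllocatedStorage": 1000,   # autoscaling ceiling, in GiB
    "ApplyImmediately": True,
}

# With boto3 installed and AWS credentials configured, the call would be:
# import boto3
# boto3.client("rds").modify_db_instance(**params)
print(params)
```

&lt;p&gt;RDS then grows the allocated storage automatically as the volume fills, up to the ceiling, without downtime.&lt;/p&gt;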

&lt;h3&gt;
  
  
  Using a dedicated log volume (DLV) (Only for Provisioned IOPS volume)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.dlv" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.dlv&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxomzmtq6xrgjc6ojw3e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvxomzmtq6xrgjc6ojw3e.png" alt="Image description" width="792" height="225"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can use a dedicated log volume (DLV) for a DB instance that uses Provisioned IOPS (PIOPS) storage. A DLV moves PostgreSQL database transaction logs and MySQL/MariaDB redo logs and binary logs to a storage volume that's separate from the volume containing the database tables. A DLV makes transaction write logging more efficient and consistent. DLVs are ideal for databases with large allocated storage, high I/O per second (IOPS) requirements, or latency-sensitive workloads.&lt;/p&gt;

&lt;p&gt;DLVs are supported for PIOPS storage (io1 and io2 Block Express) and are created with a fixed size of 1,000 GiB and 3,000 Provisioned IOPS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Availability &amp;amp; durability
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzzcfprpl86hn8uh1xp6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frzzcfprpl86hn8uh1xp6.png" alt="Image description" width="798" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Multi-AZ deployments in RDS provide improved availability and durability for database instances, making them an ideal choice for production database workloads. With Multi-AZ DB instances, RDS synchronously replicates data to a standby instance in a different Availability Zone (AZ) for enhanced resilience. You can change your environment from Single-AZ to Multi-AZ at any time. Each AZ runs on its own distinct, independent infrastructure and is built to be highly dependable.&lt;/p&gt;

&lt;p&gt;In the event of an infrastructure failure, RDS initiates an automatic failover to the standby instance, allowing you to resume database operations as soon as the failover is complete. Additionally, the endpoint for your DB instance remains the same after a failover, eliminating manual administrative intervention and enabling your application to resume database operations seamlessly.&lt;/p&gt;
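&lt;p&gt;Because the endpoint is unchanged after a failover, an application only needs simple retry logic to ride it out; it never has to re-resolve configuration. A minimal sketch, where &lt;code&gt;connect&lt;/code&gt; is a stand-in for your database driver's connect call (not a specific library API):&lt;/p&gt;

```python
import time

def connect_with_retry(connect, attempts=5, base_delay=1.0):
    """Retry a DB connection with exponential backoff, e.g. while a
    Multi-AZ failover completes. `connect` is any callable that returns
    a connection or raises on failure (a stand-in for your driver)."""
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Example with a fake driver that fails twice (as during a failover), then succeeds:
calls = {"n": 0}
def fake_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("failing over")
    return "connection"

print(connect_with_retry(fake_connect, base_delay=0.01))  # connection
```

&lt;p&gt;The same retry loop works against the real endpoint because the DNS name you connect to is identical before and after the failover.&lt;/p&gt;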

&lt;h3&gt;
  
  
  Connectivity
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc9w9wmltrd7v7dof4zf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnc9w9wmltrd7v7dof4zf.png" alt="Image description" width="773" height="125"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IPv4&lt;/strong&gt; Your resources can communicate with your databases only over the IPv4 addressing protocol. Resources include clients and AWS resources, such as EC2 instances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dual-stack mode&lt;/strong&gt; Your resources can communicate over the IPv4 addressing protocol, the IPv6 addressing protocol, or both. If you have any resources that must communicate with your database over IPv6, use dual-stack mode.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30sbwd2j23a44ykxf9xt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30sbwd2j23a44ykxf9xt.png" alt="Image description" width="595" height="140"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Specify the TCP/IP port that the DB instance will use for application connections. The connection string of any application connecting to the DB instance must specify the port number of the DB instance. Both the security group applied to the DB instance and your company’s firewalls must allow connections to the port.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwo89q32jj8kom10bnp3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwo89q32jj8kom10bnp3.png" alt="Image description" width="597" height="209"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Password authentication&lt;/strong&gt; Manage your database user credentials through your DB engine's native password authentication features. To learn more, see the documentation for your DB engine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Password and IAM database authentication&lt;/strong&gt; Manage your database user credentials through your DB engine's native password authentication features and IAM users and roles. IAM helps an administrator securely control access to AWS resources. IAM administrators control who can be authenticated and authorized for RDS resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitoring
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PerfInsights.html" rel="noopener noreferrer"&gt;Monitoring DB load with Performance Insights on Amazon RDS&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6mk9axcxi9l3hcud5ur.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn6mk9axcxi9l3hcud5ur.png" alt="Image description" width="590" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Performance Insights is an advanced database performance monitoring feature that makes it easy to diagnose and solve performance challenges on Amazon RDS databases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax0oszd3xqd7o72f2fm7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fax0oszd3xqd7o72f2fm7.png" alt="Image description" width="696" height="292"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After you enable this feature for a database instance, you get access to over 50 CPU, memory, file system, and disk I/O metrics. You can enable it on a per-instance basis, and you can choose the granularity (all the way down to 1 second).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30w9cddw6yted6s50jp0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F30w9cddw6yted6s50jp0.png" alt="Image description" width="800" height="837"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgt4cikrm8jfrq0fwxyk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxgt4cikrm8jfrq0fwxyk.png" alt="Image description" width="800" height="493"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Additional configuration
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/parameter-groups-overview.html" rel="noopener noreferrer"&gt;Overview of parameter groups&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;A DB parameter group acts as a container for engine configuration values that are applied to one or more DB instances.&lt;/p&gt;

&lt;p&gt;DB cluster parameter groups apply to Multi-AZ DB clusters only. In a Multi-AZ DB cluster, the settings in the DB cluster parameter group apply to all of the DB instances in the cluster. The default DB parameter group for the DB engine and DB engine version is used for each DB instance in the DB cluster.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qeae05izav0flvk5wai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4qeae05izav0flvk5wai.png" alt="Image description" width="580" height="277"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithOptionGroups.html" rel="noopener noreferrer"&gt;Working with option groups&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Different database engines offer various features or options to help you manage your databases and improve security. Option groups let you specify these features and apply them to one or more instances. Options require more memory, so make sure your instances have ample memory and enable only the options you need.&lt;br&gt;
The options available for a database option group depend on the engine. Oracle offers Amazon S3 integration. Both Microsoft SQL Server and Oracle offer transparent data encryption (TDE), which causes the engine to encrypt data before writing it to storage.&lt;br&gt;
MySQL and MariaDB offer an audit plug-in that lets you log user logons and queries run against your databases.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon3rbf78kg19nlc95tud.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fon3rbf78kg19nlc95tud.png" alt="Image description" width="453" height="423"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The backup retention period determines how far back in time you can perform a point-in-time recovery.&lt;/p&gt;
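&lt;p&gt;In other words, with a retention period of &lt;em&gt;n&lt;/em&gt; days, you can restore to any point within roughly the last &lt;em&gt;n&lt;/em&gt; days. A sketch of the recovery window, assuming the latest restorable time is close to the current moment:&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

def earliest_restorable_time(retention_days, now=None):
    """Approximate the start of the point-in-time recovery window:
    you can restore to any second between this time and the latest
    restorable time (typically within the last few minutes)."""
    now = now or datetime.now(timezone.utc)
    return now - timedelta(days=retention_days)
```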

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1j8kl7cg9a3q876s5a4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa1j8kl7cg9a3q876s5a4.png" alt="Image description" width="621" height="583"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReplicateBackups.html" rel="noopener noreferrer"&gt;Replicating automated backups to another AWS Region&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;You can replicate automated backups to another AWS Region to help with disaster recovery. Snapshots and transaction logs are replicated immediately after they are available in the source.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatoym4bpj40f971nkew1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fatoym4bpj40f971nkew1.png" alt="Image description" width="738" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Choose whether to encrypt the given instance. KMS key IDs and aliases appear in the list after they have been created using the AWS Key Management Service (KMS) console.&lt;/p&gt;

&lt;p&gt;The AWS KMS key is used to protect the encryption key that is used to encrypt this replicated automated backup in the destination AWS Region.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ljtkhruc8qcofpsssld.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5ljtkhruc8qcofpsssld.png" alt="Image description" width="457" height="248"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.Procedural.UploadtoCloudWatch.html#integrating_cloudwatchlogs.configure" rel="noopener noreferrer"&gt;Specifying the logs to publish to CloudWatch Logs&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj38hbvkngkgdn4litxy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj38hbvkngkgdn4litxy.png" alt="Image description" width="788" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Specify Yes to enable automatic upgrades to new minor versions as they are released. The automatic upgrades occur during the maintenance window for the DB instance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w98ndxikpd9uckt31yl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6w98ndxikpd9uckt31yl.png" alt="Image description" width="652" height="88"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Protects the database from being deleted accidentally. While this option is enabled, you can’t delete the database.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7ahu2dcnxfvfo8mipp4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq7ahu2dcnxfvfo8mipp4.png" alt="Image description" width="764" height="270"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://calculator.aws/#/?key=new" rel="noopener noreferrer"&gt;AWS Pricing Calculator&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/rds/pricing/" rel="noopener noreferrer"&gt;Amazon RDS pricing&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>rds</category>
      <category>cloud</category>
    </item>
    <item>
      <title>Amazon Route53</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Mon, 20 May 2024 09:43:15 +0000</pubDate>
      <link>https://dev.to/hung____/amazon-route53-2cc7</link>
      <guid>https://dev.to/hung____/amazon-route53-2cc7</guid>
      <description>&lt;h3&gt;
  
  
  DNS Basic
&lt;/h3&gt;

&lt;p&gt;The record type you enter in a zone file’s resource record determines how the record’s data is formatted and how it should be used. There are currently around 40 types in active use. Here are some common DNS record types:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq0atdcv2ss4x9sm0lni.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq0atdcv2ss4x9sm0lni.png" alt="Image description" width="655" height="555"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An ALIAS record generally maps a name onto an AWS resource. &lt;/p&gt;

&lt;p&gt;You can use &lt;strong&gt;ALIAS records&lt;/strong&gt; to route traffic to a resource, such as an elastic load balancer, without specifying its IP address. Although the use of alias records has not yet been standardized across providers, Route 53 makes them available within record sets, allowing you to connect directly with network-facing resources running on AWS.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Route53
&lt;/h3&gt;

&lt;p&gt;With those DNS basics out of the way, it’s time to turn our attention back to AWS. Route 53 provides more than just basic DNS services. In fact, it focuses on four distinct areas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Domain registration&lt;/li&gt;
&lt;li&gt;DNS management&lt;/li&gt;
&lt;li&gt;Availability monitoring (health checks)&lt;/li&gt;
&lt;li&gt;Traffic management (routing policies)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Route 53 now also provides an Application Recovery Controller through which you can configure recovery groups, readiness checks, and routing control.&lt;/p&gt;

&lt;p&gt;In case you’re curious, the “53” in Route 53 reflects the fact that DNS traffic uses TCP or UDP port 53.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyj1d83qfcw6r6akexipo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyj1d83qfcw6r6akexipo.png" alt="Image description" width="582" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;A user opens a web browser, enters &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; in the address bar, and presses Enter.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The request for &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; is routed to a DNS resolver, which is typically managed by the user's internet service provider (ISP), such as a cable internet provider, a DSL broadband provider, or a corporate network.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The DNS resolver for the ISP forwards the request for &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; to a DNS root name server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The DNS resolver forwards the request for &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; again, this time to one of the TLD name servers for .com domains. The name server for .com domains responds to the request with the names of the four Route 53 name servers that are associated with the example.com domain.&lt;br&gt;
The DNS resolver caches (stores) the four Route 53 name servers. The next time someone browses to example.com, the resolver skips steps 3 and 4 because it already has the name servers for example.com. The name servers are typically cached for two days.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The DNS resolver chooses a Route 53 name server and forwards the request for &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; to that name server.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The Route 53 name server looks in the example.com hosted zone for the &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; record, gets the associated value, such as the IP address for a web server, 192.0.2.44, and returns the IP address to the DNS resolver.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The DNS resolver finally has the IP address that the user needs. The resolver returns that value to the web browser.&lt;/p&gt;

&lt;p&gt;The DNS resolver also caches the IP address for example.com for an amount of time that you specify so that it can respond more quickly the next time someone browses to example.com. For more information, see &lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/route-53-concepts.html#route-53-concepts-time-to-live" rel="noopener noreferrer"&gt;time to live (TTL)&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The web browser sends a request for &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; to the IP address that it got from the DNS resolver. This is where your content is, for example, a web server running on an Amazon EC2 instance or an Amazon S3 bucket that's configured as a website endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The web server or other resource at 192.0.2.44 returns the web page for &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; to the web browser, and the web browser displays the page.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
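&lt;p&gt;The caching in steps 4 and 7 can be sketched as a simple TTL cache. This is an illustrative model, not how a real resolver is implemented:&lt;/p&gt;

```python
import time

class ResolverCache:
    """Toy model of DNS resolver caching: an answer is reused until
    its TTL expires, after which the resolver must look it up again."""

    def __init__(self, clock=time.time):
        self._store = {}
        self._clock = clock  # injectable clock for testing

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        value, expires = entry
        if self._clock() >= expires:  # TTL elapsed: treat as a miss
            del self._store[name]
            return None
        return value

    def put(self, name, value, ttl):
        self._store[name] = (value, self._clock() + ttl)
```

&lt;p&gt;With a TTL of two days (172800 seconds) for the name server records, steps 3 and 4 are skipped on repeat lookups until the entry expires, exactly as described above.&lt;/p&gt;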

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7fobylfricaxtu6a0l8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7fobylfricaxtu6a0l8.png" alt="Image description" width="800" height="273"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In Route 53, AWS assigns four name servers to every hosted zone, as shown in the screenshot, and their own domain names end in four different TLDs: &lt;strong&gt;.com&lt;/strong&gt;, &lt;strong&gt;.net&lt;/strong&gt;, &lt;strong&gt;.co.uk&lt;/strong&gt;, and &lt;strong&gt;.org&lt;/strong&gt;. Why? For higher availability! If there is an issue with the &lt;strong&gt;.net&lt;/strong&gt; DNS infrastructure, the other three name servers continue to serve your domains.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Route 53 health checks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmykavla590691ydzyg4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcmykavla590691ydzyg4.png" alt="Image description" width="393" height="345"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;You create a health check and specify values that define how you want the health check to work, such as the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The IP address or domain name of the endpoint, such as a web server, that you want Route 53 to monitor. (You can also monitor the status of other health checks, or the state of a CloudWatch alarm.)&lt;/li&gt;
&lt;li&gt;The protocol that you want Amazon Route 53 to use to perform the check: HTTP, HTTPS, or TCP.&lt;/li&gt;
&lt;li&gt;How often you want Route 53 to send a request to the endpoint. This is the request interval.&lt;/li&gt;
&lt;li&gt;How many consecutive times the endpoint must fail to respond to requests before Route 53 considers it unhealthy. This is the failure threshold.&lt;/li&gt;
&lt;li&gt;Optionally, how you want to be notified when Route 53 detects that the endpoint is unhealthy. When you configure notification, Route 53 automatically sets a CloudWatch alarm. CloudWatch uses Amazon SNS to notify users that an endpoint is unhealthy.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Route 53 starts to send requests to the endpoint at the interval that you specified in the health check.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the endpoint responds to the requests, Route 53 considers the endpoint to be healthy and takes no action.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If the endpoint doesn't respond to a request, Route 53 starts to count the number of consecutive requests that the endpoint doesn't respond to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If the count reaches the value that you specified for the failure threshold, Route 53 considers the endpoint unhealthy.&lt;/li&gt;
&lt;li&gt;If the endpoint starts to respond again before the count reaches the failure threshold, Route 53 resets the count to 0, and CloudWatch doesn't contact you.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If Route 53 considers the endpoint unhealthy and if you configured notification for the health check, Route 53 notifies CloudWatch.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you didn't configure notification, you can still see the status of your Route 53 health checks in the Route 53 console. For more information, see Monitoring health check status and getting notifications.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;If you configured notification for the health check, CloudWatch triggers an alarm and uses Amazon SNS to send notification to the specified recipients.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
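&lt;p&gt;The counting rule in steps 2 and 3 can be sketched as follows (an illustrative simulation, not the Route 53 implementation):&lt;/p&gt;

```python
def evaluate_health(responses, failure_threshold):
    """Apply the failure-threshold rule: the endpoint becomes
    unhealthy only after `failure_threshold` consecutive failed
    checks; any successful response resets the count to zero.

    `responses` is an iterable of booleans (True = check succeeded).
    Returns "healthy" or "unhealthy".
    """
    consecutive_failures = 0
    for ok in responses:
        if ok:
            consecutive_failures = 0  # recovery resets the count
        else:
            consecutive_failures += 1
            if consecutive_failures >= failure_threshold:
                return "unhealthy"
    return "healthy"
```

&lt;p&gt;Note how a single success in the middle of a failure run resets the counter, so transient blips below the threshold never mark the endpoint unhealthy.&lt;/p&gt;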

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnw5hgyvm61z9w77gfs6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnw5hgyvm61z9w77gfs6.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Public and private hosted zones
&lt;/h3&gt;

&lt;p&gt;Route 53 supports both &lt;strong&gt;public&lt;/strong&gt; and &lt;strong&gt;private&lt;/strong&gt; hosted zones. &lt;strong&gt;Public hosted zones&lt;/strong&gt; route to internet-facing resources and resolve from the internet using global routing policies. &lt;strong&gt;Private hosted zones&lt;/strong&gt;, meanwhile, route to resources inside a VPC and resolve only from within that VPC; they can also integrate with on-premises private zones using Route 53 Resolver forwarding rules and endpoints.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/AboutHZWorkingWith.html" rel="noopener noreferrer"&gt;Working with public hosted zones&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zones-private.html" rel="noopener noreferrer"&gt;Working with private hosted zones&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Routing policy
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp11ji8ei5xu1u64cqerg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp11ji8ei5xu1u64cqerg.png" alt="Image description" width="776" height="814"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Route 53 provides the following eight types of routing policies for traffic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Simple routing policy&lt;/strong&gt; – This is used for a single resource (for example, a web server created for the &lt;a href="http://www.example.com" rel="noopener noreferrer"&gt;www.example.com&lt;/a&gt; website). The limitation of simple routing is that it doesn’t support health checks: there is no verification that the resource the record points to is actually operational.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwkq8xe02rc2aebn6qps.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmwkq8xe02rc2aebn6qps.png" alt="Image description" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Failover routing policy&lt;/strong&gt; – This is used to configure active-passive failover. A failover routing policy will direct traffic to the resource you identify as primary as long as health checks confirm that the resource is running properly. Should the primary resource go offline, subsequent traffic will be sent to the resource defined within a second record set and designated as secondary. As with other policies, the desired relationship between record sets is established by using matching set ID values for each set.&lt;/li&gt;
&lt;/ul&gt;
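&lt;p&gt;The active-passive decision can be sketched in a few lines (illustrative only; Route 53 makes this choice on the server side when answering queries):&lt;/p&gt;

```python
def failover_answer(primary, secondary, is_healthy):
    """Active-passive failover: answer with the primary record while
    its health check passes; otherwise answer with the secondary."""
    if is_healthy(primary):
        return primary
    return secondary
```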

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiebs7ixem7e6h2i2niqe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiebs7ixem7e6h2i2niqe.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Geoproximity routing policy&lt;/strong&gt; – This is used for geolocation when users are shifting from one location to another. Geoproximity routing lets Amazon Route 53 route traffic to your resources based on the geographic location of your users and your resources. It routes traffic to the closest resource that is available. You can also optionally choose to route more traffic or less traffic to a given resource by specifying a value, known as a bias. A bias expands or shrinks the size of the geographic region from which traffic is routed to a resource.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43btte87chh1nz8k2xfo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F43btte87chh1nz8k2xfo.png" alt="Image description" width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Latency routing policy&lt;/strong&gt; – This routes each client to whichever of your resources, deployed across multiple AWS Regions, offers the lowest latency. Latency-based routing lets you leverage resources running in multiple AWS Regions to serve clients from the instances that will deliver the best experience. Practically, this means that for an application used by clients in both Asia and Europe, you could place parallel resources in, say, the ap-southeast-1 and eu-west-1 Regions. You then create a record set for each resource using latency-based policies in Route 53, with one pointing to your ap-southeast-1 resource and the other to eu-west-1.
Assuming you gave both record sets the same value for Set ID, Route 53 will direct requests for those resources to the instance that provides the lowest latency for each client.&lt;/li&gt;
&lt;/ul&gt;
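&lt;p&gt;Conceptually, latency-based routing is a minimum over per-Region measurements. The numbers below are made up for illustration; Route 53 maintains its own latency data:&lt;/p&gt;

```python
def lowest_latency_region(measurements):
    """Pick the Region with the lowest measured latency for a client.
    `measurements` maps Region name to round-trip time in ms."""
    return min(measurements, key=measurements.get)

# Hypothetical measurements for a client in Singapore.
latencies = {"ap-southeast-1": 38, "eu-west-1": 210}
```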

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F161yndbnlpx2bmuj197w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F161yndbnlpx2bmuj197w.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Geolocation routing policy&lt;/strong&gt; – This routes traffic based on the user’s location. Unlike latency policies, which route traffic to answer data requests as quickly as possible, geolocation uses the continent, country, or U.S. state where the request originated to decide which resource to serve. This can help you focus your content delivery, allowing you to deliver web pages in customer-appropriate languages, restrict content to regions where it’s legally permitted, or run parallel sales campaigns.
You should be aware that Route 53 will sometimes fail to identify the origin of a requesting IP address (particularly when requests come from VPN users), so you should configure a default record to cover those cases.&lt;/li&gt;
&lt;/ul&gt;
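&lt;p&gt;The lookup with a default record can be sketched as a dictionary lookup with a fallback (illustrative only; the record values are hypothetical):&lt;/p&gt;

```python
def geolocation_answer(records, country):
    """Return the record for the client's country, falling back to
    the default record when the origin can't be identified
    (for example, VPN users)."""
    return records.get(country, records["default"])

# Hypothetical per-country record sets plus the default.
records = {
    "DE": "eu.example.com",
    "JP": "ap.example.com",
    "default": "global.example.com",
}
```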

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjipc0thy5m76nendmpfi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjipc0thy5m76nendmpfi.png" alt="Image description" width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multivalue answer routing policy&lt;/strong&gt; – This is used to respond to DNS queries with up to eight healthy, randomly selected records. You can combine a health check configuration with multivalue routing to make a deployment more highly available. Each multivalue-based record set points to a single resource and can be associated with a health check. As many as eight records can point to parallel resources and be connected to one another through matching set ID values. Route 53 uses the health checks to monitor resource status and randomly routes traffic among the healthy resources.&lt;/li&gt;
&lt;/ul&gt;
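&lt;p&gt;The selection behavior can be sketched as filtering on health and sampling up to eight records (an illustrative model, not the actual implementation):&lt;/p&gt;

```python
import random

def multivalue_answer(records, health, max_answers=8):
    """Return up to eight healthy records in random order, mirroring
    multivalue answer routing. `health` maps record -&gt; bool from the
    associated health checks."""
    healthy = [r for r in records if health.get(r, False)]
    k = min(len(healthy), max_answers)
    return random.sample(healthy, k)
```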

&lt;p&gt;&lt;strong&gt;It’s not a substitute for a load balancer, which handles the actual connection process from a network perspective. But the ability to return multiple health-checkable IP addresses is a way to use DNS to improve the availability of an application.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7uo92omvhj1nrua56nh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff7uo92omvhj1nrua56nh.png" alt="Image description" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Weighted routing policy&lt;/strong&gt; – This routes traffic among multiple resources according to ratios you define (for example, 80% of traffic to site A and 20% to site B).
To see why that’s useful, imagine you have three servers (or load balancers representing three groups of servers), all hosting instances of the same web application. One of the servers (or server groups) has greater unused compute and memory capacity and can therefore handle far more traffic. It would be inefficient to send equal numbers of users to each of the servers.
Instead, you can assign the larger server a numeric weight of, say, 50 and the other two 25 each. That would result in half of all requests being sent to the larger server and 25 percent to each of the others.
To configure a weighted policy in Route 53, you create a separate record set for each of your servers, give each record set the same value for the set ID, and then enter an instance-appropriate numeric value for the weight. The matching set IDs tell Route 53 that those record sets are meant to work together.&lt;/li&gt;
&lt;/ul&gt;
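&lt;p&gt;The three-server example above can be simulated directly. This is a toy model of weight-proportional selection with made-up server names, not how Route 53 is implemented internally:&lt;/p&gt;

```python
import random

# Weights from the example: one larger server at 50, two smaller ones at 25 each.
weights = {"server-a": 50, "server-b": 25, "server-c": 25}

def pick_server(weights, rng=random):
    """Choose a server with probability proportional to its weight, as Route 53
    does among weighted record sets that share a set ID."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Over many simulated queries, traffic splits roughly 50/25/25.
rng = random.Random(42)
counts = {name: 0 for name in weights}
for _ in range(10_000):
    counts[pick_server(weights, rng)] += 1
```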

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw03trhzh80higopmrub.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpw03trhzh80higopmrub.png" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;IP-based routing&lt;/strong&gt; - With IP-based routing, you can create a series of Classless Inter-Domain Routing (CIDR) blocks that represent the client IP network range and associate these CIDR blocks with locations. IP-based routing gives you granular control to optimize performance or reduce network costs by uploading your data to Route 53 in the form of user-IP-to-endpoint mappings.&lt;/li&gt;
&lt;/ul&gt;
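&lt;p&gt;A minimal sketch of the mapping logic, using Python's standard ipaddress module. The CIDR blocks and endpoint names are hypothetical; in practice you upload the equivalent mappings to Route 53 rather than evaluating them yourself:&lt;/p&gt;

```python
import ipaddress

# Hypothetical CIDR collection: client IP ranges associated with endpoints.
cidr_map = [
    (ipaddress.ip_network("198.51.100.0/24"), "endpoint-us-east"),
    (ipaddress.ip_network("203.0.113.0/24"), "endpoint-eu-west"),
]

def resolve_by_ip(client_ip, cidr_map, default="endpoint-default"):
    """Return the endpoint whose CIDR block contains the client IP,
    falling back to a default location for unmatched addresses."""
    ip = ipaddress.ip_address(client_ip)
    for network, endpoint in cidr_map:
        if ip in network:
            return endpoint
    return default
```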

&lt;p&gt;Geolocation and latency-based routing are based on data that Route 53 collects and keeps up to date. This approach works well for the majority of customers, but IP-based routing offers you the additional ability to optimize routing based on specific knowledge of your customer base. For example, a global video content provider might want to route end users from a particular internet service provider (ISP) to specific endpoints.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Some common use cases for IP-based routing are the following:

&lt;ul&gt;
&lt;li&gt;You want to route end users from certain ISPs to specific endpoints so you can optimize network transit costs or performance.&lt;/li&gt;
&lt;li&gt;You want to add overrides to existing Route 53 routing types, such as geolocation routing, based on your knowledge of your clients' physical locations.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;h3&gt;
  
  
  Route 53 Traffic Flow
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/aws/new-route-53-traffic-flow/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/new-route-53-traffic-flow/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Route 53 Traffic Flow is a console-based graphical interface that allows you to visualize complex combinations of routing policies as you build them. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F508m8ce2q817u6cqr6ua.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F508m8ce2q817u6cqr6ua.png" alt="Image description" width="800" height="497"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqrq0ayv6peczicha5ji.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqrq0ayv6peczicha5ji.png" alt="Image description" width="800" height="514"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because it integrates routing policies with all the possible resource endpoints associated with your AWS account, Traffic Flow can make it simple to quickly build a sophisticated routing structure. You can also use and customize routing templates.&lt;br&gt;
Traffic Flow offers geoproximity routing, which gives you the precision of geolocation routing but at a far finer level. Geoproximity routing rules can specify geographic areas by their relationship either to a particular longitude and latitude or to an AWS region. In both cases, setting a bias score will define how widely beyond your endpoint you want to apply your rule.&lt;/p&gt;
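&lt;p&gt;The bias works by scaling the distance Route 53 computes between the query source and the endpoint. Per the formula in the Route 53 documentation, a positive bias multiplies the actual distance by (1 - bias/100), shrinking the effective distance and so expanding the endpoint's region; a negative bias multiplies it by (1 + |bias|/100). A small sketch:&lt;/p&gt;

```python
def biased_distance(actual_distance_km, bias):
    """Scale a source-to-endpoint distance by a geoproximity bias (-99..99).
    A positive bias shrinks the effective distance (expanding the endpoint's
    region); a negative bias grows it (shrinking the region)."""
    if not -99 <= bias <= 99:
        raise ValueError("bias must be between -99 and 99")
    if bias >= 0:
        return actual_distance_km * (1 - bias / 100)
    return actual_distance_km * (1 + abs(bias) / 100)

# A query 1,000 km away from an endpoint with a +50 bias competes as if
# it were only 500 km away; with a -50 bias, as if it were 1,500 km away.
```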

&lt;h3&gt;
  
  
  Route 53 Resolver
&lt;/h3&gt;

&lt;p&gt;You can now extend Route 53’s powerful routing tools across your hybrid infrastructure using Route 53 Resolver. Resolver can manage bidirectional address queries between servers running in your AWS account and on-premises resources. This can greatly simplify workloads that are meant to seamlessly span private and public platforms.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Comparing EC2 Purchasing Options</title>
      <dc:creator>Hung____</dc:creator>
      <pubDate>Fri, 17 May 2024 10:15:02 +0000</pubDate>
      <link>https://dev.to/hung____/comparing-ec2-purchasing-options-2ck5</link>
      <guid>https://dev.to/hung____/comparing-ec2-purchasing-options-2ck5</guid>
      <description>&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;On-Demand Instances&lt;/strong&gt; With On-Demand Instances, you pay for compute capacity by the hour or by the second depending on which instances you run. No longer-term commitments or upfront payments are needed. You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified rates for the instance you use.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;On-Demand Instances are recommended for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Users who prefer the low cost and flexibility of Amazon EC2 without any upfront payment or long-term commitment&lt;/li&gt;
&lt;li&gt;Applications with short-term, irregular, or unpredictable workloads that cannot be interrupted&lt;/li&gt;
&lt;li&gt;Applications being developed or tested on Amazon EC2 for the first time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sample use cases for On-Demand Instances include developing and testing applications and running applications that have unpredictable usage patterns. On-Demand Instances are not recommended for workloads that last a year or longer because these workloads can experience greater cost savings using Reserved Instances.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Savings Plans&lt;/strong&gt; are a flexible pricing model offering lower prices compared to On-Demand pricing, in exchange for a &lt;em&gt;specific usage commitment&lt;/em&gt; (measured in $/hour) for a 1- or 3-year period. Savings Plans offer the flexibility to evolve your usage and continue to save money. For example, if you have a Compute Savings Plan, lower prices will apply automatically when you take advantage of new instance types.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;AWS offers three types of Savings Plans:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Compute Savings Plans&lt;/strong&gt; apply to usage across Amazon EC2, AWS Lambda, and AWS Fargate.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EC2 Instance Savings Plans&lt;/strong&gt; apply to EC2 usage.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Amazon SageMaker Savings Plans&lt;/strong&gt; apply to Amazon SageMaker usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;With Amazon EC2 Savings Plans, you can reduce your compute costs by committing to a consistent amount of compute usage for a 1- or 3-year term. This term commitment results in savings of up to 66 percent over On-Demand costs.&lt;/p&gt;

&lt;p&gt;Any usage up to the commitment is charged at the discounted plan rate (for example, $10 an hour). Any usage beyond the commitment is charged at regular On-Demand rates.&lt;/p&gt;
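&lt;p&gt;The split can be illustrated with a toy billing model. All of the rates below are made-up numbers (and the model ignores the fact that an unused commitment is still charged), so treat it as arithmetic, not pricing guidance:&lt;/p&gt;

```python
# Toy model of one billing hour under a Savings Plan.
COMMITMENT = 10.00      # committed $/hour at Savings Plans rates (from the example)
PLAN_RATE = 0.034       # hypothetical discounted $/instance-hour
ON_DEMAND_RATE = 0.10   # hypothetical On-Demand $/instance-hour

def hourly_bill(instance_hours):
    """Instance-hours covered by the commitment are billed at the plan rate;
    anything beyond spills over to On-Demand pricing."""
    covered = min(instance_hours, COMMITMENT / PLAN_RATE)
    overage = instance_hours - covered
    return round(covered * PLAN_RATE + overage * ON_DEMAND_RATE, 2)
```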

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Reserved Instances (RIs)&lt;/strong&gt; are a billing discount applied to the use of On-Demand Instances in your account. You can purchase Standard RIs and Convertible RIs for a 1- or 3-year term. You realize greater cost savings with the 3-year option.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;There are two types of RIs to choose from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Standard RIs&lt;/strong&gt;: These provide the most significant discount (up to 72 percent off On-Demand) and are best suited for steady-state usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Convertible RIs&lt;/strong&gt;: These provide a discount (up to 54 percent off On-Demand) and the capability to change instance families, OS types, and tenancies while benefitting from RI pricing. Like Standard RIs, Convertible RIs are best suited for steady-state usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;RIs are recommended for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Steady-state loads and long-running systems&lt;/li&gt;
&lt;li&gt;Core components with minimal high peaks and valleys of usage&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At the end of an RI term, you can continue using the Amazon EC2 instance without interruption. However, you are charged On-Demand rates until you do one of the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Terminate the instance.&lt;/li&gt;
&lt;li&gt;Purchase a new RI that matches the instance attributes (instance type, Region, tenancy, and platform).&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Spot Instances&lt;/strong&gt; With Amazon EC2 Spot Instances, you can request spare Amazon EC2 computing capacity for up to 90 percent off the On-Demand price.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Spot Instances are recommended for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Applications that have flexible start and end times and can tolerate interruptions&lt;/li&gt;
&lt;li&gt;Applications that you want to run or test only when the compute prices are in your price range&lt;/li&gt;
&lt;li&gt;Applications that are a lower priority in your environment&lt;/li&gt;
&lt;li&gt;Users with urgent computing needs for large amounts of additional capacity at a price they determine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Suppose that you have a background processing job that can start and stop as needed (such as the data processing job for a customer survey). You want to start and stop the processing job without affecting the overall operations of your business. If you make a Spot request and Amazon EC2 capacity is available, your Spot Instance launches. However, if you make a Spot request and Amazon EC2 capacity is unavailable, the request is not successful until capacity becomes available. The unavailable capacity might delay the launch of your background processing job.&lt;/p&gt;

&lt;p&gt;After you have launched a Spot Instance, if capacity is no longer available or demand for Spot Instances increases, your instance may be interrupted. This might not pose any issues for your background processing job.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A Dedicated Host&lt;/strong&gt; is a physical EC2 server dedicated for your use. Dedicated Hosts can help you reduce costs by letting you use your existing server-bound software licenses, including Windows Server, SQL Server, and SUSE Linux Enterprise Server (subject to your license terms). They can also help you meet compliance requirements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Features of Dedicated Hosts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Can be purchased On-Demand (hourly)&lt;/li&gt;
&lt;li&gt;Can be purchased as a Reservation for up to 70 percent off the On-Demand price&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Dedicated Hosts are recommended for the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Workloads that require server-bound software licenses&lt;/li&gt;
&lt;li&gt;Security and regulatory compliance where your workload cannot share hardware with other tenants&lt;/li&gt;
&lt;/ul&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Dedicated Instances&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You can use Dedicated Hosts and Dedicated Instances to launch Amazon EC2 instances on physical servers that are dedicated for your use. With Dedicated Instances, your instances run on hardware that is dedicated to your AWS account, so no other AWS accounts can place instances on that hardware. &lt;/p&gt;

&lt;p&gt;An important difference between a Dedicated Host and a Dedicated instance is that a Dedicated Host gives you additional visibility and control over how instances are placed on a physical server and you can consistently deploy your instances to the same physical server over time. As a result, with Dedicated Hosts, you can use your existing server-bound software licenses and address corporate compliance and regulatory requirements.&lt;/p&gt;

&lt;h2&gt;
  
  
  Clarify what a Spot Instance is
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruq6l30x1ml174pdfi3w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fruq6l30x1ml174pdfi3w.png" alt=" " width="800" height="264"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/ec2/spot/getting-started/#Use_Case_Examples" rel="noopener noreferrer"&gt;Use Case Examples&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Spot Instances are recommended for stateless, fault-tolerant, flexible applications. For example, Spot Instances work well for big data, containerized workloads,  continuous integration and continuous delivery (CI/CD), stateless web servers, high performance computing (HPC), and rendering workloads.&lt;/p&gt;

&lt;p&gt;While running, Spot Instances are exactly the same as On-Demand Instances. However, Spot does not guarantee that you can keep your running instances long enough to finish your workloads. Spot also does not guarantee that you can get immediate availability of the instances that you are looking for, or that you can always get the aggregate capacity that you requested. Moreover, Spot Instance interruptions and capacity can change over time because Spot Instance availability varies based on supply and demand, and past performance isn’t a guarantee of future results.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spot Instances are not suitable for workloads that are inflexible, stateful, fault intolerant, or tightly coupled between instance nodes.&lt;/li&gt;
&lt;li&gt;They're also not recommended for workloads that are intolerant of occasional periods when the target capacity is not completely available.&lt;/li&gt;
&lt;li&gt;We strongly warn against using Spot Instances for these workloads or attempting to fail over to On-Demand Instances to handle interruptions.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Ways to request Spot Instances
&lt;/h2&gt;

&lt;p&gt;To use Spot Instances, you create a Spot Instance request that includes the desired number of instances, the instance type, the Availability Zone, and the maximum price that you are willing to pay per instance hour. If your maximum price exceeds the current Spot price, Amazon EC2 fulfills your request immediately if capacity is available. Otherwise, Amazon EC2 waits until your request can be fulfilled or until you cancel the request. There are two request types: one-time and persistent.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;one-time Spot Instance request&lt;/strong&gt; remains active until Amazon EC2 launches the Spot Instance, the request expires, or you cancel the request. If the Spot price exceeds your maximum price or capacity is not available, your Spot Instance is terminated and the Spot Instance request is closed.&lt;/p&gt;

&lt;p&gt;A &lt;strong&gt;persistent Spot Instance request&lt;/strong&gt; remains active until it expires or you cancel it, even if the request is fulfilled. If the Spot price exceeds your maximum price or capacity is not available, your Spot Instance is interrupted. After your instance is interrupted, when your maximum price exceeds the Spot price or capacity becomes available again, the Spot Instance is started if stopped or resumed if hibernated. You can stop a Spot Instance and start it again if capacity is available and your maximum price exceeds the current Spot price. If the Spot Instance is terminated, the Spot Instance request is opened again and Amazon EC2 launches a new Spot Instance.&lt;/p&gt;
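&lt;p&gt;The difference between the two request types boils down to what happens to the request after an interruption. A minimal sketch of those state transitions (a simplification of the behavior described above, not the EC2 API):&lt;/p&gt;

```python
def request_state_after_interruption(request_type):
    """A one-time request closes once its Spot Instance is terminated; a
    persistent request reopens so that Amazon EC2 can launch a replacement
    when capacity returns and the Spot price falls below your maximum."""
    if request_type == "one-time":
        return "closed"
    if request_type == "persistent":
        return "open"
    raise ValueError("request_type must be 'one-time' or 'persistent'")
```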

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4wg069hxj055zv5cton.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs4wg069hxj055zv5cton.png" alt=" " width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The previous illustration shows how Spot Instance requests work. Notice that the request type (one-time or persistent) determines whether the request is opened again when the instance is interrupted or if you stop a Spot Instance. If the request is persistent, the request is opened again after your Spot Instance is interrupted. If the request is persistent and you stop your Spot Instance, the request only opens after you start your Spot Instance. &lt;/p&gt;

&lt;h2&gt;
  
  
  Key takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;On-Demand Instances&lt;/strong&gt; – You pay full price by the second when you launch.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Savings Plans&lt;/strong&gt; – You commit to a certain amount of usage over a 1–3-year period.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reserved Instances&lt;/strong&gt; – You agree to a specific instance configuration for a period of 1–3 years.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spot Instances&lt;/strong&gt; – You use spare EC2 capacity at a steep discount. The Spot price is determined by supply and demand, and your instances can be interrupted when AWS reclaims the capacity.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dedicated Hosts&lt;/strong&gt; – You get a full physical server.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dedicated Instances&lt;/strong&gt; - Instances run on hardware dedicated to your account, but without visibility into or control over the underlying host.&lt;/li&gt;
&lt;/ul&gt;
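&lt;p&gt;To make the takeaways concrete, here is a back-of-the-envelope yearly cost comparison for one always-on instance, using a hypothetical $0.10/hour On-Demand rate and the best-case discounts quoted earlier (72 percent for Standard RIs, 66 percent for Savings Plans, 90 percent for Spot). Real discounts vary by instance type, term, Region, and payment option:&lt;/p&gt;

```python
HOURS_PER_YEAR = 24 * 365
ON_DEMAND_RATE = 0.10  # hypothetical $/hour

# Yearly cost of one always-on instance under each purchasing option,
# assuming the maximum discounts from the text.
costs = {
    "on_demand":    ON_DEMAND_RATE * HOURS_PER_YEAR,
    "standard_ri":  ON_DEMAND_RATE * (1 - 0.72) * HOURS_PER_YEAR,
    "savings_plan": ON_DEMAND_RATE * (1 - 0.66) * HOURS_PER_YEAR,
    "spot":         ON_DEMAND_RATE * (1 - 0.90) * HOURS_PER_YEAR,
}
```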

</description>
      <category>aws</category>
      <category>ec2</category>
      <category>cloudcomputing</category>
    </item>
  </channel>
</rss>
