<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Gustavo Mainchein</title>
    <description>The latest articles on DEV Community by Gustavo Mainchein (@gugamainchein).</description>
    <link>https://dev.to/gugamainchein</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1013874%2Fadba96dd-3fdb-488c-bb72-98ff465a37ad.jpeg</url>
      <title>DEV Community: Gustavo Mainchein</title>
      <link>https://dev.to/gugamainchein</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gugamainchein"/>
    <language>en</language>
    <item>
      <title>Orchestrating Multi-Agent Systems with AWS Bedrock: A Comprehensive Guide</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Sat, 28 Jun 2025 16:46:59 +0000</pubDate>
      <link>https://dev.to/gugamainchein/orchestrating-multi-agent-systems-with-aws-bedrock-a-comprehensive-guide-3jfh</link>
      <guid>https://dev.to/gugamainchein/orchestrating-multi-agent-systems-with-aws-bedrock-a-comprehensive-guide-3jfh</guid>
      <description>&lt;p&gt;Hello, fellow developers and AI enthusiasts! 👋&lt;/p&gt;

&lt;p&gt;Today, I'm excited to dive deep into multi-agent orchestration using AWS Bedrock - a topic that's generating significant buzz in the AI community. While there are numerous approaches to implementing multi-agent systems, translating these concepts into production-ready solutions presents unique challenges that we'll address in this article.&lt;/p&gt;

&lt;h2&gt;
  
  
  Multi-Agent Orchestration: Powerful Use Cases
&lt;/h2&gt;

&lt;p&gt;Multi-agent systems can transform how we build AI solutions by leveraging specialized components working in harmony:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Domain-Specific Expert Agents&lt;/strong&gt;: Create agents with deep expertise in finance, healthcare, legal, or technical domains that collaborate to solve complex problems&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sequential Workflow Processing&lt;/strong&gt;: Implement step-by-step processing where each agent handles a specific part of a complex task (e.g., one agent extracts data, another analyzes it, and a third generates recommendations)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive Diversity&lt;/strong&gt;: Deploy agents with different reasoning approaches to tackle problems from multiple angles, similar to human team collaboration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fallback and Redundancy Systems&lt;/strong&gt;: Build resilient systems where specialized agents can take over when primary agents fail or encounter edge cases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive Customer Support&lt;/strong&gt;: Route customer inquiries through a network of specialized support agents based on query complexity and domain requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Real-World Example&lt;/strong&gt;: A financial advisory system where one agent specializes in investment analysis, another in tax implications, and a third in regulatory compliance - all orchestrated to provide comprehensive financial guidance.&lt;/p&gt;
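&lt;p&gt;To make the sequential-workflow pattern concrete, here is a minimal, purely illustrative sketch in Python: three hypothetical specialist steps (extract, analyze, recommend) chained so each consumes the previous stage's output. In a real system each function would be a call to a specialized Bedrock agent; all names and the toy logic here are placeholders.&lt;/p&gt;

```python
# Illustrative only: three hypothetical "specialist" steps chained in sequence,
# mirroring the extract / analyze / recommend pattern described above.
# In a real system each step would be a Bedrock agent invocation.

def extract_data(document):
    # Pretend-extraction: pull numeric figures out of the text.
    return [int(tok) for tok in document.split() if tok.isdigit()]

def analyze_data(figures):
    # Pretend-analysis: summarize the extracted figures.
    return {"count": len(figures), "total": sum(figures)}

def generate_recommendation(analysis):
    # Pretend-recommendation based on the analysis.
    if analysis["total"] == 0:
        return "No figures found; request more data."
    return f"Reviewed {analysis['count']} figures totaling {analysis['total']}."

def run_pipeline(document):
    # Each stage consumes the previous stage's output, as in a sequential flow.
    return generate_recommendation(analyze_data(extract_data(document)))

print(run_pipeline("Q1 revenue 120 Q2 revenue 95"))
```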

&lt;h2&gt;
  
  
  AWS Bedrock Flow vs. Step Functions: Making the Right Choice
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Technical Differences
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Bedrock Flow&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Purpose-built for AI agent orchestration with built-in context management&lt;/li&gt;
&lt;li&gt;Native integration with Bedrock models and agents&lt;/li&gt;
&lt;li&gt;Simplified prompt engineering and agent communication&lt;/li&gt;
&lt;li&gt;Optimized for conversational and generative AI workflows&lt;/li&gt;
&lt;li&gt;Limited to Bedrock ecosystem components&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Step Functions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;General-purpose workflow orchestration service&lt;/li&gt;
&lt;li&gt;Supports integration with virtually any AWS service&lt;/li&gt;
&lt;li&gt;Provides robust error handling and retry mechanisms&lt;/li&gt;
&lt;li&gt;Better suited for complex business processes with diverse service requirements&lt;/li&gt;
&lt;li&gt;Requires custom development for context management between AI components&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Cost Considerations
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;AWS Bedrock Flow&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pricing based on the number of orchestration steps executed&lt;/li&gt;
&lt;li&gt;No upfront costs or minimum fees&lt;/li&gt;
&lt;li&gt;More cost-effective for pure AI orchestration scenarios&lt;/li&gt;
&lt;li&gt;Potentially higher costs for complex, long-running workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;AWS Step Functions&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Pricing based on state transitions&lt;/li&gt;
&lt;li&gt;More economical for workflows with fewer transitions but complex logic&lt;/li&gt;
&lt;li&gt;Better cost optimization for hybrid workflows combining AI and non-AI services&lt;/li&gt;
&lt;li&gt;Additional savings through Express Workflows for high-volume, short-duration executions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Choose Bedrock Flow for AI-centric orchestration with simpler context management, and Step Functions for complex, hybrid workflows requiring extensive AWS service integration.&lt;/p&gt;
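&lt;p&gt;If you go the Bedrock Flow route, invoking a flow from code looks roughly like the sketch below, assuming the boto3 &lt;code&gt;bedrock-agent-runtime&lt;/code&gt; client. The flow ID, alias ID, and node names are placeholders, and you should verify the parameter shapes against the current SDK documentation.&lt;/p&gt;

```python
# Sketch of calling a Bedrock flow via the boto3 bedrock-agent-runtime client.
# Flow/alias IDs and node names below are placeholders.

def build_flow_inputs(user_text, node_name="FlowInputNode", output_name="document"):
    # invoke_flow expects a list of input events targeting the flow's input node.
    return [
        {
            "content": {"document": user_text},
            "nodeName": node_name,
            "nodeOutputName": output_name,
        }
    ]

def invoke_flow(flow_id, alias_id, user_text, region="us-east-1"):
    import boto3  # imported here so the builder above stays dependency-free

    client = boto3.client("bedrock-agent-runtime", region_name=region)
    response = client.invoke_flow(
        flowIdentifier=flow_id,
        flowAliasIdentifier=alias_id,
        inputs=build_flow_inputs(user_text),
    )
    # The result arrives as an event stream; collect any flow output events.
    outputs = []
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            outputs.append(event["flowOutputEvent"]["content"]["document"])
    return outputs
```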

&lt;h2&gt;
  
  
  Reference Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4sy2gau35aqs10b8kyt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn4sy2gau35aqs10b8kyt.png" alt="Architecture" width="800" height="181"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Powers backend logic processing and API integrations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bedrock Agent&lt;/strong&gt;: Hosts our specialized domain agents with custom knowledge bases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bedrock Prompt Management&lt;/strong&gt;: Intelligently routes user queries to appropriate specialized agents&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bedrock Flow&lt;/strong&gt;: Orchestrates the communication flow between agents, maintaining conversation context&lt;/li&gt;
&lt;/ul&gt;
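&lt;p&gt;For the Lambda piece of this architecture, a handler backing a Bedrock agent action group might look like the sketch below. The response envelope follows the action-group Lambda contract as I understand it (message version, echoed action group and API path, status code, and a JSON response body); double-check the field names against the current Bedrock Agents documentation before relying on them.&lt;/p&gt;

```python
import json

# Hedged sketch of a Lambda handler that a Bedrock agent action group could
# invoke. The response shape mirrors the action-group contract; verify field
# names against current Bedrock Agents docs.

def lambda_handler(event, context):
    # The agent passes along the action group, API path, and HTTP method it resolved.
    body = {"message": f"Handled {event.get('apiPath', 'unknown path')}"}
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "apiPath": event.get("apiPath"),
            "httpMethod": event.get("httpMethod"),
            "httpStatusCode": 200,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```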

&lt;h2&gt;
  
  
  The Critical Role of Agents in Bedrock Flow Implementation
&lt;/h2&gt;

&lt;p&gt;Using agents to invoke Bedrock Flow addresses several critical challenges in multi-agent systems:&lt;/p&gt;

&lt;h3&gt;
  
  
  Session History and Context Management
&lt;/h3&gt;

&lt;p&gt;When orchestrating multiple specialized agents, maintaining conversation context becomes increasingly complex. Each agent interaction builds upon previous exchanges, requiring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Unified Context Store&lt;/strong&gt;: Bedrock Flow maintains a centralized conversation history accessible to all agents, preventing fragmented context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State Persistence&lt;/strong&gt;: User session state persists across the entire agent network, ensuring continuity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Memory Management&lt;/strong&gt;: The system intelligently manages what information to retain or discard as conversations evolve&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cross-Agent References&lt;/strong&gt;: Agents can reference information discovered by other agents without redundant user questioning&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without proper context management, multi-agent systems risk creating disjointed experiences where users must repeatedly provide the same information or where agents contradict each other due to inconsistent context understanding.&lt;/p&gt;

&lt;h3&gt;
  
  
  Conversation Coherence
&lt;/h3&gt;

&lt;p&gt;Bedrock Flow ensures that despite involving multiple specialized agents, the conversation maintains a natural, coherent flow from the user's perspective. This prevents the jarring experience of obviously switching between different AI personalities.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Guide: Build Your Multi-Agent System
&lt;/h2&gt;

&lt;p&gt;Ready to create your own multi-agent orchestration? Follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Access the Project Repository&lt;/strong&gt;: Clone the repository to get the foundation code

&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   git clone https://github.com/gugamainchein/bedrock-multi-agent-orchestration
   &lt;span class="nb"&gt;cd &lt;/span&gt;bedrock-multi-agent-orchestration
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="2"&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Customize Your Agents&lt;/strong&gt;: Personalize each agent based on your specific use case requirements&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define clear domains of expertise for each agent&lt;/li&gt;
&lt;li&gt;Create specialized knowledge bases&lt;/li&gt;
&lt;li&gt;Configure agent behaviors and response styles&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deploy Using Serverless Framework&lt;/strong&gt;: Set up your infrastructure with minimal effort&lt;br&gt;
&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; serverless
   sls deploy &lt;span class="nt"&gt;--stage&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;your-stage-name]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol start="3"&gt;
&lt;li&gt;
&lt;strong&gt;Configure Orchestration Components&lt;/strong&gt;: Set up the coordination layer

&lt;ul&gt;
&lt;li&gt;Create your prompt management router with clear routing rules&lt;/li&gt;
&lt;li&gt;Design your Bedrock Flow to manage agent interactions&lt;/li&gt;
&lt;li&gt;Test the system with various user scenarios to ensure proper routing&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
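&lt;p&gt;To give a feel for what "clear routing rules" means in practice, here is a toy stand-in for the prompt-management router: keyword rules that map a user query to one of the specialized agents. A real router would use an LLM classification prompt rather than keyword matching, and the agent names here are placeholders.&lt;/p&gt;

```python
# Toy stand-in for the prompt-management router: keyword rules mapping a user
# query to a specialized agent. A real router would use an LLM classification
# prompt; agent names are placeholders.

ROUTING_RULES = {
    "investments": ("stock", "portfolio", "invest"),
    "tax": ("tax", "deduction", "irs"),
    "compliance": ("regulation", "compliance", "audit"),
}

def route_query(query, default_agent="generalist"):
    lowered = query.lower()
    for agent, keywords in ROUTING_RULES.items():
        if any(word in lowered for word in keywords):
            return agent
    return default_agent

print(route_query("How should I rebalance my portfolio?"))
```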

&lt;p&gt;By following this approach, you'll create a sophisticated multi-agent system that leverages the strengths of specialized AI components while maintaining a coherent user experience.&lt;/p&gt;

&lt;p&gt;Have you implemented multi-agent systems before? What challenges did you face? I'd love to hear about your experiences in the comments!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>bedrock</category>
      <category>multiagent</category>
    </item>
    <item>
      <title>Comparing OCR Capabilities in Amazon Bedrock LLMs: Claude 3.7 Sonnet vs. Amazon Nova Pro</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Thu, 01 May 2025 19:18:46 +0000</pubDate>
      <link>https://dev.to/gugamainchein/comparing-ocr-capabilities-in-amazon-bedrock-llms-claude-37-sonnet-vs-amazon-nova-pro-a6</link>
      <guid>https://dev.to/gugamainchein/comparing-ocr-capabilities-in-amazon-bedrock-llms-claude-37-sonnet-vs-amazon-nova-pro-a6</guid>
      <description>&lt;p&gt;Hey there, tech enthusiasts! 👋 Ever wanted to extract text from PDF documents but found traditional OCR solutions lacking in accuracy and context understanding? That's exactly the challenge I decided to tackle in my recent project. In this article, I'll take you through my journey of comparing the OCR capabilities of two powerhouse Large Language Models available through Amazon Bedrock: Claude 3.7 Sonnet and Amazon's own Nova Pro.&lt;/p&gt;

&lt;h2&gt;
  
  
  The PDF Challenge: Beyond Traditional OCR
&lt;/h2&gt;

&lt;p&gt;PDF documents present a unique challenge for text extraction. While they may look like simple text documents to human eyes, they're actually complex containers that can include various elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Text layers that may or may not be selectable&lt;/li&gt;
&lt;li&gt;Images with embedded text&lt;/li&gt;
&lt;li&gt;Complex layouts with tables and multi-column formats&lt;/li&gt;
&lt;li&gt;Mixed font styles and sizes&lt;/li&gt;
&lt;li&gt;Potential scanning artifacts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditional OCR tools like Tesseract often struggle with maintaining the original formatting, understanding tables, or handling lower quality scans. This is where modern multimodal LLMs enter the picture, offering a more interpretative approach to text extraction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project Overview: A Bedrock-Powered PDF Reader
&lt;/h2&gt;

&lt;p&gt;My &lt;a href="https://github.com/gugamainchein/llms-ocr-comparation" rel="noopener noreferrer"&gt;llms-ocr-comparation&lt;/a&gt; project aims to answer a specific question: how do two of Amazon Bedrock's most capable models—Claude 3.7 Sonnet and Amazon Nova Pro—compare when extracting text from PDF documents?&lt;/p&gt;

&lt;p&gt;The project structure is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── documents.ipynb   # The main notebook with all code
├── documents/        # Input PDF files
├── images/           # Converted PDF pages as images
├── texts/            # Extracted text results
└── README.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How It Works: The Technical Deep Dive
&lt;/h2&gt;

&lt;p&gt;Looking at the code in &lt;code&gt;documents.ipynb&lt;/code&gt;, we can see a well-structured pipeline for PDF text extraction:&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: PDF to Image Conversion
&lt;/h3&gt;

&lt;p&gt;The first step uses the PyMuPDF (fitz) library to convert each page of a PDF into a high-resolution image:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;document&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fitz&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./documents/sample.pdf&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;page_number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;document_image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;./images/page_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;page_number&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.jpeg&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;pix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_pixmap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dpi&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;pix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document_image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This conversion is crucial because it normalizes the input for both models—whether the original PDF had selectable text or not, we're converting everything to an image to test the pure OCR capabilities of these LLMs.&lt;/p&gt;
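&lt;p&gt;Between conversion and model invocation there is one small but necessary step the functions below rely on: the page image must be base64-encoded before being sent to Bedrock. A minimal sketch (the file path is illustrative):&lt;/p&gt;

```python
import base64

# The Bedrock image APIs expect the page image as base64-encoded data; this is
# a minimal sketch of that encoding step (the file path is illustrative).

def encode_image(path):
    with open(path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")
```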

&lt;h3&gt;
  
  
  Step 2: Setting Up the Models
&lt;/h3&gt;

&lt;p&gt;The notebook defines two functions, one for each model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;extract_text_with_claude_3_7_sonnet&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base64_image&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;start_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us.anthropic.claude-3-7-sonnet-20250219-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# Function code...
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;extract_text_with_nova_pro&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;base64_image&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;start_time&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us.amazon.nova-pro-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="c1"&gt;# Function code...
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;What's particularly interesting is the carefully crafted prompt used for both models:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;instructions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Please extract and format the readable text from the provided image, respecting the original structure as much as possible. Follow these instructions:

- For continuous text, keep the original separation by line breaks.
- If there are tables, use Markdown syntax to present them in an organized way:

Example of expected output:

- For plain text: [Extracted text with line breaks as necessary]

- For documents with tables:
| Header1 | Header2 | Header3 |
|---------|---------|---------|
|  Data1  |  Value1 |  Value1 |
|  Data2  |  Value2 |  Value2 |

Note: Avoid adding additional interpretations or comments to the extracted content.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This prompt does something crucial that traditional OCR tools can't do: it provides context and instructions about how to interpret and format the extracted text, particularly for tables.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 3: Parallel Processing with asyncio
&lt;/h3&gt;

&lt;p&gt;One of the clever aspects of this implementation is the use of asyncio to process both models concurrently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;parallel_process&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_in_executor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;executor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;extract_text_with_claude_3_7_sonnet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;document_image&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_in_executor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;executor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;extract_text_with_nova_pro&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;document_image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;claude_3_7_sonnet_result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;nova_pro_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;parallel_process&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This approach maximizes efficiency by sending the same image to both models simultaneously, rather than waiting for one to complete before starting the next.&lt;/p&gt;
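&lt;p&gt;Note that the snippet above assumes a &lt;code&gt;loop&lt;/code&gt; and &lt;code&gt;executor&lt;/code&gt; have already been set up. Here is a self-contained version of that setup with stand-in extractors (the real ones call Bedrock); each blocking call runs in a worker thread so both models proceed concurrently.&lt;/p&gt;

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

# Self-contained version of the parallel pattern with stand-in extractors
# (the real ones call Bedrock). Each blocking call runs in a worker thread.

def fake_claude(image):
    time.sleep(0.1)  # simulate a blocking model call
    return f"claude:{image}"

def fake_nova(image):
    time.sleep(0.1)
    return f"nova:{image}"

async def parallel_process(image):
    loop = asyncio.get_running_loop()
    with ThreadPoolExecutor() as executor:
        return await asyncio.gather(
            loop.run_in_executor(executor, fake_claude, image),
            loop.run_in_executor(executor, fake_nova, image),
        )

claude_result, nova_result = asyncio.run(parallel_process("page_1.jpeg"))
```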

&lt;h2&gt;
  
  
  The LLM Advantage in OCR: Beyond Character Recognition
&lt;/h2&gt;

&lt;p&gt;Looking at the code and the README, it's clear that this project is exploring how modern LLMs are transforming what we traditionally think of as OCR. While traditional OCR tools focus on character recognition, these LLMs are doing something much more sophisticated:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Contextual Understanding
&lt;/h3&gt;

&lt;p&gt;Traditional OCR operates on a character-by-character or word-by-word basis. LLMs, however, can "read" the document more like a human would, using context to improve accuracy. If a character is partially obscured or ambiguous, the model can make an educated guess based on surrounding words and the overall context of the document.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Format Preservation
&lt;/h3&gt;

&lt;p&gt;The prompt specifically instructs the models to preserve formatting, including tables. This is evident in how the models are asked to convert tables to Markdown format, maintaining the relationships between data cells—something traditional OCR often fails at.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Intelligent Interpretation
&lt;/h3&gt;

&lt;p&gt;LLMs can distinguish between different document elements—headings, body text, tables, etc.—and format them appropriately. This level of document understanding goes well beyond simple text extraction.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Results: Claude 3.7 Sonnet vs. Nova Pro
&lt;/h2&gt;

&lt;p&gt;Looking at the output metrics from the notebook:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Claude 3.7 Sonnet:
  Input Tokens  : 1666
  Output Tokens : 1036
  Start Time    : 1746124263.978323
  End Time      : 1746124292.999416

Amazon Nova Pro:
  Input Tokens  : 2223
  Output Tokens : 971
  Start Time    : 1746124263.98382
  End Time      : 1746124279.478841
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We can observe some interesting differences:&lt;/p&gt;

&lt;h3&gt;
  
  
  Performance Metrics
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Processing Speed&lt;/strong&gt;: Nova Pro completed the task about 13.5 seconds faster (roughly 15.5 seconds vs. 29 seconds for Claude)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token Efficiency&lt;/strong&gt;: Claude used fewer input tokens (1666 vs. 2223) but produced slightly more output tokens (1036 vs. 971)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;While the README doesn't include qualitative comparisons of the actual text extraction results, these metrics alone highlight an interesting tradeoff: Nova Pro offers faster processing, while Claude appears to be more token-efficient on the input side.&lt;/p&gt;
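&lt;p&gt;The wall-clock figures follow directly from the logged Unix timestamps in the notebook output:&lt;/p&gt;

```python
# Elapsed times computed from the Unix timestamps in the notebook output.

claude_elapsed = 1746124292.999416 - 1746124263.978323  # Claude 3.7 Sonnet
nova_elapsed = 1746124279.478841 - 1746124263.98382     # Amazon Nova Pro

print(round(claude_elapsed, 1), round(nova_elapsed, 1))  # roughly 29.0s vs 15.5s
```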

&lt;h3&gt;
  
  
  The Speed vs. Accuracy Tradeoff
&lt;/h3&gt;

&lt;p&gt;Based on the implementation and metrics, we can infer that there's likely a speed vs. accuracy tradeoff between these models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Nova Pro&lt;/strong&gt; appears optimized for speed, processing the same image in roughly half the time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Claude 3.7 Sonnet&lt;/strong&gt; takes longer but might be doing more thorough analysis of the content&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This type of comparison is exactly what makes this project valuable—understanding these tradeoffs is crucial for developers choosing the right model for their specific use case.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Applications and Use Cases
&lt;/h2&gt;

&lt;p&gt;The ability to accurately extract and interpret text from PDFs has numerous applications across industries:&lt;/p&gt;

&lt;h3&gt;
  
  
  Document Processing Automation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Legal Document Analysis&lt;/strong&gt;: Extract clauses, terms, and key information from contracts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Financial Document Processing&lt;/strong&gt;: Parse statements, invoices, and reports&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthcare Records Management&lt;/strong&gt;: Extract patient information and medical data from forms&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Knowledge Management
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Research Paper Analysis&lt;/strong&gt;: Extract text and data from academic papers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Technical Documentation&lt;/strong&gt;: Convert PDF manuals into searchable knowledge bases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Archival Digitization&lt;/strong&gt;: Make historical documents accessible and searchable&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Data Entry and Form Processing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Automated Form Data Extraction&lt;/strong&gt;: Pull information from filled forms into databases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Receipt and Invoice Processing&lt;/strong&gt;: Extract line items, totals, and vendor information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business Card Information Extraction&lt;/strong&gt;: Populate CRM systems from scanned business cards&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Beyond Basic OCR: The Future is Interpretative
&lt;/h2&gt;

&lt;p&gt;What makes this approach revolutionary is the shift from character recognition to document understanding. These LLMs aren't just identifying letters and words—they're interpreting documents holistically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Intelligent Document Processing (IDP)
&lt;/h3&gt;

&lt;p&gt;As mentioned in the README's contributing section, this project could be extended to include IDP comparison. This is where the real power of LLM-based OCR shines—not just extracting text but understanding document types, identifying key fields, and extracting structured information without predefined templates.&lt;/p&gt;

&lt;p&gt;For example, given an invoice, these models could:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Recognize it as an invoice (document classification)&lt;/li&gt;
&lt;li&gt;Extract structured data (invoice number, date, line items)&lt;/li&gt;
&lt;li&gt;Identify relationships between data elements&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Handling Edge Cases
&lt;/h3&gt;

&lt;p&gt;Traditional OCR systems often fail with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Handwritten notes&lt;/li&gt;
&lt;li&gt;Low-quality scans&lt;/li&gt;
&lt;li&gt;Unusual layouts&lt;/li&gt;
&lt;li&gt;Mixed languages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LLMs excel in these scenarios because they bring human-like interpretative capabilities to the task. They can fill in gaps using context and make educated guesses about unclear content.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation Best Practices from the Project
&lt;/h2&gt;

&lt;p&gt;Looking at the code implementation, there are several best practices worth highlighting:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Well-Crafted Prompts
&lt;/h3&gt;

&lt;p&gt;The detailed instructions given to both models demonstrate the importance of clear, specific prompting. By explicitly asking for table formatting in Markdown, the prompt guides the models toward a specific output format.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. High-Resolution Image Processing
&lt;/h3&gt;

&lt;p&gt;Using a 300 DPI setting for PDF page conversion ensures that the models have high-quality images to work with, improving extraction accuracy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;pix&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_pixmap&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;alpha&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dpi&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;300&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. Parallel Processing for Efficiency
&lt;/h3&gt;

&lt;p&gt;The asyncio implementation allows both models to run concurrently, making the comparison more efficient:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;parallel_process&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;asyncio&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;gather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_in_executor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;executor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;extract_text_with_claude_3_7_sonnet&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;document_image&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="n"&gt;loop&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_in_executor&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;executor&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;extract_text_with_nova_pro&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;document_image&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Comprehensive Metrics Tracking
&lt;/h3&gt;

&lt;p&gt;The project tracks not just the extracted text but also performance metrics like processing time and token usage, enabling quantitative comparison.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future Extensions and Improvements
&lt;/h2&gt;

&lt;p&gt;As mentioned in the README's contributing section, this project lays the groundwork for several potential improvements:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Evaluation Metrics
&lt;/h3&gt;

&lt;p&gt;Adding formal evaluation metrics like BLEU or ROUGE would provide quantitative measures of extraction quality, especially when ground truth is available.&lt;/p&gt;
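&lt;p&gt;Until BLEU or ROUGE is wired in, a simple token-overlap F1 against a ground-truth transcription can serve as a rough quality proxy. This sketch is not part of the project; it just illustrates the kind of metric such an extension might start from.&lt;/p&gt;

```python
from collections import Counter

# Rough quality proxy pending proper BLEU/ROUGE: token-level F1 between the
# model's extraction and a ground-truth transcription.

def token_f1(extracted, reference):
    ext_counts = Counter(extracted.split())
    ref_counts = Counter(reference.split())
    overlap = sum(min(count, ref_counts[token]) for token, count in ext_counts.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(ext_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(token_f1("total due 42", "the total due is 42"))
```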

&lt;h3&gt;
  
  
  2. Post-Processing Optimization
&lt;/h3&gt;

&lt;p&gt;The extracted text could be further processed to improve formatting, correct common OCR errors, or extract structured data into specific formats like JSON.&lt;/p&gt;
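&lt;p&gt;As a sketch of what such a post-processing pass might do (the cleanup rules and the invoice-number pattern below are illustrative assumptions, not part of the project):&lt;/p&gt;

```python
import json
import re

def post_process(raw_text):
    # Illustrative cleanup: collapse repeated whitespace and fix a
    # common OCR confusion ("O" read instead of "0" inside numbers).
    cleaned = re.sub(r"\s+", " ", raw_text).strip()
    cleaned = re.sub(r"(?<=\d)O(?=\d)", "0", cleaned)
    # Extract a simple key-value pair into a JSON-ready structure.
    match = re.search(r"Invoice\s+#?(\d+)", cleaned)
    return {"text": cleaned, "invoice_number": match.group(1) if match else None}

result = post_process("Invoice   #1O42   paid")
print(json.dumps(result))  # {"text": "Invoice #1042 paid", "invoice_number": "1042"}
```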

&lt;h3&gt;
  
  
  3. Expanded Model Comparison
&lt;/h3&gt;

&lt;p&gt;Testing against other models like GPT-4 Vision or Gemini would provide a more comprehensive comparison across the LLM landscape.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Specialized Fine-Tuning
&lt;/h3&gt;

&lt;p&gt;For specific document types or domains, fine-tuning these models could yield even better results.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: The New Frontier of Document Understanding
&lt;/h2&gt;

&lt;p&gt;This project demonstrates that we're entering a new era of document processing—one where AI doesn't just recognize text but truly understands documents. The comparison between Claude 3.7 Sonnet and Amazon Nova Pro highlights the impressive capabilities of modern LLMs in this space, while also revealing the tradeoffs developers need to consider.&lt;/p&gt;

&lt;p&gt;For those working with document processing pipelines, the message is clear: traditional OCR is being rapidly surpassed by these more interpretative, context-aware approaches. By leveraging the document understanding capabilities of LLMs, we can create more accurate, more resilient text extraction systems.&lt;/p&gt;

&lt;p&gt;Whether you're working with legal contracts, financial statements, or research papers, this LLM-powered approach to document processing offers significant advantages over traditional OCR. And as these models continue to improve, so too will their ability to understand and extract information from the documents that power our businesses and institutions.&lt;/p&gt;

&lt;p&gt;Want to try it yourself? Check out the &lt;a href="https://github.com/gugamainchein/llms-ocr-comparation" rel="noopener noreferrer"&gt;full project on GitHub&lt;/a&gt; and see how these powerful Amazon Bedrock models compare on your own PDFs!&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>bedrock</category>
      <category>llm</category>
    </item>
    <item>
      <title>Building an AI Voice Assistant Using AWS Serverless and Bedrock Nova</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Wed, 02 Apr 2025 11:51:47 +0000</pubDate>
      <link>https://dev.to/gugamainchein/building-an-ai-voice-assistant-using-aws-serverless-and-bedrock-nova-2mi3</link>
      <guid>https://dev.to/gugamainchein/building-an-ai-voice-assistant-using-aws-serverless-and-bedrock-nova-2mi3</guid>
      <description>&lt;h2&gt;
  
  
  General Context
&lt;/h2&gt;

&lt;p&gt;The growing interest in natural language interfaces has made voice assistants more relevant than ever. With the rise of tools like Amazon Bedrock and the introduction of generative voices in Amazon Polly, it’s now possible to create sophisticated voice applications using entirely serverless infrastructure.&lt;/p&gt;

&lt;p&gt;In this article, I’ll walk you through the architecture and implementation of an AI voice assistant that listens to your voice, transcribes your question, uses a generative model to understand and respond, and finally speaks the answer back to you. The whole solution is built using AWS serverless services, making it scalable, cost-effective, and easy to deploy.&lt;/p&gt;

&lt;p&gt;You can find the complete project repository on GitHub.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep Dive on AWS Resources
&lt;/h2&gt;

&lt;p&gt;To create this voice assistant, I used a combination of AWS services that seamlessly interact:&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon S3:
&lt;/h3&gt;

&lt;p&gt;Stores the audio files and acts as the glue for processing. It temporarily holds the user's recorded voice (WebM format) and also the synthesized audio response.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Transcribe:
&lt;/h3&gt;

&lt;p&gt;Transcribes the uploaded audio into text using its real-time transcription API. It supports various languages and accents, and it integrates well with other AWS services in the workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Bedrock (Nova Micro):
&lt;/h3&gt;

&lt;p&gt;This is the brain of the application. The transcribed text is sent to a foundation model hosted on Amazon Bedrock—in this case, Nova Micro. It generates a coherent and human-like response based on the user’s input.&lt;/p&gt;

&lt;h3&gt;
  
  
  Amazon Polly (Generative Voice):
&lt;/h3&gt;

&lt;p&gt;Once the response text is generated, Amazon Polly converts it into audio using one of the new generative voices, delivering a more natural, expressive tone compared to traditional TTS.&lt;/p&gt;

&lt;h3&gt;
  
  
  AWS Lambda:
&lt;/h3&gt;

&lt;p&gt;Orchestrates the entire process:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Triggered by S3 uploads&lt;/li&gt;
&lt;li&gt;Calls Amazon Transcribe and waits for transcription&lt;/li&gt;
&lt;li&gt;Sends prompt to Bedrock and receives the response&lt;/li&gt;
&lt;li&gt;Converts response text to speech with Polly&lt;/li&gt;
&lt;li&gt;Returns a signed URL to the audio for playback&lt;/li&gt;
&lt;/ul&gt;


&lt;h3&gt;
  
  
  Amazon API Gateway:
&lt;/h3&gt;

&lt;p&gt;Exposes the backend as a secure REST API. It allows the front-end to send the user’s voice and receive the audio response.&lt;/p&gt;

&lt;h2&gt;
  
  
  Deep Dive on Application
&lt;/h2&gt;

&lt;p&gt;The front-end is a simple JavaScript application that records the user’s voice via the browser, sends it to the backend, and plays the response. It includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Audio recording using MediaRecorder&lt;/li&gt;
&lt;li&gt;File upload to an S3 presigned URL&lt;/li&gt;
&lt;li&gt;Asynchronous polling until the processed voice response is ready&lt;/li&gt;
&lt;li&gt;Playback of Polly's generative voice output&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The back-end architecture follows an event-driven pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User speaks a question.&lt;/li&gt;
&lt;li&gt;Audio is uploaded to S3.&lt;/li&gt;
&lt;li&gt;Lambda is triggered on object creation.&lt;/li&gt;
&lt;li&gt;Audio is transcribed using Amazon Transcribe.&lt;/li&gt;
&lt;li&gt;Text is sent to Bedrock Nova Micro for a response.&lt;/li&gt;
&lt;li&gt;The response is synthesized into speech using Amazon Polly generative voices.&lt;/li&gt;
&lt;li&gt;A signed URL is returned to the front-end.&lt;/li&gt;
&lt;/ol&gt;
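&lt;p&gt;The orchestration steps above can be sketched as a single function with the AWS calls injected as callables (the real handler would use boto3 clients for Transcribe, Bedrock, Polly, and S3; the names below are illustrative):&lt;/p&gt;

```python
def handle_voice_request(audio_key, transcribe, generate, synthesize, sign_url):
    # Sketch of the Lambda orchestration: each service integration is
    # passed in as a callable so the flow can be exercised locally.
    question = transcribe(audio_key)      # steps 3-4: audio to text
    answer = generate(question)           # step 5: Bedrock Nova Micro
    response_key = synthesize(answer)     # step 6: Polly generative voice
    return sign_url(response_key)         # step 7: signed URL for playback

# Stub services for local illustration only.
url = handle_voice_request(
    "uploads/question.webm",
    transcribe=lambda key: "what is serverless?",
    generate=lambda q: f"Answer to: {q}",
    synthesize=lambda text: "responses/answer.mp3",
    sign_url=lambda key: f"https://example-bucket.s3.amazonaws.com/{key}?sig=...",
)
print(url)
```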

&lt;p&gt;All services are defined as infrastructure-as-code with the Serverless Framework, making the deployment repeatable and easy to manage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost Estimate
&lt;/h2&gt;

&lt;p&gt;Here’s a rough cost estimate based on moderate usage (100 requests/day):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Amazon S3 | 1 GB storage, 5K PUT/GET | ~$0.14&lt;/li&gt;
&lt;li&gt;Amazon Transcribe | 10 hours/month | ~$0.24&lt;/li&gt;
&lt;li&gt;Amazon Bedrock (Nova Micro) | 500K input/output tokens | ~$0.09&lt;/li&gt;
&lt;li&gt;Amazon Polly (Generative) | 10 hours of audio | ~$0.00&lt;/li&gt;
&lt;li&gt;Lambda (1M requests + 5,000 ms + 128 MB) | ~$3.00&lt;/li&gt;
&lt;li&gt;API Gateway | 1M calls/month | ~$3.50&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://calculator.aws/#/estimate?id=b2c5d8ad202d2a697977cf9d0c45b8a6dda13813" rel="noopener noreferrer"&gt;💡 Total Estimated Monthly Cost: ~$9.56&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can tweak your usage and run your own estimate using the AWS Pricing Calculator.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final Considerations
&lt;/h2&gt;

&lt;p&gt;This project showcases how powerful AWS serverless technologies can be when building modern, AI-powered voice interfaces. By leveraging Amazon Polly's new generative voices, Bedrock’s advanced language models, and an event-driven architecture, you can create a seamless voice assistant with very low overhead.&lt;/p&gt;

&lt;p&gt;The best part? It scales effortlessly—whether you're running one or 10,000 daily conversations.&lt;/p&gt;

&lt;p&gt;I encourage you to explore the GitHub repo, fork it, and make it your own. You can easily swap out the voice model, add authentication with Cognito, or even extend it to support multi-turn conversations.&lt;/p&gt;

&lt;p&gt;Feel free to leave questions or feedback in the comments!&lt;/p&gt;

&lt;h2&gt;
  
  
  References
&lt;/h2&gt;

&lt;p&gt;🔗 Project GitHub Repository: &lt;a href="https://github.com/gugamainchein/ai-voice-assistance" rel="noopener noreferrer"&gt;https://github.com/gugamainchein/ai-voice-assistance&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🧠 Amazon Bedrock Nova: &lt;a href="https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;🔊 Amazon Polly (Generative Voices): &lt;a href="https://docs.aws.amazon.com/polly/latest/dg/generative-voices.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/polly/latest/dg/generative-voices.html&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;📝 Amazon Transcribe: &lt;a href="https://aws.amazon.com/transcribe/" rel="noopener noreferrer"&gt;https://aws.amazon.com/transcribe/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>cloud</category>
      <category>novamicro</category>
      <category>serverless</category>
    </item>
    <item>
      <title>GitHub - Improve your Code with DeepSeek and Serverless App</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Mon, 17 Mar 2025 01:56:21 +0000</pubDate>
      <link>https://dev.to/gugamainchein/github-code-improve-with-deepseek-and-serverless-app-3d91</link>
      <guid>https://dev.to/gugamainchein/github-code-improve-with-deepseek-and-serverless-app-3d91</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;As software development grows in complexity, maintaining clean, efficient, and well-structured code becomes a major challenge. Poorly written code can lead to increased maintenance costs, decreased readability, and technical debt. With generative AI, developers now have access to intelligent code analysis tools that provide feedback and actionable improvements.&lt;/p&gt;

&lt;p&gt;This post covers a serverless application that uses AWS Bedrock with the DeepSeek R1 LLM to analyze GitHub commits, detect potential issues, and suggest improvements based on Domain-Driven Design (DDD) and Clean Code principles. This approach helps keep code scalable, maintainable, and aligned with best practices.&lt;/p&gt;

&lt;p&gt;By leveraging AWS Bedrock, this project achieves seamless AI integration within a serverless architecture, ensuring cost-effectiveness, performance, and reliability. &lt;/p&gt;




&lt;h2&gt;
  
  
  About This Project
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Overview
&lt;/h3&gt;

&lt;p&gt;This project is designed to automate code analysis in a GitHub repository. Every time a developer pushes a commit, a GitHub webhook triggers an AWS Lambda function, which then analyzes the commit and provides feedback using DeepSeek R1 LLM.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Technologies
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Serverless Framework&lt;/strong&gt;: Manages and deploys AWS resources, including Lambda, API Gateway, and DynamoDB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AWS Lambda&lt;/strong&gt;: Processes GitHub webhooks and triggers AI-based code validation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DeepSeek R1 LLM&lt;/strong&gt;: A generative AI model that analyzes commit changes and suggests improvements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DynamoDB&lt;/strong&gt;: Stores commit metadata and AI-generated recommendations for future reference.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CloudWatch&lt;/strong&gt;: Provides logging and monitoring for the Lambda functions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Webhooks&lt;/strong&gt;: Automatically sends commit data to the API for processing.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Project Structure Breakdown
&lt;/h3&gt;

&lt;p&gt;The project is structured into several key components, each responsible for handling a specific part of the process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── layers/                 # Shared dependencies for Lambda functions
│   ├── common/
│   │   ├── requirements.txt # Defines Python dependencies
├── src/
│   ├── functions/          # Lambda functions that process GitHub webhooks
│   │   ├── commit_analyzer.py # Analyzes commit data and sends it to DeepSeek AI
│   ├── helpers/            # Utility classes for handling Lambda payloads and responses
│   │   ├── lambda_payload.py
│   │   ├── lambda_response.py
│   ├── infrastructure/     # YAML configurations for AWS resource deployment
│   │   ├── resources.yml
│   ├── services/           # External service integrations (GitHub, DynamoDB, AI model)
│   │   ├── bedrock.py      # Handles AWS Bedrock interactions
│   │   ├── dynamodb.py     # Interfaces with DynamoDB to store results
│   │   ├── github.py       # Manages GitHub structure returns
│   ├── __init__.py
├── .env.example            # Example environment configuration file
├── deploy-example.sh       # Deployment script for easy setup
├── README.md               # Documentation for the project
├── serverless.yml          # Serverless Framework configuration file
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  How It Works
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;A developer commits changes to the repository.&lt;/li&gt;
&lt;li&gt;GitHub sends a webhook event to the API Gateway.&lt;/li&gt;
&lt;li&gt;The API Gateway triggers an AWS Lambda function.&lt;/li&gt;
&lt;li&gt;The Lambda function analyzes the commit and extracts relevant code changes.&lt;/li&gt;
&lt;li&gt;The extracted code is processed by DeepSeek R1 LLM for improvement suggestions.&lt;/li&gt;
&lt;li&gt;The AI-generated recommendations are stored in DynamoDB for reference.&lt;/li&gt;
&lt;li&gt;The developer receives feedback on code quality and best practices.&lt;/li&gt;
&lt;/ol&gt;
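&lt;p&gt;The flow above can be sketched as one function with the GitHub, Bedrock, and DynamoDB integrations injected as callables (the names and the prompt below are illustrative assumptions, not the project's actual code):&lt;/p&gt;

```python
def analyze_commit(webhook_payload, fetch_diff, ask_model, store):
    # Sketch of the webhook-to-feedback flow: extract the head commit,
    # fetch its diff, ask the model for suggestions, and persist them.
    commit = webhook_payload["head_commit"]
    diff = fetch_diff(webhook_payload["repository"]["full_name"], commit["id"])
    suggestions = ask_model(
        f"Review this diff for DDD and Clean Code issues:\n{diff}"
    )
    store({"commit_id": commit["id"], "suggestions": suggestions})
    return suggestions

# Minimal payload and stub services for local illustration.
payload = {
    "repository": {"full_name": "johnDoe/fake-repo"},
    "head_commit": {"id": "f1e2d3c4"},
}
saved = []
out = analyze_commit(
    payload,
    fetch_diff=lambda repo, sha: "+ console.log('debug')",
    ask_model=lambda prompt: "Remove stray debug logging.",
    store=saved.append,
)
print(out, saved[0]["commit_id"])
```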




&lt;h2&gt;
  
  
  API Testing Example
&lt;/h2&gt;

&lt;p&gt;To test the AI-based code analysis manually, you can use the following cURL request:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;--location&lt;/span&gt; &lt;span class="nt"&gt;--request&lt;/span&gt; POST &lt;span class="s1"&gt;'https://example.com/commit/analyze'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--header&lt;/span&gt; &lt;span class="s1"&gt;'Content-Type: application/json'&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
&lt;span class="nt"&gt;--data-raw&lt;/span&gt; &lt;span class="s1"&gt;'{
  "ref": "refs/heads/develop",
  "before": "a1b2c3d4e5f678901234567890abcdef12345678",
  "after": "f1e2d3c4b5a678901234567890abcdef98765432",
  "repository": {
    "id": 123456789,
    "node_id": "R_kgDOPQRSTU",
    "name": "fake-repo",
    "full_name": "johnDoe/fake-repo",
    "private": false,
    "owner": {
      "name": "johnDoe",
      "email": "johndoe@users.noreply.github.com",
      "login": "johnDoe",
      "id": 987654321,
      "node_id": "MDQ6VXNlcjk4NzY1NDMyMQ==",
      "avatar_url": "https://avatars.githubusercontent.com/u/987654321?v=4",
      "gravatar_id": "",
      "url": "https://api.github.com/users/johnDoe",
      "html_url": "https://github.com/johnDoe",
      "followers_url": "https://api.github.com/users/johnDoe/followers",
      "following_url": "https://api.github.com/users/johnDoe/following{/other_user}",
      "gists_url": "https://api.github.com/users/johnDoe/gists{/gist_id}",
      "starred_url": "https://api.github.com/users/johnDoe/starred{/owner}{/repo}",
      "subscriptions_url": "https://api.github.com/users/johnDoe/subscriptions",
      "organizations_url": "https://api.github.com/users/johnDoe/orgs",
      "repos_url": "https://api.github.com/users/johnDoe/repos",
      "events_url": "https://api.github.com/users/johnDoe/events{/privacy}",
      "received_events_url": "https://api.github.com/users/johnDoe/received_events",
      "type": "User",
      "user_view_type": "public",
      "site_admin": false
    },
    "html_url": "https://github.com/johnDoe/fake-repo",
    "description": "A fake repository for testing purposes",
    "fork": false,
    "url": "https://github.com/johnDoe/fake-repo",
    "created_at": 1742164973,
    "updated_at": "2025-03-16T22:42:57Z",
    "pushed_at": 1742165040,
    "git_url": "git://github.com/johnDoe/fake-repo.git",
    "ssh_url": "git@github.com:johnDoe/fake-repo.git",
    "clone_url": "https://github.com/johnDoe/fake-repo.git",
    "svn_url": "https://github.com/johnDoe/fake-repo",
    "size": 100,
    "stargazers_count": 10,
    "watchers_count": 10,
    "language": "JavaScript",
    "has_issues": true,
    "has_projects": true,
    "has_downloads": true,
    "has_wiki": true,
    "has_pages": false,
    "has_discussions": false,
    "forks_count": 2,
    "archived": false,
    "disabled": false,
    "open_issues_count": 1,
    "license": "MIT",
    "allow_forking": true,
    "is_template": false,
    "visibility": "public",
    "default_branch": "develop"
  },
  "pusher": {
    "name": "johnDoe",
    "email": "johndoe@users.noreply.github.com"
  },
  "sender": {
    "login": "johnDoe",
    "id": 987654321,
    "node_id": "MDQ6VXNlcjk4NzY1NDMyMQ==",
    "avatar_url": "https://avatars.githubusercontent.com/u/987654321?v=4",
    "url": "https://api.github.com/users/johnDoe",
    "html_url": "https://github.com/johnDoe",
    "followers_url": "https://api.github.com/users/johnDoe/followers",
    "following_url": "https://api.github.com/users/johnDoe/following{/other_user}",
    "gists_url": "https://api.github.com/users/johnDoe/gists{/gist_id}",
    "starred_url": "https://api.github.com/users/johnDoe/starred{/owner}{/repo}",
    "subscriptions_url": "https://api.github.com/users/johnDoe/subscriptions",
    "organizations_url": "https://api.github.com/users/johnDoe/orgs",
    "repos_url": "https://api.github.com/users/johnDoe/repos",
    "events_url": "https://api.github.com/users/johnDoe/events{/privacy}",
    "received_events_url": "https://api.github.com/users/johnDoe/received_events",
    "type": "User",
    "user_view_type": "public",
    "site_admin": false
  },
  "created": false,
  "deleted": false,
  "forced": false,
  "base_ref": null,
  "compare": "https://github.com/johnDoe/fake-repo/compare/a1b2c3d4e5f6...f1e2d3c4b5a6",
  "commits": [
    {
      "id": "f1e2d3c4b5a678901234567890abcdef98765432",
      "tree_id": "5a6b7c8d9e0f1234567890abcdef987654321234",
      "distinct": true,
      "message": "fix: updated authentication logic",
      "timestamp": "2025-03-16T19:44:00-03:00",
      "url": "https://github.com/johnDoe/fake-repo/commit/f1e2d3c4b5a678901234567890abcdef98765432",
      "author": {
        "name": "John Doe",
        "email": "johndoe@users.noreply.github.com",
        "username": "johnDoe"
      },
      "committer": {
        "name": "GitHub",
        "email": "noreply@github.com",
        "username": "web-flow"
      },
      "added": ["src/auth.js"],
      "removed": [],
      "modified": ["src/index.js"]
    }
  ],
  "head_commit": {
    "id": "f1e2d3c4b5a678901234567890abcdef98765432",
    "tree_id": "5a6b7c8d9e0f1234567890abcdef987654321234",
    "distinct": true,
    "message": "fix: updated authentication logic",
    "timestamp": "2025-03-16T19:44:00-03:00",
    "url": "https://github.com/johnDoe/fake-repo/commit/f1e2d3c4b5a678901234567890abcdef98765432",
    "author": {
      "name": "John Doe",
      "email": "johndoe@users.noreply.github.com",
      "username": "johnDoe"
    },
    "committer": {
      "name": "GitHub",
      "email": "noreply@github.com",
      "username": "web-flow"
    },
    "added": ["src/auth.js"],
    "removed": [],
    "modified": ["src/index.js"]
  }
}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Upon execution, the request triggers the AI-powered analysis, and you will receive a detailed JSON response with suggested improvements for your committed code.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Configure GitHub Webhooks
&lt;/h2&gt;

&lt;p&gt;To enable automated AI code validation in your repository, follow these steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Navigate to your repository on GitHub.&lt;/li&gt;
&lt;li&gt;Click on Settings &amp;gt; Webhooks.&lt;/li&gt;
&lt;li&gt;Click "Add Webhook".&lt;/li&gt;
&lt;li&gt;Set the Payload URL: &lt;code&gt;https://example.com/commit/analyze&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Choose Content Type: &lt;code&gt;application/json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Select the events that trigger the webhook: Push events.&lt;/li&gt;
&lt;li&gt;Click "Add Webhook" to save the configuration.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Now, every new commit automatically triggers the AI analysis, and improvement suggestions will be generated.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project showcases how generative AI can revolutionize software development by providing real-time feedback on code quality. By using AWS Bedrock with DeepSeek R1 LLM, this solution ensures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Consistent adherence to best coding practices&lt;/li&gt;
&lt;li&gt;Improved readability and maintainability of code&lt;/li&gt;
&lt;li&gt;Automated, AI-driven reviews that enhance development workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers can now spend less time debugging and more time building high-quality software.&lt;/p&gt;

&lt;p&gt;To explore the full project, visit: &lt;a href="https://github.com/gugamainchein/github-ia-code-validation" rel="noopener noreferrer"&gt;GitHub IA - Code Validation&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Note: DeepSeek R1 is a recent model on the AWS Marketplace and in the broader market, so its output quality may vary, especially for languages other than English and Chinese.&lt;/p&gt;

</description>
      <category>cloud</category>
      <category>aws</category>
      <category>deepseek</category>
      <category>serverless</category>
    </item>
    <item>
      <title>Serverless App - Extração de Textos com Exibição de Layouts com Textract</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Mon, 07 Oct 2024 12:19:35 +0000</pubDate>
      <link>https://dev.to/gugamainchein/serverless-app-extracao-de-textos-com-exibicao-de-layouts-com-textract-455o</link>
      <guid>https://dev.to/gugamainchein/serverless-app-extracao-de-textos-com-exibicao-de-layouts-com-textract-455o</guid>
      <description>&lt;p&gt;&lt;strong&gt;Entendendo o Textract:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;O Amazon Textract é um serviço avançado de Machine Learning (ML) da AWS projetado para extrair automaticamente textos impressos ou manuscritos, além de identificar elementos de layout e dados estruturados a partir de documentos digitalizados. Ele é capaz de processar diversos tipos de documentos, como formulários, relatórios e recibos, facilitando a automação de tarefas que exigem a extração e organização de informações. A tecnologia é particularmente útil em cenários onde grandes volumes de documentos precisam ser analisados, permitindo uma leitura precisa e eficiente dos conteúdos, sejam eles simples ou complexos.&lt;/p&gt;

&lt;p&gt;A base desse serviço é a tecnologia de reconhecimento óptico de caracteres (OCR), que utiliza algoritmos sofisticados de correspondência de padrões para analisar imagens de texto. O OCR realiza uma comparação detalhada, caractere por caractere, entre o conteúdo visualizado e um banco de dados interno, decodificando a imagem para gerar um texto digital legível. No entanto, o OCR convencional pode ser limitado quando se trata de interpretar variações complexas de escrita, especialmente manuscrita. Para superar esses desafios, o Amazon Textract adota o reconhecimento inteligente de caracteres (ICR), uma evolução do OCR. O ICR utiliza técnicas avançadas de machine learning que treinam o sistema para reconhecer caracteres da mesma forma que um humano faria, aprimorando a precisão na leitura de diferentes estilos de escrita, mesmo em formatos menos padronizados.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;"Antes de prosseguirmos, é fundamental esclarecer o objetivo desta publicação. Vamos apresentar um exemplo prático de como desenvolver tanto o back-end quanto o front-end para integrar o Amazon Textract, com o foco específico em destacar informações importantes (highlights) em documentos PDF. Isso será feito utilizando o recurso de Layout do serviço, que permite identificar e manipular a estrutura visual dos documentos, como tabelas, parágrafos e outras áreas de interesse. Vale ressaltar que, neste conteúdo, não exploraremos outras funcionalidades do Textract, concentrando-se exclusivamente na extração de layout e destaques em PDFs.”&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Como funciona a integração com o Textract:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A AWS possui diversos portais que contém documentações completas sobre o processo de integração com cada serviço, de acordo com sua linguagem. No nosso caso, iremos utilizar o &lt;a href="https://nodejs.org/pt" rel="noopener noreferrer"&gt;Node.js&lt;/a&gt;, que é um software de código aberto, multiplataforma, baseado no interpretador V8 do Google e que permite a execução de códigos JavaScript fora de um navegador web.&lt;/p&gt;

&lt;p&gt;No caso do Javascript, a &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/" rel="noopener noreferrer"&gt;AWS possui um hub grande de integrações&lt;/a&gt;, onde você pode realizar a integração com os serviços por meio de módulos. Pensando na integração com o Textract, você pode seguir a &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/textract/" rel="noopener noreferrer"&gt;documentação&lt;/a&gt; e executar os seguintes comandos de instalação:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u6ujqyjyi3d1ima7y56.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6u6ujqyjyi3d1ima7y56.png" alt=" " width="800" height="353"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this post, we will use the Textract API's "AnalyzeDocumentCommand" method; its documentation is available here: &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/textract/command/AnalyzeDocumentCommand/" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/textract/command/AnalyzeDocumentCommand/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Back-End Application:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To structure the back-end application, we will rely on the &lt;a href="https://www.serverless.com/" rel="noopener noreferrer"&gt;Serverless Framework&lt;/a&gt; as the project's library and framework, since it helps configure the infrastructure resources and deploy the Lambda functions and API Gateway.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Starting with the serverless.yml file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Organization name of the Serverless Framework account
org: publicacao
# Application name within the organization
app: documents-analyze
# Service name belonging to the application
service: back-end

provider:
  # Infrastructure provider name
  name: aws
  # Language and version accepted by Lambda
  runtime: nodejs20.x
  # Default timeout for the Lambda functions
  timeout: 30
  # IAM role statements granting permissions to the Lambda functions
  iamRoleStatements:
    - Resource: "*"
      Effect: Allow
      Action:
        - s3:*
        - textract:*

plugins:
  # Plugin (NPM module) that helps run the app in a development environment
  - serverless-offline

# Lambda function definitions
functions:
  # Unique name of the Lambda function
  extractText:
    # Folder path where the function's handler lives
    handler: src/extractText.handler
    # Events that trigger the function; in this case, API Gateway
    events:
      - httpApi:
          path: /{documentName}
          method: get
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Next, configuring the package.json file (the standard file for running Node.js applications):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "name": "back-end",
  "version": "1.0.0",
  "main": "src/extractText.mjs",
  "license": "ISC",
  "type": "module",
  "dependencies": {
    "@aws-sdk/client-s3": "^3.658.1",
    "@aws-sdk/client-textract": "^3.658.1",
    "@aws-sdk/s3-request-presigner": "^3.658.1"
  },
  "devDependencies": {
    "serverless-offline": "^14.3.2"
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now, in the src/extractText.mjs file:
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;```import import {&lt;br&gt;
  TextractClient,&lt;br&gt;
  AnalyzeDocumentCommand,&lt;br&gt;
} from "@aws-sdk/client-textract";&lt;br&gt;
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";&lt;br&gt;
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";&lt;/p&gt;

&lt;p&gt;// Infrastructure Layer&lt;br&gt;
const region = "us-east-1";&lt;br&gt;
const textractClient = new TextractClient({ region });&lt;br&gt;
const s3Client = new S3Client({ region });&lt;/p&gt;

&lt;p&gt;const BUCKET_NAME = "";&lt;br&gt;
const DOCUMENTS_FOLDER = "";&lt;br&gt;
const SIGNED_URL_EXPIRATION = 3600;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Service Layer: Handles Textract document analysis
const analyzeDocument = async (bucketName, documentPath) =&amp;gt; {
  const command = new AnalyzeDocumentCommand({
    Document: {
      S3Object: {
        Bucket: bucketName,
        Name: documentPath,
      },
    },
    FeatureTypes: ["LAYOUT"],
  });

  const response = await textractClient.send(command);
  return {
    blocks: response.Blocks,
    pages: response.DocumentMetadata?.Pages,
  };
};

// Service Layer: Generates signed URL for S3 object
const generateSignedUrl = async (bucketName, documentPath) =&amp;gt; {
  const command = new GetObjectCommand({
    Bucket: bucketName,
    Key: documentPath,
  });

  return getSignedUrl(s3Client, command, { expiresIn: SIGNED_URL_EXPIRATION });
};

// Domain Layer: Main handler function
const handler = async (event) =&amp;gt; {
  try {
    const { documentName } = event.pathParameters;
    const documentParsedName = decodeURIComponent(documentName) + ".pdf";
    const documentPath = `${DOCUMENTS_FOLDER}/${documentParsedName}`;

    const documentData = await analyzeDocument(BUCKET_NAME, documentPath);
    const signedUrl = await generateSignedUrl(BUCKET_NAME, documentPath);

    return createSuccessResponse({ documentData, signedUrl });
  } catch (error) {
    console.error("Error processing document:", error);
    return createErrorResponse("Failed to process document.");
  }
};

// Helper functions: Response formatting
const createSuccessResponse = (data) =&amp;gt; ({
  statusCode: 200,
  body: JSON.stringify(data),
});

const createErrorResponse = (message) =&amp;gt; ({
  statusCode: 500,
  body: JSON.stringify({ error: message }),
});

export { handler };
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
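Before wiring this up, it can help to see the shape of the payload the handler returns. Below is a minimal sketch (with a hypothetical mock payload, not a real Textract response) of how a client might consume the { documentData, signedUrl } object produced by createSuccessResponse:

```javascript
// Sketch: consuming the handler's success payload.
// The block data below is mock data, not a real Textract response.
const createSuccessResponse = (data) => ({
  statusCode: 200,
  body: JSON.stringify(data),
});

// Hypothetical payload mirroring the handler's return shape
const response = createSuccessResponse({
  documentData: {
    blocks: [
      { BlockType: "PAGE" },
      { BlockType: "LINE", Text: "Hello" },
      { BlockType: "LINE", Text: "World" },
    ],
    pages: 1,
  },
  signedUrl: "https://example.com/signed-url",
});

const { documentData, signedUrl } = JSON.parse(response.body);
const lines = documentData.blocks.filter((b) => b.BlockType === "LINE");
console.log(lines.length); // 2
```

The front-end shown later follows the same pattern: it parses the body, keeps the LINE blocks for drawing bounding boxes, and loads the PDF through the signed URL.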

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


Com isso, no back-end está devidamente estruturado e para você executá-lo localmente, siga os comandos abaixo:

![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/al1kr2fth75x5i1jbsfa.png)

A partir de então, você estará apto a fazer requisições na sua rota local a partir de qualquer navegador ou plataforma de API, como [Postman](https://www.postman.com/) ou [Apidog](https://apidog.com/).

**Aplicação Front-End:**

Para estruturação da aplicação Front-End, utilizamos o Tailwindcss + Vite + React TS, onde você pode encontrar o tutorial de inicialização do projeto na seguinte documentação: https://tailwindcss.com/docs/guides/vite

Após o passo-a-passo acima executado, seu projeto precisará de algumas dependências para exibir os PDFs em tela, assim como os highlights nos textos. Pensando nisso, utilizaremos a biblioteca react-pdf para podermos fazer esse processo no Front-End.

![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gf7afqr4xwg7i6dda6ii.png)

Lembre-se de instalar a biblioteca como super usuário, pois ela utiliza configurações de sistema para realizar a exibição do documento.

Com isso feito, você precisará apenas criar um arquivo de componente e alterar o App.tsx, conforme orientações abaixo:

- Começando pela criação do src/components/TextDetection.tsx, que será o responsável pela exibição do PDF e marcação das caixas de posição das extrações:



import React, { useEffect, useRef, useState } from "react";
import { Props } from "../@types/blocks";
import { pdfjs } from "react-pdf";
import "react-pdf/dist/esm/Page/AnnotationLayer.css";
import "react-pdf/dist/esm/Page/TextLayer.css";

// PDF.js worker configuration
pdfjs.GlobalWorkerOptions.workerSrc = new URL(
  "pdfjs-dist/build/pdf.worker.min.mjs",
  import.meta.url
).toString();

// Detects clicks inside bounding boxes
const handleBoxClick = (
  e: MouseEvent,
  block: any,
  width: number,
  height: number,
  setModalText: (text: string) =&amp;gt; void
) =&amp;gt; {
  const canvas = e.target as HTMLCanvasElement;
  const rect = canvas.getBoundingClientRect();

  const x = e.clientX - rect.left;
  const y = e.clientY - rect.top;

  const box = block.Geometry.BoundingBox;
  const left = width * box.Left;
  const top = height * box.Top;
  const boxWidth = width * box.Width;
  const boxHeight = height * box.Height;

  if (x &amp;gt;= left &amp;amp;&amp;amp; x &amp;lt;= left + boxWidth &amp;amp;&amp;amp; y &amp;gt;= top &amp;amp;&amp;amp; y &amp;lt;= top + boxHeight) {
    setModalText(block.Text);
  }
};

// Draws the bounding boxes and wires up click detection
const drawBoundingBoxes = (
  ctx: CanvasRenderingContext2D,
  width: number,
  height: number,
  canvas: HTMLCanvasElement,
  response: Props["response"],
  setModalText: (text: string) =&amp;gt; void
) =&amp;gt; {
  const lineBlocks = response.blocks.filter(
    (block) =&amp;gt; block.BlockType === "LINE"
  );

  lineBlocks.forEach((block) =&amp;gt; {
    const box = block.Geometry.BoundingBox;
    ctx.strokeStyle = "red";
    ctx.lineWidth = 2;
    ctx.strokeRect(
      width * box.Left,
      height * box.Top,
      width * box.Width,
      height * box.Height
    );
  });

  // Attach a single click handler per canvas (instead of one listener per
  // block) so repeated renders do not accumulate duplicate listeners
  canvas.onclick = (e) =&amp;gt;
    lineBlocks.forEach((block) =&amp;gt;
      handleBoxClick(e, block, width, height, setModalText)
    );
};

// Service to load the PDF and draw the bounding boxes
const loadPdfAndDraw = async (
  documentUrl: string,
  canvasRefs: React.MutableRefObject&amp;lt;HTMLCanvasElement[]&amp;gt;,
  response: Props["response"],
  setModalText: (text: string) =&amp;gt; void
) =&amp;gt; {
  try {
    const pdf = await pdfjs.getDocument(documentUrl).promise;

    for (let pageNumber = 1; pageNumber &amp;lt;= pdf.numPages; pageNumber++) {
      const page = await pdf.getPage(pageNumber);
      const viewport = page.getViewport({ scale: 1 });

      const canvas = canvasRefs.current[pageNumber - 1];
      if (!canvas) continue;

      const ctx = canvas.getContext("2d");
      if (!ctx) continue;

      canvas.width = viewport.width;
      canvas.height = viewport.height;

      await page.render({ canvasContext: ctx, viewport }).promise;
      drawBoundingBoxes(
        ctx,
        viewport.width,
        viewport.height,
        canvas,
        response,
        setModalText
      );
    }
  } catch (error) {
    console.error("Error loading the PDF:", error);
  }
};

// Main component
const TextDetectionCanvas: React.FC&amp;lt;Props&amp;gt; = ({
  response,
  documentUrl,
  qtdPages,
}) =&amp;gt; {
  const canvasRefs = useRef&amp;lt;HTMLCanvasElement[]&amp;gt;([]);
  const [modalText, setModalText] = useState&amp;lt;string | null&amp;gt;(null);

  const closeModal = () =&amp;gt; setModalText(null);

  useEffect(() =&amp;gt; {
    if (documentUrl &amp;amp;&amp;amp; response) {
      loadPdfAndDraw(documentUrl, canvasRefs, response, setModalText);
    }
  }, [documentUrl, response]);

  return (
    &amp;lt;div&amp;gt;
      {/* Render the canvases */}
      {Array.from({ length: qtdPages }).map((_, index) =&amp;gt; (
        &amp;lt;canvas key={index} ref={(el) =&amp;gt; (canvasRefs.current[index] = el!)} /&amp;gt;
      ))}

      {/* Text modal */}
      {modalText &amp;amp;&amp;amp; &amp;lt;TextModal modalText={modalText} closeModal={closeModal} /&amp;gt;}
    &amp;lt;/div&amp;gt;
  );
};

// Text modal component
const TextModal: React.FC&amp;lt;{ modalText: string; closeModal: () =&amp;gt; void }&amp;gt; = ({
  modalText,
  closeModal,
}) =&amp;gt; (
  &amp;lt;div
    style={{
      position: "fixed",
      top: 0,
      left: 0,
      width: "100vw",
      height: "100vh",
      backgroundColor: "rgba(0, 0, 0, 0.5)",
      display: "flex",
      alignItems: "center",
      justifyContent: "center",
      zIndex: 1000,
    }}
  &amp;gt;
    &amp;lt;div
      style={{
        backgroundColor: "white",
        padding: "20px",
        borderRadius: "8px",
        boxShadow: "0 2px 10px rgba(0, 0, 0, 0.3)",
      }}
    &amp;gt;
      &amp;lt;div className="font-bold text-2xl flex gap-10"&amp;gt;
        &amp;lt;h1&amp;gt;See the Details&amp;lt;/h1&amp;gt;
        &amp;lt;button onClick={closeModal}&amp;gt;X&amp;lt;/button&amp;gt;
      &amp;lt;/div&amp;gt;
      &amp;lt;p className="mt-5"&amp;gt;Texto: {modalText}&amp;lt;/p&amp;gt;
    &amp;lt;/div&amp;gt;
  &amp;lt;/div&amp;gt;
);

export default TextDetectionCanvas;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Now, finishing with the update to App.tsx, which is responsible for the integration with the back-end based on a list of documents:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import React, { useEffect, useState } from "react";
import TextDetectionCanvas from "./components/TextDetection";
import { Response } from "./@types/blocks";

// Constants
const DOCUMENTS = [
  "LIST OF DOCUMENTS"
];

// Service Layer: Handles document fetching
const fetchDocumentData = async (
  documentName: string,
  setDocumentData: (data: Response) =&amp;gt; void,
  setDocumentUrl: (url: string) =&amp;gt; void,
  setPageNumber: (page: number) =&amp;gt; void
) =&amp;gt; {
  try {
    const response = await fetch(`http://localhost:3000/${documentName}`);
    const data = await response.json();
    setDocumentData(data.documentData);
    setPageNumber(data.documentData.pages);
    setDocumentUrl(data.signedUrl);
  } catch (error) {
    console.error("Failed to fetch document data", error);
  }
};

// Domain Layer: Main Application Component
const App: React.FC = () =&amp;gt; {
  const [documentData, setDocumentData] = useState&amp;lt;Response | undefined&amp;gt;();
  const [documentUrl, setDocumentUrl] = useState("");
  const [pageNumber, setPageNumber] = useState(1);
  const [selectedDocument, setSelectedDocument] = useState(DOCUMENTS[0]);

  // Fetch document data whenever the selected document changes
  useEffect(() =&amp;gt; {
    setPageNumber(0);
    setDocumentUrl("");
    fetchDocumentData(
      selectedDocument,
      setDocumentData,
      setDocumentUrl,
      setPageNumber
    );
  }, [selectedDocument]);

  return (
    &amp;lt;div&amp;gt;
      &amp;lt;DocumentSelector
        documents={DOCUMENTS}
        selectedDocument={selectedDocument}
        onDocumentSelect={setSelectedDocument}
      /&amp;gt;
      {documentUrl === "" ? (
        &amp;lt;p&amp;gt;Loading...&amp;lt;/p&amp;gt;
      ) : (
        &amp;lt;DocumentViewer
          documentData={documentData}
          documentUrl={documentUrl}
          pageNumber={pageNumber}
        /&amp;gt;
      )}
    &amp;lt;/div&amp;gt;
  );
};

// UI Layer: Document Selector Component
const DocumentSelector: React.FC&amp;lt;{
  documents: string[];
  selectedDocument: string;
  onDocumentSelect: (doc: string) =&amp;gt; void;
}&amp;gt; = ({ documents, selectedDocument, onDocumentSelect }) =&amp;gt; (
  &amp;lt;select
    value={selectedDocument}
    onChange={(e) =&amp;gt; onDocumentSelect(e.target.value)}
  &amp;gt;
    {documents.map((document, index) =&amp;gt; (
      &amp;lt;option key={index} value={document}&amp;gt;
        {document}
      &amp;lt;/option&amp;gt;
    ))}
  &amp;lt;/select&amp;gt;
);

// UI Layer: Document Viewer Component
const DocumentViewer: React.FC&amp;lt;{
  documentData: Response | undefined;
  documentUrl: string;
  pageNumber: number;
}&amp;gt; = ({ documentData, documentUrl, pageNumber }) =&amp;gt; (
  &amp;lt;TextDetectionCanvas
    response={documentData as Response}
    documentUrl={documentUrl}
    qtdPages={pageNumber}
  /&amp;gt;
);

export default App;
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


Com as configurações realizadas, basta iniciar seu projeto por meio dos comandos abaixo:

![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2rjgbynl3gs0trjzr0zq.png)

**Resultado Final:**

Após a execução de todos os passos acima, você irá ter uma aplicação funcional que estará analisando os documentos, extraindo os textos e informando as posições onde encontram-se cada marcação daquele texto no PDF.

Exemplo do resultado final:

![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c4efvwo0ageyidi2s1c3.png)

![ ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7a5k09uto60g27igemee.png)

Pensando até mesmo em evoluções futuras, como trata-se de uma solução de OCR e sabemos que erros podem ocorrer, você pode contar com a utilização de um LLM e uma base de conhecimento para poder indexar esses textos, corrigi-los e gerar respostas inteligentes a partir de determinado assunto.

Com isso, notamos que a aplicação *serverless* desenvolvida para extração de textos e exibição de layouts com Amazon Textract demonstrou uma solução eficaz para automatizar a análise de documentos em larga escala. Utilizando OCR avançado e ICR, a ferramenta não apenas extrai informações textuais, mas também identifica e organiza estruturas complexas, como tabelas e parágrafos, diretamente em PDFs. Com a integração entre o *back-end* (Node.js e Serverless Framework) e o *front-end* (React, Vite e TailwindCSS), a aplicação permite uma visualização intuitiva das marcações e dos textos extraídos.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

</description>
      <category>amazontextract</category>
      <category>aws</category>
      <category>serverless</category>
      <category>community</category>
    </item>
    <item>
      <title>Amazon Monitron - Intelligent Monitoring for Industry</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Mon, 02 Sep 2024 11:56:09 +0000</pubDate>
      <link>https://dev.to/gugamainchein/amazon-monitron-monitoramento-de-motores-para-industria-29me</link>
      <guid>https://dev.to/gugamainchein/amazon-monitron-monitoramento-de-motores-para-industria-29me</guid>
      <description>&lt;h3&gt;
  
  
  &lt;strong&gt;About the Service:&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Amazon Monitron is an advanced monitoring solution that uses smart sensors to track the performance of motors and other industrial equipment. These sensors are especially effective on machines with rotating components, such as electric motors and pumps, where rotational movement is a critical factor. The system collects detailed vibration and temperature data from the equipment in real time, enabling continuous analysis of its operating condition. Based on this data, Amazon Monitron applies machine learning algorithms to perform predictive analysis, identifying potential failures or the need for preventive maintenance before serious problems occur. In this way, the solution helps minimize unplanned downtime and optimize equipment performance.&lt;/p&gt;

&lt;p&gt;Each Amazon Monitron kit includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;5 data-collection sensors;&lt;/li&gt;
&lt;li&gt;1 gateway for communication with the AWS cloud;&lt;/li&gt;
&lt;li&gt;1 power supply with 3 plug adapters.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5pall5nj43uo0t1w9vd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5pall5nj43uo0t1w9vd.jpg" alt="Demonstração do kit do Amazon Monitron" width="500" height="500"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  How it works:
&lt;/h3&gt;

&lt;p&gt;After receiving the Amazon Monitron kit, the installation process is quite simple and intuitive. First, download the Amazon Monitron app from your mobile device's app store, available for &lt;a href="https://apps.apple.com/br/app/amazon-monitron/id1563396065" rel="noopener noreferrer"&gt;Apple&lt;/a&gt; or &lt;a href="https://play.google.com/store/apps/details?id=aws.monitron.app&amp;amp;hl=pt_BR&amp;amp;pli=1" rel="noopener noreferrer"&gt;Android&lt;/a&gt;. Next, sign in with your AWS account and choose the installation site (or location/group) where the sensors will be used. From this point, you can start registering the different pieces of equipment you want to monitor, also specifying the various positions within that equipment where the sensors will be installed. The app makes it easy to pair the sensors with the data-collection gateway, ensuring all information is captured and transmitted correctly for analysis.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/pt_br/Monitron/latest/getting-started-guide/step-2.html" rel="noopener noreferrer"&gt;Demonstração da instalação dos sensores do Amazon Monitron&lt;/a&gt;, você encontrará um esquema detalhado de cada passo do processo de instalação.&lt;/p&gt;

&lt;p&gt;Before finishing the Amazon Monitron installation, it is crucial to define a clear strategy for positioning the sensors on each motor. The initial setup is extremely important, because any change to a sensor's position after installation can cause deviations in the collected data and a lack of standardization, compromising the accuracy of the analysis. Therefore, choose the installation points carefully to ensure the sensors capture relevant, consistent information about the equipment's performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev2eo1iafoi315qugh98.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fev2eo1iafoi315qugh98.jpg" alt="Demonstração do Painel do Amazon Monitron" width="684" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the sensors are installed and synchronized correctly, the AWS dashboard will start receiving vibration and temperature data from the equipment every hour, a fixed, non-customizable interval. For accurate results in the predictive analysis of maintenance and failures, the solution should ideally remain installed on the motors for an extended period. This time is essential to accumulate a significant amount of data, allowing the identification of normal and abnormal motor behavior patterns. With that, the solution can generate more precise alerts and warnings, contributing to more efficient maintenance and fewer unplanned stops.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd4wg38ruec98bp3q8zg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd4wg38ruec98bp3q8zg.png" alt=" " width="800" height="786"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Service specifics:
&lt;/h3&gt;

&lt;p&gt;Amazon Monitron was designed for long service life, with high-quality data collection and resistance to a variety of environments. To meet that goal, the service's sensors have the following characteristics:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Collection interval: every 1 hour&lt;/li&gt;
&lt;li&gt;Supported temperature: between -20ºC and +80ºC&lt;/li&gt;
&lt;li&gt;Vibration sensor: 3-axis MEMS accelerometer, +/-16 g range, frequency response up to 1 kHz, 6.6 kHz output data rate&lt;/li&gt;
&lt;li&gt;Battery life: estimated at up to 5 years (depending on the sensor's environment)&lt;/li&gt;
&lt;li&gt;Dimensions: 52.8 x 43.0 x 24.9 mm / 2.08 x 1.69 x 0.98 inch&lt;/li&gt;
&lt;li&gt;Weight: 55 g&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Beyond the points above, it is very important to keep the network the service operates on secure; the following allow-list configuration is required for proper communication between the Gateway and AWS: &lt;a href="https://docs.aws.amazon.com/Monitron/latest/user-guide/network-secure.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/Monitron/latest/user-guide/network-secure.html&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Integration possibilities:
&lt;/h3&gt;

&lt;p&gt;As a managed AWS service, Amazon Monitron offers easy integration with several AWS resources, such as Kinesis Data Streams and Firehose. These integrations allow the data captured by the sensors to be streamed and stored in Amazon S3 in the &lt;a href="https://jsonlines.org/" rel="noopener noreferrer"&gt;JSON Lines (JSONL)&lt;/a&gt; format, an efficient format for processing and analyzing large volumes of data.&lt;/p&gt;

&lt;p&gt;With the data stored in S3, the integration possibilities are broad and versatile, enabling custom solutions for different analysis and visualization needs. Some possible integrations include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exposing APIs&lt;/strong&gt;: Using AWS Athena, you can create APIs to query and consume the data directly from S3, making it easy to access and analyze the monitoring data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integration with Amazon Bedrock Agents and Action Groups&lt;/strong&gt;: This integration enables running queries on AWS Athena and building intelligent, data-driven answers, improving decision-making and automated monitoring.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Building Business Intelligence (BI) solutions&lt;/strong&gt;: With Amazon QuickSight, you can develop interactive dashboards and reports for data visualization and analysis, turning information into actionable insights.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Other integrations&lt;/strong&gt;: Beyond those mentioned, there are many other possible integrations with AWS services and third-party tools, offering the flexibility to adapt the solution to each business's specific needs.&lt;/li&gt;
&lt;/ul&gt;
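Since Monitron delivers its measurements to S3 as JSON Lines, a consumer simply parses the file line by line. A minimal sketch follows; note that the field names below are illustrative assumptions, not the official Monitron export schema:

```javascript
// Sketch: parsing Monitron-style measurements exported as JSON Lines (JSONL).
// NOTE: the field names are illustrative assumptions, not the official schema.
const jsonl = [
  '{"sensor":"pump-01","temperature":61.2,"vibration":0.9}',
  '{"sensor":"pump-01","temperature":84.5,"vibration":2.7}',
  '{"sensor":"motor-02","temperature":47.0,"vibration":0.4}',
].join("\n");

// Each line is one standalone JSON object
const records = jsonl
  .split("\n")
  .filter((line) => line.trim() !== "")
  .map((line) => JSON.parse(line));

// Example analysis: flag readings above a hypothetical temperature threshold
const alerts = records.filter((r) => r.temperature > 80);
console.log(alerts.length); // 1
```

The same line-by-line shape is what makes JSONL convenient for Athena queries and downstream BI tooling: each record stands alone, so files can be appended and scanned without re-parsing the whole dataset.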

&lt;h3&gt;
  
  
  Conclusion:
&lt;/h3&gt;

&lt;p&gt;Amazon Monitron represents an innovative, efficient solution for the predictive monitoring of industrial equipment, especially equipment with rotating components such as motors and pumps. Its ability to collect real-time vibration and temperature data, combined with machine learning algorithms, allows companies to anticipate failures and plan maintenance proactively, reducing downtime and optimizing operational performance.&lt;/p&gt;

&lt;p&gt;The ease of installation and configuration, together with the many integration possibilities with other AWS services, such as Kinesis Data Streams, Firehose, and AWS Athena, makes Amazon Monitron a versatile and powerful tool for a wide range of industrial applications. These integrations enable not only real-time data storage and analysis but also the creation of custom Business Intelligence solutions, using tools like Amazon QuickSight for clear, precise data visualization.&lt;/p&gt;

&lt;p&gt;In short, Amazon Monitron not only improves equipment efficiency and maintenance but also gives companies the opportunity to turn raw data into actionable insights, enabling smarter, more effective management of industrial resources. The solution is ideal for organizations looking to innovate and maximize productivity while keeping their assets in ideal operating condition and minimizing unexpected interruptions. With its comprehensive, flexible approach, Amazon Monitron stands out as an essential resource for the future of predictive industrial maintenance.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Bedrock - Getting to Know the Knowledge Bases Feature</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Tue, 23 Jul 2024 11:57:38 +0000</pubDate>
      <link>https://dev.to/gugamainchein/bedrock-conhecendo-o-recurso-de-knowledge-bases-27h3</link>
      <guid>https://dev.to/gugamainchein/bedrock-conhecendo-o-recurso-de-knowledge-bases-27h3</guid>
      <description>&lt;p&gt;&lt;strong&gt;About the Feature&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With Amazon Bedrock Knowledge Bases, you can provide Foundation Models (FMs) and agents with contextual information drawn from your company's private data sources. This allows Retrieval-Augmented Generation (RAG) to deliver more relevant, accurate, and personalized answers.&lt;/p&gt;

&lt;p&gt;In practice, the Knowledge Bases feature works together with a vector database, such as OpenSearch, which stores information in a format that makes it easy to search and compare the distance between input vectors and stored vectors. When an agent needs to answer a user's question, it queries this vector database to find relevant information. The query is performed through semantic search, a set of search-engine capabilities that includes understanding words based on the intent and context of the person searching. This allows the agent to identify the private information that best matches the user's specific need, so it can provide more precise, relevant answers using contextual, company-specific data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vector Databases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The information companies need to store comes in many formats. Some of it is unstructured, such as text documents, rich media, and audio, while some is structured, such as application logs, tables, and charts. Innovations in artificial intelligence and machine learning (AI/ML) have allowed us to create a type of ML model called embedding models. Embeddings encode all kinds of data into vectors that capture the meaning and context of an asset. This lets us find similar assets by searching for neighboring data points. Vector search methods enable unique experiences, such as taking a photo with your smartphone and searching for similar images.&lt;/p&gt;

&lt;p&gt;Vector databases provide the ability to store and retrieve vectors as high-dimensional points. They add capabilities for fast, efficient nearest-neighbor search in N-dimensional space. They are typically powered by k-Nearest Neighbor (k-NN) indexes built with algorithms such as Hierarchical Navigable Small World (HNSW) and Inverted File Index (IVF). In addition, vector databases provide functionality such as data management, fault tolerance, authentication, access control, and a query engine.&lt;/p&gt;
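To make the nearest-neighbor idea concrete, here is a brute-force sketch using cosine similarity; real vector databases replace this linear scan with indexes such as HNSW or IVF, and the three-dimensional vectors below are toy values for illustration only:

```javascript
// Sketch: brute-force nearest-neighbor search by cosine similarity.
// Vector databases use HNSW/IVF indexes instead of this linear scan;
// the tiny 3-dimensional vectors here are toy values.
const dot = (a, b) => a.reduce((s, v, i) => s + v * b[i], 0);
const norm = (a) => Math.sqrt(dot(a, a));
const cosine = (a, b) => dot(a, b) / (norm(a) * norm(b));

const store = [
  { id: "doc-a", vector: [1, 0, 0] },
  { id: "doc-b", vector: [0, 1, 0] },
  { id: "doc-c", vector: [0.9, 0.1, 0] },
];

// Return the stored item whose vector is most similar to the query
const nearest = (query) =>
  store.reduce((best, item) =>
    cosine(item.vector, query) > cosine(best.vector, query) ? item : best
  );

console.log(nearest([1, 0.05, 0]).id); // "doc-a"
```

A semantic search over a Knowledge Base does essentially this: the question is embedded into a query vector, and the closest stored chunks are returned as context for the model.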

&lt;p&gt;&lt;strong&gt;Vectorization in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To illustrate the process of vectorizing a phrase, let's use the following text as an example: Inteligência Artificial ("Artificial Intelligence").&lt;/p&gt;

&lt;p&gt;To turn the phrase "Inteligência Artificial" into a vector, we use embedding techniques, which convert words into multi-dimensional numeric representations. Here is an example of how this is done:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Tokenization&lt;/strong&gt;: The phrase "Inteligência Artificial" is split into tokens, usually words or sub-words. In this case, we have "Inteligência" and "Artificial".&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embedding&lt;/strong&gt;: Each token is then converted into a vector using a pre-trained embedding model, such as Word2Vec, GloVe, BERT, etc. These vectors are high-dimensional and capture the semantic meaning of the words.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For example, using a model like Word2Vec, the word "Inteligência" might be represented by a 300-dimensional vector:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[0.15, -0.23, 0.45, ..., 0.33]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;E "Artificial" por outro vetor de 300 dimensões:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[-0.12, 0.29, -0.34, ..., 0.18]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Combining Vectors&lt;/strong&gt;: If we want to represent the entire phrase "Inteligência Artificial" as a single vector, we can combine the vectors of the individual words. A common approach is to average the vectors:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"Inteligência Artificial" = media([0.15, -0.23, 0.45, ..., 0.33], [-0.12, 0.29, -0.34, ..., 0.18])
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Final Vector&lt;/strong&gt;: The vector resulting from averaging or combining the individual words represents the entire phrase in vector space:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[0.015, 0.03, 0.055, ..., 0.255]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This vector captures the semantics of the phrase "Inteligência Artificial" and can be used in a variety of AI/ML applications, such as semantic search, text classification, etc.&lt;/p&gt;
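The averaging step can be sketched directly, using the four illustrative dimensions shown above rather than all 300:

```javascript
// Sketch: representing a phrase by averaging its word vectors,
// using the illustrative dimensions from the example above.
const averageVectors = (a, b) => a.map((v, i) => (v + b[i]) / 2);

const inteligencia = [0.15, -0.23, 0.45, 0.33]; // "Inteligência"
const artificial = [-0.12, 0.29, -0.34, 0.18];  // "Artificial"

const phrase = averageVectors(inteligencia, artificial);
console.log(phrase); // ≈ [0.015, 0.03, 0.055, 0.255]
```

Averaging is only one pooling strategy; models like BERT instead produce contextual token vectors that are pooled in other ways, but the principle of collapsing word vectors into one phrase vector is the same.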

&lt;p&gt;&lt;strong&gt;Knowledge Bases in Practice&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;With the context above in mind, let's move to the hands-on part: creating and publishing an example of the Knowledge Bases feature.&lt;/p&gt;

&lt;p&gt;Step 1: Create the S3 bucket that will be used to store the unstructured data:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46lnwerk9pr2k34qporo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F46lnwerk9pr2k34qporo.png" alt="Criação do Bucket" width="800" height="494"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note: Knowledge Bases can only vectorize specific formats. For any document outside the accepted formats, you will need an application layer to convert it into one of the accepted formats. The file formats supported by Knowledge Bases are:&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Plain text&lt;/th&gt;
&lt;th&gt;.txt&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Markdown&lt;/td&gt;
&lt;td&gt;.md&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;HyperText Markup Language&lt;/td&gt;
&lt;td&gt;.html&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft Word document&lt;/td&gt;
&lt;td&gt;.doc/.docx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Comma-separated values&lt;/td&gt;
&lt;td&gt;.csv&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft Excel spreadsheet&lt;/td&gt;
&lt;td&gt;.xls/.xlsx&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Portable Document&lt;/td&gt;
&lt;td&gt;.pdf&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
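Since any other format requires a conversion layer before ingestion, a minimal pre-upload check against the table above might look like this (the helper name is hypothetical):

```javascript
// Sketch: checking whether a file can be ingested by Knowledge Bases as-is,
// based on the supported-format table above. "needsConversion" is a
// hypothetical helper name, not a Bedrock API.
const SUPPORTED_EXTENSIONS = [
  ".txt", ".md", ".html", ".doc", ".docx", ".csv", ".xls", ".xlsx", ".pdf",
];

const needsConversion = (fileName) => {
  const idx = fileName.lastIndexOf(".");
  const ext = idx === -1 ? "" : fileName.slice(idx).toLowerCase();
  return !SUPPORTED_EXTENSIONS.includes(ext);
};

console.log(needsConversion("report.pdf")); // false
console.log(needsConversion("notes.rtf"));  // true
```

Such a check could run before the S3 upload, routing unsupported files through a conversion step (for example, rendering them to PDF) so the sync job never encounters a format the feature cannot vectorize.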

&lt;p&gt;Step 2: Open the Bedrock service and select the Builder tools &amp;gt; Knowledge Bases menu:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11fzqjcpifu94pbzuu9z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11fzqjcpifu94pbzuu9z.png" alt="Acesso ao Recurso de Knowledge Bases" width="800" height="496"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 3: Fill in the default fields on the first screen as you prefer and, on the second screen, select the bucket created in step 1:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmzaymauabgfebvt0l83.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwmzaymauabgfebvt0l83.png" alt="Seleção do bucket do S3 no Knowledge Base" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 4: Select your preferred model to run the embedding process (explained earlier) on the documents uploaded to the bucket, and leave the “Quick create a new vector store - Recommended” option selected so that OpenSearch is created as the vector database:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovwh0062fgxhuihxa81o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovwh0062fgxhuihxa81o.png" alt="Seleção do modelo de embedding e o banco de vetores" width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Note 1: If you already have a vector database, select the “Choose a vector store you have created” option and fill in the details of your existing database;&lt;/li&gt;
&lt;li&gt;Note 2: For production environments, it is recommended to enable the redundancy and encryption options below the “Vector database” selection.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk6dx2117iqk620z23dn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzk6dx2117iqk620z23dn.png" alt="Seleção de opções adicionais aos bancos de vetores" width="800" height="386"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Step 5: After completing the steps above, your Knowledge Bases resource will be created; all that remains is to run a “Sync” of the documents in your “Data Source”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjitx4c6iqf2reu44anxg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjitx4c6iqf2reu44anxg.png" alt="Sincronização do Data Source" width="800" height="143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon Bedrock's Knowledge Bases feature provides a powerful, efficient way to integrate contextual information into support systems and Foundation Models (FMs). By using vector databases such as OpenSearch to store and retrieve structured and unstructured data, semantic search is strengthened, allowing agents to find the most relevant information quickly and accurately. Through embedding techniques, that data is converted into vectors that capture meaning and context, making it easier to compare and retrieve similar information. This approach not only improves the accuracy of the answers given to users, but also enables new ways of searching and interacting with data. With a clear process for creating, configuring, and synchronizing data, companies can take full advantage of Knowledge Bases to streamline their workflows and deliver more personalized, efficient support.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>bedrock</category>
      <category>knowledgebases</category>
      <category>vectordatabase</category>
    </item>
    <item>
      <title>AWS IoT Core - Integração</title>
      <dc:creator>Gustavo Mainchein</dc:creator>
      <pubDate>Tue, 19 Mar 2024 11:27:37 +0000</pubDate>
      <link>https://dev.to/gugamainchein/aws-iot-core-integracao-4987</link>
      <guid>https://dev.to/gugamainchein/aws-iot-core-integracao-4987</guid>
      <description>&lt;p&gt;&lt;strong&gt;Introdução&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Neste guia, exploraremos a integração de dispositivos com o serviço AWS IoT Core utilizando Node.js como uma "thing” que irá enviar dados e também uma stack de data stream para processá-los, com Kinesis (Data Stream e Firehose) + S3. Pensando nesse cenário, começaremos a abordar e entender o serviço de IoT Core da AWS.&lt;/p&gt;

&lt;p&gt;A sigla &lt;strong&gt;IoT&lt;/strong&gt; significa &lt;strong&gt;Internet das Coisas&lt;/strong&gt; (em inglês, &lt;strong&gt;Internet of Things&lt;/strong&gt;) e se refere à rede de objetos físicos que são conectados à internet e podem coletar / trocar dados. Essa conexão permite que os objetos sejam monitorados e controlados remotamente, abrindo um mundo de possibilidades para diversas áreas.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How does IoT work?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The way IoT works can be divided into four main stages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Data collection:&lt;/strong&gt; Sensors (devices such as an Arduino, MTJ, and others) attached to physical objects collect data about the environment around them, such as temperature, humidity, location, and movement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Data transmission:&lt;/strong&gt; The collected data is then transmitted to the cloud over a Wi-Fi, Bluetooth, cellular, or satellite connection. Wi-Fi and/or Bluetooth are the most common options.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Data storage:&lt;/strong&gt; In the cloud, the data is processed by dedicated services that understand the connection protocols in use, such as MQTT and HTTPS, and is stored in storage services, in partitioned form, for later analysis and action.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Data analysis and use:&lt;/strong&gt; The stored data is analyzed and can be used for many different purposes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is AWS IoT Core?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS IoT Core is a fully managed AWS service built to simplify and strengthen Internet of Things (IoT) operations at massive scale. Its use rests on three main pillars:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Advanced Connectivity:&lt;/strong&gt;
AWS IoT Core is designed to support multiple communication protocols, including MQTT (Message Queuing Telemetry Transport), HTTP (Hypertext Transfer Protocol), and WebSockets. This capability is crucial for the interoperability and flexibility required in complex, heterogeneous IoT environments.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comprehensive Management:&lt;/strong&gt;
The service provides tools and features that simplify and automate the management of IoT devices at scale. AWS IoT Core also offers advanced monitoring and diagnostic capabilities, allowing operators to identify and resolve issues proactively, minimizing interruptions to the data flow and maximizing operational efficiency.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robust Security:&lt;/strong&gt;
In a landscape where the security of IoT devices and transmitted data is a constant concern, AWS IoT Core provides a comprehensive set of features to protect the integrity and confidentiality of IoT operations. These include advanced authentication and authorization mechanisms, ensuring that only authorized devices can access IoT cloud resources. The service also encrypts communication between devices and the cloud.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;AWS IoT Core Use Cases&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AWS IoT Core can be used in a wide range of scenarios, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Environmental monitoring:&lt;/strong&gt; Collecting sensor data to monitor temperature, humidity, air quality, and more.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Home automation:&lt;/strong&gt; Controlling smart home devices such as lights, thermostats, and appliances.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Asset tracking:&lt;/strong&gt; Monitoring the location and status of assets in real time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Predictive maintenance:&lt;/strong&gt; Analyzing sensor data to predict equipment failures and schedule preventive maintenance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Device Integration with Node.js&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Node.js is an ideal platform for building IoT applications thanks to its lightweight, scalable, asynchronous nature. Integrating devices with AWS IoT Core using Node.js follows this process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;a. Install the required libraries:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;mqtt&lt;/code&gt;: Library for communicating over the MQTT protocol.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fs&lt;/code&gt;: Built-in module for reading the certificate files the application uses to communicate with the IoT Core things.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;b. Create a Node.js client:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Node.js client is responsible for connecting to AWS IoT Core and for publishing and receiving messages. It is a Node.js application embedded in the “sensors”, the devices that collect data, such as an Arduino, MTJ, or others.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;c. Configure the client with the IoT Core connection details:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MQTT broker endpoint:&lt;/strong&gt; The address of the AWS IoT Core MQTT broker, found under the service's “Connect” &amp;gt; “Connect one device” tab.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device credentials:&lt;/strong&gt; The device's private key and certificate, generated when a certificate or a thing is created.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MQTT topic:&lt;/strong&gt; The topic used for communication between the device and AWS IoT Core, configured under the “Message routing” &amp;gt; “Rules” tab.&lt;/li&gt;
&lt;/ul&gt;
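&lt;p&gt;Putting those three items together, the client configuration might look like the sketch below. Every value is a hypothetical placeholder; substitute the endpoint, certificate paths, and topic from your own account:&lt;/p&gt;

```javascript
// Hypothetical AWS IoT Core connection settings (placeholders only).
const config = {
  // Broker endpoint from the "Connect" tab, using the mqtts protocol on port 8883.
  endpoint: 'mqtts://example-ats.iot.us-east-1.amazonaws.com:8883',
  // Credentials downloaded when the certificate or thing was created.
  keyPath: './certs/private.pem.key',
  certPath: './certs/certificate.pem.crt',
  caPath: './certs/AmazonRootCA1.pem',
  // Topic configured under "Message routing" rules.
  topic: 'devices/sensor-01/telemetry',
};

console.log('Will connect to ' + config.endpoint);
```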

&lt;p&gt;&lt;strong&gt;d. Connect the client to AWS IoT Core:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Node.js client connects to the MQTT broker using the device credentials, publishes messages to the configured MQTT topic, and receives messages from other devices subscribed to the same topic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Protocols Involved in the Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To integrate devices with AWS IoT Core using Node.js, it is essential to understand the main protocols involved:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;MQTT (Message Queuing Telemetry Transport):&lt;/strong&gt;
MQTT is a lightweight, highly efficient protocol designed to enable real-time communication between IoT devices and the cloud. It is particularly well suited to environments where bandwidth and computing resources are limited, since it minimizes network overhead and offers reliable, asynchronous message exchange.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTTPS (Hypertext Transfer Protocol Secure):&lt;/strong&gt;
HTTPS is a widely used protocol for secure communication between web clients and servers. In the context of integrating devices with AWS IoT Core using Node.js, HTTPS adds a layer of security that protects the communication between the Node.js client and the AWS IoT Core platform. With HTTPS, all communication is encrypted, guaranteeing the confidentiality and integrity of the transmitted data. This is especially important where security is a priority, such as when transmitting sensitive data or performing critical operations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Architecture Involved in the Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Building on the topics above, the following architecture is very commonly used in IoT projects, since it can process a massive volume of data coming from your devices:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6hmq4vswu1fxl0jlpjn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft6hmq4vswu1fxl0jlpjn.png" alt="Arquitetura AWS" width="800" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;About the architecture above:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;The IoT devices (an Arduino running a Node.js application) send data to &lt;strong&gt;IoT Core&lt;/strong&gt; using the MQTT protocol, which is lightweight and highly efficient when sending large amounts of data.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;IoT Core&lt;/strong&gt; routes the data to &lt;strong&gt;Kinesis Data Streams&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kinesis Data Streams&lt;/strong&gt; processes the data in real time and sends it to &lt;strong&gt;Kinesis Firehose&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Kinesis Firehose&lt;/strong&gt; transforms the data and stores it in &lt;strong&gt;S3&lt;/strong&gt;, partitioned by /year/month/day/hour.&lt;/li&gt;
&lt;/ol&gt;
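&lt;p&gt;The year/month/day/hour layout in step 4 is the UTC-based prefix Firehose applies by default when delivering records to S3. A small sketch of how such a prefix is derived from an event timestamp (the function name is illustrative; Firehose generates this prefix itself):&lt;/p&gt;

```javascript
// Builds a year/month/day/hour S3 partition prefix from a Date, UTC-based,
// mirroring the default Firehose delivery layout described above.
function firehosePartitionPrefix(date) {
  function pad(n) {
    return String(n).padStart(2, '0');
  }
  return [
    date.getUTCFullYear(),
    pad(date.getUTCMonth() + 1),
    pad(date.getUTCDate()),
    pad(date.getUTCHours()),
  ].join('/');
}

console.log(firehosePartitionPrefix(new Date(Date.UTC(2024, 2, 19, 11, 27))));
// 2024/03/19/11
```

Partitioning this way keeps objects grouped by ingestion hour, which makes later queries (Athena, Glue) much cheaper to scope.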

&lt;p&gt;&lt;strong&gt;Script for Sending Data to IoT Core&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For the script that opens the MQTT connection and sends data to AWS IoT Core, we have:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv85bv1g61zbkc57xst7c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv85bv1g61zbkc57xst7c.png" alt="Node.js script" width="800" height="923"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Library Imports:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;mqtt:&lt;/strong&gt; Used for communication over the MQTT protocol, the standard for Internet of Things (IoT) communication.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;fs:&lt;/strong&gt; Provides functions for interacting with the file system, allowing configuration files and certificates to be read.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;dotenv:&lt;/strong&gt; Loads environment variables from a &lt;code&gt;.env&lt;/code&gt; file, simplifying configuration without exposing sensitive data in the code.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Variable Configuration:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;iotMqttEndpoint:&lt;/strong&gt; URL of the AWS IoT Core MQTT endpoint, composed of the regional endpoint, the port, and the secure protocol (mqtts).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;iotKeyFile, iotCertFile, iotCaFile:&lt;/strong&gt; Paths to the private key, certificate, and CA (Certificate Authority) files required for authenticating with AWS IoT Core.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;iotTopic:&lt;/strong&gt; Name of the MQTT topic used to exchange messages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;message:&lt;/strong&gt; The message to publish to the MQTT topic, containing the message type ("String") and the data ("Send message with successfully!"), along with a timestamp.&lt;/li&gt;
&lt;/ul&gt;
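&lt;p&gt;The message variable described above can be sketched as a small helper. The payload shape follows the field list above; the helper name itself is illustrative:&lt;/p&gt;

```javascript
// Builds the JSON payload published to the MQTT topic:
// a message type, the data, and an ISO-8601 timestamp.
function buildMessage(data) {
  return JSON.stringify({
    type: 'String',
    data: data,
    timestamp: new Date().toISOString(),
  });
}

const message = buildMessage('Send message with successfully!');
console.log(message);
```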

&lt;p&gt;&lt;strong&gt;3. Connecting to AWS IoT Core:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Creates an MQTT client using the &lt;code&gt;mqtt&lt;/code&gt; library and the configured endpoint URL.&lt;/li&gt;
&lt;li&gt;Provides the credentials (key, certificate, and CA) for authentication.&lt;/li&gt;
&lt;li&gt;Specifies the MQTT protocol, version 5.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;4. Client Events:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;connect:&lt;/strong&gt; On a successful connection, the client subscribes to the MQTT topic. If the subscription fails, an error is logged.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;subscribe:&lt;/strong&gt; On a successful subscription, a message is published to the topic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;message:&lt;/strong&gt; When a message arrives on the topic, its contents are printed to the console.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;error:&lt;/strong&gt; If any error occurs while communicating with AWS IoT Core, it is logged to the console.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The script establishes a secure connection to AWS IoT Core over the MQTT protocol. It subscribes to a specific topic and publishes a message to it, and it can also receive messages published by other devices or applications on the same topic. Scripts like this are common when communicating with IoT devices and when interacting with IoT-related cloud services.&lt;/p&gt;

&lt;p&gt;Access the repository at the link, download it, and build your own: &lt;a href="https://github.com/gugamainchein/aws-iot-core-publish-messages" rel="noopener noreferrer"&gt;https://github.com/gugamainchein/aws-iot-core-publish-messages&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In this guide, we explored device integration with the AWS IoT Core service using Node.js. We covered the main topics, including:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What IoT is and how it works;&lt;/li&gt;
&lt;li&gt;What AWS IoT Core is and its use cases;&lt;/li&gt;
&lt;li&gt;How to integrate devices using Node.js;&lt;/li&gt;
&lt;li&gt;The protocols involved (MQTT and HTTPS);&lt;/li&gt;
&lt;li&gt;The data-processing architecture (Kinesis, Firehose, and S3);&lt;/li&gt;
&lt;li&gt;A script for sending data to IoT Core.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We provided a detailed overview of how the proposed architecture works, its components, and the communication between them, and highlighted the advantages and applications of integrating with AWS IoT Core.&lt;/p&gt;

&lt;p&gt;To go deeper, check out the repository with the full script: &lt;a href="https://github.com/gugamainchein/aws-iot-core-publish-messages" rel="noopener noreferrer"&gt;https://github.com/gugamainchein/aws-iot-core-publish-messages&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This guide provides a solid foundation for starting to build your own IoT applications with Node.js and AWS IoT Core.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iot</category>
      <category>awsiot</category>
    </item>
  </channel>
</rss>
