<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Matias Kreder</title>
    <description>The latest articles on DEV Community by Matias Kreder (@mkreder).</description>
    <link>https://dev.to/mkreder</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F726023%2F03b0fff6-1b07-49e3-ae0f-daf08940340b.png</url>
      <title>DEV Community: Matias Kreder</title>
      <link>https://dev.to/mkreder</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mkreder"/>
    <language>en</language>
    <item>
      <title>OpenClaw on AWS Bedrock AgentCore: Secure and Serverless</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Sun, 08 Mar 2026 13:54:36 +0000</pubDate>
      <link>https://dev.to/aws-builders/openclaw-on-aws-agentcore-secure-serverless-production-ready-i8n</link>
      <guid>https://dev.to/aws-builders/openclaw-on-aws-agentcore-secure-serverless-production-ready-i8n</guid>
      <description>&lt;h2&gt;
  
  
  Intro
&lt;/h2&gt;

&lt;p&gt;This week AWS released an offering to run &lt;a href="https://aws.amazon.com/blogs/aws/introducing-openclaw-on-amazon-lightsail-to-run-your-autonomous-private-ai-agents/" rel="noopener noreferrer"&gt;OpenClaw in Lightsail&lt;/a&gt;&lt;br&gt;
That was a great announcement, and the community (including myself) was very excited about this feature. &lt;/p&gt;

&lt;p&gt;While this is super easy to deploy, it requires the instance to be up and running (and billing) for the entire month. OpenClaw is a great open-source product, but its security posture is a concern for many users. AWS Hero Gerardo Castro Arica did a great analysis of the &lt;a href="https://dev.to/aws-heroes/i-deployed-openclaw-on-aws-and-heres-what-i-found-as-a-cloud-security-engineer-3p9i"&gt;security aspects&lt;/a&gt; of this Lightsail implementation. Bottom line: it is heading in the right direction, but it needs some extra steps to be properly secured. &lt;/p&gt;

&lt;p&gt;But guess what? Lightsail is not the only way to run OpenClaw on AWS. There is another, more secure and serverless, way to run it: Amazon Bedrock AgentCore. &lt;/p&gt;

&lt;p&gt;A team of AWS engineers released &lt;a href="https://github.com/aws-samples/sample-host-openclaw-on-amazon-bedrock-agentcore" rel="noopener noreferrer"&gt;this repo&lt;/a&gt;, which contains the resources to set everything up. It uses CDK to deploy the entire thing. This is not as simple as the Lightsail approach, but it is fine for users who are familiar with CLI tools (yes, Kiro/Claude Code can read the instructions and do the deployment themselves). &lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f502n78p9l04lq4453y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1f502n78p9l04lq4453y.png" alt="OpenClaw on AgentCore architecture" width="800" height="439"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;S3 Workspace Sync&lt;/strong&gt;&lt;br&gt;
AgentCore microVMs are ephemeral by design: they spin up on demand and disappear when idle.&lt;br&gt;
The problem is that OpenClaw stores everything it knows about a user on the local filesystem under .openclaw/ (conversation memory, user profiles, agent configuration, tool outputs). Without a persistence strategy, all of that evaporates the moment a session ends.&lt;br&gt;
The solution is a lightweight S3-backed sync layer built into the container:&lt;br&gt;
&lt;strong&gt;On session start:&lt;/strong&gt; the contract server restores the user's .openclaw/ directory from S3 before OpenClaw initializes, giving the agent full context as if the previous session never ended.&lt;br&gt;
&lt;strong&gt;Every 5 minutes:&lt;/strong&gt; a background timer pushes the workspace back to S3, protecting against unexpected failures mid-session.&lt;br&gt;
&lt;strong&gt;On shutdown (SIGTERM):&lt;/strong&gt; a final save runs within AgentCore's 15-second grace window, capturing everything from the session before the microVM terminates.&lt;/p&gt;
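
&lt;p&gt;The lifecycle above (restore on start, periodic push, final save on SIGTERM) can be sketched in a few lines of Python. To be clear, this is an illustrative sketch, not the repo's actual code: the &lt;code&gt;pull&lt;/code&gt; and &lt;code&gt;push&lt;/code&gt; callables stand in for the real S3 transfer logic.&lt;/p&gt;

```python
import signal
import threading

class WorkspaceSync:
    """Sketch of the restore / periodic-push / final-save lifecycle.

    `pull` and `push` are injected callables standing in for the real
    S3 download/upload of the user's .openclaw/ directory.
    """

    def __init__(self, pull, push, interval_s=300):
        self.pull, self.push = pull, push
        self.interval_s = interval_s  # 5-minute background push
        self._timer = None

    def start(self):
        self.pull()                                    # restore workspace before the agent boots
        signal.signal(signal.SIGTERM, self._on_sigterm)
        self._schedule()                               # begin periodic pushes

    def _schedule(self):
        self._timer = threading.Timer(self.interval_s, self._periodic)
        self._timer.daemon = True
        self._timer.start()

    def _periodic(self):
        self.push()                                    # protect against mid-session failures
        self._schedule()

    def _on_sigterm(self, signum, frame):
        if self._timer:
            self._timer.cancel()
        self.push()                                    # final save within the grace window

# Simulate the lifecycle with an in-memory "bucket"
events = []
sync = WorkspaceSync(pull=lambda: events.append("restore"),
                     push=lambda: events.append("save"))
sync.start()
sync._on_sigterm(signal.SIGTERM, None)  # pretend AgentCore sent SIGTERM
# events is now ["restore", "save"]
```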

&lt;p&gt;&lt;strong&gt;Security&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Network:&lt;/strong&gt; AgentCore containers run in private VPC subnets with no direct internet exposure. All AWS service traffic routes through VPC endpoints (S3, Bedrock, Secrets Manager, ECR, DynamoDB, STS, CloudWatch). The only public entry point is the API Gateway HTTP API.&lt;br&gt;
&lt;strong&gt;Webhook authentication:&lt;/strong&gt; Every inbound webhook is cryptographically validated before any processing occurs. Telegram uses a secret token registered via setWebhook; Slack uses HMAC-SHA256 signature validation with a 5-minute replay window. Both are fail-closed — requests are rejected if secrets aren't configured.&lt;br&gt;
&lt;strong&gt;Per-user isolation:&lt;/strong&gt; Each user runs in their own AgentCore microVM with a dedicated S3 namespace. There is no shared state between users, and namespace assignment is system-controlled — it cannot be influenced by user input.&lt;br&gt;
&lt;strong&gt;STS session-scoped credentials:&lt;/strong&gt; The container assumes its IAM role with a session policy that restricts S3 and DynamoDB access to the current user's namespace and records. Even if a user somehow gained shell access to the container, they couldn't read another user's data.&lt;br&gt;
&lt;strong&gt;Secret management:&lt;/strong&gt; All sensitive values (bot tokens, webhook secrets, Cognito credentials) live in Secrets Manager encrypted with a customer-managed KMS key, fetched at runtime into process memory.&lt;br&gt;
&lt;strong&gt;Tool hardening:&lt;/strong&gt; OpenClaw's read tool is blocked to prevent credential access via /proc or local file reads. The exec tool is allowed for skill management, but the scoped STS credentials limit blast radius. The proxy is bound to loopback only, and security group egress is restricted to HTTPS.&lt;br&gt;
&lt;strong&gt;Container hardening:&lt;/strong&gt; The bridge runs as a non-root user (openclaw, uid 1001). Request bodies are capped at 1 MB. Internal error details and stack traces are never surfaced in API responses.&lt;br&gt;
&lt;strong&gt;Encryption:&lt;/strong&gt; Everything is encrypted at rest (S3 and Secrets Manager with CMK, DynamoDB with AWS-managed keys) and in transit (TLS for all AWS API calls, HTTPS on API Gateway).&lt;br&gt;
&lt;strong&gt;Least-privilege IAM:&lt;/strong&gt; Each component has tightly scoped permissions. The Router Lambda can only invoke its specific AgentCore Runtime.&lt;/p&gt;
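
&lt;p&gt;To make the webhook check concrete, here is a minimal Python sketch of the Slack-style validation described above: HMAC-SHA256 over the &lt;code&gt;v0:{timestamp}:{body}&lt;/code&gt; base string with a 5-minute replay window, failing closed when no secret is configured. The function name is mine, not the repo's; Telegram's secret-token comparison is simpler but follows the same fail-closed pattern.&lt;/p&gt;

```python
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret, timestamp, body, signature, now=None):
    """Fail-closed Slack-style request validation sketch."""
    if not signing_secret:
        return False                         # fail closed: no secret, no processing
    now = time.time() if now is None else now
    if abs(now - int(timestamp)) > 300:      # reject requests outside the 5-minute window
        return False
    base = f"v0:{timestamp}:{body}".encode()
    expected = "v0=" + hmac.new(signing_secret.encode(), base, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)  # constant-time comparison

# A freshly signed request passes; the same signature replayed later fails
secret, ts, body = "test-secret", str(int(time.time())), '{"type":"event_callback"}'
sig = "v0=" + hmac.new(secret.encode(), f"v0:{ts}:{body}".encode(), hashlib.sha256).hexdigest()
assert verify_slack_signature(secret, ts, body, sig)
assert not verify_slack_signature(secret, ts, body, sig, now=time.time() + 600)
```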

&lt;h2&gt;
  
  
  Instructions
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Clone repo
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;git clone https://github.com/aws-samples/sample-host-openclaw-on-amazon-bedrock-agentcore.git
cd sample-host-openclaw-on-amazon-bedrock-agentcore

# Set your AWS account and region
export CDK_DEFAULT_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)
export CDK_DEFAULT_REGION=us-east-1  # change to your preferred region
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Install Python dependencies
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Bootstrap CDK
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk bootstrap aws://$CDK_DEFAULT_ACCOUNT/$CDK_DEFAULT_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Deploy all stacks
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cdk synth          
cdk deploy --all --require-approval never 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Build OpenClaw container image
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Authenticate Docker to ECR
aws ecr get-login-password --region $CDK_DEFAULT_REGION | \
  docker login --username AWS --password-stdin \
  $CDK_DEFAULT_ACCOUNT.dkr.ecr.$CDK_DEFAULT_REGION.amazonaws.com

# Read version from cdk.json for versioned image tags
VERSION=$(python3 -c "import json; print(json.load(open('cdk.json'))['context']['image_version'])")

# Build ARM64 image (required by AgentCore Runtime)
docker build --platform linux/arm64 -t openclaw-bridge:v${VERSION} bridge/

# Tag and push
docker tag openclaw-bridge:v${VERSION} \
  $CDK_DEFAULT_ACCOUNT.dkr.ecr.$CDK_DEFAULT_REGION.amazonaws.com/openclaw-bridge:v${VERSION}
docker push \
  $CDK_DEFAULT_ACCOUNT.dkr.ecr.$CDK_DEFAULT_REGION.amazonaws.com/openclaw-bridge:v${VERSION}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Create a bot on Telegram
a. Message &lt;a class="mentioned-user" href="https://dev.to/botfather"&gt;@botfather&lt;/a&gt; on Telegram
b. Create a new bot with /newbot
c. Copy the bot token&lt;/li&gt;
&lt;li&gt;Store your telegram token
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;aws secretsmanager update-secret \
  --secret-id openclaw/channels/telegram \
  --secret-string 'YOUR_TELEGRAM_BOT_TOKEN' \
  --region $CDK_DEFAULT_REGION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;Run the Telegram setup script (below) to add yourself to the allowlist&lt;/li&gt;
&lt;li&gt;Start using OpenClaw. The first run takes a few minutes, but subsequent interactions are faster. If something fails, check the AgentCore logs.
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;./scripts/setup-telegram.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The solution is nice and works pretty well. However, there are a few things you may need to consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;This solution deploys an entirely new VPC, including a NAT gateway. The NAT gateway cost alone (around 32 USD/month) is pricier than a Lightsail OpenClaw instance, so it is not a great deal for a single agent; however, the architecture could be modified to reuse an existing VPC, or it could host multiple agents.&lt;/li&gt;
&lt;li&gt;There are a few things that didn't work for me, and I had to fix them: hardcoded region configurations, DynamoDB API deprecation warnings, an IAM role circular dependency issue, and availability zones that need to be configurable because, surprisingly, some AZs don't support AgentCore yet. I submitted a &lt;a href="https://github.com/aws-samples/sample-host-openclaw-on-amazon-bedrock-agentcore/pull/30" rel="noopener noreferrer"&gt;PR&lt;/a&gt; with these fixes.
&lt;/li&gt;
&lt;li&gt;You can modify the solution to use Amazon Nova Pro (I have some AWS credits that don't cover Claude spend, so I had to switch, and it works pretty well).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq8ua502lfhmb26vetej.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmq8ua502lfhmb26vetej.png" alt="Telegram integration" width="800" height="871"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Want to try it? This is the &lt;a href="https://github.com/aws-samples/sample-host-openclaw-on-amazon-bedrock-agentcore?tab=readme-ov-file" rel="noopener noreferrer"&gt;GitHub Repo&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>aws</category>
      <category>security</category>
      <category>serverless</category>
    </item>
    <item>
      <title>How I used Amazon Q CLI to fix Amazon Q CLI error "Amazon Q is having trouble responding right now"</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Thu, 17 Jul 2025 17:57:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-used-amazon-q-cli-to-fix-amazon-q-cli-error-amazon-q-is-having-trouble-responding-right-now-90i</link>
      <guid>https://dev.to/aws-builders/how-i-used-amazon-q-cli-to-fix-amazon-q-cli-error-amazon-q-is-having-trouble-responding-right-now-90i</guid>
      <description>&lt;p&gt;I know what you are thinking. I've just used Amazon Q CLI in the title 3 times. No regrets!&lt;/p&gt;

&lt;p&gt;I recently ran into an annoying issue while using the Amazon Q Developer CLI. Every now and then, I'd get this frustrating error message over and over: "Amazon Q is having trouble responding right now." These errors became more frequent after the Kiro announcement. The CLI would just give up immediately, forcing me to manually retry the command.&lt;/p&gt;

&lt;p&gt;The error (technically a ModelOverloadedError) occurs when there's high traffic or resource constraints on AWS's end. There is a &lt;a href="https://github.com/aws/amazon-q-developer-cli/issues/2315" rel="noopener noreferrer"&gt;GitHub issue&lt;/a&gt; already reported. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Easy Solution&lt;/strong&gt;&lt;br&gt;
Just use /model to switch to Claude 3.7 or 3.5, which may not have capacity constraints. If you still need to use Claude 4, keep reading. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Solution&lt;/strong&gt;&lt;br&gt;
I cloned the Amazon Q CLI repo and asked Q CLI to implement a retry mechanism with exponential backoff to automatically handle these temporary overloads:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Attempts the request up to 10 times before giving up&lt;/li&gt;
&lt;li&gt;Uses exponential backoff starting at 500ms (doubling with each retry)&lt;/li&gt;
&lt;li&gt;Adds random jitter to prevent "thundering herd" problems&lt;/li&gt;
&lt;li&gt;Provides debug logs to track retry attempts&lt;/li&gt;
&lt;/ul&gt;
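
&lt;p&gt;For reference, the schedule those bullets describe works out as follows — a quick Python sketch using the same constants (500 ms initial backoff, doubling per attempt, up to 25% jitter); the function name is mine, for illustration only.&lt;/p&gt;

```python
import random

INITIAL_BACKOFF_MS = 500  # same starting point as the CLI patch

def backoff_with_jitter(attempt: int, rng=random.random) -> int:
    """Delay (ms) before retry `attempt` (1-based): exponential base plus up to 25% jitter."""
    base = INITIAL_BACKOFF_MS * 2 ** (attempt - 1)
    return base + int(rng() * (base / 4))  # jitter in [0, base/4)

# Base delays for the first five retries, before jitter: 500, 1000, 2000, 4000, 8000 ms
schedule = [INITIAL_BACKOFF_MS * 2 ** (a - 1) for a in range(1, 6)]
```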

&lt;p&gt;The core implementation looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;is_model_unavailable&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;MAX_RETRIES&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nd"&gt;debug!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Model Overloaded: Maximum retry attempts ({}) reached"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MAX_RETRIES&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;Err&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nn"&gt;ApiClientError&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;ModelOverloadedError&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="n"&gt;request_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;
                &lt;span class="nf"&gt;.as_service_error&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
                &lt;span class="nf"&gt;.and_then&lt;/span&gt;&lt;span class="p"&gt;(|&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="nf"&gt;.meta&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;&lt;span class="nf"&gt;.request_id&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
                &lt;span class="nf"&gt;.map&lt;/span&gt;&lt;span class="p"&gt;(|&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="p"&gt;|&lt;/span&gt; &lt;span class="n"&gt;s&lt;/span&gt;&lt;span class="nf"&gt;.to_string&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt;
            &lt;span class="n"&gt;status_code&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="p"&gt;});&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="c1"&gt;// Calculate exponential backoff with jitter&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;backoff_ms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;INITIAL_BACKOFF_MS&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="mi"&gt;2u64&lt;/span&gt;&lt;span class="nf"&gt;.pow&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;attempt&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;jitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;rand&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;random&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;%&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;backoff_ms&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// Add up to 25% jitter&lt;/span&gt;
    &lt;span class="k"&gt;let&lt;/span&gt; &lt;span class="n"&gt;sleep_duration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nn"&gt;Duration&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;from_millis&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;backoff_ms&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;jitter&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nd"&gt;debug!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="s"&gt;"Model overloaded. Retrying attempt {}/{} after {}ms"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;attempt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MAX_RETRIES&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;sleep_duration&lt;/span&gt;&lt;span class="nf"&gt;.as_millis&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nn"&gt;tokio&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;time&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;sleep_duration&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="k"&gt;.await&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;continue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I reviewed each line of the modified code, ran all the automated tests and tested the new version myself. The retry mechanism works. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
This fix significantly improves the user experience: users no longer have to manually retry when the service is temporarily overloaded. I've submitted it as a &lt;a href="https://github.com/aws/amazon-q-developer-cli/pull/2330" rel="noopener noreferrer"&gt;PR&lt;/a&gt; to the Amazon Q Developer CLI repository, and I hope it gets merged soon. In the meantime, if you're experiencing this issue, you can clone &lt;a href="https://github.com/mkreder/amazon-q-developer-cli.git" rel="noopener noreferrer"&gt;my repo&lt;/a&gt; and build the Q CLI with the fix. This is not a definitive fix; AWS probably needs to work on their model availability so the error stops occurring, but at least the retry mechanism makes the developer experience a lot better. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>Three Ways to Build Multi-Agent Systems on AWS</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Sun, 06 Jul 2025 17:05:04 +0000</pubDate>
      <link>https://dev.to/aws-builders/three-ways-to-build-multi-agent-systems-on-aws-3h8p</link>
      <guid>https://dev.to/aws-builders/three-ways-to-build-multi-agent-systems-on-aws-3h8p</guid>
      <description>&lt;p&gt;When building multi-agent AI systems, the architectural choices you make can dramatically impact performance, maintainability, and scalability. While Strands Agent is the "shiny new thing", we need to ask whether it’s always the best choice.&lt;/p&gt;

&lt;p&gt;I set out to test different multi-agent architectural patterns by building the same system three different ways on AWS. I chose an HR Agent to evaluate resumes because it provides a perfect multi-step workflow that requires different types of AI reasoning and coordination.&lt;/p&gt;

&lt;p&gt;The goal wasn’t to build the perfect HR system, but to understand the trade-offs between different multi-agent orchestration approaches.&lt;br&gt;
I needed a use case that would effectively showcase different multi-agent coordination patterns. HR resume evaluation turned out to be ideal because it requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multiple specialized tasks&lt;/strong&gt; that benefit from different AI reasoning approaches (we could even use different LLM versions/providers)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sequential and parallel processing&lt;/strong&gt; opportunities&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex data transformation&lt;/strong&gt; from unstructured to structured formats&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Coordination between agents&lt;/strong&gt; with different areas of expertise&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-world complexity&lt;/strong&gt; without being overly domain-specific&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The system needs to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Parse resumes&lt;/strong&gt; and extract structured information&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Analyze job requirements&lt;/strong&gt; and match them against candidates&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Identify skill gaps&lt;/strong&gt; and areas for development&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rate candidates&lt;/strong&gt; numerically with detailed justification&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate interview questions&lt;/strong&gt; tailored to each candidate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store everything&lt;/strong&gt; in a structured, queryable format&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Think of it as multiple AI specialists collaborating like a hiring committee, but the real focus is on how they coordinate and communicate.&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture #1: Step Functions – The Orchestrated Pipeline
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xydbhhabdzrf7is787k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0xydbhhabdzrf7is787k.png" alt="Step Functions Architecture Diagram" width="800" height="469"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for: Complex workflows with detailed monitoring needs and low latency&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Step Functions approach treats resume evaluation like a manufacturing pipeline. Each step is a specialized Lambda function that performs one specific task, with AWS Step Functions orchestrating the entire workflow. &lt;/p&gt;
&lt;h3&gt;
  
  
  Why I Love This Approach
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Low Latency&lt;/strong&gt;: Provides a tiny, lightweight agent-management layer that reduces complexity and processing time. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Crystal clear workflow&lt;/strong&gt;: You can see each step executing in the AWS console&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Easy debugging&lt;/strong&gt;: When something breaks, you know exactly which step failed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Granular monitoring&lt;/strong&gt;: Each function can be optimized independently&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Familiar patterns&lt;/strong&gt;: While most developers won't consider Step Functions for building an AI agent, it is a good fit because many of them are already familiar with the tool. &lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Trade-offs
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;More infrastructure&lt;/strong&gt;: 6+ Lambda functions to manage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State management&lt;/strong&gt;: Data flows between functions via JSON&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Less flexibility&lt;/strong&gt;: The workflow is relatively rigid&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Perfect for&lt;/strong&gt;: Low latency workflows and teams that want predictable, monitorable workflows and don't mind managing multiple functions.&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture #2: Bedrock Agents – The AI-Native Approach
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsj23ycgbm5cibulnezbt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsj23ycgbm5cibulnezbt.png" alt="Bedrock Agents Architecture Diagram" width="800" height="446"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for: AI-first teams who want Amazon's managed AI collaboration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This approach uses Amazon Bedrock Agents with a supervisor-collaborator pattern. A supervisor agent coordinates with specialized agents (Resume Parser, Job Analyzer, Skills Evaluator, etc.) to complete the evaluation.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why This Feels Like the Future
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI-native design&lt;/strong&gt;: Built specifically for multi-agent AI workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Managed complexity&lt;/strong&gt;: Amazon handles agent coordination and communication&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich agent interactions&lt;/strong&gt;: Agents can have sophisticated conversations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Built-in monitoring&lt;/strong&gt;: Bedrock console shows agent traces and interactions&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Reality Check
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AWS-specific&lt;/strong&gt;: You're locked into Amazon's agent framework&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning curve&lt;/strong&gt;: New concepts and debugging approaches&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost considerations&lt;/strong&gt;: Bedrock usage can add up with complex workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Perfect for&lt;/strong&gt;: Teams building AI-first applications that want to leverage Amazon's managed AI services.&lt;/p&gt;
&lt;h2&gt;
  
  
  Architecture #3: Strands Agents – The Powerhouse Framework
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnmawp7ajbo13eem6wze.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpnmawp7ajbo13eem6wze.png" alt="Bedrock Agentcore + Strands Agent Architecture Diagram" width="800" height="388"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best for: Maximum flexibility and advanced multi-agent capabilities&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Strands approach uses the open-source Strands Agents SDK, where agents communicate in natural language and dynamically adapt their collaboration patterns. While it takes longer to process, this is where the real multi-agent magic happens. I initially used Lambda to deploy this agent, but I recently updated it to Bedrock AgentCore. &lt;/p&gt;
&lt;h3&gt;
  
  
  What Makes This Special
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Natural communication&lt;/strong&gt;: Agents talk to each other like humans&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive workflows&lt;/strong&gt;: The system adjusts based on what it finds&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deep dives&lt;/strong&gt;: Extended processing time enables sophisticated reasoning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework agnostic&lt;/strong&gt;: Not tied to any specific cloud provider&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simplified architecture&lt;/strong&gt;: Typically runs on a Lambda function or an ECS task&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Hidden Power
&lt;/h3&gt;

&lt;p&gt;We're only using a fraction of Strands' capabilities in this implementation. The framework supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic tool integration&lt;/strong&gt; during runtime&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex multi-agent negotiations&lt;/strong&gt; and decision-making&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Adaptive workflow modification&lt;/strong&gt; based on intermediate results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Advanced memory and context management&lt;/strong&gt; across agent interactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom agent personalities&lt;/strong&gt; and specialized reasoning patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;
  
  
  The Considerations
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Longer processing&lt;/strong&gt;: Takes 5–15 minutes, but enables deeper analysis. Not a good choice if you want to build a low-latency agent. &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More complex setup&lt;/strong&gt;: Requires proper dependency management and layer configuration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource intensive&lt;/strong&gt;: Needs more memory and processing time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning curve&lt;/strong&gt;: Understanding the full framework takes time&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Perfect for&lt;/strong&gt;: Teams that want to push the boundaries of multi-agent systems and are comfortable with complexity.&lt;/p&gt;
&lt;h2&gt;
  
  
  The Results: Architecture Matters More Than You Think
&lt;/h2&gt;

&lt;p&gt;What surprised me most: &lt;strong&gt;all three approaches produce identical evaluation quality&lt;/strong&gt; when using the same prompts and AI model.&lt;/p&gt;

&lt;p&gt;The real differences are in:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Development experience and debugging&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Operational complexity and monitoring&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Processing time and resource utilization&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Flexibility for future changes and integrations&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Vendor lock-in and LLM portability&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  LLM Flexibility: A Critical Consideration
&lt;/h2&gt;

&lt;p&gt;One of the most important components is &lt;strong&gt;LLM portability&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Strands and Step Functions: LLM Agnostic
&lt;/h3&gt;

&lt;p&gt;Both implementations can easily pivot to different LLMs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI GPT models&lt;/strong&gt; via API calls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Anthropic Claude&lt;/strong&gt; via direct API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Open-source models&lt;/strong&gt; like Llama, Mistral, or custom fine-tuned models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Local models&lt;/strong&gt; running on your infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This flexibility allows you to choose the best model for your use case, optimize costs, or even run workloads offline.&lt;/p&gt;
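&lt;p&gt;As a rough sketch of what this portability can look like in practice, the snippet below routes a logical model name to a concrete provider configuration. The registry entries and helper names are purely illustrative – they are not an API from Strands, Step Functions, or Bedrock:&lt;br&gt;
&lt;/p&gt;

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch -- the registry entries and helper are illustrative,
# not an API from any of the three frameworks discussed here.
@dataclass
class ModelConfig:
    provider: str                   # "bedrock", "openai", "local", ...
    model_id: str
    endpoint: Optional[str] = None  # e.g. a local Ollama URL

def resolve_model(name: str) -> ModelConfig:
    """Map a logical model name to a concrete provider configuration."""
    registry = {
        "default": ModelConfig("bedrock", "anthropic.claude-3-5-sonnet-20240620-v1:0"),
        "cheap": ModelConfig("openai", "gpt-4o-mini"),
        "offline": ModelConfig("local", "llama3", endpoint="http://localhost:11434"),
    }
    return registry[name]
```

&lt;p&gt;Swapping providers then becomes a one-line change in the registry rather than a rewrite of the orchestration code.&lt;/p&gt;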
&lt;h3&gt;
  
  
  Bedrock Agents: AWS Ecosystem Limitation
&lt;/h3&gt;

&lt;p&gt;The Bedrock Agents implementation is tied to AWS-supported models:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Limited to the Bedrock catalog&lt;/strong&gt; – can't use models AWS doesn't support&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regional limitations&lt;/strong&gt; based on model availability&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Manual guardrail configuration&lt;/strong&gt; – you must explicitly set up guardrails, unlike other platforms that enforce some defaults automatically&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This flexibility gap becomes crucial for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cost optimization&lt;/strong&gt; across providers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance tuning&lt;/strong&gt; with specialized models&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future-proofing&lt;/strong&gt; against vendor changes or pricing shifts&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Performance Comparison
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Step Functions&lt;/th&gt;
&lt;th&gt;Bedrock Agents&lt;/th&gt;
&lt;th&gt;Strands Agents&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Processing Time&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&amp;lt; 1 minute&lt;/td&gt;
&lt;td&gt;2–5 minutes&lt;/td&gt;
&lt;td&gt;2–5 minutes*&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Setup Complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Debugging&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Excellent&lt;/td&gt;
&lt;td&gt;Not Great&lt;/td&gt;
&lt;td&gt;Good&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Multi-Agent Flexibility&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Very High&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LLM Portability&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;High&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;High&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Architecture Complexity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;*Strands' longer processing time enables sophisticated multi-agent reasoning that we're not fully utilizing in this implementation.&lt;/p&gt;
&lt;h2&gt;
  
  
  Real-World Lessons Learned
&lt;/h2&gt;
&lt;h3&gt;
  
  
  1. Architecture choice impacts more than performance
&lt;/h3&gt;

&lt;p&gt;The biggest differences aren't in output quality, but in operational characteristics, vendor flexibility, and future extensibility.&lt;/p&gt;
&lt;h3&gt;
  
  
  2. Strands is a sleeping giant
&lt;/h3&gt;

&lt;p&gt;We’re using maybe 30% of what Strands can do. The framework supports dynamic tool integration, complex agent negotiations, and adaptive workflows.&lt;/p&gt;
&lt;h3&gt;
  
  
  3. Bedrock Agents trade flexibility for simplicity
&lt;/h3&gt;

&lt;p&gt;Great for getting started quickly, but debugging issues when something fails is hard.&lt;/p&gt;
&lt;h3&gt;
  
  
  4. Step Functions remain the reliable choice
&lt;/h3&gt;

&lt;p&gt;When you need low-latency, predictable, debuggable workflows and don't mind managing multiple functions, it's hard to beat.&lt;/p&gt;
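&lt;p&gt;For reference, a sequential multi-agent flow in Step Functions boils down to a few chained Task states. This Amazon States Language sketch is illustrative only – the Lambda function names and account ID are placeholders:&lt;br&gt;
&lt;/p&gt;

```json
{
  "Comment": "Illustrative sketch only - Lambda names and account ID are placeholders",
  "StartAt": "AnalyzeResume",
  "States": {
    "AnalyzeResume": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:analyze-resume",
      "Next": "MatchJob"
    },
    "MatchJob": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:match-job",
      "Next": "StoreResults"
    },
    "StoreResults": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:store-results",
      "End": true
    }
  }
}
```

&lt;p&gt;Every transition is visible in the execution history, which is exactly why debugging this approach is so pleasant.&lt;/p&gt;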
&lt;h2&gt;
  
  
  Which Multi-Agent Architecture Should You Choose?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Choose Step Functions if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You want low-latency, predictable, monitorable workflows&lt;/li&gt;
&lt;li&gt;Your team is comfortable with traditional serverless patterns&lt;/li&gt;
&lt;li&gt;You need fast processing times and clear debugging&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM flexibility&lt;/strong&gt; is important for your use case&lt;/li&gt;
&lt;li&gt;You prefer proven, stable architectural patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Choose Bedrock Agents if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You're building AI-first applications within the AWS ecosystem&lt;/li&gt;
&lt;li&gt;You want Amazon to handle multi-agent complexity&lt;/li&gt;
&lt;li&gt;You're already using other Bedrock services&lt;/li&gt;
&lt;li&gt;You prefer managed services over custom implementations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Choose Strands Agents if:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You need &lt;strong&gt;maximum multi-agent capabilities&lt;/strong&gt; and flexibility&lt;/li&gt;
&lt;li&gt;You want to explore cutting-edge AI coordination patterns&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM portability&lt;/strong&gt; and vendor independence are priorities&lt;/li&gt;
&lt;li&gt;You're okay with longer processing times for deeper reasoning&lt;/li&gt;
&lt;li&gt;You want to deploy your agent with minimal cloud configuration using the AWS AgentCore CLI (&lt;code&gt;agentcore&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  The Code
&lt;/h2&gt;

&lt;p&gt;All three implementations are available in my &lt;a href="https://github.com/mkreder/aws-agents" rel="noopener noreferrer"&gt;AWS Agents repository&lt;/a&gt;, including complete deployment scripts, sample data, and documentation. Each approach includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete SAM templates for one-click deployment&lt;/li&gt;
&lt;li&gt;Sample resumes and job descriptions for testing&lt;/li&gt;
&lt;li&gt;Comprehensive monitoring and logging&lt;/li&gt;
&lt;li&gt;Identical evaluation quality across all approaches&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Sample Output
&lt;/h2&gt;

&lt;p&gt;All three implementations store the outputs in DynamoDB. A sample output looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "id": "a5ec67f5-3e3f-4db3-8e68-d7127b9131f3",
  "name": "Sarah Smith Data Scientist.Txt",
  "resume_key": "resumes/sarah_smith_data_scientist.txt",
  "status": "completed",
  "job_title": "AI Engineer Position",
  "completed_at": "2025-06-29T18:05:58.708418",

  "candidate_rating": {
    "rating": 2,
    "job_fit": "Sarah would be a fair fit for a junior or mid-level AI engineering role but does not meet the requirements for the senior position. Her statistical background and basic ML experience provide a foundation to build upon, but she would need significant mentoring and development in deep learning, MLOps, containerization, and production deployment before being ready for a senior AI Engineer role.",
    "strengths": [
      "Strong educational background in Statistics and Mathematics with ML coursework",
      "Solid foundation in Python programming and data analysis",
      "Experience with basic ML algorithms and statistical modeling",
      "Some database knowledge (PostgreSQL) as required",
      "Collaborative experience working with product teams",
      "Good data visualization skills that would be useful for stakeholder communication"
    ],
    "weaknesses": [
      "Insufficient experience (2 years vs. required 3+ years)",
      "Limited experience with required deep learning frameworks (only project-level TensorFlow, no PyTorch)",
      "No experience with containerization (Docker, Kubernetes) or MLOps practices",
      "Limited cloud platform expertise beyond basic AWS knowledge",
      "No production-level AI system deployment experience",
      "Lack of experience with big data technologies (Spark, Hadoop)",
      "No demonstrated experience in model monitoring or CI/CD pipelines"
    ]
  },

  "evaluation_results": {
    "job_match_analysis": {
      "overall_fit": "Partial match - Junior to mid-level candidate applying for senior role",
      "recommendation": "Consider for a mid-level AI Engineer position rather than senior role. The candidate shows promise but lacks the depth of experience and technical breadth required for a senior position. Would benefit from mentorship and exposure to production ML systems, containerization, and MLOps practices."
    },
    "technical_expertise": {
      "programming": {
        "alignment": "Partial match - Strong in Python and SQL as required, but no Java",
        "depth": "Moderate - 2 years professional experience with Python"
      },
      "ml_frameworks": {
        "alignment": "Partial match - Experience with Scikit-learn but limited exposure to TensorFlow and no PyTorch mentioned",
        "depth": "Basic - Primary experience with traditional ML algorithms rather than deep learning"
      },
      "cloud_platforms": {
        "alignment": "Minimal match - Only basic AWS knowledge mentioned",
        "depth": "Limited - Only mentions S3 and EC2, no SageMaker or other ML-specific services"
      }
    }
  },

  "gaps_analysis": {
    "skill_mismatches": {
      "issues": [
        "Claims to be a Machine Learning Engineer but experience seems more aligned with Data Analyst/Scientist role",
        "Lists TensorFlow in project but not in skills section",
        "Claims 'Basic AWS knowledge' but doesn't demonstrate cloud implementation experience"
      ]
    },
    "overall_concerns": {
      "potential_under_qualification": "Limited professional experience (2 years) for roles requiring more extensive background. Experience appears more aligned with junior data scientist rather than machine learning engineer positions."
    }
  },

  "interview_notes": {
    "technical_questions": [
      "Can you walk me through your experience with TensorFlow beyond the stock price prediction project? What specific neural network architectures have you implemented?",
      "How would you approach deploying a machine learning model to a production environment? What tools and practices would you use for model monitoring and maintenance?",
      "What experience do you have with containerization technologies like Docker and Kubernetes? How have you used them in ML workflows?"
    ],
    "concerns_to_address": [
      "Experience gap (2 years vs. required 3+ years for senior role)",
      "Limited experience with deep learning frameworks beyond project work",
      "No demonstrated experience with containerization or MLOps",
      "Limited cloud platform expertise beyond basic AWS services"
    ],
    "general_notes": [
      "Candidate has strong educational background but lacks senior-level experience",
      "Consider for mid-level position rather than senior role",
      "Strong in statistics and traditional ML but gaps in deep learning and MLOps",
      "Would benefit from mentorship in production ML systems and DevOps practices"
    ]
  }
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
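&lt;p&gt;Once records like this land in DynamoDB, pulling out the headline fields is straightforward. The helper below is an illustrative sketch that works on the plain dict shape shown above (trimmed here for brevity):&lt;br&gt;
&lt;/p&gt;

```python
def summarize_evaluation(item):
    """Reduce a stored evaluation record to a short headline summary."""
    return {
        "name": item["name"],
        "rating": item["candidate_rating"]["rating"],
        "recommendation": item["evaluation_results"]["job_match_analysis"]["recommendation"],
        "top_concerns": item["interview_notes"]["concerns_to_address"][:2],
    }

# Example with a trimmed-down record shaped like the sample above:
record = {
    "name": "Sarah Smith Data Scientist.Txt",
    "candidate_rating": {"rating": 2},
    "evaluation_results": {"job_match_analysis": {"recommendation": "Consider for a mid-level role"}},
    "interview_notes": {"concerns_to_address": ["Experience gap", "Limited deep learning", "No MLOps"]},
}
print(summarize_evaluation(record)["rating"])  # prints 2
```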



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The future of multi-agent AI systems is incredibly exciting. After testing these three approaches, I'm convinced that the choice of coordination pattern matters more than the specific use case. Whether you're building HR automation, document processing, or any other multi-step AI workflow, these patterns offer strong foundations for production-ready systems.&lt;/p&gt;

&lt;p&gt;The key takeaway? Each architecture comes with trade-offs that go far beyond performance. Step Functions is not an obvious choice, but it is a highly reliable approach for orchestrating multi-agent flows, especially when clarity and debugging are important. Bedrock Agents provide a managed experience that is great for fast prototyping, although troubleshooting can be difficult when issues arise. Strands offers unmatched reasoning and flexibility, but its longer processing time and higher resource requirements often lead to running it in ECS, where scaling may become a different challenge.  &lt;/p&gt;

&lt;p&gt;Bottom line, choosing the right approach is not just about the technology itself; it is about how much control, flexibility, and complexity your team is prepared to handle.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>genai</category>
      <category>stepfunctions</category>
      <category>bedrock</category>
    </item>
    <item>
      <title>How to Solve dependencyFailedException on a Multi-Agent in AWS Bedrock</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Sat, 28 Jun 2025 21:38:41 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-to-solve-dependencyfailedexception-on-a-multi-agent-in-aws-bedrock-3fdi</link>
      <guid>https://dev.to/aws-builders/how-to-solve-dependencyfailedexception-on-a-multi-agent-in-aws-bedrock-3fdi</guid>
      <description>&lt;p&gt;When working with AWS Bedrock Multi-Agent configurations, you might encounter an error message similar to this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;An error occurred (dependencyFailedException) when calling the InvokeAgent operation: Dependency resource: received model timeout/error exception from Bedrock. Try the request again.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This occurred when invoking the agent with the following call:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = bedrock_agent_runtime.invoke_agent(
    agentId=agent_id,
    agentAliasId=agent_alias_id,
    sessionId=session_id,
    inputText=agent_input,
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;To investigate this issue, I added &lt;code&gt;enableTrace=True&lt;/code&gt; to the previous call. This provided deeper insights into the problem:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
    "agentAliasId": "BJIGGB72UG",
    "agentId": "DMMRLXVFLT",
    "agentVersion": "5",
    "callerChain": [
        {
            "agentAliasArn": "arn:aws:bedrock:us-east-1:479047237979:agent-alias/DMMRLXVFLT/BJIGGB72UG"
        }
    ],
    "sessionId": "89ec67ae-132a-4ecd-9182-ffef774cea68",
    "trace": {
        "failureTrace": {
            "failureReason": "Dependency resource: received model timeout/error exception from Bedrock. Try the request again.",
            "traceId": "4d5612ec-5f45-4797-94bb-c6f2b6276de6-0"
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;While this trace didn’t explicitly identify the root cause, it became clear that the issue was related to model performance and to the specific Bedrock foundation model used by a collaborator agent within the multi-agent setup.&lt;/p&gt;

&lt;p&gt;According to the AWS documentation, the &lt;a href="https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-client-bedrock-agent-runtime/Class/DependencyFailedException/" rel="noopener noreferrer"&gt;dependencyFailedException&lt;/a&gt; means:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;"There was an issue with a dependency. Check the resource configurations and retry the request."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This definition is generic and somewhat unclear, and in this specific scenario it wasn't actually a configuration issue but rather a performance limitation of the selected foundation model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Switching from Amazon Nova Pro to Claude 3.7 Sonnet resolved the issue entirely. The request started working smoothly after this change. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Steps to Fix:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enable tracing in your agent invocation to gather detailed error context:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;response = bedrock_agent_runtime.invoke_agent(
    agentId=agent_id,
    agentAliasId=agent_alias_id,
    sessionId=session_id,
    inputText=agent_input,
    enableTrace=True
)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Analyze the trace logs provided by the Bedrock runtime.&lt;/p&gt;
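&lt;p&gt;Since &lt;code&gt;invoke_agent&lt;/code&gt; streams its response, the failure trace arrives as one event among many. A small helper like this one – which assumes the event shape shown in the trace above – can surface the failure reason programmatically:&lt;br&gt;
&lt;/p&gt;

```python
def find_failure_reason(completion_events):
    """Scan invoke_agent streaming events for a failureTrace.

    Assumes trace events shaped like the JSON shown above; returns None
    when the invocation completed without a failure trace.
    """
    for event in completion_events:
        trace = event.get("trace", {}).get("trace", {})
        failure = trace.get("failureTrace")
        if failure:
            return failure["failureReason"]
    return None

# Typical usage against the boto3 response (sketch):
#   reason = find_failure_reason(response["completion"])
```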

&lt;p&gt;Change the underlying foundation model used by the collaborator agent (e.g., switch from Amazon Nova Pro to Claude 3.7 Sonnet).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This experience highlights how model selection within a multi-agent configuration can significantly impact stability and performance. Importantly, even though the error indicates a dependencyFailedException, it doesn't necessarily mean a collaborator agent dependency is misconfigured. Instead, it often points to one of the models being unable to generate a timely response or another non-obvious issue.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Building a Multi-Language Image Description API with Amazon Nova Lite and Polly</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Sat, 21 Jun 2025 16:08:52 +0000</pubDate>
      <link>https://dev.to/aws-builders/building-a-multi-language-image-description-api-with-amazon-nova-lite-and-polly-54dk</link>
      <guid>https://dev.to/aws-builders/building-a-multi-language-image-description-api-with-amazon-nova-lite-and-polly-54dk</guid>
      <description>&lt;p&gt;I've always been passionate about technology and how it can transform lives. I occasionally deal with small visual challenges. This has made me deeply interested in exploring ways technology can improve accessibility, particularly for those with greater visual difficulties.&lt;/p&gt;

&lt;p&gt;For that reason, I recently embarked on a project to build an API that converts images into descriptive text (Image to Text) or audio (Image to Speech). Using AWS services like Amazon Bedrock (with the Nova Lite model) and Amazon Polly, I built an open-source serverless API that generates image descriptions in multiple languages with optional audio output. &lt;/p&gt;

&lt;h2&gt;
  
  
  What Does This API Do?
&lt;/h2&gt;

&lt;p&gt;The API accepts base64-encoded images and returns detailed descriptions in over 10 languages, with optional audio narration. It's designed for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Accessibility applications&lt;/strong&gt;: Generate alt-text for images in multiple languages or integrate with screen readers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content management&lt;/strong&gt;: Automatically describe uploaded images&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;E-commerce&lt;/strong&gt;: Create product descriptions from images&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Social media&lt;/strong&gt;: Generate captions in different languages&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Architecture Overview
&lt;/h2&gt;

&lt;p&gt;The solution uses a serverless architecture built entirely on AWS:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6p475tk7mwx2uewi2mu8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6p475tk7mwx2uewi2mu8.png" alt="Image description" width="659" height="237"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Key AWS Services Used
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Amazon Bedrock with Nova Lite Model
&lt;/h3&gt;

&lt;p&gt;The heart of the application uses Amazon's Nova Lite model: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal understanding&lt;/strong&gt;: Processes both text prompts and images&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost efficiency&lt;/strong&gt;: Optimized for high-volume applications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fast inference&lt;/strong&gt;: Low latency responses&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-language support&lt;/strong&gt;: Native understanding of multiple languages&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. AWS Lambda
&lt;/h3&gt;

&lt;p&gt;The serverless compute layer handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Image processing&lt;/strong&gt;: Base64 decoding and validation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Bedrock integration&lt;/strong&gt;: Model invocation and response handling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Polly integration&lt;/strong&gt;: Audio generation for accessibility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error handling&lt;/strong&gt;: Comprehensive error responses&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. Amazon Polly
&lt;/h3&gt;

&lt;p&gt;Provides text-to-speech capabilities with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Multiple voices per language&lt;/strong&gt;: Natural-sounding speech&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSML support&lt;/strong&gt;: Enhanced audio control&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MP3 output&lt;/strong&gt;: Compressed audio for web delivery&lt;/li&gt;
&lt;/ul&gt;
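&lt;p&gt;To give an idea of what the Polly step can look like, here is a minimal sketch. The voice mapping is a small illustrative subset of Polly's catalog, and the function names are my own:&lt;br&gt;
&lt;/p&gt;

```python
# Illustrative subset of Polly voices per language code -- consult the
# Polly voice list for the full catalog.
VOICE_BY_LANGUAGE = {
    "en": "Joanna",
    "es": "Lucia",
    "fr": "Lea",
    "de": "Vicki",
}

def pick_voice(language):
    """Fall back to an English voice when a language has no mapping here."""
    return VOICE_BY_LANGUAGE.get(language, "Joanna")

def synthesize_description(text, language="en"):
    """Turn a description into MP3 audio bytes with Amazon Polly."""
    import boto3  # imported lazily; requires AWS credentials when called
    polly = boto3.client("polly")
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat="mp3",
        VoiceId=pick_voice(language),
    )
    return response["AudioStream"].read()
```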

&lt;h3&gt;
  
  
  4. API Gateway
&lt;/h3&gt;

&lt;p&gt;Creates a production-ready API with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;REST endpoints&lt;/strong&gt;: Clean API interface&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSL termination&lt;/strong&gt;: Secure HTTPS connections&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request/response transformation&lt;/strong&gt;: Clean JSON interfaces&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Implementation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Lambda Function Structure
&lt;/h3&gt;

&lt;p&gt;The Python Lambda function is organized into clear components:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;lambda_handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Parse request and validate input
&lt;/span&gt;    &lt;span class="c1"&gt;# Invoke Bedrock Nova Lite model
&lt;/span&gt;    &lt;span class="c1"&gt;# Generate audio with Polly (if requested)
&lt;/span&gt;    &lt;span class="c1"&gt;# Return structured JSON response
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bedrock Integration
&lt;/h3&gt;

&lt;p&gt;The Nova Lite model is invoked with carefully crafted prompts:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Describe this image in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;language_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;. Be descriptive and detailed.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;bedrock_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;modelId&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;amazon.nova-lite-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;dumps&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
            &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                    &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;format&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;image_format&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;source&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bytes&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_bytes&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;utf-8&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                            &lt;span class="p"&gt;}&lt;/span&gt;
                        &lt;span class="p"&gt;}&lt;/span&gt;
                    &lt;span class="p"&gt;},&lt;/span&gt;
                    &lt;span class="p"&gt;{&lt;/span&gt;
                        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;
                    &lt;span class="p"&gt;}&lt;/span&gt;
                &lt;span class="p"&gt;]&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;inferenceConfig&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;maxTokens&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deployment with AWS SAM
&lt;/h2&gt;

&lt;p&gt;The project uses AWS SAM (Serverless Application Model) for infrastructure as code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# template.yaml&lt;/span&gt;
&lt;span class="na"&gt;AWSTemplateFormatVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;2010-09-09'&lt;/span&gt;
&lt;span class="na"&gt;Transform&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Serverless-2016-10-31&lt;/span&gt;

&lt;span class="na"&gt;Resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;ImageDescriptionFunction&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;Type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;AWS::Serverless::Function&lt;/span&gt;
    &lt;span class="na"&gt;Properties&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;Runtime&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;python3.9&lt;/span&gt;
      &lt;span class="na"&gt;Handler&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;app.lambda_handler&lt;/span&gt;
      &lt;span class="na"&gt;Policies&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Statement&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
            &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;bedrock:InvokeModel&lt;/span&gt;
            &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
          &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;Effect&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Allow&lt;/span&gt;
            &lt;span class="na"&gt;Action&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;polly:SynthesizeSpeech&lt;/span&gt;
            &lt;span class="na"&gt;Resource&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'&lt;/span&gt;&lt;span class="s"&gt;*'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Deployment Commands
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;sam build
sam deploy &lt;span class="nt"&gt;--guided&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  API Endpoints
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Text Description Endpoint
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;POST&lt;/strong&gt; &lt;code&gt;/describe/text&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"base64_encoded_image_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Response:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"A detailed description of the image in the requested language"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"format"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"text"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Audio Description Endpoint
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;POST&lt;/strong&gt; &lt;code&gt;/describe/audio&lt;/code&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"image"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"base64_encoded_image_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"voice"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Joanna"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Response:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Text description"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"audio"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"base64_encoded_mp3_data"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"format"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"audio"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"voice"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Joanna"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"language"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"en"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
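&lt;p&gt;As a quick illustration, the request body for either endpoint can be built like this. This is my own sketch; the endpoint URL is a placeholder for the value SAM outputs after deployment:&lt;/p&gt;

```python
import base64
import json

# Placeholder: replace with the endpoint URL printed by `sam deploy`.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/Prod"

def build_describe_payload(image_bytes, language="en", voice=None):
    """Build the JSON body for POST /describe/text or /describe/audio."""
    payload = {
        "image": base64.b64encode(image_bytes).decode("utf-8"),
        "language": language,
    }
    if voice is not None:
        payload["voice"] = voice  # only used by the audio endpoint
    return payload

body = json.dumps(build_describe_payload(b"fake-image-bytes", "en", voice="Joanna"))
# e.g. POST body to f"{API_URL}/describe/audio" with requests or urllib
```

&lt;p&gt;The same helper covers both endpoints, since the audio request only adds the optional &lt;code&gt;voice&lt;/code&gt; field.&lt;/p&gt;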



&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Clone the repository&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Enable Nova Lite access&lt;/strong&gt; in the Amazon Bedrock console&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy with SAM&lt;/strong&gt;: &lt;code&gt;sam build &amp;amp;&amp;amp; sam deploy --guided&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Test the endpoints&lt;/strong&gt; with your images&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;This project demonstrates how modern AWS services can be combined to create powerful, cost-effective AI applications. The combination of Nova Lite's multimodal capabilities, Lambda's serverless compute, and Polly's text-to-speech creates a comprehensive solution for image accessibility.&lt;/p&gt;

&lt;p&gt;The entire codebase is open-source and production-ready, making it easy for developers to deploy their own instance or extend the functionality for specific use cases.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Repository&lt;/strong&gt;: &lt;a href="https://github.com/mkreder/image-to-speech-api" rel="noopener noreferrer"&gt;https://github.com/mkreder/image-to-speech-api&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Using a custom domain name in a Private REST API Gateway</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Thu, 06 Jun 2024 02:54:39 +0000</pubDate>
      <link>https://dev.to/aws-builders/using-a-custom-domain-name-in-a-private-rest-api-gateway-1c2h</link>
      <guid>https://dev.to/aws-builders/using-a-custom-domain-name-in-a-private-rest-api-gateway-1c2h</guid>
<description>&lt;p&gt;When working on internal networks, particularly within a VPC, developers often need to interact with a private API Gateway. A common scenario is a network resource that must call a specific Lambda function over HTTPS without traversing the internet or going through the AWS API. While using the API Gateway-assigned hostname is an option, a private DNS name provides a more consistent approach across environments. &lt;/p&gt;

&lt;p&gt;According to the &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html" rel="noopener noreferrer"&gt;AWS Documentation:&lt;/a&gt; "Custom domain names are not supported for private APIs."&lt;br&gt;
However, there is a simple hack to get this to work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR Architecture&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0irexl5e0os58w5l5gw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv0irexl5e0os58w5l5gw.png" alt="Image description" width="640" height="149"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Full Solution&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;On the VPC, create a &lt;a href="https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html" rel="noopener noreferrer"&gt;"execute-api" VPC endpoint for API Gateway&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;On API Gateway, create a private REST API with all necessary resources and methods. Create a resource policy &lt;a href="https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-resource-policies-examples.html#apigateway-resource-policies-source-vpc-example" rel="noopener noreferrer"&gt;that only allows access through the VPC endpoint&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;On the VPC Endpoints console, open the Subnets section of the endpoint created in step 1 and note its private IP addresses&lt;/li&gt;
&lt;li&gt;Create a TLS target group using the IPs from step 3.&lt;/li&gt;
&lt;li&gt;Create an internal TLS Network Load Balancer using the target group from step 4.&lt;/li&gt;
&lt;li&gt;Create a custom domain name in API Gateway (Regional type) and map it to the private API gateway. &lt;/li&gt;
&lt;li&gt;On Route 53, create a private hosted zone attached to the same VPC with a CNAME record that points to the NLB DNS address. &lt;/li&gt;
&lt;/ol&gt;
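&lt;p&gt;Step 7 can also be scripted. The sketch below builds the Route 53 change batch for the CNAME record (the domain and NLB names are made-up examples); with boto3 you would pass it to &lt;code&gt;route53.change_resource_record_sets&lt;/code&gt;:&lt;/p&gt;

```python
def cname_change_batch(record_name, nlb_dns_name, ttl=300):
    """Route 53 change batch pointing the private API's custom domain
    (a CNAME in the private hosted zone) at the internal NLB."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "CNAME",
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": nlb_dns_name}],
                },
            }
        ]
    }

# Example names only; use your own domain and NLB DNS address.
batch = cname_change_batch(
    "api.internal.example.com",
    "my-internal-nlb-0123456789.elb.us-east-1.amazonaws.com",
)
```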

&lt;p&gt;Once this is done, it should work. I have set this up many times in different projects but keep forgetting the details, so I figured it was time to document it so it can be useful to someone else. &lt;/p&gt;

</description>
      <category>aws</category>
      <category>apigateway</category>
      <category>route53</category>
      <category>lambda</category>
    </item>
    <item>
      <title>Accelerating to Las Vegas: Inside the AWS DeepRacer 2023 Finals at reInvent</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Thu, 29 Feb 2024 19:48:53 +0000</pubDate>
      <link>https://dev.to/aws-builders/accelerating-to-las-vegas-inside-the-aws-deepracer-2023-finals-at-reinvent-2inm</link>
      <guid>https://dev.to/aws-builders/accelerating-to-las-vegas-inside-the-aws-deepracer-2023-finals-at-reinvent-2inm</guid>
      <description>&lt;p&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In 2023, I was a finalist in the AWS DeepRacer competition and had the chance to travel to AWS reInvent (all expenses paid by AWS). The experience was amazing; from start to finish, the organizers treated me as a VIP, and I met racers from around the world. I decided to write this post to describe my experience as a racer, the finalists' perks, and how the final was organized.&lt;br&gt;
This should be useful for future finalists who want to understand what they are about to experience. I will also share some tips at the end.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Story&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I had attended AWS reInvent three times before (2017, 2018, 2019). I was even there when DeepRacer was announced, but since I was always busy with something work-related, I only got involved with it in May 2020, during the pandemic. That month, there was an F1 contest where two professional F1 racers created their own DeepRacer models and competed against developers to see who was best at virtual racing (the F1 racers didn't win). &lt;br&gt;
Because of this contest, all DeepRacer training was free of charge, which pushed me to try it. It has been my hobby ever since. I made it to the finals in 2020, but there was no physical reInvent that year, so the final was virtual. It was fun; I participated in several rounds and learned a lot. I finished in the top 30. &lt;br&gt;
Then, in 2021 and 2022, the competition was tough, and I couldn't qualify. &lt;br&gt;
The rules changed slightly in 2023; qualification was per geographic region, and because only a few people were participating in LATAM, I was able to qualify quickly. That also pushed me to create content and give workshops and talks about DeepRacer; I wanted this technology to become more popular in my region. &lt;br&gt;
At the same time, I wanted to gain experience in physical racing, but there are very few AWS Summits in the region. That's why I started collaborating with a university, where we built a community and hosted DeepRacer events and workshops. &lt;a href="https://dev.to/aws-builders/aws-deepracer-activities-in-buenos-aires-2023-o9c"&gt;You can read more about this here.&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;League Prizes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As in any good competition, DeepRacer has excellent prizes. The prizes vary from year to year, but during 2023, the prizes were:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;$50 for those finishing in the top 10% of the virtual race (per region)&lt;/li&gt;
&lt;li&gt;$400 for those finishing in the top 3 of the virtual race (per region)&lt;/li&gt;
&lt;li&gt;A trip to the finals for those who win the virtual race (per region)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The cash prizes could be redeemed via PayPal or Amazon credits.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfh8rssgsloifcvjrm7q.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfh8rssgsloifcvjrm7q.png" alt="Image description" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The trip prize included:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Hotel accommodations (all finalists stayed at the MGM Grand)&lt;/li&gt;
&lt;li&gt;reInvent ticket (a $2,100 value)&lt;/li&gt;
&lt;li&gt;Flight ticket from your nearest airport&lt;/li&gt;
&lt;li&gt;Transportation to/from the airport&lt;/li&gt;
&lt;li&gt;$400 pre-paid card for food or whatever else you want to spend it on (it can't be used for gambling).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This was awarded to 48 racers (8 per region).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzmt6r7rv4btsxzlua7w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhzmt6r7rv4btsxzlua7w.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Prizes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once you make it to the finals in Las Vegas, there are great prizes if you finish among the top racers. In 2023, the top 6 racers won:&lt;/p&gt;

&lt;p&gt;1st place: $20,000 and a trophy.&lt;br&gt;
2nd place: $10,000 and a trophy.&lt;br&gt;
3rd place: $5,000 and a trophy.&lt;br&gt;
4th to 6th place: $3,000 each.&lt;/p&gt;

&lt;p&gt;These prizes are excellent, but even if you make it to the finals and don't finish in the top 6, you still get a fantastic trip you will always remember. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Getting There&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I flew from Buenos Aires to Las Vegas with a layover in Dallas, Texas. They put me on a convenient American Airlines flight with a 2-hour stopover. When I arrived in Las Vegas, I was greeted by a driver waiting for me to take me to my hotel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqolka96nadsi1cr2shw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqqolka96nadsi1cr2shw.png" alt="Image description" width="800" height="1067"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;PromoVeritas organizes the whole trip, and they do a great job making sure you feel like a VIP. And even the driver made me feel that way. When I asked her how busy Las Vegas was and if she thought the hotel would let me do an early check-in at 1 pm, she said: "I'm sure they will; You are a VIP". I felt that way during the whole trip. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Schedule&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We had three rounds, including the final, spread across the first three days of the week, plus an open race on Thursday for everyone at the conference. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4qve5phlhe19yll2nad.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff4qve5phlhe19yll2nad.png" alt="Image description" width="800" height="384"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;First Round&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Round 1 was a virtual round. All racers met at the Brooklyn Bowl and submitted their best models. Each region had its own virtual race where racers from that region competed against each other. The top 5 from each region advanced to round 2. &lt;br&gt;
Each racer had two attempts; they could submit the same model twice or choose a different one. In my case, I had a model that I knew was good enough, so I used it twice, pushing the speed throttle a little harder the second time.&lt;br&gt;
I finished 3rd in my region and made it to round 2. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmadiyovd4f6n2k5etj6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzmadiyovd4f6n2k5etj6.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I also received this beautiful box with DeepRacer socks and customized parts to tune my DeepRacer car. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6kjdvh4m9gjg637fv0n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm6kjdvh4m9gjg637fv0n.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This round was also an excellent opportunity to network with people from other regions, as this was one of the few times everyone in the competition participated simultaneously.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Practice Round&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There was a practice round on Tuesday. I tried a few models, but they didn't perform as well as I expected, so I continued training more models overnight. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Second Round&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The second round started; this was the first time I raced at an AWS-hosted physical event. &lt;br&gt;
Each racer had two tries (they could submit two different models or the same model twice). &lt;br&gt;
On the first try, I ran a model I knew could work, and it did okay. On the second try, I used a less stable model that performed poorly. I ended up 29th out of the 32 racers who made it to the second round (from the 48 finalists at the event).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4xgntdnrz9ao9eswxek.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fz4xgntdnrz9ao9eswxek.jpg" alt="This is me, racing my model" width="768" height="1024"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This is how my model ran on the first try:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/Rxr5uF_MNj8"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Watching the final live was a lot of fun. All the top competitors were close to each other, and I saw how they pushed their models more and more to try to make a difference. (The cars drive themselves, but racers can use a throttle to speed them up or slow them down.)&lt;/p&gt;

&lt;p&gt;The winner was FiatLux, though his model was run by a proxy driver (Doug Wozniak). Video below.&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/kVapdV6R294"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Final picture (I'm on the far right)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjxrk41xhrjogtzjw96g.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnjxrk41xhrjogtzjw96g.JPG" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Karaoke&lt;/strong&gt; &lt;/p&gt;

&lt;p&gt;The AWS DeepRacer team organized a dinner at The Barbershop, a karaoke spot. DeepRacer finalists, the AWS Pit Crew, and management were there. &lt;/p&gt;

&lt;p&gt;The finalists even sang "We Are the Champions".&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lFnJLsOZC_s"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;All racers + pit crew from Latin America performed "La Bamba".&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The finals are fun and are a great way to connect with like-minded individuals. There are also a lot of activities you can do at reInvent while you are not racing. If you read this because you made it to the finals and want to know how it will be, "Congratulations! You will have an amazing time". If you are not a finalist, I hope this encourages you to participate in DeepRacer and put some effort into it. It is worth it. &lt;/p&gt;

&lt;p&gt;Here are a few tips from my experience:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Being in Las Vegas is a lot of fun, and your nights can quickly fill up with activities, but dedicate some time each evening in your hotel to rework your models; you will be tweaking and running models daily.&lt;/li&gt;
&lt;li&gt;For virtual sessions, run a practice round yourself in the simulator.&lt;/li&gt;
&lt;li&gt;The organization will give you self-explanatory instructions on what will happen each day of the week. Read everything carefully.&lt;/li&gt;
&lt;li&gt;Don't book any reInvent breakout sessions; they are all recorded and uploaded to YouTube. But do attend the DeepRacer workshops, as they are not recorded and usually help you gain deeper knowledge about DeepRacer.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And a few tips that apply to reInvent in general: Leave spare luggage space for swag, drink plenty of water, wear comfortable shoes, and be ready to walk 30k steps daily.&lt;/p&gt;

&lt;p&gt;I hope you find this article helpful! &lt;a href="https://blog.deepracing.io/2024/03/05/announcing-aws-deepracer-league-2024-new-year-new-rules/" rel="noopener noreferrer"&gt;The 2024 season just started, and the new rules have been announced.&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Testing Amazon Bedrock Text G1 Models (Lite vs Express)</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Wed, 17 Jan 2024 18:12:35 +0000</pubDate>
      <link>https://dev.to/aws-builders/testing-amazon-bedrock-text-g1-models-lite-vs-express-n99</link>
      <guid>https://dev.to/aws-builders/testing-amazon-bedrock-text-g1-models-lite-vs-express-n99</guid>
      <description>&lt;p&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Amazon released its own LLMs at reInvent 2023, including the Titan Text G1 models, Lite and Express. The main difference is that the Lite model has a context length of 4,000 tokens, while Express supports up to 8,000, making it more capable. While the Lite model has some cost benefits, the Express version should be able to generate better answers. I gave them both a try.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pricing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Bedrock pricing is based on how many "tokens" go in and out of the model. For text models like Titan, that roughly corresponds to the number of words processed.&lt;br&gt;
The Lite model currently costs $0.0003 per 1,000 input tokens and $0.0008 per 1,000 output tokens, while the Express model costs $0.0006 per 1,000 input tokens and $0.0016 per 1,000 output tokens.&lt;/p&gt;
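&lt;p&gt;To put those rates in perspective, here is a quick back-of-the-envelope calculation (the token counts are made-up examples):&lt;/p&gt;

```python
def titan_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Cost in USD, given per-1,000-token rates."""
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate

# A request with 500 input tokens and 1,500 output tokens:
lite_cost = titan_cost(500, 1500, 0.0003, 0.0008)     # $0.00135
express_cost = titan_cost(500, 1500, 0.0006, 0.0016)  # $0.00270, exactly twice Lite
```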

&lt;p&gt;&lt;strong&gt;Access&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before you can use the models, you need to request access. In the "Model Access" section of the Bedrock service console, click "Manage Model Access", select the models you need access to, and click the "Save Changes" button.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5n0dd92mswgw9ffjj9x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc5n0dd92mswgw9ffjj9x.png" alt="Image description" width="800" height="428"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It can take a couple of minutes to get access to the models. For some models not provided by Amazon, it can take several days or weeks, as access needs to be approved by the provider.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Text Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To test the output of both models, I used the following input text: 'Generate a text for a marketing campaign for a new Shampoo called "Lambda"'. This is the output they generated:&lt;/p&gt;

&lt;p&gt;Lite:&lt;br&gt;
"Introducing Lambda Shampoo, the ultimate solution for hair that looks and feels its best! Say goodbye to dull, lifeless locks and hello to vibrant, healthy-looking hair with Lambda Shampoo. Formulated with a blend of natural ingredients, Lambda Shampoo is designed to nourish, strengthen, and revitalize your hair from root to tip. Experience the power of botanical extracts, vitamins, and proteins that work in harmony to restore your hair's natural balance and restore its natural shine. Lambda Shampoo is suitable for all hair types, including color-treated hair, and is free from harsh chemicals and sulfates that can strip away natural moisture and damage your hair. It's gentle enough for daily use and leaves your hair feeling soft, smooth, and manageable. But that's not all! Lambda Shampoo comes in an attractive and eco-friendly packaging that reflects our commitment to sustainability. We believe in reducing our environmental footprint and doing our part to protect the planet. Join the Lambda Movement and experience the difference for yourself. Visit our website or find our shampoo at your nearest retailer to get started on your hair care journey today! #LambdaShampoo #HairRevitalization #NourishAndStrength #BotanicalExtracts #HealthyHair #SustainablePackaging"&lt;/p&gt;

&lt;p&gt;Express:&lt;br&gt;
"Lambda Shampoo: The Ultimate Hair Transformation Experience Introducing the revolutionary Lambda Shampoo, the game-changer in hair care that will transform your locks into a breathtaking masterpiece. Say goodbye to ordinary hair and hello to extraordinary beauty with Lambda's cutting-edge formula. Unlock the Power of Natural Ingredients: Lambda Shampoo is crafted with a blend of premium, natural ingredients that have been carefully selected to nourish, strengthen, and enhance your hair's natural beauty. Our formula contains a powerful combination of botanical extracts, vitamins, and antioxidants that work synergistically to promote healthy hair growth, improve scalp health, and restore shine and vitality. Revitalize Your Hair from Root to Tip: Lambda Shampoo is designed to revitalize your hair from root to tip. Our deep cleansing formula gently removes impurities, excess oil, and product buildup, leaving your hair feeling clean, refreshed, and revitalized. With regular use, you'll notice a noticeable improvement in hair texture, volume, and overall hair health. Silky Smooth and Frizz-Free: Say goodbye to frizzy, unruly hair and hello to silky, smooth locks. Lambda Shampoo contains nourishing ingredients that help to tame frizz and impart a lustrous shine to your hair. Experience the joy of running your fingers through hair that feels incredibly soft and manageable. Customized Hair Care for Every Need: Lambda Shampoo offers a range of customized hair care solutions to suit your individual needs. Whether you have dry, damaged hair, oily scalp, or color-treated hair, we have a formula that will address your specific concerns and deliver the desired results. Choose from our variety of scents, including refreshing citrus, soothing lavender, and invigorating mint, to add a touch of luxury to your hair care routine. Sustainable and Eco-Friendly: At Lambda, we are committed to sustainability and eco-friendliness. 
Our shampoo bottles are made from recycled materials, and our packaging is designed to minimize waste. By choosing Lambda Shampoo, you're not only taking care of your hair but also contributing to a greener and more sustainable future. Join the Lambda Hair Transformation Movement: Experience the transformative power of Lambda Shampoo and join the thousands of satisfied customers who have already embraced this hair care revolution. Elevate your hair care routine and unlock your hair's true potential with Lambda. Visit our website or your nearest retailer to discover the full range of Lambda Shampoo and experience the difference for yourself. Get ready to turn heads with your stunning, healthy hair. Lambda Shampoo: Where Science Meets Beauty, and Your Hair Becomes a Work of Art."&lt;/p&gt;

&lt;p&gt;Both texts are usable for marketing campaigns. It is worth noting that the Lite text was generated almost immediately, while the Express text took a few seconds. If you don't need to generate long text for these types of tasks, the Lite version should be enough.&lt;br&gt;
I'm glad none of the models returned a hallucination about AWS Lambda!&lt;/p&gt;
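&lt;p&gt;For reference, the same prompt can also be sent programmatically through the Bedrock Runtime API. The sketch below only builds the Titan Text request body (the inputText/textGenerationConfig shape); the parameter values shown are illustrative:&lt;/p&gt;

```python
import json

def titan_request_body(prompt, max_tokens=512, temperature=0.7):
    """Serialize a request body in the shape Titan Text models expect."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

body = titan_request_body(
    'Generate a text for a marketing campaign for a new Shampoo called "Lambda"'
)
# With boto3: bedrock_runtime.invoke_model(modelId="amazon.titan-text-lite-v1", body=body)
```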

&lt;p&gt;&lt;strong&gt;Code Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I tried a few things. First, I asked: "Generate a Python function that receives an array of numbers, and returns the biggest number", which worked pretty well for both models, returning this code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def find_biggest_number(array):
    biggest_number = array[0]
    for number in array:
        if number &amp;gt; biggest_number:
            biggest_number = number
    return biggest_number
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, when I tried to do something a little more complex like "Generate a Python function that receives an array of numbers, sorts it from min to max and returns the biggest number" I got an error saying "Sorry - this model is unable to respond to this request."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj34qks2pea3ruacg9zaf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj34qks2pea3ruacg9zaf.png" alt="Image description" width="800" height="401"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When I tried with the Express version, it generated code (very similar to the previous one) but didn't do the sorting part.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def biggest_number(arr):
    biggest = arr[0]
    for num in arr:
        if num &amp;gt; biggest:
            biggest = num
    return biggest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
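&lt;p&gt;For comparison, here is a version that does both steps (sorting from min to max, then returning the biggest number). This is my own sketch, not output from either Titan model:&lt;/p&gt;

```python
def biggest_number_sorted(arr):
    # Sort a copy from min to max; the last element is then the biggest.
    ordered = sorted(arr)
    return ordered[-1]

print(biggest_number_sorted([3, 1, 4, 1, 5]))  # 5
```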



&lt;p&gt;Both models can generate simple code, but for more complex tasks it is better to rely on other services like CodeWhisperer. I tried the same prompt with other LLMs available on Bedrock, such as Cohere Command and AI21 Jurassic, but they failed to generate a good function as well.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Text generation tests showed that both models produce compelling marketing campaign content, with the Lite version offering a quicker response and the Express version taking a few seconds longer but delivering a more detailed output. Users aiming for shorter text generation tasks may find the Lite model sufficient for their needs.&lt;br&gt;
Regarding code generation, both models handle simple requests effectively. However, limitations became evident with more complex tasks, such as sorting an array of numbers: the Lite model refused to respond, while the Express version generated code but omitted the sorting step. This suggests that specialized services like CodeWhisperer may be more effective for complex coding tasks.&lt;br&gt;
In essence, the choice between Titan Lite and Express depends on the specific requirements of the task at hand. For shorter text generation tasks with budget considerations, the Lite model may be a suitable choice. However, the Express model would be a better choice for more extensive tasks.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS DeepRacer activities in Buenos Aires (2023)</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Sat, 16 Dec 2023 20:50:00 +0000</pubDate>
      <link>https://dev.to/aws-builders/aws-deepracer-activities-in-buenos-aires-2023-o9c</link>
      <guid>https://dev.to/aws-builders/aws-deepracer-activities-in-buenos-aires-2023-o9c</guid>
<description>&lt;p&gt;We were able to run many DeepRacer events in Buenos Aires in 2023, so I wrote this article to describe how things were put together. The bottom line is that I wanted to gain experience in physical racing but needed a track. Many racers face the same problem, and I often get asked how I did it. Finding a university and building a DeepRacer community around it was the solution for me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have been a racer for some time. I started racing in 2020 when DeepRacer announced the F1 contest. I made it to the finals that year, but there was no physical final as re:Invent was virtual because of the pandemic. That was when I also got my first DeepRacer Evo car as a prize for participating. I also joined the AWS Community Builder program and started talking and blogging about DeepRacer.&lt;/p&gt;

&lt;p&gt;The competition in 2021 and 2022 was hard, and I couldn’t make it to the finals again. However, I kept trying. Meanwhile, my DeepRacer car was sitting in my home office, accumulating dust. I wanted to get some practice on a physical track, but there are few AWS Summits in LATAM where a track is available to race on. I had the money to buy a track myself, but I needed to figure out where to place it, as I live in a small apartment in the middle of Buenos Aires.&lt;/p&gt;

&lt;p&gt;Meanwhile, on the other side of the world, Pablo Inchausti (another AWS Community Builder from Buenos Aires) was at AWS re:Invent 2022 and got hooked on DeepRacer when he saw the cars racing on the track. Pablo is also a teacher at UADE, so he thought it would be great for the university to give its students some experience with that technology. We have been working as a team ever since.&lt;/p&gt;

&lt;p&gt;Pablo and I started to talk. We spoke with the university. Daniel Feijó, the director of Computer Science Engineering, quickly saw the potential of this opportunity, and they built the track and bought their own DeepRacer Evo cars. The university also had at least five locations to assemble the track.&lt;/p&gt;

&lt;p&gt;Since then, we have organized different races, and the university has always helped host the races in different places and helped with the logistics around those races (transporting the track to different places, assembling the track, providing facilities for events, etc.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UADE Races (Engineering Projects)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In Computer Science Engineering, students must present a final thesis/project. Three groups of students started working on DeepRacer-related projects:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Autonomous Vehicles Applied to Object Detection in Logistics Centers&lt;/li&gt;
&lt;li&gt;Autonomous Vehicles Applied to Inventory Management in Logistics Centers&lt;/li&gt;
&lt;li&gt;Optimizing Machine Learning Models for Autonomous Vehicles&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The first few times we assembled the track, it was for these students to test their DeepRacer models.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlbwk21vih3rjm0vg2td.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxlbwk21vih3rjm0vg2td.PNG" alt="Image description" width="800" height="746"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nerdearla Technology Conference&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nerdearla is one of the most significant technology events in South America. It is a three-day event; more than 10,000 people attended the conference in person and 27,000 virtually. We gave a virtual workshop before the in-person event so that people could see what DeepRacer was about and then race. We had 20 racers competing on the track, and the best lap time was 18 seconds. If you would like to know more about this race, you can read more in this blog post.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ujf1c77w83d4gafetla.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9ujf1c77w83d4gafetla.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS User Group Buenos Aires Race&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We also hosted a race for the local AWS User Group at UADE Belgrano. We assembled the track under a gazebo in an open park across the street from the university. We had about 18 racers, both from the local community and students. The best lap time was 10.2 seconds. Many neighbors stopped by to see what we were doing. After the race in the park, we had a small reception at the university.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wjp2fc0t5l2wpjf25im.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2wjp2fc0t5l2wpjf25im.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F868clqy4xm741vp4knnx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F868clqy4xm741vp4knnx.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UADE Application Development 2 Subject Race&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During the last semester, I was offered the opportunity to participate as a teacher in one of the UADE courses that Pablo was running. The course is about the different ways applications can integrate. Everyone works on the same project: the classroom is divided into groups; each group works on its own microservice, which then needs to integrate with the other groups' services. The project was about an autonomous food delivery service, so we used DeepRacer to learn how ML models work, and we ran an internal race as well.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1h2b7agdnd97x0f6yjrf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1h2b7agdnd97x0f6yjrf.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UADE’s own use&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;UADE has been using the track at science fairs and taking it to different parts of the country, including high schools, to showcase DeepRacer technology.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwni0np15m93eulsn9e41.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwni0np15m93eulsn9e41.png" alt="Image description" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Notes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;As you can see in this article, we ran several races for different groups. Hosting all these races was fun, and we are starting to build a DeepRacer community in the region, which is rewarding. If you are facing the same problem I had (you want to get your hands on a physical track but don’t have the money or the space), reach out to the universities in your region; one of them might be interested in collaborating with you. It is definitely worth it for both parties.&lt;/p&gt;

&lt;p&gt;We are planning a wide variety of activities next year, from doing DeepRacer activities in high schools and universities to running races at Community and Technology events in the region. I’ll be posting about them on my LinkedIn page.&lt;/p&gt;

&lt;p&gt;If you are in the region, feel free to contact me through my LinkedIn or join the AWS DeepRacer Community on Slack, where you can get in touch and share experiences with me and other DeepRacer community leaders from across the globe.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extra&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27nuyjbt4k6ikdhiku1r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F27nuyjbt4k6ikdhiku1r.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I had the chance to take my kids to the Nerdearla race on one of the three days, and a few days later they built this beautiful representation of the race.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>UADE AWS DeepRacer GrandPrix Buenos Aires</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Tue, 28 Nov 2023 08:27:32 +0000</pubDate>
      <link>https://dev.to/mkreder/uade-aws-deepracer-grandprix-buenos-aires-33gi</link>
      <guid>https://dev.to/mkreder/uade-aws-deepracer-grandprix-buenos-aires-33gi</guid>
<description>&lt;p&gt;On November 11 at 11 a.m., on 11 de Septiembre street (not a joke) at UADE Belgrano, we organized a DeepRacer competition together with UADE and the AWS User Group of Buenos Aires. The competition took place in a square in the Belgrano neighborhood, under a historic gazebo. &lt;/p&gt;

&lt;p&gt;Many people came to run their artificial intelligence models, and &lt;a href="https://www.linkedin.com/in/mbarreneche/" rel="noopener noreferrer"&gt;Manuel Barreneche&lt;/a&gt; won with a time of 10.2 seconds!&lt;/p&gt;

&lt;p&gt;You can see Manuel's model running in the following video:&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/0OG6TK1V9ns"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;Thanks to &lt;a href="https://www.uade.edu.ar" rel="noopener noreferrer"&gt;UADE&lt;/a&gt; for helping with the logistics and for lending us the Belgrano campus so we could gather after the event to talk about the User Group and enjoy some delicious food provided by AWS. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Photos&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk4uhl5ieq3tesea4a9v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyk4uhl5ieq3tesea4a9v.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxjqo16o02dnjuwvoe0j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnxjqo16o02dnjuwvoe0j.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxv64fnooikyyq8fp34e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flxv64fnooikyyq8fp34e.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fica5up171k06nb2gg5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0fica5up171k06nb2gg5.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u0g59nck5wog1jbf4ak.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0u0g59nck5wog1jbf4ak.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flu57u2iwb2evpjcr34k0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flu57u2iwb2evpjcr34k0.png" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9yt6otls80e60chsq96.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9yt6otls80e60chsq96.png" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyw7d8mnksvb2b2i8c5k.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftyw7d8mnksvb2b2i8c5k.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwuxanzdw4uc3egji9v7.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwuxanzdw4uc3egji9v7.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frazz23z8hnab1vsl8pms.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frazz23z8hnab1vsl8pms.jpg" alt="Image description" width="800" height="1066"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tmmuhuunpi6xwyz9eh6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tmmuhuunpi6xwyz9eh6.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3r54eyndlyzg6wxi5dy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3r54eyndlyzg6wxi5dy.jpg" alt="Image description" width="800" height="600"&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbm9xgmb3jem8fltr1yy.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flbm9xgmb3jem8fltr1yy.jpg" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Running DeepRacer with UADE at Nerdearla (Buenos Aires)</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Thu, 26 Oct 2023 15:07:57 +0000</pubDate>
      <link>https://dev.to/aws-builders/deepracer-en-nerdearla-buenos-aires-25b1</link>
      <guid>https://dev.to/aws-builders/deepracer-en-nerdearla-buenos-aires-25b1</guid>
      <description>&lt;p&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nerdear.la" rel="noopener noreferrer"&gt;Nerdearla&lt;/a&gt; es probablemente el evento de tecnología mas grande de Argentina (incluso quizás de Latinoamérica).  Para que se den una idea en números: Participan 200 speakers, 27.000 personas ven el evento por internet y unas 10.000 personas concurren al evento en persona. Vale mencionar que el evento es completamente gratuito desde el día 0. &lt;/p&gt;

&lt;p&gt;This time, &lt;a href="https://uade.edu.ar" rel="noopener noreferrer"&gt;UADE&lt;/a&gt; sponsored the DeepRacer race, bringing its track and vehicles and handling everything related to organizing the race itself. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Race&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before Nerdearla, we gave a workshop on the basics of DeepRacer and reinforcement learning. We wanted people to become familiar with these concepts and create their own models so they could run them on the track during the conference. &lt;br&gt;
The video of the workshop is available on &lt;a href="https://www.youtube.com/watch?v=4VuGWl57xYc" rel="noopener noreferrer"&gt;YouTube&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Due to space constraints, we had to assemble the track in the Konex courtyard. Luckily, it didn't rain on any of the three days! Setting up the track outdoors brought some complications, as sunlight caused reflections on the track. Nevertheless, 20 racers were able to run their models successfully. &lt;/p&gt;

&lt;p&gt;The best time was 18 seconds. With the world record on this track being around 7 seconds, there is still plenty of room for improvement. The good news is that, thanks to UADE, we now have a track available in Buenos Aires to keep practicing and learning ML.&lt;/p&gt;

&lt;p&gt;We will be running more races in November. If you are in the area and want to join, you can follow the &lt;a href="https://www.linkedin.com/company/aws-ug-bsas/" rel="noopener noreferrer"&gt;Buenos Aires User Group&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/mkreder/" rel="noopener noreferrer"&gt;me&lt;/a&gt; on LinkedIn. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdynoq28b0xnx0zssv7rn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdynoq28b0xnx0zssv7rn.png" alt="Foto del campeón de la carrera recibiendo el premio. De Izquiera a Derecha: Tomas Bond (organizador), Matias Kreder (organizador), Luciano Sosa (ganador), Pablo Ezequiel Inchausti (organizador)" width="760" height="783"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Photo of the race champion receiving the prize. From left to right: &lt;a href="https://www.linkedin.com/in/tomas-bond/" rel="noopener noreferrer"&gt;Tomas Bond&lt;/a&gt; (organizer), &lt;a href="https://www.linkedin.com/in/mkreder/" rel="noopener noreferrer"&gt;Matias Kreder&lt;/a&gt; (organizer), &lt;a href="https://www.linkedin.com/in/lucianososadev" rel="noopener noreferrer"&gt;Luciano Sosa&lt;/a&gt; (winner), &lt;a href="https://www.linkedin.com/in/pablo-ezequiel-inchausti/" rel="noopener noreferrer"&gt;Pablo Ezequiel Inchausti&lt;/a&gt; (organizer)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;More Photos&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdt6fngow93oh0ibvjz5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzdt6fngow93oh0ibvjz5.png" alt="DeepRacer corriendo en la pista" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvweqcb35x9f0a1jz5j2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuvweqcb35x9f0a1jz5j2.png" alt="DeepRacer corriendo en la pista" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4hmlrww0gzs1yj77739.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft4hmlrww0gzs1yj77739.png" alt="DeepRacer esperando para arrancar" width="800" height="1066"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm68vw46xg8yb0doyaowe.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm68vw46xg8yb0doyaowe.jpg" alt="DeepRacer corriendo la pista" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcu25z4r0mfa02214c5y.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqcu25z4r0mfa02214c5y.jpg" alt="Gente mirando la carrera" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgz1u5fkiaft3wam85nq.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftgz1u5fkiaft3wam85nq.jpg" alt="Gente consultando sobre la carrera" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to stream video from the DeepRacer camera</title>
      <dc:creator>Matias Kreder</dc:creator>
      <pubDate>Sat, 02 Sep 2023 18:36:16 +0000</pubDate>
      <link>https://dev.to/mkreder/how-to-stream-video-from-the-deepracer-camera-mci</link>
      <guid>https://dev.to/mkreder/how-to-stream-video-from-the-deepracer-camera-mci</guid>
      <description>&lt;p&gt;&lt;strong&gt;Intro&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I have been working on several projects that require access to the DeepRacer camera. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;At UADE University, students needed access to the camera for some assignments.&lt;/li&gt;
&lt;li&gt;At a conference called Nerdearla, the video streaming team wanted access to the car's camera to stream from it. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After hacking around with the car a little and with some hints from David Smith from AWS, I came up with the following process. I'm documenting it here for anyone needing it and a future version of me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backup Configuration Files&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p /home/deepracer/backup
cp /opt/aws/deepracer/lib/device_console/static/bundle.js /home/deepracer/backup/
cp /etc/nginx/sites-enabled/default /home/deepracer/backup/site-config

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Perform Configuration Changes&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sudo sed -i "s/isVideoPlaying\: true/isVideoPlaying\: false/" /opt/aws/deepracer/lib/device_console/static/bundle.js
sudo sed -i "s/auth_request \/auth;/#auth_request \/auth;/" /etc/nginx/sites-enabled/default
sudo systemctl restart nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
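&lt;p&gt;The two sed commands above are plain string substitutions, so they can also be applied from a script. The following is my own sketch; the function name is illustrative, and on the car you would point it at the files shown above and run it with sufficient permissions:&lt;/p&gt;

```python
from pathlib import Path

def toggle_console_flags(bundle_js, nginx_site):
    # Same edits as the sed commands: stop the console's built-in
    # video player and comment out the nginx auth_request check.
    bundle = Path(bundle_js)
    bundle.write_text(bundle.read_text().replace(
        "isVideoPlaying: true", "isVideoPlaying: false"))
    site = Path(nginx_site)
    site.write_text(site.read_text().replace(
        "auth_request /auth;", "#auth_request /auth;"))
```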



&lt;p&gt;&lt;strong&gt;Accessing The Car Camera&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The URL to access the car camera is:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://CAR_IP/route?topic=/camera_pkg/display_mjpeg&amp;amp;width=480&amp;amp;height=360" rel="noopener noreferrer"&gt;https://CAR_IP/route?topic=/camera_pkg/display_mjpeg&amp;amp;width=480&amp;amp;height=360&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please note that these changes disable authentication on the car's ROS backend, so make sure to do this only when you are working on an isolated network you trust.&lt;/p&gt;
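&lt;p&gt;If you script access to several cars, the stream URL can be assembled programmatically. A minimal sketch (the function name and example IP are mine; it assumes the same /route endpoint and query parameters shown above):&lt;/p&gt;

```python
from urllib.parse import urlencode

def camera_stream_url(car_ip, width=480, height=360):
    # Build the MJPEG stream URL served by the car's web console.
    params = urlencode(
        {"topic": "/camera_pkg/display_mjpeg", "width": width, "height": height},
        safe="/",  # keep the slashes in the ROS topic name readable
    )
    return f"https://{car_ip}/route?{params}"

print(camera_stream_url("192.168.1.50"))
```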

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlwawebvvugatf1rvbou.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhlwawebvvugatf1rvbou.png" alt="Image description" width="800" height="568"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
