<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Martin Nanchev</title>
    <description>The latest articles on DEV Community by Martin Nanchev (@martinnanchev).</description>
    <link>https://dev.to/martinnanchev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F927745%2Feec7317e-e02b-497c-9ce9-d8163e766cbc.jpeg</url>
      <title>DEV Community: Martin Nanchev</title>
      <link>https://dev.to/martinnanchev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/martinnanchev"/>
    <language>en</language>
    <item>
      <title>Anarchy, Assembly Lines, and Corporate Hierarchy: Benchmarking Multi-Agent Architectures for Medical Device Data</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Mon, 16 Mar 2026 07:16:10 +0000</pubDate>
      <link>https://dev.to/aws-builders/anarchy-assembly-lines-and-corporate-hierarchy-benchmarking-multi-agent-architectures-for-3kbb</link>
      <guid>https://dev.to/aws-builders/anarchy-assembly-lines-and-corporate-hierarchy-benchmarking-multi-agent-architectures-for-3kbb</guid>
      <description>&lt;p&gt;My AI judge gave the anarchists a perfect score. I disagree.&lt;br&gt;
I built three multi-agent systems to analyze data from my insulin pump — a Medtronic MiniMed 780G — and had an LLM evaluate their output. The cheapest, fastest architecture scored identically to the most expensive one. But when I read the actual reports, the cheap one guessed where the expensive one calculated. The evaluator didn't care. That tension — between automated scores and human judgment — turned out to be the most interesting finding of this experiment.&lt;br&gt;
But let's start from the beginning.&lt;/p&gt;
&lt;h2&gt;
  
  
  A Fair Fight This Time
&lt;/h2&gt;

&lt;p&gt;In my previous &lt;a href="https://dev.to/aws-builders/how-i-cut-my-ai-medical-report-cost-from-18-to-7-and-what-i-learned-comparing-two-multi-agent-3ja6"&gt;blog post&lt;/a&gt;, I compared a swarm architecture with a graph pipeline for analyzing CareLink CSV exports. The problem? I used different models for each, which made the comparison unfair.&lt;br&gt;
This time, every agent runs on the same model: Haiku 4.5 via AWS Bedrock. Same prompts, same tools, same data. The only variable is the orchestration pattern.&lt;br&gt;
A LinkedIn commenter also suggested trying prompt caching to reduce costs. Good idea — let's see which architecture benefits most. It's worth noting that caching can cover tool definitions, prompts, and system prompts with the following configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MODEL_ID&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;REGION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;64000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;cache_config&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;CacheConfig&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;strategy&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;auto&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;cache_tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;default&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;streaming&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For evaluation, I used the Strands evaluator with a rubric-based prompt and Sonnet 4.5 as the judge.&lt;/p&gt;
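&lt;p&gt;To make the rubric idea concrete, here is a minimal sketch of how per-criterion judge scores can be collapsed into one weighted score. The criteria names and weights below are illustrative, not the exact rubric used in this experiment — but they show how a rubric that underweights numerical accuracy can hand the anarchists a high score despite guessed numbers.&lt;/p&gt;

```python
# Hypothetical rubric: the criteria and weights are illustrative,
# not the exact rubric used in the experiment.
RUBRIC_WEIGHTS = {
    "numerical_accuracy": 0.4,   # are TIR/GMI/CV grounded in the data?
    "clinical_relevance": 0.3,   # are the recommendations actionable?
    "completeness": 0.2,         # were all analysis steps covered?
    "safety_framing": 0.1,       # framed as "discuss with your team"?
}

def weighted_score(judge_scores):
    """Collapse per-criterion judge scores (0-10) into one weighted score."""
    total = sum(
        RUBRIC_WEIGHTS[criterion] * score
        for criterion, score in judge_scores.items()
    )
    return round(total, 2)

# A report with sharp clinical insight but guessed numbers still
# lands a respectable score under these weights.
print(weighted_score({
    "numerical_accuracy": 4,
    "clinical_relevance": 10,
    "completeness": 8,
    "safety_framing": 10,
}))
```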

&lt;p&gt;Every architecture uses the same four agents. Think of them as workers in a factory — the question is how the factory is organized. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CSV Reader&lt;/strong&gt; — Parses the CareLink CSV export. Returns raw structured data, no interpretation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Analyst&lt;/strong&gt; — Crunches the numbers: glucose statistics, Time in Range, GMI, coefficient of variation, insulin totals, carb intake.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern Reviewer&lt;/strong&gt; — Reads the metrics and timestamps to spot clinically meaningful patterns: dawn phenomenon, post-meal spikes, overnight trends, hypo/hyper clustering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Endocrinologist&lt;/strong&gt; — Synthesizes everything into pump optimization suggestions, framed as discussion topics for a healthcare professional.&lt;br&gt;
I defined the agents using a factory pattern so every architecture gets identical copies:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;PROMPTS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a CSV Parser Agent specializing in Medtronic MiniMed 780G CareLink data exports.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Parse the raw CSV and return the extracted data EXACTLY as the tool outputs it.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Do NOT summarize, interpret, compute statistics, or analyze the data.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Do NOT provide clinical recommendations.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Return only the raw parsed output for other agents to process.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data_analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a Data Analyst Agent specializing in CGM and insulin pump metrics.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Compute statistical analysis on the diabetes pump data provided.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Calculate glucose statistics (mean, median, SD, min, max), Time in Range (TIR),&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GMI (estimated A1C), coefficient of variation (CV%), total daily insulin,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;insulin-to-carb ratios, correction bolus frequency, and average daily carbs.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use the SG_VALUES data from the parsed CSV to run your statistical tools.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;If the SG_VALUES are not directly available in your input, use read_carelink_csv&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;to extract them from the CSV file, then run calculate_statistics and time_in_range.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Present findings as structured metrics with numbers — no clinical recommendations.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a Pattern Recognition Agent specializing in diabetes data interpretation.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Identify clinically significant patterns and anomalies from the data provided.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Look for dawn phenomenon, post-meal spikes, overnight trends, and glucose variability.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;If you have access to the CSV file path, use read_carelink_csv to extract timestamped&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data, then use hourly_glucose_profile to compute hourly patterns.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Flag recurring hypoglycemia with timing, severity, and clustering.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Flag prolonged hyperglycemia with duration and potential triggers.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Note Auto Mode exits, insulin suspensions, and sensor issues from alert data.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Compare metrics against ADA/EASD consensus targets.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Note positive trends and areas of good control.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Focus on pattern identification — do not suggest treatment changes.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are an Endocrinologist Agent specializing in insulin pump therapy optimization.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Provide clinical interpretation and actionable recommendations based on the&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;patterns and metrics from previous analysis.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Suggest potential pump setting adjustments: Active Insulin Time, carb ratios,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Auto Mode target glucose (5.5 vs 6.7 mmol/L / 100 vs 120 mg/dL), and bolus timing.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use the Fiasp insulin pharmacokinetics profile: onset ~15min, peak ~1-2h, duration ~3-5h.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Frame everything as &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;discuss with your healthcare team&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; — not direct medical advice.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Provide a clear, prioritized summary highlighting the most impactful improvements.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Acknowledge what is working well alongside areas that need attention.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;make_agents&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Create a fresh set of specialist agents.

    Tool assignment strategy:
      - csv_reader: read_carelink_csv only (parse the file)
      - data_analyst: read_carelink_csv + calculate_statistics + time_in_range
        (needs CSV access because Graph/Swarm may pass summaries instead of raw values)
      - pattern_reviewer: read_carelink_csv + hourly_glucose_profile
        (needs CSV access for the same reason — LLMs summarize upstream output)
      - endocrinologist: no tools (pure synthesis from previous agents&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; output)

    This ensures every agent can do its job regardless of orchestration pattern.
    In the Graph, upstream nodes may summarize data. In the Swarm, agents may skip steps.
    In the Coordinator, the LLM decides what to pass. Giving data-processing agents
    direct file access makes them resilient to all three patterns.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PROMPTS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;conversation_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;make_conversation_manager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;window_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;per_turn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;read_carelink_csv&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data_analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PROMPTS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data_analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data_analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;conversation_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;make_conversation_manager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;window_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;per_turn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;read_carelink_csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;calculate_statistics&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;time_in_range&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PROMPTS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;conversation_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;make_conversation_manager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;window_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;per_turn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;read_carelink_csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;hourly_glucose_profile&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PROMPTS&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;conversation_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;make_conversation_manager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;window_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;20&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;
        &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Shared constants across all runs :)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;CSV_PATH&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_carelink.csv&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;MODEL_ID&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;global.anthropic.claude-haiku-4-5-20251001-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;REGION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;PROMPT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyze the Medtronic MiniMed 780G CareLink CSV export at &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;CSV_PATH&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Parse the data, compute glucose and insulin metrics, identify patterns, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;and provide clinical interpretation with actionable recommendations.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now let's try out the three political systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Commune: Swarm
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcol6tyevoma5p7sf4xdu.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcol6tyevoma5p7sf4xdu.jpg" alt="Swarm" width="380" height="286"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The swarm is anarchy by design. No central authority, no predefined order. Agents self-organize, decide when to hand off work, and collectively arrive at an answer. Think of a commune where everyone has a specialty but nobody has a boss — the CSV reader finishes and announces "data's ready," and whoever feels qualified picks it up next.&lt;br&gt;
The promise: emergent intelligence. The reality: emergent shortcuts. The swarm ran only two nodes — csv_reader → endocrinologist — skipping the data analyst and pattern reviewer entirely. Without anyone enforcing the pipeline, the endocrinologist never got computed statistics. It estimated Time in Range as "likely &amp;lt;70%" when the actual value was 92.3%. No hourly glucose profiles were generated, no hypoglycemia clustering by timestamp.&lt;br&gt;
And yet — the clinical insights were sharp. The swarm correctly identified carb miscounting, flagged insulin timing issues, and produced a well-prioritized five-tier recommendation system. The anarchists were sloppy with numbers but wise in judgment.&lt;br&gt;
I took inspiration from the Strands agents samples repo for structuring the swarm roles, though I deliberately kept my instructions loose. Anarchy, after all.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/strands-agents/samples/blob/main/01-tutorials/02-multi-agent-systems/02-swarm-agent/swarm.ipynb" rel="noopener noreferrer"&gt;Strands Agents samples repo (swarm tutorial)&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;make_agents&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;swarm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Swarm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data_analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]],&lt;/span&gt;
    &lt;span class="n"&gt;entry_point&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;max_handoffs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_iterations&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;execution_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;600.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;node_timeout&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;180.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;repetitive_handoff_detection_window&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;repetitive_handoff_min_unique_agents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;swarm_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PROMPT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm_elapsed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;

&lt;span class="n"&gt;swarm_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;swarm_result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm_metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;extract_metrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;swarm_result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;swarm_metrics&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wall_time_s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;swarm_elapsed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;swarm_output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print_metrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;swarm_elapsed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;swarm_metrics&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;SWARM&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;swarm&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;swarm_output&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metrics&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;swarm_metrics&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Assembly line: Graph Pipeline
&lt;/h2&gt;

&lt;p&gt;The graph pipeline is the Soviet factory model — a rigid, sequential process where each station does exactly one job and passes the product forward. No loops, no improvisation. Parse → Analyse → Review → Recommend, in that order, every time.&lt;br&gt;
What the assembly line lacks in flexibility it makes up for in thoroughness. Because each agent receives the full output of the previous one, nothing gets lost. The analyst computed exact TIR (92.3%), precise CV% (23.9%), and correct GMI (6.5%). The pattern reviewer clustered hypoglycemia events by timestamp. Every number was grounded in the actual data.&lt;br&gt;
The downside? Every agent reprocesses everything upstream, so the four stages burn roughly 4x the tokens of a single agent. And the rigid structure means you can't skip steps or parallelize work. The factory runs at the speed of its slowest station.&lt;/p&gt;
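&lt;p&gt;A back-of-the-envelope way to see why the cost grows: if every stage re-reads the full transcript of the stages before it, input tokens grow triangularly with pipeline depth. The function and token counts below are illustrative, not measured from the benchmark.&lt;/p&gt;

```python
# Rough cost model for a linear pipeline where agent k re-reads the
# outputs of all k-1 agents before it. Numbers are hypothetical.
def pipeline_input_tokens(n_agents: int, tokens_per_stage: int) -> int:
    """Total input tokens across the pipeline: stage k re-reads k prior stages."""
    return sum(k * tokens_per_stage for k in range(1, n_agents))

per_stage = 4_000  # hypothetical output size of one agent
# 4 stages: stage 1 reads 0, stage 2 reads 1, ... -> (1+2+3) * 4_000
print(pipeline_input_tokens(4, per_stage))  # 24000
```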

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl631prkzeyayehxga4cx.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl631prkzeyayehxga4cx.jpg" alt="Graph pipeline" width="738" height="76"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;make_agents&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;builder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GraphBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;parse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data_analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;review&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recommend&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;parse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;review&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;review&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;recommend&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_entry_point&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;parse&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;graph_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PROMPT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph_elapsed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;


&lt;span class="n"&gt;graph_output_parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;node_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;node_result&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;graph_result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;results&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;items&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;text&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node_result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;hasattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node_result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;result&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;node_result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;--- &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;node_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; ---&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;graph_output_parts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;node_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;]&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;graph_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;graph_output_parts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph_metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;extract_metrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;graph_result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;graph_metrics&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wall_time_s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;graph_elapsed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print_metrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;graph_elapsed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;graph_metrics&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;GRAPH&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;graph&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;graph_output&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metrics&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;graph_metrics&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Corporate hierarchy: Coordinator
&lt;/h2&gt;

&lt;p&gt;The coordinator is the capitalist org chart. One manager, four direct reports. The manager decides who works on what, in what order, and synthesizes the final deliverable. The specialists are demoted to tools — they don't talk to each other, they report up.&lt;br&gt;
Caching works differently across architectures. Both the swarm and coordinator benefit from prompt caching — the swarm's csv_reader cached 73K tokens across its internal conversation cycles, and the coordinator cached 57K tokens across its four sequential tool calls. The graph creates fresh agent contexts per node, so it gets zero cache hits. My extract_metrics function reported the swarm's cache as 0 at the top level, but the nested AgentInvocation metrics tell the real story.&lt;br&gt;
The result: the coordinator combined the statistical precision of the graph with the clinical nuance of the swarm. It added measurable targets (e.g., "post-lunch glucose &amp;lt;9.0 mmol/L") and flagged a battery failure event as an "unacceptable safety risk." The graph also caught the battery failure in its pattern review, but the swarm — having skipped the pattern reviewer — missed it entirely.&lt;br&gt;
An interesting self-correction happened inside the coordinator: the pattern reviewer estimated TIR as "~70-75%" (wrong), but the data analyst computed 92.3% (correct), and the final synthesized report used the right numbers. The corporate hierarchy's redundancy — multiple specialists touching the same data — caught errors that a single-pass architecture would miss.&lt;/p&gt;
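&lt;p&gt;A sketch of how the nested cache numbers can be surfaced when the top-level counter reads zero. The dictionary shape and key names here are stand-ins for whatever your extract_metrics emits, not the actual Strands schema.&lt;/p&gt;

```python
# Recursively sum cache_read_tokens through nested per-agent metrics,
# so cache hits aren't hidden behind a zero at the top level.
# Key names ("cache_read_tokens", "agents") are illustrative assumptions.
def total_cache_reads(metrics: dict) -> int:
    total = metrics.get("cache_read_tokens", 0)
    for sub_metrics in metrics.get("agents", {}).values():
        total += total_cache_reads(sub_metrics)
    return total

swarm_like = {
    "cache_read_tokens": 0,  # top level reports nothing...
    "agents": {"csv_reader": {"cache_read_tokens": 73_000}},  # ...but the nested agent cached plenty
}
print(total_cache_reads(swarm_like))  # 73000
```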

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvclpv7q6ksoerm3n7e24.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvclpv7q6ksoerm3n7e24.jpg" alt="Coordinator" width="520" height="236"&gt;&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;make_agents&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Wrap each specialist as a tool for the coordinator
&lt;/span&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;parse_csv&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Parse a Medtronic CareLink CSV export and extract raw diabetes data.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;csv_reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Parse the CareLink CSV at &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;file_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;analyse_data&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;parsed_data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Compute glucose and insulin statistics on parsed CareLink data.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;data_analyst&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="n"&gt;parsed_data&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;review_patterns&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Identify clinically significant patterns and anomalies in diabetes metrics.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern_reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="n"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;clinical_assessment&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;patterns&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Provide endocrinology clinical interpretation and pump setting recommendations.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;agents&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="n"&gt;patterns&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

&lt;span class="n"&gt;coordinator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a coordinator analyzing Medtronic MiniMed 780G insulin pump data.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You have access to four specialist tools:&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  - parse_csv: extracts raw structured data from the CareLink CSV&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  - analyse_data: computes glucose statistics, TIR, GMI, CV%&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  - review_patterns: identifies glucose patterns and anomalies&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  - clinical_assessment: provides endocrinology interpretation&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Call them in order: parse_csv first, then analyse_data with the parsed output,&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;then review_patterns with the analysis, then clinical_assessment with the patterns.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;After all specialists have contributed, synthesize their outputs into a final report.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;coordinator&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;bedrock_model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;conversation_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nf"&gt;make_conversation_manager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;window_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;25&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;per_turn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;parse_csv&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;analyse_data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;review_patterns&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;clinical_assessment&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;start&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;coordinator_result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;coordinator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;PROMPT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;coordinator_elapsed&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;start&lt;/span&gt;

&lt;span class="n"&gt;coordinator_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coordinator_result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;coordinator_metrics&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;extract_metrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coordinator_result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;coordinator_metrics&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wall_time_s&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coordinator_elapsed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coordinator_output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print_metrics&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;coordinator_elapsed&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;coordinator_metrics&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;COORDINATOR&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;coordinator&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;coordinator_output&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;metrics&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;coordinator_metrics&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The Judge: LLM-as-Evaluator
&lt;/h2&gt;

&lt;p&gt;To score each architecture's output, I used the Strands evaluator framework with Sonnet 4.5 as the judging model. The rubric scores five criteria, each weighted equally at 20%:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Completeness&lt;/strong&gt; — Are glucose, insulin, carbs, and device status all covered?&lt;br&gt;
&lt;strong&gt;Statistical Accuracy&lt;/strong&gt; — Are TIR, GMI, CV%, and glucose stats correctly calculated and compared to clinical targets?&lt;br&gt;
&lt;strong&gt;Pattern Identification&lt;/strong&gt; — Are meaningful patterns (dawn phenomenon, post-meal spikes, hypo clustering) identified with specifics?&lt;br&gt;
&lt;strong&gt;Clinical Recommendations&lt;/strong&gt; — Are pump optimization suggestions specific, balanced, and actionable?&lt;br&gt;
&lt;strong&gt;Safety &amp;amp; Framing&lt;/strong&gt; — Are severe hypos flagged, and is the report framed as informational rather than medical advice?&lt;br&gt;
In code, this prompt becomes the rubric:&lt;br&gt;
&lt;/p&gt;
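&lt;p&gt;Because all five weights are 0.20, the judge's overall score reduces to the arithmetic mean of the criterion scores, which is part of why very different reports can land on identical totals. The per-criterion scores below are made up for illustration.&lt;/p&gt;

```python
# Weighted average with equal 0.20 weights == plain mean of the five criteria.
criteria = {  # hypothetical scores from the judge
    "completeness": 1.0, "accuracy": 0.9, "patterns": 1.0,
    "recommendations": 0.9, "safety": 1.0,
}
weights = {name: 0.20 for name in criteria}
overall = sum(weights[name] * score for name, score in criteria.items())
print(round(overall, 2))  # 0.96
```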

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands_evals&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Case&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Experiment&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands_evals.evaluators&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OutputEvaluator&lt;/span&gt;

&lt;span class="n"&gt;RUBRIC&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
You are an expert endocrinologist and diabetes data analyst evaluating
AI-generated reports on Medtronic MiniMed 780G insulin pump data.

Score each criterion from 0.0 to 1.0:

1. DATA COMPLETENESS (weight: 0.20)
   - Were Sensor Glucose values extracted and summarized?
   - Were insulin delivery records (basal, bolus, auto-correction) covered?
   - Were carbohydrate entries mentioned?
   - Were device settings or Auto Mode status addressed?
   Score 1.0 if all four are present. Deduct 0.25 per missing category.

2. STATISTICAL ACCURACY (weight: 0.20)
   - Were Time in Range (TIR) percentages reported?
   - Were glucose statistics (mean, median, SD, CV%) included?
   - Was GMI / estimated A1C calculated?
   - Were values compared against ADA/EASD consensus targets?
   Score 1.0 if all metrics present and correctly interpreted.

3. PATTERN IDENTIFICATION (weight: 0.20)
   - Were hypoglycemia patterns identified (timing, severity, frequency)?
   - Were post-meal spikes addressed?
   - Was dawn phenomenon or overnight trends discussed?
   - Were Auto Mode exits or insulin suspensions noted?
   Score 1.0 if clinically significant patterns are identified with specifics.

4. CLINICAL RECOMMENDATIONS (weight: 0.20)
   - Were pump setting adjustments suggested (AIT, carb ratios, target)?
   - Were recommendations specific and actionable (not generic)?
   - Were both problem areas and positive aspects acknowledged?
   - Was pre-bolusing timing discussed if relevant?
   Score 1.0 if recommendations are specific, balanced, and actionable.

5. SAFETY &amp;amp; FRAMING (weight: 0.20)
   - Were severe hypos (&amp;lt;54 mg/dL) flagged as safety concerns?
   - Was the report framed as informational, not medical advice?
   - Was the patient directed to discuss changes with their healthcare team?
   - Was Fiasp pharmacokinetics referenced appropriately if relevant?
   Score 1.0 if safety concerns are prominently flagged and framing is appropriate.

OVERALL SCORE: Weighted average of all five criteria.
Provide the overall score and a brief explanation for each criterion.
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="n"&gt;evaluator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OutputEvaluator&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;rubric&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;RUBRIC&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;include_inputs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;global.anthropic.claude-sonnet-4-5-20250929-v1:0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Build cases — one per pattern
&lt;/span&gt;&lt;span class="n"&gt;pattern_names&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;keys&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
&lt;span class="n"&gt;cases&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;pattern_name&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;pattern_names&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;pattern_name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="n"&gt;cases&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;Case&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;
            &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;pattern_name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="nb"&gt;input&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;PROMPT&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;expected_output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
            &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pattern&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;pattern_name&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;task_fn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;case&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Case&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Return the pre-computed output for evaluation.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;case&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;expected_output&lt;/span&gt;

&lt;span class="n"&gt;experiment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Experiment&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;](&lt;/span&gt;&lt;span class="n"&gt;cases&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;cases&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;evaluators&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;evaluator&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;span class="n"&gt;reports&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;experiment&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run_evaluations&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;task_fn&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# reports is a list per evaluator — we have 1 evaluator
# report.scores is a list per case, report.reasons is a list per case
&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;reports&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Overall score across all patterns: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;overall_score&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pattern_name&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;pattern_names&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;scores&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="n"&gt;reason&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reasons&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;idx&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;idx&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;report&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;reasons&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
    &lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;pattern_name&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;eval_score&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt;
    &lt;span class="n"&gt;benchmark&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;pattern_name&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;eval_reasoning&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;reason&lt;/span&gt;
    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;pattern_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;score&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Score&lt;/th&gt;
&lt;th&gt;Cost&lt;/th&gt;
&lt;th&gt;Time&lt;/th&gt;
&lt;th&gt;Cache&lt;/th&gt;
&lt;th&gt;Verdict&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Swarm anarchist&lt;/td&gt;
&lt;td&gt;1.00&lt;/td&gt;
&lt;td&gt;$0.057&lt;/td&gt;
&lt;td&gt;56s&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;Anarchists on an assembly line don't work&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Graph&lt;/td&gt;
&lt;td&gt;0.98&lt;/td&gt;
&lt;td&gt;$0.315&lt;/td&gt;
&lt;td&gt;337s&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;Expensive perfectionist&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coordinator&lt;/td&gt;
&lt;td&gt;1.00&lt;/td&gt;
&lt;td&gt;$0.071&lt;/td&gt;
&lt;td&gt;395s&lt;/td&gt;
&lt;td&gt;57K&lt;/td&gt;
&lt;td&gt;Slow but right&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Swarm cache&lt;/strong&gt; hits were internal to the csv_reader's conversation cycles — extract_metrics reported 0 at the top level, but nested AgentInvocation metrics show 73K tokens read from cache.&lt;br&gt;
The anarchists (swarm) and the corporate manager both scored 1.00. The factory scored lowest. And yet — the factory and coordinator both computed Time in Range (92.3%) from raw data. The swarm wrote "likely &amp;lt;70%" — off by 22 percentage points — and the judge didn't blink.&lt;br&gt;
The swarm passed the rubric anyway. That's the headline.&lt;br&gt;
The swarm skipped two agents entirely — csv_reader handed off straight to the endocrinologist, bypassing the data analyst and pattern reviewer. The endocrinologist improvised statistics from alert counts and data summaries instead of computing them from raw glucose values. A swarm may be the better fit for creative, non-deterministic work; this was a process that required agents to talk to each other in a specific order. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The graph&lt;/strong&gt; was rigorous but couldn't surface trade-offs the upstream agent didn't frame. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The coordinator&lt;/strong&gt; had an interesting self-correction: its pattern reviewer also estimated TIR wrong (~70-75%), but the data analyst computed it correctly, and the final synthesis used the right numbers. Redundancy caught what a single pass missed.&lt;/p&gt;

&lt;p&gt;Both the swarm and coordinator got cache hits. The graph paid full price for every node — fresh agent context each time, zero caching.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pick Your Politics
&lt;/h2&gt;

&lt;p&gt;Swarm — cheap and fast, but don't trust the numbers if an agent gets skipped or the handoffs aren't explicit, especially for deterministic use cases. Graph — when precision justifies 5× the cost. Coordinator — the default.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Takeaway
&lt;/h2&gt;

&lt;p&gt;My rubric checked whether metrics appeared, not how they were derived. That's where the rubric needs more work. Add a provenance criterion — can each claim trace to a tool call? — and the swarm drops where it belongs.&lt;/p&gt;
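&lt;p&gt;A provenance criterion does not need to be elaborate to start with. As a minimal sketch (the function and its substring matching are my own, far cruder than real claim-to-tool-call tracing), flag every number in the final report that never appeared in any tool output:&lt;/p&gt;

```python
import re

def unsupported_numbers(report_text, tool_outputs):
    """Return numeric claims in the report that no tool output contains.

    Naive sketch: substring matching over concatenated tool results.
    """
    evidence = " ".join(tool_outputs)
    claims = set(re.findall(r"\d+(?:\.\d+)?", report_text))
    return sorted(c for c in claims if c not in evidence)

# The swarm's guessed TIR gets flagged; a computed value would not.
print(unsupported_numbers("TIR likely 70 percent", ["TIR computed: 92.3"]))  # prints ['70']
```

&lt;p&gt;Wire a check like this into the rubric as a sixth weighted criterion and a report that invents its statistics can no longer score 1.00.&lt;/p&gt;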

</description>
      <category>awscommunitybuilders</category>
      <category>aws</category>
      <category>strandsagents</category>
      <category>ai</category>
    </item>
    <item>
      <title>Optimizing Multi-Agent Costs on Bedrock: From ~$18 to ~$7 per Diabetes Report Run (Graph vs Swarm Comparison)</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Thu, 12 Mar 2026 17:38:51 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-i-cut-my-ai-medical-report-cost-from-18-to-7-and-what-i-learned-comparing-two-multi-agent-3ja6</link>
      <guid>https://dev.to/aws-builders/how-i-cut-my-ai-medical-report-cost-from-18-to-7-and-what-i-learned-comparing-two-multi-agent-3ja6</guid>
      <description>&lt;h2&gt;
  
  
  The Problem With 5,000 Rows of Blood Sugar Data
&lt;/h2&gt;

&lt;p&gt;I've been living with Type 1 diabetes for over 17 years. My mother had it too, along with some of its complications. The disease hasn't changed much — but the tech around it has.&lt;/p&gt;

&lt;p&gt;I use a MiniMed 780G insulin pump with a Guardian 4 CGM sensor running in SmartGuard auto mode. Every 14 days it produces an export: a ~5,000-row CSV of pump events and CGM readings, plus a PDF summary. &lt;/p&gt;

&lt;p&gt;I wanted something better — a structured clinical summary that makes sense of the patterns and is actually useful to medical staff and patients alike. And because I'm a DevOps engineer who can't resist over-engineering things, I decided to benchmark two multi-agent architectures against each other:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Graph&lt;/strong&gt; (sequential pipeline)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Swarm&lt;/strong&gt; (autonomous handoffs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article covers Graph and Swarm. &lt;/p&gt;




&lt;h2&gt;
  
  
  Architecture 1: The Graph Pipeline
&lt;/h2&gt;

&lt;p&gt;Four agents, one after another. Each does its job and passes results to the next:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reader&lt;/strong&gt; — Ingests the raw CareLink CSV. Extracts CGM glucose readings, insulin delivery (basal/bolus), timestamps, sensor metadata. Flags data quality issues like gaps and sensor warmup periods.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Analyser&lt;/strong&gt; — Crunches the numbers with Python: Time in Range (TIR), GMI, CV%, time-block patterns. Cross-validates against the PDF report.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reviewer&lt;/strong&gt; — The sceptic. Checks the analysis for statistical validity, flags confounders like compression lows and sensor first-day artefacts, separates validated findings from questionable ones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Endocrinologist&lt;/strong&gt; — Takes everything and writes a clinical consultation report with pump setting recommendations and discussion points for the next endo visit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdm6mmoed9kdbdooqsqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbdm6mmoed9kdbdooqsqx.png" alt="Graph architecture" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;
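&lt;p&gt;The formulas behind the analyser's headline numbers are standard CGM arithmetic. A rough sketch (the function name and layout are mine, not the pipeline's code; TIR counts readings between 3.9 and 10.0 mmol/L, and GMI uses the published linear approximation from mean glucose in mg/dL):&lt;/p&gt;

```python
from statistics import mean, stdev

def glucose_metrics(readings_mmol):
    """Standard CGM summary metrics from glucose readings in mmol/L.

    Illustrative sketch, not the analyser agent's actual code.
    """
    n = len(readings_mmol)
    mg_dl = [g * 18.016 for g in readings_mmol]          # mmol/L to mg/dL
    avg = mean(mg_dl)
    above = sum(1 for g in readings_mmol if g > 10.0)    # hyperglycaemia
    below = sum(1 for g in readings_mmol if 3.9 > g)     # hypoglycaemia
    return {
        "tir": 100.0 * (n - above - below) / n,          # Time in Range, %
        "gmi": 3.31 + 0.02392 * avg,                     # Glucose Management Indicator, %
        "cv": 100.0 * stdev(mg_dl) / avg,                # coefficient of variation, %
    }
```

&lt;p&gt;Once the readings column is parsed out of the CareLink CSV, the whole two-week summary is one function call — which is why the analyser gets python_repl as a tool.&lt;/p&gt;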

&lt;h3&gt;
  
  
  The Code
&lt;/h3&gt;

&lt;p&gt;Setup — imports, PDF tool, and model config&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;logging&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pypdf&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PdfReader&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BedrockModel&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.multiagent&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;GraphBuilder&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands_tools&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;python_repl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;calculator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;file_read&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.agent.conversation_manager&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SlidingWindowConversationManager&lt;/span&gt;

&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;BYPASS_TOOL_CONSENT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;read_pdf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;gt&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Reads the content of a pdf file and returns it as a string list&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;PdfReader&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;texts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;page&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pages&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;texts&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;page&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;extract_text&lt;/span&gt;&lt;span class="p"&gt;())&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;texts&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Error reading file &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;filename&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;BedrockModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;global.anthropic.claude-opus-4-6&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;region_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;us-east-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Agent definitions — four specialists, each with a clear handoff&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You extract and structure diabetes management data from files 
    in the current directory. Files follow the pattern &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;X DD-MM-YYYY.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; and 
    &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;X DD-MM-YYYY.pdf&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;.

    Extract: CGM glucose readings, insulin delivery (basal/bolus), timestamps, 
    sensor events. Flag any data quality issues (gaps, sensor warmup, anomalies).

    HANDOFF: Summarize date range, data completeness %, readings extracted.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;file_read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;read_pdf&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;analyser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You perform quantitative analysis on structured diabetes data.

    Calculate: TIR (3.9-10.0), hypo/hyper percentages, GMI, CV%, average glucose, 
    basal/bolus ratio, patterns by time block (overnight/morning/afternoon/evening).

    HANDOFF: All computed metrics, identified patterns with confidence levels, 
    data quality caveats.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;calculator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;python_repl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;file_read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;read_pdf&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;reviewer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You critically review diabetes data analysis.

    Assess: data sufficiency, pattern validity, confounders (weekend vs weekday, 
    sensor first-day inaccuracy, compression lows), risk prioritization.

    HANDOFF: Validated findings, disputed findings, risk-prioritized concerns.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;endocrinologist&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;You are a virtual endocrinology consultant. The patient uses 
    a MiniMed 780G with Guardian 4 CGM in SmartGuard auto mode.

    Produce: executive summary, wins, priority concerns, actionable recommendations 
    (active insulin time, carb ratios, targets), discussion points for next visit.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Wiring the graph — four nodes, three edges, linear flow&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;builder&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;GraphBuilder&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reader&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;analyser&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;reviewer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_node&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;endocrinologist&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_edge&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reviewer&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;endocrinologist&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;set_entry_point&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;graph&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;builder&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;graph&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Analyze the diabetes data files and produce a clinical report.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  What Came Out
&lt;/h3&gt;

&lt;p&gt;Here's the executive summary the system produced, condensed:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Excellent Type 1 diabetes control — 92.5% Time in Range, GMI 6.4%, low glucose variability — placing among the top 5–10% of international T1D outcomes. The MiniMed 780G is performing particularly well overnight and fasting. Main safety concern: three severe hypoglycaemic episodes (&amp;lt;3.0 mmol/L) within 14 days. Secondary optimisation: afternoon post-meal hyperglycaemia (14:00–17:00). Recommended approach: reduce hypo risk first (adjust active insulin time and glucose target), then address lunch-related spikes through earlier pre-bolusing and improved carb counting.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Clinically useful? I think so.&lt;/p&gt;

&lt;p&gt;What impressed me most was how deep the review went. The reviewer noticed that the reader’s rough guess of TIR (55–65%) was actually way off — when they calculated it from all 3,716 data points, it was 92.5%. They also spotted things the analyzer completely missed, like differences between weekends and weekdays, first-day sensor issues, and compression lows. And the endocrinologist summed it up perfectly by saying it’s about “fine-tuning, not an overhaul” — which is exactly how you’d approach someone who’s already at 92% TIR.&lt;/p&gt;

&lt;h3&gt;
  
  
  The $18 Problem
&lt;/h3&gt;

&lt;p&gt;Here's where it gets uncomfortable. The unoptimised graph: 18 minutes, ~3.3 million tokens:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node&lt;/th&gt;
&lt;th&gt;Input Tokens&lt;/th&gt;
&lt;th&gt;Output Tokens&lt;/th&gt;
&lt;th&gt;Total Cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;reader&lt;/td&gt;
&lt;td&gt;1,623,805&lt;/td&gt;
&lt;td&gt;19,252&lt;/td&gt;
&lt;td&gt;$8.60&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;analyser&lt;/td&gt;
&lt;td&gt;1,667,440&lt;/td&gt;
&lt;td&gt;40,461&lt;/td&gt;
&lt;td&gt;$9.35&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;reviewer&lt;/td&gt;
&lt;td&gt;5,810&lt;/td&gt;
&lt;td&gt;7,780&lt;/td&gt;
&lt;td&gt;$0.22&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;endocrinologist&lt;/td&gt;
&lt;td&gt;~3,331&lt;/td&gt;
&lt;td&gt;~4,242&lt;/td&gt;
&lt;td&gt;~$0.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~3.3M&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~67K&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$18.42&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;$18.42 for a single report. The reader and analyser are the culprits — they're passing the entire conversation history (including all tool call results) as context to each subsequent model invocation. The 5,000-row CSV gets re-read and re-sent multiple times.&lt;/p&gt;

&lt;p&gt;My agents were going through tokens like people go through rakia (a fruit brandy) at a village wedding. &lt;/p&gt;
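&lt;p&gt;To see why the bill snowballs, here's a back-of-the-envelope model (the per-turn token figure is an assumption for illustration): when history is never trimmed, every model call re-sends everything so far, so total input tokens grow roughly quadratically with the number of tool calls.&lt;/p&gt;

```python
# Rough model of a full-history agent: each model call re-sends the
# entire conversation, so input tokens grow ~quadratically with turns.
# The 50K tokens/turn figure is an assumption for illustration.

def total_input_tokens(turns, tokens_added_per_turn):
    """Total tokens sent across all calls when history is never trimmed."""
    context = 0
    total = 0
    for _ in range(turns):
        context += tokens_added_per_turn  # new tool result joins the history
        total += context                  # the whole history is re-sent
    return total

print(total_input_tokens(20, 50_000))  # 10500000 -- 20 tool calls, 10.5M input tokens
```

A sliding window caps the context at a fixed size, which turns the quadratic term back into a linear one.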

&lt;h3&gt;
  
  
  Cutting It With a Sliding Window
&lt;/h3&gt;

&lt;p&gt;One-liner fix per agent. Strands has a &lt;code&gt;SlidingWindowConversationManager&lt;/code&gt; that caps how much conversation history gets sent to the model:&lt;/p&gt;

&lt;p&gt;Sliding window configuration — add to reader and analyser&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;reader&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reader&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# same as before
&lt;/span&gt;    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;file_read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;read_pdf&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;conversation_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SlidingWindowConversationManager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;window_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;should_truncate_results&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;per_turn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;analyser&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# same as before
&lt;/span&gt;    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;calculator&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;python_repl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;file_read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;read_pdf&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;conversation_manager&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;SlidingWindowConversationManager&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;window_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;should_truncate_results&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;per_turn&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;The reviewer and endocrinologist don't need it — their input is already small.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Token reduction by node:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node&lt;/th&gt;
&lt;th&gt;Previous (no manager)&lt;/th&gt;
&lt;th&gt;With Sliding Window&lt;/th&gt;
&lt;th&gt;Savings&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;reader&lt;/td&gt;
&lt;td&gt;1,623,805 input&lt;/td&gt;
&lt;td&gt;1,054,919 input&lt;/td&gt;
&lt;td&gt;-35%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;analyser&lt;/td&gt;
&lt;td&gt;1,667,440 input&lt;/td&gt;
&lt;td&gt;135,770 input&lt;/td&gt;
&lt;td&gt;-92%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;reviewer&lt;/td&gt;
&lt;td&gt;5,810 input&lt;/td&gt;
&lt;td&gt;1,260 input&lt;/td&gt;
&lt;td&gt;-78%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;endocrinologist&lt;/td&gt;
&lt;td&gt;~3,331 input&lt;/td&gt;
&lt;td&gt;~3,331 input&lt;/td&gt;
&lt;td&gt;same&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Cost after optimisation:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Node&lt;/th&gt;
&lt;th&gt;Input Cost&lt;/th&gt;
&lt;th&gt;Output Cost&lt;/th&gt;
&lt;th&gt;Total&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;reader&lt;/td&gt;
&lt;td&gt;$5.27&lt;/td&gt;
&lt;td&gt;$0.31&lt;/td&gt;
&lt;td&gt;$5.58&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;analyser&lt;/td&gt;
&lt;td&gt;$0.68&lt;/td&gt;
&lt;td&gt;$0.39&lt;/td&gt;
&lt;td&gt;$1.07&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;reviewer&lt;/td&gt;
&lt;td&gt;$0.01&lt;/td&gt;
&lt;td&gt;$0.08&lt;/td&gt;
&lt;td&gt;$0.09&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;endocrinologist&lt;/td&gt;
&lt;td&gt;$0.02&lt;/td&gt;
&lt;td&gt;$0.13&lt;/td&gt;
&lt;td&gt;$0.15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;$6.89&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;From $18.42 to $6.89 — a &lt;strong&gt;63% cost reduction&lt;/strong&gt; with no meaningful loss in output quality. The analyser benefited most because it was the worst offender: every &lt;code&gt;python_repl&lt;/code&gt; tool call was accumulating in context.&lt;/p&gt;
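&lt;p&gt;The headline figure checks out arithmetically:&lt;/p&gt;

```python
# Sanity-check the cost reduction claimed from the two tables.
before, after = 18.42, 6.89
reduction = (before - after) / before
print(f"{reduction:.0%}")  # 63%
```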




&lt;h2&gt;
  
  
  Architecture 2: The Swarm
&lt;/h2&gt;

&lt;p&gt;Swarm is the wildest of the three. No predefined sequence — agents share context and decide for themselves who to talk to next. Less assembly line, more group chat where everyone's an expert. A life without a manager, basically.&lt;/p&gt;

&lt;p&gt;That doesn't mean it's cheaper, though. Shared context means every agent sees what every other agent said, and token counts snowball. I hit ~3.6M tokens and several failed runs trying this on Opus. So I got pragmatic: &lt;strong&gt;Sonnet 4.6 as the coordinator&lt;/strong&gt;, three worker agents on &lt;strong&gt;Sonnet 4&lt;/strong&gt;. Costs came down and the thing actually finished. Sonnet 4 is also the default model for the agent swarm tool.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa494455as5ob6mrifa49.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa494455as5ob6mrifa49.png" alt="Swarm architecture" width="800" height="537"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Swarm setup — Sonnet coordinator, Sonnet workers&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;strands.multiagent&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Swarm&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Diabetes analyser&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;BedrockModel&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;global.anthropic.claude-sonnet-4-6&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;swarm&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;python_repl&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;file_read&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;calculator&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Create three agents to analyse &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;X 10-03-2026.csv&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; in current folder&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Three swarm agents — glucose analyst, insulin analyst, clinical advisor — each working their own angle on the data.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Came Out
&lt;/h3&gt;

&lt;p&gt;The Swarm got the numbers right. TIR 92.5%, afternoon variability, the hypo events — all there. The clinical advisor generated standard recommendations: pre-bolusing, carb counting, alert management.&lt;/p&gt;
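&lt;p&gt;For reference, none of these headline metrics need an LLM at all. A minimal sketch, assuming sensor readings in mmol/L and using the published GMI regression (GMI% = 3.31 + 0.02392 × mean glucose in mg/dL):&lt;/p&gt;

```python
from statistics import mean, stdev

MMOL_TO_MGDL = 18.016  # unit conversion for the GMI formula

def cgm_metrics(readings_mmol):
    """Time in Range, GMI and CV from sensor glucose values in mmol/L."""
    in_range = [g for g in readings_mmol if 3.9 <= g <= 10.0]
    tir = 100 * len(in_range) / len(readings_mmol)
    mean_mgdl = mean(readings_mmol) * MMOL_TO_MGDL
    gmi = 3.31 + 0.02392 * mean_mgdl  # published GMI regression
    cv = 100 * stdev(readings_mmol) / mean(readings_mmol)
    return round(tir, 1), round(gmi, 1), round(cv, 1)

# Toy readings, not my actual export:
tir, gmi, cv = cgm_metrics([5.2, 6.1, 7.4, 8.0, 4.5, 9.9, 6.8, 5.5])
```

The agents' real job is the interpretation layer on top, not the arithmetic.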

&lt;p&gt;But the report read more like a template than a consultation. Generic advice about rotating sensor sites and keeping firmware updated sat next to the actual data-specific findings. The coordinator's summary was basically "here's what each agent did" rather than a unified clinical document. In hindsight, I had the split backwards: the larger model went on coordination while the smaller, older model did the actual analysis. Flipping it, say Haiku as the coordinator with Opus or Sonnet workers, would likely buy more depth. But I was being careful with the tokens (and the rakia), so I stayed with the default Sonnet workers.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Showdown: Graph vs Swarm
&lt;/h2&gt;

&lt;p&gt;Same 14-day CareLink export, both architectures. Here's what happened.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tokens &amp;amp; Cost
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Graph (Opus, optimised)&lt;/th&gt;
&lt;th&gt;Swarm (Sonnet 4.6 coord + Sonnet 4 agents)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Total tokens&lt;/td&gt;
&lt;td&gt;~900K&lt;/td&gt;
&lt;td&gt;~175K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost&lt;/td&gt;
&lt;td&gt;~$6.89&lt;/td&gt;
&lt;td&gt;~$2.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time&lt;/td&gt;
&lt;td&gt;~11 min&lt;/td&gt;
&lt;td&gt;~2.5 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Worker model&lt;/td&gt;
&lt;td&gt;Opus (all 4 nodes)&lt;/td&gt;
&lt;td&gt;Sonnet 4 (3 agents)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Coordinator&lt;/td&gt;
&lt;td&gt;N/A (sequential)&lt;/td&gt;
&lt;td&gt;Sonnet 4.6&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Swarm is &lt;strong&gt;3x cheaper&lt;/strong&gt; and &lt;strong&gt;4x faster&lt;/strong&gt;. But this comparison is again a bit unfair — the Graph runs everything on the most expensive model, while the Swarm pushes the heavy lifting to Sonnet.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where the Graph pulled ahead
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Self-correction.&lt;/strong&gt; The reviewer noticed the reader's TIR estimate (55–65%) was way off and flagged 92.5% as the real number. The Swarm had no mechanism for one agent to challenge another.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confounders.&lt;/strong&gt; The reviewer flagged that the Level 2 hypo events weren't checked against the sensor change date (Mar 5). If any of them fell on that day, they could be first-day sensor artefacts rather than real hypos. The Swarm didn't consider this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compression lows.&lt;/strong&gt; Overnight readings below range could include compression lows — you roll onto the sensor and it reads artificially low. Graph flagged it. Swarm didn't.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Weekend vs weekday.&lt;/strong&gt; The reviewer noted this wasn't assessed at all. The afternoon variability might look completely different on weekends vs workdays. Swarm just generated recommendations without thinking about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern prioritisation.&lt;/strong&gt; The reviewer took the "12.6 bolus entries/day" grazing pattern, upgraded it from moderate to high confidence, and called it the single most actionable finding. Swarm saw the same number but didn't do anything with it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Specific recommendations.&lt;/strong&gt; The endocrinologist said "weaken afternoon CR by 10–15%", "verify AIT is 3.0–3.5 hours", "do NOT lower SmartGuard target until hypos are addressed." The Swarm said "rotate sensor sites" and "keep firmware updated." One of these I can take to my doctor. The other I already know.&lt;/p&gt;

&lt;h3&gt;
  
  
  Where the Swarm held up
&lt;/h3&gt;

&lt;p&gt;The core metrics were correct — TIR, GMI, CV% all matched. Parallel analysis was efficient. And for a quick "how's my last two weeks looking?" check, it's perfectly fine. &lt;/p&gt;

&lt;h3&gt;
  
  
  The thing I need to be honest about
&lt;/h3&gt;

&lt;p&gt;I can't cleanly separate architecture quality from model quality here. And that bugs me.&lt;/p&gt;

&lt;p&gt;The Graph runs all four nodes on Opus. The Swarm runs the workers on Sonnet. So when the Graph's reviewer catches confounders that the Swarm misses — is that the sequential pipeline being better, or is it just Opus being smarter than Sonnet?&lt;/p&gt;

&lt;p&gt;Probably both. My gut says ~60% model, ~40% architecture. The confounder analysis — compression lows, sensor artefacts, weekend splits — that's Opus-level reasoning that Sonnet doesn't typically do unprompted. But the architecture &lt;em&gt;gave&lt;/em&gt; Opus a dedicated step to do that reasoning. The Swarm doesn't have a reviewer step even if you ran it all on Opus.&lt;/p&gt;

&lt;p&gt;The fair test would be:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Graph with Sonnet everywhere vs Swarm with Sonnet (isolate architecture)&lt;/li&gt;
&lt;li&gt;Graph with Opus everywhere vs Swarm with Opus (same thing, higher tier)&lt;/li&gt;
&lt;/ul&gt;
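&lt;p&gt;Expressed as an experiment grid, that fair test is just a 2×2 (the labels below are hypothetical stand-ins for the Graph and Swarm configurations shown earlier):&lt;/p&gt;

```python
# The fair test as a 2x2 grid: architecture x model tier.
# Comparing along a row isolates the model; along a column, the architecture.
from itertools import product

ARCHITECTURES = ["graph", "swarm"]
MODELS = ["claude-sonnet", "claude-opus"]  # hypothetical model labels

runs = [{"architecture": a, "model": m} for a, m in product(ARCHITECTURES, MODELS)]
```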

&lt;h3&gt;
  
  
  The Verdict
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Dimension&lt;/th&gt;
&lt;th&gt;Graph&lt;/th&gt;
&lt;th&gt;Swarm&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Output quality&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;★★★★★&lt;/td&gt;
&lt;td&gt;★★★☆☆&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Clinical depth&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Deep, specific&lt;/td&gt;
&lt;td&gt;Competent, generic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost (optimised)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$6.89&lt;/td&gt;
&lt;td&gt;~$2.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Speed&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~11 min&lt;/td&gt;
&lt;td&gt;~2.5 min&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Error correction&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Built-in (reviewer)&lt;/td&gt;
&lt;td&gt;None&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Best for&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clinic-ready reports&lt;/td&gt;
&lt;td&gt;Weekly trend checks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Graph wins on quality. I'd hand the Graph report to my endocrinologist. The Swarm report I'd keep for myself.&lt;/p&gt;

&lt;p&gt;Swarm wins on speed and cost. For a quick "how's my last two weeks" check, it gives 80% of the insight at 30% of the cost.&lt;/p&gt;

&lt;p&gt;As we say in Bulgaria: — "A good word opens even an iron door." The Graph doesn't just give you more words — it gives you the right ones. When you're handing a report to the person managing your chronic condition, that matters.&lt;/p&gt;




&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Context management is everything.&lt;/strong&gt; Sliding window: $18 → $7. 63% cut. Zero quality loss. If you're building multi-agent pipelines, do this first. Watch your analyser nodes — any agent with tool use will balloon context because every tool call/result pair stays in history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Review steps are worth it.&lt;/strong&gt; The reviewer cost $0.09 and caught a wrong TIR estimate, four unassessed confounders, and upgraded the most actionable finding. Best nine cents I've ever spent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Model and architecture are tangled.&lt;/strong&gt; I used Opus for Graph and Sonnet for Swarm workers. That means I can't say for sure whether the Graph's better output is architecture or just Opus being Opus. Probably both — 60/40 model/architecture is my guess. Need to test properly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Pick the right tool.&lt;/strong&gt; Graph for depth, Swarm for speed. "Which is better?" → "Better at what?"&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. The output is actually useful.&lt;/strong&gt; Is $7 per report worth it? I think so. My endo agreed with the recommendations. That's the test that matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Start with the smallest model.&lt;/strong&gt; It can surprise you with how much it gets right, and it keeps iteration cheap while you find out.&lt;/p&gt;




&lt;h2&gt;
  
  
  How Much Did the Over-Engineering Cost?
&lt;/h2&gt;

&lt;p&gt;I always thought this would cost a couple of bucks at most.&lt;br&gt;
Well, I ran these agents about 20 times during development, and there's a catch in the pricing: long-context requests cost extra. Luckily I'm an AWS Community Builder, so I don't have to pay the $500 I burned using the models irresponsibly. Use the models responsibly and start with the smallest possible model.&lt;/p&gt;
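&lt;p&gt;The mechanics of the long-context surcharge look roughly like this. The threshold, rate, and multiplier below are assumptions for illustration, not published prices; check the current Bedrock price list for your model.&lt;/p&gt;

```python
# Illustrative long-context cost model. Threshold, base rate, and
# multiplier are ASSUMED values, not real prices.
LONG_CONTEXT_THRESHOLD = 200_000   # tokens per request (assumed)
BASE_RATE = 3.00                   # $ per million input tokens (assumed)
LONG_CONTEXT_MULTIPLIER = 2.0      # premium above the threshold (assumed)

def request_input_cost(input_tokens):
    """Input cost of a single request under a tiered long-context price."""
    premium = input_tokens > LONG_CONTEXT_THRESHOLD
    rate = BASE_RATE * (LONG_CONTEXT_MULTIPLIER if premium else 1.0)
    return input_tokens / 1_000_000 * rate

# The same 1M tokens cost twice as much in one giant request
# as they would split into requests at or under the threshold:
print(request_input_cost(1_000_000))            # 6.0
print(round(5 * request_input_cost(200_000), 2))  # 3.0
```

Which is another reason context management pays off beyond the raw token count.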

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mvui7mq0yv3xgqd31np.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0mvui7mq0yv3xgqd31np.png" alt="Costs" width="800" height="195"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>aws</category>
      <category>bedrock</category>
      <category>awscommunitybuilders</category>
    </item>
    <item>
      <title>The smart way of centralizing VPC endpoints with Route 53 Profiles</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Wed, 27 Aug 2025 15:27:46 +0000</pubDate>
      <link>https://dev.to/aws-builders/the-smart-way-of-centralizing-vpc-endpoints-with-route-53-profiles-38bc</link>
      <guid>https://dev.to/aws-builders/the-smart-way-of-centralizing-vpc-endpoints-with-route-53-profiles-38bc</guid>
      <description>&lt;h2&gt;
  
  
  How to Centralize Endpoints Smartly
&lt;/h2&gt;

&lt;p&gt;A few months ago in April, AWS introduced a new feature to the Route 53 arsenal: Route 53 Profiles. One would think—ah, another AWS feature to manage DNS centrally. But there's much more to it than that.&lt;/p&gt;

&lt;p&gt;Basically, Amazon realized that managing DNS across multiple environments is about as organized as a toddler's toy box or socks after laundry day (one of them falls victim to the sock-eating monster). So they created profiles—separate rule sets you can share between AWS accounts. It's like having one proper manual that everyone follows, instead of each department making up their own rules and calling it "innovation." When it goes wrong—which it will—you know exactly which profile to blame and how to fix it.&lt;/p&gt;

&lt;p&gt;At a high level, the design allows you to, for example, share VPC endpoints and centralize them.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthd2btfyyw9suvth1cs0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fthd2btfyyw9suvth1cs0.png" alt="Centralized VPC endpoints" width="800" height="886"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You create a central VPC with all of the VPC interface endpoints. Why interface? Because, unlike gateway endpoints, interface endpoints can be reached transitively from peered or connected VPCs, so they can be shared and reused.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8d4zrri5bcch4lqha1j4.png" alt=" " width="800" height="195"&gt;
&lt;/li&gt;
&lt;li&gt;You can keep private DNS enabled on your centralized endpoints. No need to manually create hosted zones or resolver rules.&lt;/li&gt;
&lt;li&gt;Create a Route 53 profile.&lt;/li&gt;
&lt;li&gt;Simply associate endpoints with a Route 53 profile in the hub account and the hub VPC.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fez7dqal8i4ghk179j1xc.png" alt="Associate the Route 53 profile with the VPC endpoints" width="800" height="364"&gt;
&lt;/li&gt;
&lt;li&gt;Share the profile via AWS Resource Access Manager.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv9pt2jkn3uy1zjiktugl.png" alt="RAM share" width="800" height="432"&gt;
&lt;/li&gt;
&lt;li&gt;Associate spoke VPCs to the profile—done.
&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0uy22zr01o7qvsq0r68d.png" alt="Spoke VPC association" width="800" height="359"&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Reality Check
&lt;/h2&gt;

&lt;p&gt;According to my previous &lt;a href="https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p"&gt;article&lt;/a&gt;, centralizing the endpoints should save you between 84% and 87% of the hourly cost. In reality, after one month of living with this centralization technique, the savings were around 70%, mainly because I forgot to account for the cross-account attachment costs and the traffic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Other Benefits
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Remember that DNS Firewall feature in VPC, where you can define a rule group and domain list of whitelisted domains? Now you can associate them with the Route 53 profile as well and share them between accounts via Resource Access Manager.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;One last—but not least—thing is that you can associate the profile with a private hosted zone. Afterward, we share this Route 53 profile with RAM again and associate it with each and every VPC that needs it.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;Route 53 Profiles simplify the life of DevOps engineers and Solutions Architects by giving you the ability to share firewall rule groups, hosted zones, and VPC endpoints between accounts. Why does this matter? Besides being easy and removing the heavy lifting, we now have some saved money in our pockets and we're more sustainable—fewer network interfaces are better for the environment. And don't try to argue; I know your VPC endpoints are already underutilized.&lt;/p&gt;

&lt;p&gt;Now you would ask: "Is there Terraform for it?"&lt;br&gt;
Of course there is!&lt;/p&gt;

&lt;h2&gt;
  
  
  Terraform Code
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53profiles_profile"&lt;/span&gt; &lt;span class="s2"&gt;"primorsko"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"dtpl-r53-profiles"&lt;/span&gt;
  &lt;span class="nx"&gt;tags&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;Environment&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ProbkoTestov testva surfa na primorsko"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53profiles_association"&lt;/span&gt; &lt;span class="s2"&gt;"spoke"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"spoke"&lt;/span&gt;
  &lt;span class="nx"&gt;profile_id&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53profiles_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;primorsko&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;resource_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;hub_vpc_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53profiles_association"&lt;/span&gt; &lt;span class="s2"&gt;"hub"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"hub"&lt;/span&gt;
  &lt;span class="nx"&gt;profile_id&lt;/span&gt;  &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53profiles_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;primorsko&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;resource_id&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;spoke_vpc_id&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53profiles_resource_association"&lt;/span&gt; &lt;span class="s2"&gt;"ssm"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ssm"&lt;/span&gt;
  &lt;span class="nx"&gt;profile_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53profiles_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;primorsko&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;resource_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_endpoint_ec2&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53profiles_resource_association"&lt;/span&gt; &lt;span class="s2"&gt;"ssmmessages"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ssmmessages"&lt;/span&gt;
  &lt;span class="nx"&gt;profile_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53profiles_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;primorsko&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;resource_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_endpoint_ssmmessages&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_route53profiles_resource_association"&lt;/span&gt; &lt;span class="s2"&gt;"ec2messages"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"ec2messages"&lt;/span&gt;
  &lt;span class="nx"&gt;profile_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_route53profiles_profile&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;primorsko&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
  &lt;span class="nx"&gt;resource_arn&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;vpc_endpoint_ec2messages&lt;/span&gt; &lt;span class="c1"&gt;# This is legacy; you can go without it&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>dns</category>
      <category>route53</category>
      <category>awscommunitybuilders</category>
    </item>
    <item>
      <title>Build Games Challenge: Build Tower of Hanoi with Amazon Q Developer CLI</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Wed, 02 Jul 2025 17:44:54 +0000</pubDate>
      <link>https://dev.to/aws-builders/build-games-challenge-build-tower-of-hanoi-with-amazon-q-developer-cli-2hh4</link>
      <guid>https://dev.to/aws-builders/build-games-challenge-build-tower-of-hanoi-with-amazon-q-developer-cli-2hh4</guid>
      <description>&lt;h1&gt;
  
  
  Table of Contents
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;The Frog and the Ox: Why I Started This Project&lt;/li&gt;
&lt;li&gt;Why Tower of Hanoi?&lt;/li&gt;
&lt;li&gt;Getting Started with Amazon Q Developer CLI&lt;/li&gt;
&lt;li&gt;
The Zero-Shot Miracle

&lt;ul&gt;
&lt;li&gt;The First Success&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;The Design Challenge&lt;/li&gt;

&lt;li&gt;The Material Design Evolution&lt;/li&gt;

&lt;li&gt;The Final Polish: Advanced Zero-Shot&lt;/li&gt;

&lt;li&gt;Technical Insights&lt;/li&gt;

&lt;li&gt;Code Architecture Highlights&lt;/li&gt;

&lt;li&gt;Performance and User Experience&lt;/li&gt;

&lt;li&gt;Reflections on AI-Assisted Development&lt;/li&gt;

&lt;li&gt;Conclusion: The Frog's Success&lt;/li&gt;

&lt;li&gt;Try It Yourself&lt;/li&gt;

&lt;/ul&gt;

&lt;h1&gt;
  
  
  Building Tower of Hanoi with Amazon Q Developer CLI: A Journey from Zero-Shot to Polished Game
&lt;/h1&gt;

&lt;h2&gt;
  
  
  The Frog and the Ox: Why I Started This Project
&lt;/h2&gt;

&lt;p&gt;A few days ago, I stumbled upon an &lt;a href="https://community.aws/content/2y6egGcPAGQs8EwtQUM9KAONojz/build-games-challenge-build-classics-with-amazon-q-developer-cli" rel="noopener noreferrer"&gt;AWS Community post&lt;/a&gt; about the Build Games Challenge using Amazon Q Developer CLI. It reminded me of a Bulgarian proverb: &lt;em&gt;"The frog saw the ox being shod and lifted her leg too."&lt;/em&gt; This saying captures the essence of someone copying others blindly, trying to be part of something they clearly don't belong to—like a frog thinking she's an ox just because she saw the ox getting horseshoes.&lt;/p&gt;

&lt;p&gt;That's exactly why I decided to build the Tower of Hanoi game and participate in the challenge. Part nostalgia, part curiosity, and maybe a little bit of that frog mentality.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Tower of Hanoi?
&lt;/h2&gt;

&lt;p&gt;Tower of Hanoi holds a special place in my programming journey. I first coded it back in university, and it's stuck with me ever since. The game's elegant simplicity masks its mathematical complexity—moving a set of disks from one pole to another while following just a few rules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Three rods/poles total&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Move only one disk at a time&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Never place a larger disk on top of a smaller one&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The mathematical beauty lies in its solution: the minimum number of moves needed is &lt;strong&gt;2^n - 1&lt;/strong&gt;, where n is the number of disks.&lt;/p&gt;
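&lt;p&gt;As a quick sanity check, the formula is trivial to express in Python (a minimal sketch, not part of the generated game):&lt;/p&gt;

```python
def min_moves(disks):
    # Minimum number of moves for the Tower of Hanoi with n disks: 2^n - 1
    return 2 ** disks - 1

for n in range(1, 6):
    print(n, min_moves(n))  # 1:1, 2:3, 3:7, 4:15, 5:31
```

&lt;p&gt;Doubling-plus-one at each disk count is why the puzzle gets painful fast: seven disks already need 127 moves.&lt;/p&gt;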

&lt;p&gt;This time, I wanted to bring it to life in a new way—powered by Amazon Q Developer CLI, without the hard memories of Java classes that haunted my university days.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with Amazon Q Developer CLI
&lt;/h2&gt;

&lt;p&gt;Setting up Amazon Q Developer CLI is refreshingly simple. If you're on macOS with Homebrew:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;brew &lt;span class="nb"&gt;install &lt;/span&gt;amazon-q
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once installed, just type &lt;code&gt;q&lt;/code&gt; in your terminal to start chatting with your AI coding companion.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Zero-Shot Miracle
&lt;/h2&gt;

&lt;p&gt;My first prompt was deliberately naive:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;help me build tower of hanoi using pygame framework&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The results absolutely stunned me. Without any examples, rules, or detailed specifications, Amazon Q Developer CLI generated a &lt;strong&gt;fully functional&lt;/strong&gt; Tower of Hanoi game using pygame. Zero-shot prompting at its finest.&lt;/p&gt;

&lt;h3&gt;
  
  
  The First Success
&lt;/h3&gt;

&lt;p&gt;The initial game was surprisingly complete:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ Interactive gameplay with mouse controls&lt;/li&gt;
&lt;li&gt;✅ Visual feedback and animations&lt;/li&gt;
&lt;li&gt;✅ Rules enforcement&lt;/li&gt;
&lt;li&gt;✅ Move counter&lt;/li&gt;
&lt;li&gt;✅ Auto-solve functionality&lt;/li&gt;
&lt;li&gt;✅ Variable disk counts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's what that first iteration looked like:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtz3wl1zxavwh9gpt9ha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhtz3wl1zxavwh9gpt9ha.png" alt="Tower of Hanoi - First Version" width="800" height="634"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg2xk59t4rvuh4juuzva.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmg2xk59t4rvuh4juuzva.png" alt="Tower of Hanoi - First Version winning" width="800" height="591"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Clean, functional, but lacking visual polish&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Design Challenge
&lt;/h2&gt;

&lt;p&gt;Emboldened by this success, I decided to push further. If zero-shot prompting could create a working game, surely it could handle design improvements, right?&lt;/p&gt;

&lt;p&gt;My next prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;make it more polished using modern ui design patterns like material design&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Plot twist&lt;/strong&gt;: This didn't work as smoothly. The generated code was incomplete, cutting off mid-function. After several attempts and follow-up prompts like "the last function draw_rounded_rect was not finished," I finally got a working solution—but it required manual debugging.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lessons Learned
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Zero-shot works best for complete, well-defined tasks&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Incremental changes can be trickier than starting fresh&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Always review and test AI-generated code&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Be prepared to debug and iterate&lt;/strong&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Material Design Evolution
&lt;/h2&gt;

&lt;p&gt;After some back-and-forth, I achieved a much more polished version with Material Design principles:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mxcdya0fv81f8584bsp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8mxcdya0fv81f8584bsp.png" alt="Tower of Hanoi - Material Design" width="800" height="595"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwh1h31pps5c42xur0wg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwh1h31pps5c42xur0wg.png" alt="Tower of Hanoi - Material Design winning" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Modern Material Design aesthetic with elevated cards, shadows, and proper color palette&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Material Design Elements Added:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Elevated surfaces&lt;/strong&gt; with subtle shadows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Material color palette&lt;/strong&gt; (Blue 500, Orange 500, etc.)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rounded corners&lt;/strong&gt; and proper spacing&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Card-based UI&lt;/strong&gt; for controls&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Visual hierarchy&lt;/strong&gt; with proper typography&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hover states&lt;/strong&gt; and interactive feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Final Polish: Advanced Zero-Shot
&lt;/h2&gt;

&lt;p&gt;For my final iteration, I crafted a comprehensive prompt that specified exactly what I wanted:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Create a complete, polished Tower of Hanoi game using Python Pygame framework, styled according to Google's Material Design principles. Include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Visually appealing, responsive UI with Material Design elements&lt;/li&gt;
&lt;li&gt;Smooth animations and visual feedback&lt;/li&gt;
&lt;li&gt;Drag-and-drop or click-to-move functionality&lt;/li&gt;
&lt;li&gt;Rules enforcement and user-friendly error handling&lt;/li&gt;
&lt;li&gt;Move counter, elapsed timer, and reset functionality&lt;/li&gt;
&lt;li&gt;Level selection (3-7 disks)&lt;/li&gt;
&lt;li&gt;Light/dark theme toggle&lt;/li&gt;
&lt;li&gt;Modular, well-documented OOP code&lt;/li&gt;
&lt;li&gt;Material Design color palettes and shadows&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;This comprehensive prompt yielded the most impressive result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq85pjzdj4gv172becxb6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq85pjzdj4gv172becxb6.png" alt="Tower of Hanoi - Final Polish" width="800" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojrux6n2o1zp1gmu2bny.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fojrux6n2o1zp1gmu2bny.png" alt="Tower of Hanoi - Final Polish winning" width="800" height="588"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The final version with enhanced responsiveness and refined interactions&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  New Features in the Final Version:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Enhanced mouse interactions&lt;/strong&gt; with better responsiveness&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved visual feedback&lt;/strong&gt; for user actions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Smoother animations&lt;/strong&gt; and transitions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better error handling&lt;/strong&gt; and edge case management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More intuitive UI&lt;/strong&gt; with clearer visual hierarchy&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Technical Insights
&lt;/h2&gt;

&lt;h3&gt;
  
  
  What Amazon Q Developer CLI Excelled At:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Complete game logic implementation&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pygame framework integration&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Object-oriented design patterns&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mathematical algorithm implementation&lt;/strong&gt; (recursive Hanoi solver)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event handling&lt;/strong&gt; and game state management&lt;/li&gt;
&lt;/ol&gt;
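&lt;p&gt;That recursive solver is worth a closer look. Stripped of the pygame plumbing, the recursion the CLI produced boils down to a few lines (a minimal sketch of the same algorithm):&lt;/p&gt;

```python
def solve_hanoi(n, source, target, auxiliary, steps):
    # Append (source, target) pairs that transfer n disks from source to target.
    if n == 0:
        return
    solve_hanoi(n - 1, source, auxiliary, target, steps)  # park n-1 disks aside
    steps.append((source, target))                        # move the largest disk
    solve_hanoi(n - 1, auxiliary, target, source, steps)  # bring the n-1 back on top

steps = []
solve_hanoi(3, 0, 2, 1, steps)
print(len(steps))  # 7 moves for 3 disks, i.e. 2**3 - 1
```

&lt;p&gt;For any disk count the list contains exactly 2^n - 1 moves, which is the theoretical minimum mentioned earlier.&lt;/p&gt;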

&lt;h3&gt;
  
  
  Where It Struggled:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Incremental design changes&lt;/strong&gt; to existing code&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex prompt continuation&lt;/strong&gt; when code was cut off&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fine-tuning visual details&lt;/strong&gt; without explicit guidance&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  The Sweet Spot:
&lt;/h3&gt;

&lt;p&gt;The most effective approach was &lt;strong&gt;comprehensive, specific prompts&lt;/strong&gt; that clearly defined the entire scope of what I wanted. This worked better than trying to modify existing code piecemeal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Architecture Highlights
&lt;/h2&gt;

&lt;p&gt;The final implementation showcased excellent software engineering practices:&lt;/p&gt;

&lt;p&gt;Key architectural decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Separation of concerns&lt;/strong&gt; between game logic and rendering&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Event-driven architecture&lt;/strong&gt; for user interactions&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State management&lt;/strong&gt; for game progression&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Modular design&lt;/strong&gt; for easy extensibility&lt;/li&gt;
&lt;/ul&gt;
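&lt;p&gt;To make "separation of concerns" concrete, here is a hypothetical sketch of the idea (not the generated code): the rules live in a plain class with no pygame imports, so the logic can be unit-tested headlessly and the renderer swapped out freely:&lt;/p&gt;

```python
class HanoiState:
    """Pure game logic: no pygame, no drawing (illustrative sketch only)."""

    def __init__(self, num_disks=3):
        self.num_disks = num_disks
        # Peg 0 starts with all disks, largest at the bottom; pegs hold disk sizes.
        self.pegs = [list(range(num_disks, 0, -1)), [], []]
        self.moves = 0

    def move(self, src, dst):
        """Apply one move if it is legal; return True on success."""
        if not self.pegs[src]:
            return False
        disk = self.pegs[src][-1]
        if self.pegs[dst] and disk > self.pegs[dst][-1]:
            return False  # never place a larger disk on a smaller one
        self.pegs[dst].append(self.pegs[src].pop())
        self.moves += 1
        return True

    def won(self):
        return len(self.pegs[2]) == self.num_disks
```

&lt;p&gt;A rendering layer would then just read &lt;code&gt;pegs&lt;/code&gt; each frame and translate disk sizes into rectangles, which is roughly how the generated &lt;code&gt;Game.draw&lt;/code&gt; relates to its &lt;code&gt;Tower&lt;/code&gt; objects.&lt;/p&gt;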

&lt;h2&gt;
  
  
  Performance and User Experience
&lt;/h2&gt;

&lt;p&gt;The final game delivers on multiple fronts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Responsive controls&lt;/strong&gt; with immediate visual feedback&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Intuitive interface&lt;/strong&gt; following Material Design principles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accessibility considerations&lt;/strong&gt; with clear visual hierarchy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Error prevention&lt;/strong&gt; through smart UI design&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Reflections on AI-Assisted Development
&lt;/h2&gt;

&lt;p&gt;This project revealed fascinating insights about working with AI coding assistants:&lt;/p&gt;

&lt;h3&gt;
  
  
  The Good:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rapid prototyping&lt;/strong&gt; from concept to working game&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Best practices implementation&lt;/strong&gt; without explicit instruction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Complex algorithm generation&lt;/strong&gt; (recursive Hanoi solver)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework expertise&lt;/strong&gt; beyond what I could have written alone&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Challenging:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Iteration difficulties&lt;/strong&gt; when making incremental changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Need for human oversight&lt;/strong&gt; and debugging&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt engineering importance&lt;/strong&gt; for optimal results&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Surprising:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Zero-shot capability&lt;/strong&gt; exceeded expectations for complete tasks&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code quality&lt;/strong&gt; was consistently high and well-structured&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation and comments&lt;/strong&gt; were thoughtfully included&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion: The Frog's Success
&lt;/h2&gt;

&lt;p&gt;Looking back at that Bulgarian proverb, maybe sometimes it's okay to be the frog lifting her leg when she sees the ox being shod. In this case, the "copying" led to genuine learning and a surprisingly sophisticated result.&lt;/p&gt;

&lt;p&gt;Amazon Q Developer CLI proved to be an impressive coding companion, especially for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Complete project generation&lt;/strong&gt; from clear specifications&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Framework-specific implementations&lt;/strong&gt; with best practices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Algorithm implementation&lt;/strong&gt; with proper optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key lesson? &lt;strong&gt;Be specific, be comprehensive, and be prepared to iterate.&lt;/strong&gt; The most successful prompts were those that painted a complete picture of the desired outcome rather than asking for incremental modifications.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;Want to experiment with Amazon Q Developer CLI? Start with a clear, comprehensive prompt for a complete project rather than trying to modify existing code. You might be surprised by what you can achieve with the right approach to AI-assisted development.&lt;/p&gt;

&lt;p&gt;The Tower of Hanoi might be an ancient puzzle, but building it with modern AI tools offers fresh insights into both game development and the evolving landscape of programming assistance.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you tried building games with AI coding assistants? What was your experience? Share your own "frog and ox" moments in the comments below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Code for the initial version:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pygame
import sys
import time

# Initialize pygame
pygame.init()

# Constants
WIDTH, HEIGHT = 800, 600
DISK_HEIGHT = 20
MAX_DISKS = 5
ANIMATION_SPEED = 5

# Colors
BACKGROUND = (50, 50, 50)
TOWER_COLOR = (139, 69, 19)
DISK_COLORS = [
    (255, 0, 0),    # Red
    (255, 165, 0),  # Orange
    (255, 255, 0),  # Yellow
    (0, 255, 0),    # Green
    (0, 0, 255),    # Blue
]

# Set up the display
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Tower of Hanoi")
clock = pygame.time.Clock()

class Disk:
    def __init__(self, size, color):
        self.size = size
        self.color = color
        self.x = 0
        self.y = 0
        self.moving = False
        self.target_x = 0
        self.target_y = 0

    def draw(self):
        width = (self.size + 1) * 30
        pygame.draw.rect(screen, self.color, (self.x - width // 2, self.y - DISK_HEIGHT // 2, width, DISK_HEIGHT), 0, 5)

    def move_towards_target(self):
        dx = self.target_x - self.x
        dy = self.target_y - self.y

        if abs(dx) &amp;lt; ANIMATION_SPEED and abs(dy) &amp;lt; ANIMATION_SPEED:
            self.x = self.target_x
            self.y = self.target_y
            self.moving = False
            return True

        if abs(dx) &amp;gt; 0:
            self.x += ANIMATION_SPEED if dx &amp;gt; 0 else -ANIMATION_SPEED

        if abs(dy) &amp;gt; 0:
            self.y += ANIMATION_SPEED if dy &amp;gt; 0 else -ANIMATION_SPEED

        return False

class Tower:
    def __init__(self, x):
        self.x = x
        self.y = HEIGHT - 100
        self.disks = []

    def draw(self):
        # Draw tower base
        pygame.draw.rect(screen, TOWER_COLOR, (self.x - 10, self.y, 20, 20))
        pygame.draw.rect(screen, TOWER_COLOR, (self.x - 50, self.y + 20, 100, 10))

        # Draw tower pole
        pygame.draw.rect(screen, TOWER_COLOR, (self.x - 5, self.y - 200, 10, 200))

    def add_disk(self, disk):
        disk_y = self.y - len(self.disks) * DISK_HEIGHT - DISK_HEIGHT // 2
        disk.x = self.x
        disk.y = disk_y
        self.disks.append(disk)

    def remove_top_disk(self):
        if self.disks:
            return self.disks.pop()
        return None

    def can_add_disk(self, disk):
        if not self.disks:
            return True
        return disk.size &amp;lt; self.disks[-1].size

class Game:
    def __init__(self, num_disks=3):
        self.num_disks = min(num_disks, MAX_DISKS)
        self.towers = [
            Tower(WIDTH // 4),
            Tower(WIDTH // 2),
            Tower(3 * WIDTH // 4)
        ]
        self.moves = 0
        self.selected_tower = None
        self.selected_disk = None
        self.moving_disk = None
        self.auto_solving = False
        self.solution_steps = []
        self.solution_index = 0
        self.last_move_time = 0
        self.game_won = False

        # Initialize the first tower with disks
        for i in range(self.num_disks, 0, -1):
            disk = Disk(i - 1, DISK_COLORS[(i - 1) % len(DISK_COLORS)])
            self.towers[0].add_disk(disk)

    def draw(self):
        screen.fill(BACKGROUND)

        # Draw towers
        for tower in self.towers:
            tower.draw()

        # Draw disks
        for tower in self.towers:
            for disk in tower.disks:
                disk.draw()

        # Draw moving disk
        if self.moving_disk:
            self.moving_disk.draw()

        # Draw move counter
        font = pygame.font.SysFont('Arial', 24)
        moves_text = font.render(f"Moves: {self.moves}", True, (255, 255, 255))
        screen.blit(moves_text, (20, 20))

        # Draw win message
        if self.game_won:
            win_font = pygame.font.SysFont('Arial', 48)
            win_text = win_font.render("You Win!", True, (255, 215, 0))
            screen.blit(win_text, (WIDTH // 2 - win_text.get_width() // 2, 50))

        # Draw buttons
        self.draw_buttons()

        pygame.display.flip()

    def draw_buttons(self):
        # Reset button
        pygame.draw.rect(screen, (200, 200, 200), (20, HEIGHT - 60, 100, 40), 0, 5)
        font = pygame.font.SysFont('Arial', 20)
        reset_text = font.render("Reset", True, (0, 0, 0))
        screen.blit(reset_text, (45, HEIGHT - 50))

        # Auto Solve button
        pygame.draw.rect(screen, (200, 200, 200), (140, HEIGHT - 60, 120, 40), 0, 5)
        solve_text = font.render("Auto Solve", True, (0, 0, 0))
        screen.blit(solve_text, (155, HEIGHT - 50))

        # Disk count buttons
        for i in range(1, MAX_DISKS + 1):
            pygame.draw.rect(screen, (200, 200, 200), (280 + (i-1)*60, HEIGHT - 60, 50, 40), 0, 5)
            disk_text = font.render(str(i), True, (0, 0, 0))
            screen.blit(disk_text, (300 + (i-1)*60, HEIGHT - 50))

    def handle_click(self, pos):
        x, y = pos

        # Check if a tower was clicked
        if not self.auto_solving and y &amp;lt; HEIGHT - 70:
            for i, tower in enumerate(self.towers):
                if abs(x - tower.x) &amp;lt; 50:
                    self.handle_tower_click(i)
                    return

        # Check if reset button was clicked
        if 20 &amp;lt;= x &amp;lt;= 120 and HEIGHT - 60 &amp;lt;= y &amp;lt;= HEIGHT - 20:
            self.reset()
            return

        # Check if auto solve button was clicked
        if 140 &amp;lt;= x &amp;lt;= 260 and HEIGHT - 60 &amp;lt;= y &amp;lt;= HEIGHT - 20:
            self.start_auto_solve()
            return

        # Check if disk count buttons were clicked
        for i in range(1, MAX_DISKS + 1):
            if 280 + (i-1)*60 &amp;lt;= x &amp;lt;= 330 + (i-1)*60 and HEIGHT - 60 &amp;lt;= y &amp;lt;= HEIGHT - 20:
                self.reset(i)
                return

    def handle_tower_click(self, tower_index):
        if self.selected_tower is None:
            # No tower selected yet, try to select this one
            if self.towers[tower_index].disks:
                self.selected_tower = tower_index
                self.selected_disk = self.towers[tower_index].remove_top_disk()
                self.moving_disk = self.selected_disk
                self.moving_disk.moving = True
                self.moving_disk.target_x = self.towers[tower_index].x
                self.moving_disk.target_y = 100  # Move up
        else:
            # A tower was already selected, try to move the disk
            if tower_index == self.selected_tower:
                # Put the disk back
                self.towers[tower_index].add_disk(self.selected_disk)
            elif self.towers[tower_index].can_add_disk(self.selected_disk):
                # Move the disk to the new tower
                self.moving_disk.target_x = self.towers[tower_index].x
                self.moving_disk.target_y = self.towers[tower_index].y - len(self.towers[tower_index].disks) * DISK_HEIGHT - DISK_HEIGHT // 2
                self.towers[tower_index].add_disk(self.selected_disk)
                self.moves += 1

                # Check if the game is won
                if len(self.towers[2].disks) == self.num_disks:
                    self.game_won = True
            else:
                # Invalid move, put the disk back
                self.towers[self.selected_tower].add_disk(self.selected_disk)

            self.selected_tower = None
            self.selected_disk = None
            self.moving_disk = None

    def reset(self, num_disks=None):
        if num_disks is not None:
            self.num_disks = min(num_disks, MAX_DISKS)

        self.towers = [
            Tower(WIDTH // 4),
            Tower(WIDTH // 2),
            Tower(3 * WIDTH // 4)
        ]

        for i in range(self.num_disks, 0, -1):
            disk = Disk(i - 1, DISK_COLORS[(i - 1) % len(DISK_COLORS)])
            self.towers[0].add_disk(disk)

        self.moves = 0
        self.selected_tower = None
        self.selected_disk = None
        self.moving_disk = None
        self.auto_solving = False
        self.solution_steps = []
        self.solution_index = 0
        self.game_won = False

    def solve_hanoi(self, n, source, target, auxiliary, steps):
        if n &amp;gt; 0:
            self.solve_hanoi(n-1, source, auxiliary, target, steps)
            steps.append((source, target))
            self.solve_hanoi(n-1, auxiliary, target, source, steps)

    def start_auto_solve(self):
        if self.auto_solving:
            return

        # Reset the game first
        self.reset(self.num_disks)

        # Generate solution steps
        self.solution_steps = []
        self.solve_hanoi(self.num_disks, 0, 2, 1, self.solution_steps)
        self.solution_index = 0
        self.auto_solving = True
        self.last_move_time = time.time()

    def update_auto_solve(self):
        if not self.auto_solving or self.solution_index &amp;gt;= len(self.solution_steps):
            return

        current_time = time.time()
        if current_time - self.last_move_time &amp;lt; 1.0:  # Wait 1 second between moves
            return

        source, target = self.solution_steps[self.solution_index]

        # Make the move
        if self.towers[source].disks:
            disk = self.towers[source].remove_top_disk()
            self.towers[target].add_disk(disk)
            self.moves += 1

            # Check if the game is won
            if len(self.towers[2].disks) == self.num_disks:
                self.game_won = True
                self.auto_solving = False

        self.solution_index += 1
        self.last_move_time = current_time

        if self.solution_index &amp;gt;= len(self.solution_steps):
            self.auto_solving = False

def main():
    game = Game(3)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                if event.button == 1:  # Left mouse button
                    game.handle_click(event.pos)

        game.update_auto_solve()
        game.draw()
        clock.tick(60)

    pygame.quit()
    sys.exit()

if __name__ == "__main__":
    main()

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Code for the Material Design version, before the final polish:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import pygame
import sys
import time

# Initialize pygame
pygame.init()

# Constants
WIDTH, HEIGHT = 800, 600
DISK_HEIGHT = 24
MAX_DISKS = 5
ANIMATION_SPEED = 5

# Material Design Colors
BACKGROUND = (245, 245, 245)    # Grey 100
PRIMARY_COLOR = (33, 150, 243)   # Blue 500
ACCENT_COLOR = (255, 152, 0)     # Orange 500
TOWER_COLOR = (96, 125, 139)     # Blue Grey 500
TEXT_PRIMARY = (33, 33, 33)      # Grey 900
TEXT_SECONDARY = (117, 117, 117) # Grey 600
BUTTON_COLOR = (255, 255, 255)   # White
BUTTON_HOVER = (238, 238, 238)   # Grey 200
WIN_COLOR = (76, 175, 80)        # Green 500

# Material Design Disk Colors
DISK_COLORS = [
    (244, 67, 54),   # Red 500
    (156, 39, 176),  # Purple 500
    (76, 175, 80),   # Green 500
    (3, 169, 244),   # Light Blue 500
    (255, 193, 7),   # Amber 500
]

# Set up the display
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Tower of Hanoi - Material Design")
clock = pygame.time.Clock()

# Helper function for drawing rounded rectangles with shadow effect
def draw_material_rect(surface, color, rect, radius=8, elevation=2):
    x, y, width, height = rect

    # Draw shadow (if elevation &amp;gt; 0)
    if elevation &amp;gt; 0:
        shadow_rect = (x, y + elevation, width, height)
        pygame.draw.rect(surface, (0, 0, 0, 50), shadow_rect, 0, radius)

    # Draw the rounded rectangle
    pygame.draw.rect(surface, color, (x, y, width, height), 0, radius)

# Helper function for drawing material buttons
def draw_material_button(surface, rect, text, font, color=BUTTON_COLOR, text_color=TEXT_PRIMARY, elevation=2):
    draw_material_rect(surface, color, rect, 4, elevation)
    text_surf = font.render(text, True, text_color)
    text_rect = text_surf.get_rect(center=(rect[0] + rect[2]//2, rect[1] + rect[3]//2))
    surface.blit(text_surf, text_rect)

class Disk:
    def __init__(self, size, color):
        self.size = size
        self.color = color
        self.x = 0
        self.y = 0
        self.moving = False
        self.target_x = 0
        self.target_y = 0

    def draw(self):
        width = (self.size + 1) * 30
        # Draw disk with elevation effect
        shadow_rect = (self.x - width // 2, self.y - DISK_HEIGHT // 2 + 2, width, DISK_HEIGHT)
        pygame.draw.rect(screen, (0, 0, 0, 30), shadow_rect, 0, DISK_HEIGHT // 2)

        # Draw main disk
        disk_rect = (self.x - width // 2, self.y - DISK_HEIGHT // 2, width, DISK_HEIGHT)
        pygame.draw.rect(screen, self.color, disk_rect, 0, DISK_HEIGHT // 2)

        # Add a subtle highlight on top
        highlight_rect = (self.x - width // 2, self.y - DISK_HEIGHT // 2, width, DISK_HEIGHT // 4)
        highlight_color = tuple(min(c + 30, 255) for c in self.color[:3])
        pygame.draw.rect(screen, highlight_color, highlight_rect, 0, DISK_HEIGHT // 2)

    def move_towards_target(self):
        dx = self.target_x - self.x
        dy = self.target_y - self.y

        if abs(dx) &amp;lt; ANIMATION_SPEED and abs(dy) &amp;lt; ANIMATION_SPEED:
            self.x = self.target_x
            self.y = self.target_y
            self.moving = False
            return True

        # Clamp each step so the disk cannot overshoot and oscillate around
        # the target on one axis while the other axis is still far away
        if abs(dx) &amp;gt; 0:
            step = min(ANIMATION_SPEED, abs(dx))
            self.x += step if dx &amp;gt; 0 else -step

        if abs(dy) &amp;gt; 0:
            step = min(ANIMATION_SPEED, abs(dy))
            self.y += step if dy &amp;gt; 0 else -step

        return False

class Tower:
    def __init__(self, x):
        self.x = x
        self.y = HEIGHT - 100
        self.disks = []

    def draw(self):
        # Draw tower base with material design
        base_width = 120

        # Draw base shadow (per-pixel-alpha surface so the alpha is honored)
        shadow_surf = pygame.Surface((base_width, 8), pygame.SRCALPHA)
        pygame.draw.rect(shadow_surf, (0, 0, 0, 30), (0, 0, base_width, 8), 0, 4)
        screen.blit(shadow_surf, (self.x - base_width//2, self.y + 22))

        # Draw base
        pygame.draw.rect(screen, TOWER_COLOR, (self.x - base_width//2, self.y + 20, base_width, 8), 0, 4)

        # Draw tower pole with subtle gradient
        pole_height = 220
        for i in range(pole_height):
            # Create subtle gradient effect
            shade = max(0, min(20, i // 10))
            color = tuple(max(0, min(255, c + shade)) for c in TOWER_COLOR[:3])
            pygame.draw.rect(screen, color, (self.x - 4, self.y - pole_height + i, 8, 1))

    def add_disk(self, disk):
        disk_y = self.y - len(self.disks) * DISK_HEIGHT - DISK_HEIGHT // 2
        disk.x = self.x
        disk.y = disk_y
        self.disks.append(disk)

    def remove_top_disk(self):
        if self.disks:
            return self.disks.pop()
        return None

    def can_add_disk(self, disk):
        if not self.disks:
            return True
        return disk.size &amp;lt; self.disks[-1].size

class Game:
    def __init__(self, num_disks=3):
        self.num_disks = min(num_disks, MAX_DISKS)
        self.towers = [
            Tower(WIDTH // 4),
            Tower(WIDTH // 2),
            Tower(3 * WIDTH // 4)
        ]
        self.moves = 0
        self.selected_tower = None
        self.selected_disk = None
        self.moving_disk = None
        self.auto_solving = False
        self.solution_steps = []
        self.solution_index = 0
        self.last_move_time = 0
        self.game_won = False

        # Load fonts with fallbacks
        try:
            self.title_font = pygame.font.SysFont('Roboto', 36)
            self.main_font = pygame.font.SysFont('Roboto', 24)
            self.button_font = pygame.font.SysFont('Roboto', 18)
        except Exception:  # Roboto unavailable; fall back to the default font
            self.title_font = pygame.font.SysFont(None, 36)
            self.main_font = pygame.font.SysFont(None, 24)
            self.button_font = pygame.font.SysFont(None, 18)

        # Initialize the first tower with disks
        for i in range(self.num_disks, 0, -1):
            disk = Disk(i - 1, DISK_COLORS[(i - 1) % len(DISK_COLORS)])
            self.towers[0].add_disk(disk)

    def draw(self):
        screen.fill(BACKGROUND)

        # Draw app bar
        pygame.draw.rect(screen, PRIMARY_COLOR, (0, 0, WIDTH, 60))
        title_text = self.title_font.render("Tower of Hanoi", True, (255, 255, 255))
        screen.blit(title_text, (20, 15))

        # Draw towers
        for tower in self.towers:
            tower.draw()

        # Draw disks
        for tower in self.towers:
            for disk in tower.disks:
                disk.draw()

        # Draw moving disk
        if self.moving_disk:
            self.moving_disk.draw()

        # Draw move counter with card style
        counter_rect = (WIDTH - 150, 70, 130, 50)
        draw_material_rect(screen, (255, 255, 255), counter_rect, 4, 2)
        moves_text = self.main_font.render(f"Moves: {self.moves}", True, TEXT_PRIMARY)
        screen.blit(moves_text, (WIDTH - 140, 85))

        # Draw win message with material card
        if self.game_won:
            win_rect = (WIDTH // 2 - 150, 70, 300, 60)
            draw_material_rect(screen, WIN_COLOR, win_rect, 4, 3)
            win_text = self.title_font.render("You Win!", True, (255, 255, 255))
            screen.blit(win_text, (WIDTH // 2 - win_text.get_width() // 2, 85))

        # Draw buttons
        self.draw_buttons()

        pygame.display.flip()

    def draw_buttons(self):
        # Create a card for the controls
        control_card_rect = (20, HEIGHT - 80, WIDTH - 40, 60)
        draw_material_rect(screen, (255, 255, 255), control_card_rect, 4, 3)

        # Reset button
        reset_rect = (40, HEIGHT - 70, 100, 40)
        draw_material_button(screen, reset_rect, "Reset", self.button_font, PRIMARY_COLOR, (255, 255, 255))

        # Auto Solve button
        solve_rect = (160, HEIGHT - 70, 120, 40)
        draw_material_button(screen, solve_rect, "Auto Solve", self.button_font, ACCENT_COLOR, (255, 255, 255))

        # Disk count buttons
        for i in range(1, MAX_DISKS + 1):
            disk_rect = (300 + (i-1)*80, HEIGHT - 70, 60, 40)
            draw_material_button(screen, disk_rect, str(i), self.button_font,
                                BUTTON_COLOR if i != self.num_disks else PRIMARY_COLOR,
                                TEXT_PRIMARY if i != self.num_disks else (255, 255, 255))

    def handle_click(self, pos):
        x, y = pos

        # Check if a tower was clicked
        if not self.auto_solving and y &amp;lt; HEIGHT - 90 and y &amp;gt; 60:
            for i, tower in enumerate(self.towers):
                if abs(x - tower.x) &amp;lt; 60:
                    self.handle_tower_click(i)
                    return

        # Check if reset button was clicked
        if 40 &amp;lt;= x &amp;lt;= 140 and HEIGHT - 70 &amp;lt;= y &amp;lt;= HEIGHT - 30:
            self.reset()
            return

        # Check if auto solve button was clicked
        if 160 &amp;lt;= x &amp;lt;= 280 and HEIGHT - 70 &amp;lt;= y &amp;lt;= HEIGHT - 30:
            self.start_auto_solve()
            return

        # Check if disk count buttons were clicked
        for i in range(1, MAX_DISKS + 1):
            if 300 + (i-1)*80 &amp;lt;= x &amp;lt;= 360 + (i-1)*80 and HEIGHT - 70 &amp;lt;= y &amp;lt;= HEIGHT - 30:
                self.reset(i)
                return

    def handle_tower_click(self, tower_index):
        if self.selected_tower is None:
            # No tower selected yet, try to select this one
            if self.towers[tower_index].disks:
                self.selected_tower = tower_index
                self.selected_disk = self.towers[tower_index].remove_top_disk()
                self.moving_disk = self.selected_disk
                self.moving_disk.moving = True
                self.moving_disk.target_x = self.towers[tower_index].x
                self.moving_disk.target_y = 100  # Move up
        else:
            # A tower was already selected, try to move the disk
            if tower_index == self.selected_tower:
                # Put the disk back
                self.towers[tower_index].add_disk(self.selected_disk)
            elif self.towers[tower_index].can_add_disk(self.selected_disk):
                # Move the disk to the new tower
                self.moving_disk.target_x = self.towers[tower_index].x
                self.moving_disk.target_y = self.towers[tower_index].y - len(self.towers[tower_index].disks) * DISK_HEIGHT - DISK_HEIGHT // 2
                self.towers[tower_index].add_disk(self.selected_disk)
                self.moves += 1

                # Check if the game is won
                if len(self.towers[2].disks) == self.num_disks:
                    self.game_won = True
            else:
                # Invalid move, put the disk back
                self.towers[self.selected_tower].add_disk(self.selected_disk)

            self.selected_tower = None
            self.selected_disk = None
            self.moving_disk = None

    def reset(self, num_disks=None):
        if num_disks is not None:
            self.num_disks = min(num_disks, MAX_DISKS)

        self.towers = [
            Tower(WIDTH // 4),
            Tower(WIDTH // 2),
            Tower(3 * WIDTH // 4)
        ]

        for i in range(self.num_disks, 0, -1):
            disk = Disk(i - 1, DISK_COLORS[(i - 1) % len(DISK_COLORS)])
            self.towers[0].add_disk(disk)

        self.moves = 0
        self.selected_tower = None
        self.selected_disk = None
        self.moving_disk = None
        self.auto_solving = False
        self.solution_steps = []
        self.solution_index = 0
        self.game_won = False

    def solve_hanoi(self, n, source, target, auxiliary, steps):
        if n &amp;gt; 0:
            self.solve_hanoi(n-1, source, auxiliary, target, steps)
            steps.append((source, target))
            self.solve_hanoi(n-1, auxiliary, target, source, steps)
        # Example: solve_hanoi(2, 0, 2, 1, steps) fills steps with
        # [(0, 1), (0, 2), (1, 2)] -- the optimal 2**2 - 1 = 3 moves

    def start_auto_solve(self):
        if self.auto_solving:
            return

        # Reset the game first
        self.reset(self.num_disks)

        # Generate solution steps
        self.solution_steps = []
        self.solve_hanoi(self.num_disks, 0, 2, 1, self.solution_steps)
        self.solution_index = 0
        self.auto_solving = True
        self.last_move_time = time.time()

    def update_auto_solve(self):
        if not self.auto_solving or self.solution_index &amp;gt;= len(self.solution_steps):
            return

        current_time = time.time()
        if current_time - self.last_move_time &amp;lt; 1.0:  # Wait 1 second between moves
            return

        source, target = self.solution_steps[self.solution_index]

        # Make the move
        if self.towers[source].disks:
            disk = self.towers[source].remove_top_disk()
            self.towers[target].add_disk(disk)
            self.moves += 1

            # Check if the game is won
            if len(self.towers[2].disks) == self.num_disks:
                self.game_won = True
                self.auto_solving = False

        self.solution_index += 1
        self.last_move_time = current_time

        if self.solution_index &amp;gt;= len(self.solution_steps):
            self.auto_solving = False

def main():
    game = Game(3)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                if event.button == 1:  # Left mouse button
                    game.handle_click(event.pos)

        # Animate the lifted disk; without this call move_towards_target
        # is never invoked and the pickup animation never plays
        if game.moving_disk and game.moving_disk.moving:
            game.moving_disk.move_towards_target()
        game.update_auto_solve()
        game.draw()
        clock.tick(60)

    pygame.quit()
    sys.exit()

if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The resulting code after the Material Design polish:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import random

import pygame
import sys
import time
import math

# Initialize pygame
pygame.init()

# Constants
WIDTH, HEIGHT = 800, 600
FPS = 60
MIN_DISKS = 3
MAX_DISKS = 7

# Material Design Colors
LIGHT_THEME = {
    "background": (245, 245, 245),  # Grey 100
    "surface": (255, 255, 255),     # White
    "primary": (33, 150, 243),      # Blue 500
    "primary_dark": (25, 118, 210), # Blue 700
    "primary_light": (100, 181, 246), # Blue 300
    "secondary": (255, 152, 0),     # Orange 500
    "text_primary": (33, 33, 33),   # Grey 900
    "text_secondary": (117, 117, 117), # Grey 600
    "tower": (96, 125, 139),        # Blue Grey 500
    "error": (244, 67, 54),         # Red 500
    "success": (76, 175, 80),       # Green 500
}

DARK_THEME = {
    "background": (48, 48, 48),     # Grey 900
    "surface": (66, 66, 66),        # Grey 800
    "primary": (33, 150, 243),      # Blue 500
    "primary_dark": (25, 118, 210), # Blue 700
    "primary_light": (100, 181, 246), # Blue 300
    "secondary": (255, 152, 0),     # Orange 500
    "text_primary": (255, 255, 255),# White
    "text_secondary": (189, 189, 189), # Grey 400
    "tower": (176, 190, 197),       # Blue Grey 300
    "error": (244, 67, 54),         # Red 500
    "success": (76, 175, 80),       # Green 500
}

# Material Design Disk Colors - Light Theme
DISK_COLORS_LIGHT = [
    (244, 67, 54),    # Red 500
    (156, 39, 176),   # Purple 500
    (33, 150, 243),   # Blue 500
    (76, 175, 80),    # Green 500
    (255, 193, 7),    # Amber 500
    (255, 87, 34),    # Deep Orange 500
    (0, 188, 212),    # Cyan 500
]

# Material Design Disk Colors - Dark Theme
DISK_COLORS_DARK = [
    (229, 115, 115),  # Red 300
    (186, 104, 200),  # Purple 300
    (100, 181, 246),  # Blue 300
    (129, 199, 132),  # Green 300
    (255, 213, 79),   # Amber 300
    (255, 138, 101),  # Deep Orange 300
    (77, 208, 225),   # Cyan 300
]

# Set up the display
screen = pygame.display.set_mode((WIDTH, HEIGHT))
pygame.display.set_caption("Tower of Hanoi - Material Design")
clock = pygame.time.Clock()

# Helper function for drawing rounded rectangles
def draw_rounded_rect(surface, color, rect, radius=10, border=0, border_color=(0, 0, 0)):
    """Draw a rounded rectangle with optional border"""
    x, y, width, height = rect

    if border &amp;gt; 0:
        # Draw border first (as a slightly larger rectangle)
        pygame.draw.rect(surface, border_color,
                        (x-border, y-border, width+2*border, height+2*border),
                        0, radius+border)

    # Draw the main rectangle
    pygame.draw.rect(surface, color, (x, y, width, height), 0, radius)

# Helper function for drawing material shadows
def draw_shadow(surface, rect, radius=10, alpha=30, offset=(0, 4), blur=4):
    """Draw a soft shadow under a rectangle"""
    shadow_surf = pygame.Surface((rect[2] + blur * 2, rect[3] + blur * 2), pygame.SRCALPHA)
    pygame.draw.rect(shadow_surf, (0, 0, 0, alpha),
                    (blur, blur, rect[2], rect[3]), 0, radius)

    # Apply simple blur by scaling down and up
    scale_factor = 0.5
    small_surf = pygame.transform.smoothscale(shadow_surf,
                                            (int(shadow_surf.get_width() * scale_factor),
                                             int(shadow_surf.get_height() * scale_factor)))
    blurred = pygame.transform.smoothscale(small_surf, shadow_surf.get_size())

    # Blit the shadow
    surface.blit(blurred, (rect[0] - blur + offset[0], rect[1] - blur + offset[1]))

# Helper function for drawing material buttons
def draw_material_button(surface, rect, text, font, colors, is_hover=False, is_active=False):
    """Draw a material design button with hover and active states"""
    base_color = colors["primary"] if is_active else colors["surface"]
    text_color = colors["text_primary"] if not is_active else (255, 255, 255)

    # Draw shadow
    if not is_hover:
        draw_shadow(surface, rect, radius=4, offset=(0, 2), blur=4)
    else:
        draw_shadow(surface, rect, radius=4, offset=(0, 4), blur=6)

    # Draw button
    draw_rounded_rect(surface, base_color, rect, radius=4)

    # Draw a subtle darkening overlay when hovering; an overlay in the
    # button's own color would be invisible against the button itself
    if is_hover and not is_active:
        hover_surf = pygame.Surface((rect[2], rect[3]), pygame.SRCALPHA)
        pygame.draw.rect(hover_surf, (0, 0, 0, 20), (0, 0, rect[2], rect[3]), 0, 4)
        surface.blit(hover_surf, (rect[0], rect[1]))

    # Draw text
    text_surf = font.render(text, True, text_color)
    text_rect = text_surf.get_rect(center=(rect[0] + rect[2]//2, rect[1] + rect[3]//2))
    surface.blit(text_surf, text_rect)

# Helper function for drawing material cards
def draw_material_card(surface, rect, colors, elevation=2):
    """Draw a material design card with elevation"""
    # Draw shadow
    draw_shadow(surface, rect, radius=8, offset=(0, elevation), blur=elevation*2)

    # Draw card
    draw_rounded_rect(surface, colors["surface"], rect, radius=8)

class Disk:
    def __init__(self, size, color, theme_colors):
        self.size = size
        self.base_color = color
        self.theme_colors = theme_colors
        self.x = 0
        self.y = 0
        self.target_x = 0
        self.target_y = 0
        self.moving = False
        self.dragging = False
        self.drag_offset_x = 0
        self.drag_offset_y = 0
        self.width = (size + 1) * 30
        self.height = 24
        self.animation_progress = 0
        self.animation_duration = 0.3  # seconds
        self.animation_start_time = 0
        self.animation_start_pos = (0, 0)

    def draw(self, surface):
        # Calculate disk rectangle
        rect = (self.x - self.width // 2, self.y - self.height // 2, self.width, self.height)

        # Draw shadow
        if not self.dragging:
            shadow_offset = 2
            draw_shadow(surface, rect, radius=self.height//2, offset=(0, shadow_offset), blur=4)
        else:
            # Larger shadow when dragging
            shadow_offset = 6
            draw_shadow(surface, rect, radius=self.height//2, offset=(0, shadow_offset), blur=8, alpha=40)

        # Draw disk with rounded corners
        draw_rounded_rect(surface, self.base_color, rect, radius=self.height//2)

        # Add a subtle highlight on top for 3D effect
        highlight_rect = (self.x - self.width // 2, self.y - self.height // 2, self.width, self.height // 3)
        highlight_color = tuple(min(c + 30, 255) for c in self.base_color[:3])
        draw_rounded_rect(surface, highlight_color, highlight_rect, radius=self.height//2)

    def contains_point(self, point):
        """Check if the disk contains the given point"""
        return (abs(point[0] - self.x) &amp;lt;= self.width // 2 and
                abs(point[1] - self.y) &amp;lt;= self.height // 2)

    def start_drag(self, mouse_pos):
        """Start dragging the disk"""
        self.dragging = True
        self.drag_offset_x = self.x - mouse_pos[0]
        self.drag_offset_y = self.y - mouse_pos[1]

    def update_drag(self, mouse_pos):
        """Update the disk position while dragging"""
        if self.dragging:
            self.x = mouse_pos[0] + self.drag_offset_x
            self.y = mouse_pos[1] + self.drag_offset_y

    def end_drag(self):
        """End dragging the disk"""
        self.dragging = False

    def start_animation(self, target_x, target_y):
        """Start animating the disk to a new position"""
        self.moving = True
        self.animation_start_time = time.time()
        self.animation_progress = 0
        self.animation_start_pos = (self.x, self.y)
        self.target_x = target_x
        self.target_y = target_y

    def update_animation(self):
        """Update the disk animation"""
        if not self.moving:
            return False

        current_time = time.time()
        elapsed = current_time - self.animation_start_time
        self.animation_progress = min(elapsed / self.animation_duration, 1.0)

        # Use easeOutCubic easing function for smooth animation
        progress = 1 - (1 - self.animation_progress) ** 3

        # Update position
        self.x = self.animation_start_pos[0] + (self.target_x - self.animation_start_pos[0]) * progress
        self.y = self.animation_start_pos[1] + (self.target_y - self.animation_start_pos[1]) * progress

        # Check if animation is complete
        if self.animation_progress &amp;gt;= 1.0:
            self.x = self.target_x
            self.y = self.target_y
            self.moving = False
            return True

        return False

class Tower:
    def __init__(self, x, y, theme_colors):
        self.x = x
        self.y = y
        self.theme_colors = theme_colors
        self.disks = []
        self.base_width = 120
        self.pole_height = 220
        self.pole_width = 8
        self.highlight = False
        self.highlight_alpha = 0
        self.highlight_direction = 1

    def draw(self, surface):
        tower_color = self.theme_colors["tower"]

        # Draw base shadow
        base_rect = (self.x - self.base_width//2, self.y, self.base_width, 10)
        draw_shadow(surface, base_rect, radius=5, offset=(0, 2), blur=4)

        # Draw base
        draw_rounded_rect(surface, tower_color, base_rect, radius=5)

        # Draw tower pole with subtle gradient
        for i in range(self.pole_height):
            # Create subtle gradient effect
            shade = max(0, min(20, i // 10))
            color = tuple(max(0, min(255, c + shade)) for c in tower_color[:3])
            pygame.draw.rect(surface, color,
                            (self.x - self.pole_width//2, self.y - self.pole_height + i,
                             self.pole_width, 1))

        # Draw highlight if this tower is a valid drop target
        if self.highlight:
            self.highlight_alpha += self.highlight_direction * 5
            if self.highlight_alpha &amp;gt;= 60:
                self.highlight_alpha = 60
                self.highlight_direction = -1
            elif self.highlight_alpha &amp;lt;= 20:
                self.highlight_alpha = 20
                self.highlight_direction = 1

            highlight_color = (*self.theme_colors["primary"][:3], self.highlight_alpha)
            highlight_surf = pygame.Surface((self.base_width + 20, self.pole_height + 10), pygame.SRCALPHA)
            pygame.draw.rect(highlight_surf, highlight_color,
                            (0, 0, self.base_width + 20, self.pole_height + 10), 0, 10)
            surface.blit(highlight_surf,
                        (self.x - (self.base_width + 20)//2, self.y - self.pole_height - 5))

    def add_disk(self, disk):
        """Add a disk to this tower"""
        disk_y = self.y - len(self.disks) * disk.height - disk.height // 2
        disk.x = self.x
        disk.y = disk_y
        self.disks.append(disk)

    def remove_top_disk(self):
        """Remove and return the top disk from this tower"""
        if self.disks:
            return self.disks.pop()
        return None

    def can_add_disk(self, disk):
        """Check if a disk can be added to this tower"""
        if not self.disks:
            return True
        return disk.size &amp;lt; self.disks[-1].size

    def get_top_disk(self):
        """Get the top disk without removing it"""
        if self.disks:
            return self.disks[-1]
        return None

    def contains_point(self, point):
        """Check if the tower contains the given point for dropping"""
        return abs(point[0] - self.x) &amp;lt; self.base_width // 2

    def get_top_position(self):
        """Get the position for a new disk at the top of the tower"""
        disk_y = self.y - len(self.disks) * 24 - 24 // 2  # 24 matches Disk.height
        return self.x, disk_y

    def set_highlight(self, highlight):
        """Set whether this tower should be highlighted as a valid drop target"""
        self.highlight = highlight

class Button:
    def __init__(self, x, y, width, height, text, theme_colors, action=None):
        self.rect = (x, y, width, height)
        self.text = text
        self.theme_colors = theme_colors
        self.action = action
        self.hover = False
        self.active = False

        # Load font
        try:
            self.font = pygame.font.SysFont('Roboto', 18)
        except Exception:  # Roboto unavailable; fall back to the default font
            self.font = pygame.font.SysFont(None, 18)

    def draw(self, surface):
        draw_material_button(surface, self.rect, self.text, self.font,
                            self.theme_colors, self.hover, self.active)

    def contains_point(self, point):
        x, y, width, height = self.rect
        return (x &amp;lt;= point[0] &amp;lt;= x + width and y &amp;lt;= point[1] &amp;lt;= y + height)

    def set_hover(self, hover):
        self.hover = hover

    def set_active(self, active):
        self.active = active

    def click(self):
        if self.action:
            self.action()

class Game:
    def __init__(self, num_disks=3, dark_mode=False):
        self.num_disks = min(max(num_disks, MIN_DISKS), MAX_DISKS)
        self.dark_mode = dark_mode
        self.theme = DARK_THEME if dark_mode else LIGHT_THEME
        self.disk_colors = DISK_COLORS_DARK if dark_mode else DISK_COLORS_LIGHT

        # Game state
        self.moves = 0
        self.start_time = time.time()
        self.elapsed_time = 0
        self.game_won = False
        self.show_win_animation = False
        self.win_animation_start = 0
        self.win_particles = []

        # Tower setup
        tower_y = HEIGHT - 100
        self.towers = [
            Tower(WIDTH // 4, tower_y, self.theme),
            Tower(WIDTH // 2, tower_y, self.theme),
            Tower(3 * WIDTH // 4, tower_y, self.theme)
        ]

        # Disk interaction state
        self.selected_disk = None
        self.source_tower = None
        self.last_valid_position = (0, 0)

        # Initialize the first tower with disks
        for i in range(self.num_disks, 0, -1):
            disk = Disk(i - 1, self.disk_colors[(i - 1) % len(self.disk_colors)], self.theme)
            self.towers[0].add_disk(disk)

        # Load fonts
        try:
            self.title_font = pygame.font.SysFont('Roboto', 36)
            self.main_font = pygame.font.SysFont('Roboto', 24)
            self.button_font = pygame.font.SysFont('Roboto', 18)
        except Exception:  # Roboto unavailable; fall back to the default font
            self.title_font = pygame.font.SysFont(None, 36)
            self.main_font = pygame.font.SysFont(None, 24)
            self.button_font = pygame.font.SysFont(None, 18)

        # Create UI buttons
        self.create_buttons()

        # Feedback message
        self.feedback_message = ""
        self.feedback_color = self.theme["text_primary"]
        self.feedback_timer = 0

    def create_buttons(self):
        """Create all UI buttons"""
        self.buttons = []

        # Reset button
        self.buttons.append(Button(20, HEIGHT - 60, 100, 40, "Reset", self.theme,
                                  action=lambda: self.reset()))

        # Theme toggle button
        theme_text = "Light Mode" if self.dark_mode else "Dark Mode"
        self.buttons.append(Button(140, HEIGHT - 60, 120, 40, theme_text, self.theme,
                                  action=lambda: self.toggle_theme()))

        # Disk count buttons
        for i in range(MIN_DISKS, MAX_DISKS + 1):
            x_pos = 280 + (i - MIN_DISKS) * 70
            self.buttons.append(Button(x_pos, HEIGHT - 60, 60, 40, str(i), self.theme,
                                      action=lambda i=i: self.reset(i)))

    def toggle_theme(self):
        """Toggle between light and dark themes"""
        self.dark_mode = not self.dark_mode
        self.theme = DARK_THEME if self.dark_mode else LIGHT_THEME
        self.disk_colors = DISK_COLORS_DARK if self.dark_mode else DISK_COLORS_LIGHT

        # Update tower colors
        for tower in self.towers:
            tower.theme_colors = self.theme

        # Update disk colors
        for tower in self.towers:
            for i, disk in enumerate(tower.disks):
                disk.theme_colors = self.theme
                disk.base_color = self.disk_colors[(disk.size) % len(self.disk_colors)]

        if self.selected_disk:
            self.selected_disk.theme_colors = self.theme
            self.selected_disk.base_color = self.disk_colors[(self.selected_disk.size) % len(self.disk_colors)]

        # Recreate buttons with new theme
        self.create_buttons()

    def draw(self, surface):
        # Fill background
        surface.fill(self.theme["background"])

        # Draw app bar
        pygame.draw.rect(surface, self.theme["primary"], (0, 0, WIDTH, 60))
        title_text = self.title_font.render("Tower of Hanoi", True, (255, 255, 255))
        surface.blit(title_text, (20, 15))

        # Draw towers
        for tower in self.towers:
            tower.draw(surface)

        # Draw disks on towers
        for tower in self.towers:
            for disk in tower.disks:
                disk.draw(surface)

        # Draw selected disk (being dragged) on top
        if self.selected_disk:
            self.selected_disk.draw(surface)

        # Draw move counter and timer
        self.draw_stats(surface)

        # Draw buttons
        for button in self.buttons:
            button.draw(surface)

        # Draw disk count indicator
        self.draw_disk_count_indicator(surface)

        # Draw feedback message
        if self.feedback_message and time.time() &amp;lt; self.feedback_timer:
            self.draw_feedback(surface)

        # Draw win animation
        if self.show_win_animation:
            self.draw_win_animation(surface)

        # Draw win message
        if self.game_won:
            self.draw_win_message(surface)

    def draw_stats(self, surface):
        # Create a card for stats
        stats_rect = (WIDTH - 200, 70, 180, 80)
        draw_material_card(surface, stats_rect, self.theme)

        # Draw move counter
        moves_text = self.main_font.render(f"Moves: {self.moves}", True, self.theme["text_primary"])
        surface.blit(moves_text, (WIDTH - 180, 85))

        # Draw timer
        minutes = int(self.elapsed_time) // 60
        seconds = int(self.elapsed_time) % 60
        time_text = self.main_font.render(f"Time: {minutes:02d}:{seconds:02d}", True,
                                         self.theme["text_primary"])
        surface.blit(time_text, (WIDTH - 180, 115))

    def draw_disk_count_indicator(self, surface):
        # Draw text above disk count buttons
        text = self.button_font.render("Number of Disks:", True, self.theme["text_primary"])
        surface.blit(text, (280, HEIGHT - 90))

        # Highlight the current disk count button
        for button in self.buttons[2:]:  # Skip Reset and Theme buttons
            button.set_active(button.text == str(self.num_disks))

    def draw_feedback(self, surface):
        text = self.main_font.render(self.feedback_message, True, self.feedback_color)
        text_rect = text.get_rect(center=(WIDTH // 2, HEIGHT - 100))
        surface.blit(text, text_rect)

    def draw_win_message(self, surface):
        # Create a card for the win message
        win_rect = (WIDTH // 2 - 150, 70, 300, 60)
        draw_material_card(surface, win_rect, self.theme, elevation=4)

        # Draw win text
        win_text = self.title_font.render("You Win!", True, self.theme["success"])
        text_rect = win_text.get_rect(center=(WIDTH // 2, 100))
        surface.blit(win_text, text_rect)

    def draw_win_animation(self, surface):
        # Generate particles
        if time.time() - self.win_animation_start &amp;lt; 2.0:
            if len(self.win_particles) &amp;lt; 100 and random.random() &amp;lt; 0.3:
                self.win_particles.append({
                    'x': random.randint(0, WIDTH),
                    'y': random.randint(0, HEIGHT // 2),
                    'size': random.randint(5, 15),
                    'color': random.choice([self.theme["primary"], self.theme["secondary"],
                                           self.theme["success"]]),
                    'speed': random.uniform(1, 3),
                    'angle': random.uniform(0, 2 * math.pi)
                })

        # Update and draw particles
        particles_to_keep = []
        for particle in self.win_particles:
            # Update position
            particle['y'] += particle['speed']
            particle['x'] += math.sin(particle['angle']) * 0.5

            # Draw particle
            pygame.draw.circle(surface, particle['color'],
                              (int(particle['x']), int(particle['y'])),
                              particle['size'])

            # Keep particles that are still on screen
            if particle['y'] &amp;lt; HEIGHT:
                particles_to_keep.append(particle)

        self.win_particles = particles_to_keep

        # End animation after a while
        if time.time() - self.win_animation_start &amp;gt; 3.0:
            self.show_win_animation = False

    def update(self):
        # Update elapsed time if game is not won
        if not self.game_won:
            self.elapsed_time = time.time() - self.start_time

        # Update disk animations
        for tower in self.towers:
            for disk in tower.disks:
                disk.update_animation()

    def handle_click(self, pos):
        # Check if a button was clicked
        for button in self.buttons:
            if button.contains_point(pos):
                button.click()
                return

        # Don't allow moves if game is won
        if self.game_won:
            return

        # Check if a tower was clicked
        if not self.selected_disk:
            # Try to select a disk
            for i, tower in enumerate(self.towers):
                if tower.disks and tower.get_top_disk().contains_point(pos):
                    self.source_tower = i
                    self.selected_disk = tower.remove_top_disk()
                    self.selected_disk.start_drag(pos)
                    self.last_valid_position = (self.selected_disk.x, self.selected_disk.y)
                    return
        else:
            # Try to place the disk
            self.end_disk_drag()

    def handle_mouse_motion(self, pos):
        # Update button hover states
        for button in self.buttons:
            button.set_hover(button.contains_point(pos))

        # Update selected disk position
        if self.selected_disk:
            self.selected_disk.update_drag(pos)

            # Highlight valid drop towers
            for i, tower in enumerate(self.towers):
                can_drop = tower.contains_point(pos) and tower.can_add_disk(self.selected_disk)
                tower.set_highlight(can_drop)

    def handle_mouse_up(self, pos):
        if self.selected_disk:
            self.end_disk_drag()

    def end_disk_drag(self):
        """End dragging the selected disk and place it on a tower if valid"""
        if not self.selected_disk:
            return

        valid_drop = False

        # Check if the disk is over a valid tower
        for i, tower in enumerate(self.towers):
            if tower.contains_point((self.selected_disk.x, self.selected_disk.y)):
                if tower.can_add_disk(self.selected_disk):
                    # Valid move
                    target_x, target_y = tower.get_top_position()
                    self.selected_disk.start_animation(target_x, target_y)
                    tower.add_disk(self.selected_disk)
                    self.selected_disk = None

                    # Update move counter
                    if i != self.source_tower:
                        self.moves += 1

                    # Check if the game is won
                    if len(self.towers[2].disks) == self.num_disks:
                        self.game_won = True
                        self.show_win_animation = True
                        self.win_animation_start = time.time()
                        self.win_particles = []

                    valid_drop = True
                    break
                else:
                    # Invalid move - show feedback
                    self.show_feedback("Invalid move: Can't place larger disk on smaller disk",
                                      self.theme["error"])

        # If no valid drop, return the disk to its original tower
        if not valid_drop:
            self.towers[self.source_tower].add_disk(self.selected_disk)
            target_x, target_y = self.last_valid_position
            self.selected_disk.start_animation(target_x, target_y)
            self.selected_disk = None

        # Clear tower highlights
        for tower in self.towers:
            tower.set_highlight(False)

    def show_feedback(self, message, color=None):
        """Show a feedback message for a short time"""
        self.feedback_message = message
        self.feedback_color = color if color else self.theme["text_primary"]
        self.feedback_timer = time.time() + 2.0  # Show for 2 seconds

    def reset(self, num_disks=None):
        """Reset the game with the specified number of disks"""
        if num_disks is not None:
            self.num_disks = min(max(num_disks, MIN_DISKS), MAX_DISKS)

        # Reset game state
        self.moves = 0
        self.start_time = time.time()
        self.elapsed_time = 0
        self.game_won = False
        self.show_win_animation = False
        self.selected_disk = None
        self.source_tower = None

        # Reset towers
        self.towers = [
            Tower(WIDTH // 4, HEIGHT - 100, self.theme),
            Tower(WIDTH // 2, HEIGHT - 100, self.theme),
            Tower(3 * WIDTH // 4, HEIGHT - 100, self.theme)
        ]

        # Initialize the first tower with disks
        for i in range(self.num_disks, 0, -1):
            disk = Disk(i - 1, self.disk_colors[(i - 1) % len(self.disk_colors)], self.theme)
            self.towers[0].add_disk(disk)

        # Update disk count indicator
        for button in self.buttons[2:]:  # Skip Reset and Theme buttons
            button.set_active(button.text == str(self.num_disks))



import random  # module-level: Game.draw_win_animation references it


def main():

    # Initialize game
    game = Game(3)

    # Main game loop
    running = True
    while running:
        # Handle events
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                if event.button == 1:  # Left mouse button
                    game.handle_click(event.pos)
            elif event.type == pygame.MOUSEMOTION:
                game.handle_mouse_motion(event.pos)
            elif event.type == pygame.MOUSEBUTTONUP:
                if event.button == 1:  # Left mouse button
                    game.handle_mouse_up(event.pos)

        # Update game state
        game.update()

        # Draw everything
        game.draw(screen)

        # Update display
        pygame.display.flip()

        # Cap the frame rate
        clock.tick(FPS)

    # Clean up
    pygame.quit()
    sys.exit()

if __name__ == "__main__":
    main()

&lt;/code&gt;&lt;/pre&gt;
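&lt;p&gt;As a sanity check on the move counter above: the classic recursive solver (a sketch, not part of the game code) finishes n disks in 2**n - 1 moves, so the default 3-disk game is optimally solvable in 7 moves.&lt;/p&gt;

```python
def solve_hanoi(n, source=0, target=2, spare=1, moves=None):
    """Recursively generate the optimal move list for n disks.

    Towers are indexed 0..2 to match the game's self.towers list;
    each move is a (from_tower, to_tower) pair.
    """
    if moves is None:
        moves = []
    if n == 0:
        return moves
    # Move the n-1 smaller disks out of the way...
    solve_hanoi(n - 1, source, spare, target, moves)
    # ...move the largest remaining disk to the target...
    moves.append((source, target))
    # ...then stack the smaller disks back on top of it.
    solve_hanoi(n - 1, spare, target, source, moves)
    return moves

print(len(solve_hanoi(3)))  # 2**3 - 1 == 7 optimal moves
```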

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>pygame</category>
      <category>amazonqdevcli</category>
      <category>buildgameschallenge</category>
    </item>
    <item>
      <title>How to burn money on VPC Endpoints</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Tue, 20 May 2025 19:21:25 +0000</pubDate>
      <link>https://dev.to/martinnanchev/how-to-burn-money-on-vpc-endpoints-3l83</link>
      <guid>https://dev.to/martinnanchev/how-to-burn-money-on-vpc-endpoints-3l83</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p" class="crayons-story__hidden-navigation-link"&gt;How (not) to Burn Money on VPC Endpoints (So You Don't Have To)&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;
          &lt;a class="crayons-logo crayons-logo--l" href="/aws-builders"&gt;
            &lt;img alt="AWS Community Builders  logo" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2794%2F88da75b6-aadd-4ea1-8083-ae2dfca8be94.png" class="crayons-logo__image"&gt;
          &lt;/a&gt;

          &lt;a href="/martinnanchev" class="crayons-avatar  crayons-avatar--s absolute -right-2 -bottom-2 border-solid border-2 border-base-inverted  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F927745%2Feec7317e-e02b-497c-9ce9-d8163e766cbc.jpeg" alt="martinnanchev profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/martinnanchev" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Martin Nanchev
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Martin Nanchev
                
              
              &lt;div id="story-author-preview-content-2009857" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/martinnanchev" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F927745%2Feec7317e-e02b-497c-9ce9-d8163e766cbc.jpeg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Martin Nanchev&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

            &lt;span&gt;
              &lt;span class="crayons-story__tertiary fw-normal"&gt; for &lt;/span&gt;&lt;a href="/aws-builders" class="crayons-story__secondary fw-medium"&gt;AWS Community Builders &lt;/a&gt;
            &lt;/span&gt;
          &lt;/div&gt;
          &lt;a href="https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;May 20 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p" id="article-link-2009857"&gt;
          How (not) to Burn Money on VPC Endpoints (So You Don't Have To)
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/aws"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;aws&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/vpc"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;vpc&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/networking"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;networking&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/costs"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;costs&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/fire-f60e7a582391810302117f987b22a8ef04a2fe0df7e3258a5f49332df1cec71e.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;13&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              1&lt;span class="hidden s:inline"&gt; comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            9 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>aws</category>
      <category>vpc</category>
      <category>networking</category>
      <category>costs</category>
    </item>
    <item>
      <title>How (not) to Burn Money on VPC Endpoints (So You Don't Have To)</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Tue, 20 May 2025 12:21:32 +0000</pubDate>
      <link>https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p</link>
      <guid>https://dev.to/aws-builders/how-not-to-burn-money-on-vpc-endpoints-so-you-dont-have-to-2f4p</guid>
      <description>&lt;ol&gt;
&lt;li&gt;
The Hidden Cost of Speed: How 'Just Make It Work' Breaks Your AWS Budget
&lt;/li&gt;
&lt;li&gt;Why is it so challenging?&lt;/li&gt;
&lt;li&gt;How does the “Just do it” approach affect pillars?&lt;/li&gt;
&lt;li&gt;How do VPC interface endpoints fit into all this?&lt;/li&gt;
&lt;li&gt;
How much does it cost? – A gentle overview of provisioning VPC interface endpoints for each new VPC

&lt;ul&gt;
&lt;li&gt;Total costs&lt;/li&gt;
&lt;li&gt;Summary&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
Optimizing Architecture for Cost Savings and Business Continuity

&lt;ul&gt;
&lt;li&gt;Why Isn't Cost Enough to Convince the Business?&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
High Level Design

&lt;ul&gt;
&lt;li&gt;Components Table 🧩&lt;/li&gt;
&lt;li&gt;Integrations Table 🔗&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Key takeaways&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Hidden Cost of Speed: How 'Just Make It Work' Breaks Your AWS Budget
&lt;/h2&gt;

&lt;p&gt;Working as a DevOps engineer is like juggling flaming swords while someone shouts, 'Can you deploy that by Friday?' Or worse, 'By 17:00 Friday.'&lt;/p&gt;

&lt;h2&gt;
  
  
  Why is it so challenging?
&lt;/h2&gt;

&lt;p&gt;Explaining that your solution should align with the six pillars of the AWS Well-Architected Framework is like asking for a seatbelt in a car that's already halfway down the hill—or opening your umbrella after the rain has passed. You need time, planning, and a roadmap—and nobody wants to hear that when the only goal is “just make it work.”&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;“Just do it”&lt;/strong&gt; is an effective strategy, but of those six pillars, cost optimization and sustainability are usually the first to be sacrificed.&lt;/p&gt;

&lt;h2&gt;
  
  
  How does the &lt;strong&gt;“Just do it”&lt;/strong&gt; approach affect pillars?
&lt;/h2&gt;

&lt;p&gt;In the race to deliver, speed beats everything. Deadlines are sacred.&lt;/p&gt;

&lt;p&gt;And what about budgets? Well, they’re not a problem—until someone sees the monthly AWS bill and starts panicking. Cost impact is often hidden behind shared billing, and nobody has tagging discipline in the early phase.&lt;/p&gt;

&lt;p&gt;Now you're asked to deploy a Graviton instance for a legacy application that doesn't even support ARM. Why wouldn’t you? After all, cost optimization is suddenly top priority—never mind compatibility.&lt;/p&gt;

&lt;p&gt;That’s when suddenly, cost optimization becomes everyone's favorite pillar.&lt;/p&gt;

&lt;h2&gt;
  
  
  How do VPC interface endpoints fit into all this?
&lt;/h2&gt;

&lt;p&gt;Initially, VPC endpoints are provisioned separately per VPC—because we prioritized speed over cost and, sometimes, even quality or security.&lt;br&gt;
With 20 VPCs, we create the same endpoints in each one, multiplying the cost twentyfold even though the traffic is almost idle. A single VPC endpoint provides 10 Gbps per Availability Zone and scales automatically up to 100 Gbps, which is enough to handle multiple workloads, even high-throughput data workloads.&lt;/p&gt;

&lt;p&gt;For those with a programming background, this is a classic example of violating the ‘Don’t Repeat Yourself’ (DRY) principle.&lt;br&gt;
Repeating the same setup in every VPC introduces unnecessary costs for a horizontally scalable networking component designed to handle large volumes of traffic efficiently—and doing it multiple times means paying multiple times.&lt;/p&gt;

&lt;p&gt;According to the documentation:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;By default, each VPC endpoint can support a bandwidth of up to 10 Gbps per Availability Zone, and automatically scales up to 100 Gbps. &lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;
  
  
  How much does it cost? A gentle overview of provisioning VPC interface endpoints per VPC in a multi-account setup, using 13 accounts (let's agree it is an unlucky number) and some randomly generated endpoint services as an example
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;account&lt;/th&gt;
&lt;th&gt;interface endpoints&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, ssm, ssmmessages, ssm-contacts, ec2, ec2messages, acm-pca, secretsmanager&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, ssm, ssmmessages, ssm-contacts, ec2, ec2messages, acm-pca, secretsmanager, sqs, airflow.api, airflow.env, airflow.ops&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, acm-pca, secretsmanager, sagemaker.api, sagemaker.runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;ssm, ec2, ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, sagemaker.api, sagemaker.runtime, execute-api, secretsmanager, states, sts, acm-pca, glue, athena, macie2, ecs, bedrock-runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;s3, sts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;ssm, ssmmessages, ec2messages, ec2, s3, logs, monitoring, kms, sts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;ssm, ec2, ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, sagemaker.api, secretsmanager, elasticfilesystem, codecommit, git-codecommit, glue, athena, application-autoscaling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;logs, monitoring, sts, glue, lambda, states, secretsmanager&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, acm-pca, secretsmanager&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;logs, monitoring, sts, ec2&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, secretsmanager, acm-pca&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;athena, logs, monitoring, kms, secretsmanager, codecommit, sagemaker.api, sagemaker.runtime, glue, git-codecommit, sts, bedrock-runtime&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;ecr.api, ecr.dkr, logs, monitoring, lambda, kms, sts, acm-pca, secretsmanager&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If we group the endpoints by frequency, assuming one environment or four environments, the numbers look like this:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;VPC Endpoint&lt;/th&gt;
&lt;th&gt;Frequency (x1)&lt;/th&gt;
&lt;th&gt;Frequency (x4)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;sts&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;56&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;logs&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;48&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;monitoring&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;td&gt;48&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;kms&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;secretsmanager&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;lambda&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;36&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ecr.api&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ecr.dkr&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;32&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;acm-pca&lt;/td&gt;
&lt;td&gt;7&lt;/td&gt;
&lt;td&gt;28&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ec2&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;24&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ssm&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sagemaker.api&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;glue&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ssmmessages&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ec2messages&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sagemaker.runtime&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;athena&lt;/td&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;12&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ssm-contacts&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;states&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;bedrock-runtime&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;s3&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;codecommit&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;git-codecommit&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sqs&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;airflow.api&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;airflow.env&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;airflow.ops&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;execute-api&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;macie2&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;ecs&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;elasticfilesystem&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;application-autoscaling&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;132&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;528&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h3&gt;
  
  
  Total costs
&lt;/h3&gt;

&lt;p&gt;The calculation of total costs for eu-west-2 (London) looks like this:&lt;/p&gt;

&lt;p&gt;Total costs for 132 endpoints for 1 environment = $0.011 (per hour) × 3 AZs × 24 × 30 × 132 = $3,136.32&lt;br&gt;
Total costs for 528 endpoints = $3,136.32 × 4 = $12,545.28&lt;br&gt;
Data processing costs for 4 environments = $5.28 (rough estimate)&lt;/p&gt;

&lt;p&gt;Total unique VPC endpoint count = 32&lt;br&gt;
Costs for 32 endpoints = $0.011 (per hour) × 3 AZs × 24 × 30 × 32 = $760.32&lt;/p&gt;

&lt;p&gt;A centralized approach for VPC endpoints in a shared-services account for prod and nonprod can provide the same scalability and high availability while reducing &lt;strong&gt;costs&lt;/strong&gt; by &lt;strong&gt;87%&lt;/strong&gt;, along with the &lt;strong&gt;administrative burden&lt;/strong&gt;. Of course, we can go a step further and replace some interface endpoints, such as S3 and DynamoDB, with gateway endpoints when we want to save money and don't need cross-VPC access: gateway endpoints are free, but they are not transitive and cannot be shared across VPCs.&lt;/p&gt;
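&lt;p&gt;As a sketch of that last point, a gateway endpoint is free of charge and only needs the route tables it should be attached to. The names &lt;code&gt;aws_vpc.main&lt;/code&gt; and &lt;code&gt;local.route_table_ids&lt;/code&gt; are illustrative assumptions, not a definitive implementation:&lt;/p&gt;

```hcl
# Gateway endpoints for S3 (and DynamoDB) carry no hourly charge,
# but they are only reachable from within their own VPC; they cannot
# be shared across peering, Transit Gateway, or Cloud WAN.
resource "aws_vpc_endpoint" "s3_gateway" {
  vpc_id            = aws_vpc.main.id
  service_name      = "com.amazonaws.eu-west-2.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = local.route_table_ids
}
```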
&lt;h4&gt;
  
  
  Summary
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;132 endpoints x 3 AZs x $0.011/hour x 24 hours x 30 days = $3,136.32/month&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For 4 environments (528 endpoints): $12,545.28/month&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Costs for 32 endpoints across 3 AZs: 0.011 USD/hour × 3 AZs × 24 hours × 30 days × 32 = $760.32/month&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Savings: ~87%&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
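&lt;p&gt;The arithmetic above can be written down as a Terraform locals block. It also shows one way to arrive at the quoted ~87%: two centralized hubs (prod and nonprod) replacing the per-VPC endpoints of four environments. The $0.011 hourly rate is taken from the figures above:&lt;/p&gt;

```hcl
locals {
  hourly_rate     = 0.011 # USD per interface endpoint, per AZ, per hour (eu-west-2)
  azs             = 3
  hours_per_month = 24 * 30

  per_endpoint_month = local.hourly_rate * local.azs * local.hours_per_month # $23.76

  decentralized = local.per_endpoint_month * 132 * 4 # 4 environments -> $12,545.28
  centralized   = local.per_endpoint_month * 32 * 2  # prod + nonprod hubs -> $1,520.64

  savings_pct = 100 * (1 - local.centralized / local.decentralized) # ~87.9%
}
```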

&lt;p&gt;&lt;strong&gt;&lt;span&gt;Note: I did not include the costs for the Route 53 resolver endpoints, which are between $180 and $270 per month depending on the number of ENIs—or, more specifically, AZs.&lt;/span&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Optimizing Architecture for Cost Savings and Business Continuity
&lt;/h2&gt;

&lt;p&gt;The costs above are not necessarily a bad thing. You get isolation between environments, and you gather extensive knowledge about how things work and how to approach stakeholders in order to improve the situation.&lt;/p&gt;
&lt;h3&gt;
  
  
  Why Isn't Cost Enough to Convince the Business?
&lt;/h3&gt;

&lt;p&gt;The business is only interested in certain things. I would say nobody cares that the administrative burden will be smaller. So how can you approach this?&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;When the interface endpoints were first deployed, they were not well secured. This means we now have a lot of networks with inconsistent security standards—each VPC becomes a snowflake. You may want to avoid saying outright that this is insecure; a more suitable approach would be:&lt;br&gt;
By standardizing the security policies and security groups, you can make sure that sensitive workloads have access only to specific buckets, specific tables, and specific APIs. This improves the security baseline and reduces the blast radius, which in turn reduces the possibility of a data leakage. (How to Sell Optimization Without Saying 'Security Is Bad')&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;By centralizing and standardizing the interface endpoints, we could achieve an 87% cost reduction. In Bulgaria, there’s a well-known satirical series called The Three Fools. In this context, it feels like we're unintentionally playing a similar role—continuing to pay thousands to AWS for redundant endpoints simply because the architecture hasn't been revisited with fresh eyes.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Note: Security is always a good selling point for the business, and nobody measures it after a change. Controlling fear and risk sells; a good example is the insurance we buy for our houses.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  High Level Design
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmyap1v8ovvo1deyhdxs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frmyap1v8ovvo1deyhdxs.png" alt="High level design" width="800" height="832"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  🧩 Components Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;ID&lt;/th&gt;
&lt;th&gt;Name&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;C1&lt;/td&gt;
&lt;td&gt;Interface Endpoints&lt;/td&gt;
&lt;td&gt;VPC Interface Endpoints&lt;/td&gt;
&lt;td&gt;Provides private access to AWS services (e.g., &lt;code&gt;ssm.eu-west-2.amazonaws.com&lt;/code&gt;).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C2&lt;/td&gt;
&lt;td&gt;Route 53 Private Hosted Zone&lt;/td&gt;
&lt;td&gt;DNS Zone&lt;/td&gt;
&lt;td&gt;Hosts private DNS entries for the services.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C3&lt;/td&gt;
&lt;td&gt;Route 53 Resolver Inbound Endpoint&lt;/td&gt;
&lt;td&gt;DNS Resolver&lt;/td&gt;
&lt;td&gt;Accepts DNS queries from the spoke VPC.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C4&lt;/td&gt;
&lt;td&gt;Shared Resolver&lt;/td&gt;
&lt;td&gt;Route 53 Resolver&lt;/td&gt;
&lt;td&gt;Used by EC2 instances in the spoke VPC to resolve private DNS.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C5&lt;/td&gt;
&lt;td&gt;AWS RAM&lt;/td&gt;
&lt;td&gt;Resource Access Manager&lt;/td&gt;
&lt;td&gt;Shares the inbound endpoint and private hosted zone with the spoke VPC.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;C6&lt;/td&gt;
&lt;td&gt;Cloud WAN Segment Network&lt;/td&gt;
&lt;td&gt;Network Routing&lt;/td&gt;
&lt;td&gt;Routes traffic between segments (e.g., from spoke to shared services).&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;EC2&lt;/td&gt;
&lt;td&gt;Amazon EC2 Instance&lt;/td&gt;
&lt;td&gt;Compute&lt;/td&gt;
&lt;td&gt;The instance initiating the request to &lt;code&gt;ssm.eu-west-2.amazonaws.com&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Spoke VPC&lt;/td&gt;
&lt;td&gt;VPC&lt;/td&gt;
&lt;td&gt;Contains the EC2 instance. CIDR: &lt;code&gt;192.168.20.X&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Centralized VPC Endpoints&lt;/td&gt;
&lt;td&gt;VPC&lt;/td&gt;
&lt;td&gt;Hosts the interface endpoints and inbound resolver. CIDR: &lt;code&gt;192.168.10.X&lt;/code&gt;.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
&lt;h2&gt;
  
  
  🔗 Integrations Table
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Integration Description&lt;/th&gt;
&lt;th&gt;Direction&lt;/th&gt;
&lt;th&gt;Protocol/Mechanism&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;EC2 in spoke VPC wants to resolve &lt;code&gt;ssm.eu-west-2.amazonaws.com&lt;/code&gt;.&lt;/td&gt;
&lt;td&gt;Spoke → Shared&lt;/td&gt;
&lt;td&gt;DNS Query via Shared Resolver&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Shared Resolver provides IP &lt;code&gt;192.168.10.4&lt;/code&gt; for the endpoint.&lt;/td&gt;
&lt;td&gt;Shared → Spoke&lt;/td&gt;
&lt;td&gt;DNS Response&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Traffic to &lt;code&gt;192.168.10.4&lt;/code&gt; is not local, forwarded to Cloud WAN uplink.&lt;/td&gt;
&lt;td&gt;Spoke → Cloud WAN&lt;/td&gt;
&lt;td&gt;VPC Route Table / Cloud WAN Routing&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Cloud WAN checks if route to another network is permitted.&lt;/td&gt;
&lt;td&gt;Cloud WAN&lt;/td&gt;
&lt;td&gt;Firewall/Policy Check&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;If permitted, traffic is routed to shared services VPC.&lt;/td&gt;
&lt;td&gt;Cloud WAN → Shared&lt;/td&gt;
&lt;td&gt;Network Forwarding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;I1&lt;/td&gt;
&lt;td&gt;Private hosted zone is associated with the shared resolver and spoke via RAM.&lt;/td&gt;
&lt;td&gt;Shared ↔ Spoke&lt;/td&gt;
&lt;td&gt;AWS RAM and Route 53 Association&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;I4&lt;/td&gt;
&lt;td&gt;RAM shares the inbound resolver with spoke VPC.&lt;/td&gt;
&lt;td&gt;Shared → Spoke&lt;/td&gt;
&lt;td&gt;AWS Resource Access Manager&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;I5&lt;/td&gt;
&lt;td&gt;Spoke EC2 sends DNS queries to shared resolver.&lt;/td&gt;
&lt;td&gt;Spoke → Shared&lt;/td&gt;
&lt;td&gt;DNS&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Prerequisite: all VPCs are connected via VPC peering, Transit Gateway, or Cloud WAN.&lt;/p&gt;
&lt;h3&gt;
  
  
  Hub VPC
&lt;/h3&gt;

&lt;p&gt;First, we need to create a centralized hub VPC that contains all of the necessary VPC interface endpoints. When you create a VPC endpoint for an AWS service, you can enable private DNS. When enabled, this setting creates an AWS-managed Route 53 private hosted zone (PHZ), which resolves the public AWS service endpoint name to the private IP of the interface endpoint. You need this disabled in order to define a centralized PHZ through a Route 53 inbound resolver, which will be shared with other accounts.&lt;br&gt;
To do this, disable it in Terraform:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_vpc_endpoint" "private_links" {
  for_each            = toset(local.vpc_endpoints_all)
  vpc_id              = aws_vpc.main.id
  service_name        = each.key
  vpc_endpoint_type   = "Interface"
  private_dns_enabled = false
#Disabling private DNS lets us override the default endpoint #resolution and use our own Route 53 hosted zone across accounts
  security_group_ids  = [aws_security_group.vpc_endpoint[each.key].id]
  policy              = data.aws_iam_policy_document.vpc_endpoints_policy.json
  subnet_ids          = local.subnets
  tags                = merge({ Name = "${var.prefix}-${each.key}-interface-endpoint" }, var.tags)
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;&lt;p&gt;As the next step, we create Route 53 private hosted zones for each endpoint and associate them with the centralized VPC from step 1.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then we create an alias A record in each hosted zone pointing to the VPC endpoint's DNS name. For example, for the STS endpoint,&lt;br&gt;
the name "sts.${data.aws_region.current.name}.amazonaws.com"&lt;br&gt;
should point to the DNS name of the newly created VPC endpoint for STS.&lt;br&gt;
This allows traffic from spoke VPCs to resolve AWS service endpoints via the centralized VPC interface endpoints and the inbound endpoint.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Then we create an inbound endpoint in Route 53 with a security group, using the Do53 protocol, in at least two subnets for high availability; it will be used by the spoke VPCs as well. The idea of the inbound resolver endpoint is to route DNS queries from spoke VPCs or other networks to the hub VPC.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;As a last step, we share the resolver (inbound endpoint) with the other accounts and define a policy for security through Resource Access Manager (RAM).&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
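&lt;p&gt;Steps 2–5 can be sketched in Terraform roughly as follows. This is a hedged sketch, not a drop-in module: the resource names, &lt;code&gt;aws_security_group.resolver&lt;/code&gt;, &lt;code&gt;local.subnets&lt;/code&gt;, and &lt;code&gt;var.spoke_account_id&lt;/code&gt; are all illustrative assumptions:&lt;/p&gt;

```hcl
# Step 2: a private hosted zone per service, associated with the hub VPC
resource "aws_route53_zone" "sts" {
  name = "sts.eu-west-2.amazonaws.com"
  vpc {
    vpc_id = aws_vpc.main.id
  }
}

# Step 3: alias A record pointing at the interface endpoint's DNS entry
resource "aws_route53_record" "sts" {
  zone_id = aws_route53_zone.sts.zone_id
  name    = "sts.eu-west-2.amazonaws.com"
  type    = "A"

  alias {
    name                   = aws_vpc_endpoint.private_links["com.amazonaws.eu-west-2.sts"].dns_entry[0].dns_name
    zone_id                = aws_vpc_endpoint.private_links["com.amazonaws.eu-west-2.sts"].dns_entry[0].hosted_zone_id
    evaluate_target_health = false
  }
}

# Step 4: inbound resolver endpoint in at least two subnets (Do53)
resource "aws_route53_resolver_endpoint" "inbound" {
  name               = "hub-inbound"
  direction          = "INBOUND"
  security_group_ids = [aws_security_group.resolver.id]

  dynamic "ip_address" {
    for_each = toset(local.subnets)
    content {
      subnet_id = ip_address.value
    }
  }
}

# Step 5: share with the spoke accounts through RAM.
# (An aws_ram_resource_association would attach the forward resolver
# rule, which targets the inbound endpoint IPs, to this share.)
resource "aws_ram_resource_share" "resolver" {
  name                      = "hub-resolver-rules"
  allow_external_principals = false
}

resource "aws_ram_principal_association" "spoke" {
  principal          = var.spoke_account_id # e.g. an account ID or OU ARN
  resource_share_arn = aws_ram_resource_share.resolver.arn
}
```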

&lt;h3&gt;
  
  
  Spoke VPCs
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Each newly created spoke VPC needs to be associated with the resolver rules shared from the hub VPC. Example:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;data "aws_route53_resolver_rules" "eu_west_2" {
  owner_id     = var.resolver_rules[terraform.workspace]
  share_status = "SHARED_WITH_ME"
}
resource "aws_route53_resolver_rule_association" "eu_west_2" {
  for_each         = data.aws_route53_resolver_rules.eu_west_2.resolver_rule_ids
  resolver_rule_id = each.value
  vpc_id           = data.terraform_remote_state.networking.outputs.network.aws_vpc.id
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Minimizing downtime
&lt;/h3&gt;

&lt;p&gt;Now you might ask: how do we move from the state with decentralized VPC interface endpoints to a centralized one with as little downtime as possible?&lt;/p&gt;

&lt;p&gt;In general, what can be done is to associate the shared resolver with the spoke VPC and then destroy the decentralized VPC endpoints in a rolling deployment from development to production, with automated tests via Systems Manager Run Command or Lambda. This guarantees that you first gather knowledge about what can fail (everything fails, all the time) and document it in Confluence or even a README.md. That gives you confidence for production and makes the big change controllable and more understandable for technical and non-technical people alike.&lt;/p&gt;
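&lt;p&gt;A minimal sketch of such an automated test, assuming the hub endpoints live in &lt;code&gt;192.168.10.0/24&lt;/code&gt; as in the design above (the document name and service list are illustrative): an SSM Command document that resolves each service endpoint from inside a spoke and fails if the answer is not a hub IP. Run it via Run Command against a test instance in each spoke before deleting that spoke's local endpoints:&lt;/p&gt;

```hcl
resource "aws_ssm_document" "endpoint_smoke_test" {
  name          = "vpc-endpoint-smoke-test"
  document_type = "Command"

  content = jsonencode({
    schemaVersion = "2.2"
    description   = "Fail if AWS service endpoints do not resolve to the centralized hub"
    mainSteps = [{
      action = "aws:runShellScript"
      name   = "resolveEndpoints"
      inputs = {
        runCommand = [
          "set -e",
          "for svc in sts logs monitoring kms secretsmanager; do",
          "  ip=$(dig +short $svc.eu-west-2.amazonaws.com | head -n1)",
          "  echo \"$svc -> $ip\"",
          "  case $ip in 192.168.10.*) ;; *) echo \"$svc is NOT centralized\"; exit 1 ;; esac",
          "done"
        ]
      }
    }]
  })
}
```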

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Rushing architecture decisions often leads to long-term cost explosions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Interface endpoints are scalable—duplicating them per VPC isn’t.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Centralizing shared services like VPC endpoints saves money and simplifies security management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;To convince stakeholders, lead with security and cost—not technical purity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;AWS gives you the tools; architecture is about using them with purpose.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Sources: &lt;a href="https://aws.amazon.com/blogs/networking-and-content-delivery/centralize-access-using-vpc-interface-endpoints/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/networking-and-content-delivery/centralize-access-using-vpc-interface-endpoints/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-access-to-vpc-private-endpoints.html" rel="noopener noreferrer"&gt;https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/centralized-access-to-vpc-private-endpoints.html&lt;/a&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>vpc</category>
      <category>networking</category>
      <category>costs</category>
    </item>
    <item>
      <title>Matt Garman keynote takeaways</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Tue, 03 Dec 2024 18:46:05 +0000</pubDate>
      <link>https://dev.to/aws-builders/matt-garman-keynote-takeways-3hjc</link>
      <guid>https://dev.to/aws-builders/matt-garman-keynote-takeways-3hjc</guid>
      <description>&lt;p&gt;Matt Garman  unveiled at AWS re:Invent 2024 a suite of transformative updates across compute, storage, databases, AI, and real-world applications, redefining what's possible in the cloud. Here’s a deep dive into the innovations that can drive your business forward.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀Compute Breakthroughs
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;P6 EC2 Instances&lt;br&gt;
With NVIDIA’s most advanced GPUs, these instances deliver 2.5x faster machine learning (ML) performance at reduced costs compared to the P5.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon EC2 Trainium 2 Instances&lt;br&gt;
Achieving 20.8 petaflops, these ML powerhouses offer 30–40% better performance, enabling partners like Databricks to accelerate workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-ec2-trn2-instances-available/" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-ec2-trn2-instances-available/&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Trainium Ultra Server&lt;br&gt;
Featuring 83 petaflops, this single-node ultra server supports high-performance computing for ML and LLM models at unprecedented speed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Trainium 3 (Coming in 2025)&lt;br&gt;
Built on a 3nm process, it’s set to deliver 40% greater efficiency, setting a new benchmark for ML hardware.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  💾 Storage Revolution
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Amazon S3 Table Buckets
Turn S3 into a high-performance data lake and optimize query performance over tabular data stored in S3! With Apache Iceberg, you can run SQL queries directly, achieving 3x faster queries and 10x higher transactions per second.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-s3-tables-apache-iceberg-tables-analytics-workloads/" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-s3-tables-apache-iceberg-tables-analytics-workloads/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/aws/new-amazon-s3-tables-storage-optimized-for-analytics-workloads/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/new-amazon-s3-tables-storage-optimized-for-analytics-workloads/&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;S3 Metadata Discovery
Simplify object discovery with SQL-accessible metadata stored in Iceberg tables, unlocking faster, more intuitive workflows.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-s3-metadata-preview" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-s3-metadata-preview&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Zero-ETL integrations across SaaS applications, Aurora, Redshift, and DynamoDB&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  📊 Database Innovations
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon Aurora DSQL&lt;br&gt;
A multi-region, low-latency database offering 5-nines availability. Aurora DSQL uses satellite time sync to nearly break the CAP theorem and is 4x faster than Google Spanner.&lt;br&gt;
&lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-aurora-dsql-preview/" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-aurora-dsql-preview/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;DynamoDB Global Tables&lt;br&gt;
These active-active, highly available tables ensure strong consistency and resilience for mission-critical workloads.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-dynamodb-global-tables-previews-multi-region-strong-consistency" rel="noopener noreferrer"&gt;https://aws.amazon.com/about-aws/whats-new/2024/12/amazon-dynamodb-global-tables-previews-multi-region-strong-consistency&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  🧠 AI &amp;amp; Inference Upgrades
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Amazon Nova&lt;br&gt;
AWS's cutting-edge frontier models compete with Claude 3, Gemini, and others, offering 75% lower costs for text, image, and video generation.&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/introducing-amazon-nova-frontier-intelligence-and-industry-leading-price-performance/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nova Canvas: Create studio-quality images.&lt;br&gt;
Nova Reel: Generate professional videos.&lt;br&gt;
Nova Speech-to-Speech &amp;amp; Any-to-Any: Transform the possibilities of AI.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon Q: revolutionizing development with AI-powered tools.&lt;br&gt;
&lt;a href="https://aws.amazon.com/blogs/aws/new-amazon-q-developer-agent-capabilities-include-generating-documentation-code-reviews-and-unit-tests/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/new-amazon-q-developer-agent-capabilities-include-generating-documentation-code-reviews-and-unit-tests/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unit test automation: Amazon Q generates and applies tests automatically.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon Q Code Reviews &amp;amp; Legacy Documentation generation: Achieve precision with ease.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Modernization: Amazon Q transforms VMware workloads and .NET apps into high-performing Linux-based solutions in hours, not years.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bedrock Guardrails &amp;amp; Distillation:&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bedrock Model distillation makes AI lighter and 75% cheaper.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Ensure safety with sensitive data redaction and hallucination reduction.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Bedrock multi-agent collaboration - &lt;a href="https://aws.amazon.com/blogs/aws/introducing-multi-agent-collaboration-capability-for-amazon-bedrock/" rel="noopener noreferrer"&gt;https://aws.amazon.com/blogs/aws/introducing-multi-agent-collaboration-capability-for-amazon-bedrock/&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Next generation of Amazon Sagemaker &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The SageMaker Unified Studio - access all of your data with the best tools, like EMR, Glue, Studio, and Notebooks&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Amazon SageMaker Lakehouse - use an Apache Iceberg data lake to query data no matter how or where it is stored in AWS, by leveraging the SageMaker unified interface&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🏆 Real Stories of Impact
&lt;/h2&gt;

&lt;p&gt;Apple &amp;amp; AWS Collaboration&lt;br&gt;
From Graviton 3 optimizations to accident detection using K-nearest neighbors, AWS drives ML innovation for Apple’s private LLMs, hosted on Trainium instances.&lt;/p&gt;

&lt;p&gt;JP Morgan Chase&lt;br&gt;
Transitioning 6,000 applications to AWS, the financial giant enhances resilience, reduces costs, and supports global business growth.&lt;/p&gt;

&lt;p&gt;Genentech&lt;br&gt;
Transforming drug discovery with generative AI, their systems enable scientists to ask complex questions and receive actionable insights in real-time.&lt;/p&gt;

&lt;p&gt;PagerDuty Advance - the new GenAI PagerDuty offering that keeps you well informed and helps you follow and engage with issues. Less time on issues and more time for building, with Amazon Q for PagerDuty Advance.&lt;/p&gt;

&lt;p&gt;The Road Ahead&lt;br&gt;
AWS continues to redefine the limits of cloud innovation. Whether you’re modernizing infrastructure, exploring advanced AI models, or optimizing workloads, these advancements empower organizations to scale faster, innovate deeper, and inspire change.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>communitybuilders</category>
      <category>reinvent</category>
      <category>awsreinvent</category>
    </item>
    <item>
      <title>CloudWatch and Config Unleashed: Smart Features for Effortless Monitoring Mastery</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Sat, 20 Jan 2024 09:51:32 +0000</pubDate>
      <link>https://dev.to/aws-builders/cloudwatch-and-config-unleashed-smart-features-for-effortless-monitoring-mastery-233m</link>
      <guid>https://dev.to/aws-builders/cloudwatch-and-config-unleashed-smart-features-for-effortless-monitoring-mastery-233m</guid>
      <description>&lt;p&gt;At the heart of re:Intent 2023, the focal points were the groundbreaking advancements in generative AI and the revolutionary Bedrock service. The effect of these innovations extended to various AWS services, notably transforming Amazon CloudWatch into the ultimate single-pane-of-glass solution for monitoring your Cloud environment and making compliance a little bit easier wit Config enhancements &lt;/p&gt;

&lt;p&gt;Now, let's get down to the good stuff — the enhancements that have not just fine-tuned but totally transformed the way we do monitoring. Imagine a world where keeping an eye on things is not just smarter but as easy as having a conversation, more human-friendly:&lt;/p&gt;

&lt;p&gt;Amazon CloudWatch - natural language query generator - Did you ever wonder how you can create a Logs Insights query without digging into complex syntax? Now it is easier than ever: you write your question in natural language, and the query generator produces the Logs Insights query for you. CloudWatch becomes your responsive companion. Let's try the new feature in North Virginia:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To try the new feature, we go to CloudWatch in the console in us-east-1 -&amp;gt; Logs Insights -&amp;gt; Query generator and type the following prompt:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Show me the 10 most denied aws api calls in current account
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;The query generator will do the magic for you and suggest the following query:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;fields errorCode, eventName, userIdentity.userName
| filter errorCode = "AccessDenied"
| stats count(*) as accessCount by errorCode, eventName, userIdentity.userName  
| sort accessCount desc
| limit 10
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
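&lt;p&gt;The generated query can also be run programmatically. Below is a minimal Python sketch using boto3's &lt;code&gt;start_query&lt;/code&gt;; the CloudTrail log group name in the commented usage is a hypothetical placeholder:&lt;/p&gt;

```python
import time

DENIED_CALLS_QUERY = """fields errorCode, eventName, userIdentity.userName
| filter errorCode = "AccessDenied"
| stats count(*) as accessCount by errorCode, eventName, userIdentity.userName
| sort accessCount desc
| limit 10"""

def build_start_query_params(log_group, query, hours=24, now=None):
    """Build the kwargs for CloudWatchLogs.Client.start_query over the last `hours`."""
    end = int(now if now is not None else time.time())
    return {
        "logGroupName": log_group,
        "startTime": end - hours * 3600,  # epoch seconds
        "endTime": end,
        "queryString": query,
    }

# Hedged usage (requires AWS credentials; log group name is an assumption):
# import boto3
# logs = boto3.client("logs", region_name="us-east-1")
# qid = logs.start_query(**build_start_query_params(
#     "aws-cloudtrail-logs", DENIED_CALLS_QUERY))["queryId"]
# results = logs.get_query_results(queryId=qid)
```
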



&lt;ol&gt;
&lt;li&gt;The generated query will not run until you click &lt;strong&gt;Run Query&lt;/strong&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;ol&gt;
&lt;li&gt;The results are really satisfying &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe9liqpjgryvtmn6bvsf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqe9liqpjgryvtmn6bvsf.png" alt="Query generator" width="800" height="465"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In conclusion, the new feature makes querying logs more user-friendly and more use-case driven. You focus on the business need, not on how to express it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CloudWatch Logs anomaly detection&lt;/strong&gt; - How can we identify the root cause when the volume of log data is huge? With the anomaly detection feature this is possible: it summarises the logs and helps you find the needle in the haystack, and it can even point you to the root cause of an issue. You activate it per log group; training usually takes between 5 minutes and 24 hours depending on the size of the log group.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis0rzhs78gpfy5bfcixh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fis0rzhs78gpfy5bfcixh.png" alt="Log anomaly detection" width="800" height="166"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AWS Config natural language query processor: simplifying resource compliance&lt;/strong&gt;&lt;br&gt;
Managing compliance and resource configurations can be complex and time-consuming, but not anymore. The AWS Config natural language query processor lets you find information effortlessly by posing questions in plain language. Whether you're looking for compliant or noncompliant resources, encrypted or unencrypted volumes, or secrets with no rotation, simply ask, without the need for SQL expertise. And for those who want to take it a step further, advanced queries are now within reach, making the seemingly complex simple. &lt;/p&gt;

&lt;p&gt;If we head to the root account and go to AWS Config -&amp;gt; Advanced queries -&amp;gt; Query editor -&amp;gt; Natural language query processor (in preview) and type the following prompt:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Show me all EC2, that have unencrypted volume
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjby12ohs42iglu8vy77f.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjby12ohs42iglu8vy77f.png" alt="Config Query processing" width="800" height="378"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The result shows the EBS volumes that are unencrypted, although I asked for instances. On the positive side, the feature is still in preview; we are at the beginning, and more enhancements that ease our day-to-day work will surely arrive in the coming years. It is definitely a good start and something I will continue to use.&lt;/p&gt;
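&lt;p&gt;For comparison, a hand-written advanced query for unencrypted EBS volumes would look something like the fragment below, in AWS Config's SQL-like query language (field names shown here are a sketch; check the Config resource schema for the exact properties):&lt;/p&gt;

```
SELECT
  resourceId,
  resourceType,
  configuration.encrypted
WHERE
  resourceType = 'AWS::EC2::Volume'
  AND configuration.encrypted = 'false'
```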

&lt;p&gt;As we explore these enhancements, it's evident that CloudWatch is not just another monitoring tool; it's a companion in your Cloud journey, adapting to your needs with intelligence and ease. Config is becoming a smarter CMDB for cloud resources. The future of monitoring and compliance is here, and it's not just about data; it's about making data work for you. And this is just the beginning :) &lt;/p&gt;

</description>
      <category>awscommunitybuilders</category>
      <category>cloudwatch</category>
      <category>monitoring</category>
    </item>
    <item>
      <title>Centralized S3 backup with AWS Backup in terraform</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Tue, 09 Jan 2024 10:19:02 +0000</pubDate>
      <link>https://dev.to/aws-builders/centralized-s3-backup-with-aws-backup-in-terraform-46bd</link>
      <guid>https://dev.to/aws-builders/centralized-s3-backup-with-aws-backup-in-terraform-46bd</guid>
      <description>&lt;h2&gt;
  
  
  Scenario
&lt;/h2&gt;

&lt;p&gt;A day in the life of a solution architect revolves around gathering requirements, both functional and non-functional. RTO and RPO are part of the non-functional requirements. In this post we will look at disaster recovery for S3 buckets under different RTOs and RPOs with the help of the AWS Backup service. The RTOs and RPOs are defined per use case and data classification, namely: standard_data, sensitive_data, curated_data, curated_sensitive_data, critical_data and critical_sensitive_data. The idea is to be able to recreate the whole infrastructure from scratch using the data held in AWS Backup.&lt;/p&gt;

&lt;p&gt;Object Lock is a good start for protecting the objects in a bucket for a specific period of time. It comes in two modes, Compliance and Governance. Simply put, Governance allows an object to be overwritten or deleted if you have the specific permission to do so, while Compliance is stricter and protects objects from deletion even by the root user. Versioning is a prerequisite for Object Lock (write once, read many). If you put an object into a bucket that already contains a protected object with the same key, Amazon S3 creates a new version of that object, and the existing protected version remains locked according to its retention configuration.&lt;/p&gt;

&lt;p&gt;Here are the defined scenarios and requirements:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Backup level&lt;/th&gt;
&lt;th&gt;AWS Backup enabled&lt;/th&gt;
&lt;th&gt;Object lock&lt;/th&gt;
&lt;th&gt;Vault copy&lt;/th&gt;
&lt;th&gt;Point in time recovery&lt;/th&gt;
&lt;th&gt;Retention&lt;/th&gt;
&lt;th&gt;Delete backups after days&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;standard_data&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;15 days&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;sensitive_data&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;15 days&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;curated_data&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;35 days&lt;/td&gt;
&lt;td&gt;365&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;curated_sensitive_data&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;35 days&lt;/td&gt;
&lt;td&gt;365&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;critical_data&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;35 days&lt;/td&gt;
&lt;td&gt;1825&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;critical_sensitive_data&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;X&lt;/td&gt;
&lt;td&gt;35 days&lt;/td&gt;
&lt;td&gt;Set by the business in days&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Table 1: AWS Backup use cases&lt;/p&gt;

&lt;p&gt;We will look at each of the scenarios starting with the one, that does not require AWS backup:&lt;/p&gt;

&lt;h3&gt;
  
  
  Standard data
&lt;/h3&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

# DataBackupLevel = Standard Data, Write Once Read Many
module "standard_data" {
  source               = "terraform-aws-modules/s3-bucket/aws"
  bucket               = "project-standard-data-martin-n"
  attach_public_policy = false
  versioning = {
    status     = true
    mfa_delete = false
  }
  object_lock_enabled = true
  object_lock_configuration = {
    rule = {
      default_retention = {
        mode = "GOVERNANCE"
        days = 15
      }
    }
  }
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This will protect our standard data from deletion for the duration of the object lock.&lt;br&gt;
All the other use cases (sensitive_data, curated_data, curated_sensitive_data, critical_data, critical_sensitive_data) will make use of AWS Backup as the backup service. &lt;/p&gt;

&lt;h4&gt;
  
  
  Restoration of standard data objects
&lt;/h4&gt;

&lt;p&gt;The restoration of objects is done via the AWS CLI or an AWS SDK and consists of the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Use the ListObjectVersions API to get the version ID of the object version you want to restore.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Call RestoreObject and pass the version ID as a parameter. You can also specify the number of days the restored object will be available.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The restored object will be available in S3 under a new object name. You can download or access it like any other object.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
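&lt;p&gt;Step 1 of the flow above can be sketched in Python. The helper below selects the most recent non-current version ID from a ListObjectVersions response; the bucket and key names in the commented usage are hypothetical:&lt;/p&gt;

```python
def newest_noncurrent_version(list_versions_response, key):
    """Pick the most recent non-current version ID for `key` from a
    ListObjectVersions response (step 1 of the restore flow)."""
    versions = [
        v for v in list_versions_response.get("Versions", [])
        if v["Key"] == key and not v.get("IsLatest", False)
    ]
    if not versions:
        return None
    return max(versions, key=lambda v: v["LastModified"])["VersionId"]

# Hedged usage (requires AWS credentials; names are placeholders):
# import boto3
# s3 = boto3.client("s3")
# resp = s3.list_object_versions(Bucket="project-standard-data-martin-n",
#                                Prefix="report.csv")
# version_id = newest_noncurrent_version(resp, "report.csv")
```
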

&lt;p&gt;With Object Lock this works a little differently, because you put the restored data back under the same object key, which creates a new version. To automate the process at scale you can use S3 Batch Operations.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assumptions
&lt;/h2&gt;

&lt;p&gt;To quote Werner Vogels: "Everything fails, all the time."&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd59ce7huvut3val2wth.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwd59ce7huvut3val2wth.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;There are multiple disaster recovery strategies to tackle data failures as shown in the graphic below:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7xm2mwc67umwf9za87n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7xm2mwc67umwf9za87n.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Backup is the strategy with the lowest cost but the highest RTO and RPO, which can run into hours. In this post we will limit ourselves to this first strategy.&lt;/p&gt;

&lt;p&gt;Imagine if we suddenly can't reach our account and need to start from scratch with our entire setup. It's crucial to have our data backed up, and we should also make sure those backups are stored in another account to ensure access in case of a disaster. That's where AWS Backup comes in handy, providing a straightforward solution to keep our data safe and accessible.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AWS Backup?
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;AWS Backup makes it easy to centrally configure backup policies and monitor backup activity for AWS resources, such as Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Block Store (EBS) volumes, Amazon Relational Database Service (RDS) databases, Amazon DynamoDB tables, Amazon Elastic File System (EFS) file systems, Amazon FSx file systems, and AWS Storage Gateway volumes.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We will use the service for S3 backups only.&lt;/p&gt;

&lt;h2&gt;
  
  
  How can we store backups from one account in another account?
&lt;/h2&gt;

&lt;p&gt;A simple solution is to copy backups from the vault in one account to a vault in another account. A vault is storage for snapshots, AMIs, or, more generally, for backups.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2og2582t6bllxrv6abu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk2og2582t6bllxrv6abu.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Central AWS account
&lt;/h2&gt;

&lt;p&gt;A good way to begin is to define the vault in the central account. A KMS key will be used to encrypt the vault holding the snapshots, and a grant will allow the workload account to copy backups into it.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_kms_key" "vault_kms" {
  description = "Vault kms key for encryption"
  policy      = &amp;lt;&amp;lt;POLICY
{
    "Id": "vault-kms-policy",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
            },
            "Action": [
                "kms:ListResourceTags",
                "kms:GetKeyPolicy",
                "kms:Describe*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow access for Key Administrators",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/github_ci"
            },
            "Action": [
                "kms:Create*",
                "kms:Describe*",
                "kms:Enable*",
                "kms:List*",
                "kms:Put*",
                "kms:Update*",
                "kms:Revoke*",
                "kms:Disable*",
                "kms:Get*",
                "kms:Delete*",
                "kms:TagResource",
                "kms:UntagResource",
                "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow use of the key",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/kms_usage",
                    "arn:aws:iam::${var.workload_account_id}:root"
                ]
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        },
        {
            "Sid": "Allow attachment of persistent resources",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/vault_role",
                    "arn:aws:iam::${var.workload_account_id}:root"
                ]
            },
            "Action": [
                "kms:CreateGrant",
                "kms:ListGrants",
                "kms:RevokeGrant"
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {
                    "kms:GrantIsForAWSResource": "true"
                }
            }
        }
    ]
}
POLICY
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;As a next step we define an alias for the key so that it is more human-friendly and readable in the AWS console.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_kms_alias" "aws_kms_alias" {
  name          = "alias/aws-backup-kms"
  target_key_id = aws_kms_key.vault_kms.key_id
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The last step is to define the central vault, used to aggregate snapshots from workload account:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_backup_vault" "central_vault" {
  name        = "central-aws-vault-S3"
  kms_key_arn = aws_kms_key.vault_kms.arn
}


resource "aws_backup_vault_lock_configuration" "locker" {
  backup_vault_name   = aws_backup_vault.central_vault.name
  changeable_for_days = 3
  max_retention_days  = 35
  min_retention_days  = 15
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Finally, we create a vault policy that allows snapshots to be copied into the central vault:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;

resource "aws_backup_vault_policy" "central_vault_allowance" {
  backup_vault_name = aws_backup_vault.central_vault.name
  policy = &amp;lt;&amp;lt;POLICY
{
  "Version": "2012-10-17",
  "Id": "default",
  "Statement": [
    {
      "Sid": "Allow the workload account to copy into the central vault",
      "Effect": "Allow",
      "Action": "backup:CopyIntoBackupVault",
      "Resource": "*",
      "Principal": {
        "AWS": "arn:aws:iam::${var.workload_account_id}:root"
      }
    }
  ]
}
POLICY
}


&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is all that is needed to define our central backup repository for S3.&lt;/p&gt;

&lt;h2&gt;
  
  
  Workload account
&lt;/h2&gt;

&lt;p&gt;Now we will look at each of the use cases from Table 1.&lt;/p&gt;

&lt;h3&gt;
  
  
  Sensitive data
&lt;/h3&gt;

&lt;p&gt;For monitoring we will use SNS to send us a notification in PagerDuty or Slack (via a Lambda function) in case of failed backups. For the backup selection we use direct resource selection, although selection by tag may be more appropriate for most use cases. AWS Backup also supports point-in-time recovery by enabling continuous backup in the backup plan.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


module "aws_backup_s3_sensitive_data" {
  source     = "lgallard/backup/aws"
  vault_name = "${local.service_name}-s3-backup-sensitive-data"
  plan_name  = "${local.service_name}-s3-backup-sensitive-data-plan"
  notifications = {
    sns_topic_arn       = data.aws_sns_topic.failed_backups.arn
    backup_vault_events = ["BACKUP_JOB_FAILED"]
  }

  rules = [
    {
      name              = "${local.service_name}-s3-backup-sensitive-data-rule"
      schedule          = "cron(5 2 * * ? *)"
      start_window             = 60
      completion_window        = 180
      enable_continuous_backup = true
      lifecycle = {
        cold_storage_after = null
        delete_after       = 15
      }
    }
  ]
  selections = [
    {
      name      = "${local.service_name}-s3--sensitive-data-selection"
      resources = ["arn:aws:s3:::prefix-dummy-${local.environment}-database-sensitive-data-bucket", "arn:aws:s3:::dummy-${local.environment}-sensitive-data-logs"]
    }
  ]
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Curated data and curated sensitive data
&lt;/h3&gt;

&lt;p&gt;Curated data refers to information that has been carefully selected, organized, and maintained to ensure accuracy, relevance, and quality. Think of it like a well-maintained library where librarians carefully choose and organize books to provide a reliable and valuable collection for visitors. In the context of data, curators, often experts in a specific field, carefully choose, validate, and organize data to create a trustworthy and useful dataset. This process helps ensure that the information is reliable, up-to-date, and suitable for specific purposes, making it easier for users to find and use the data they need.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


module "aws_backup_s3_curated_data" {
  source     = "lgallard/backup/aws"
  vault_name = "${local.service_name}-s3-curated-data-backup"
  plan_name  = "${local.service_name}-s3-backup-curated-data-plan"
  notifications = {
    sns_topic_arn       = data.aws_sns_topic.failed_backups.arn
    backup_vault_events = ["BACKUP_JOB_FAILED"]
  }

  rules = [
    {
      name              = "${local.service_name}-s3-backup-curated-data-rule"
      schedule          = "cron(5 2 * * ? *)"
      start_window             = 60
      completion_window        = 180
      enable_continuous_backup = true
      lifecycle = {
        cold_storage_after = null
        delete_after       = 35
      }
    }
  ]
  selections = [
    {
      name      = "${local.service_name}-s3-selection"
      resources = ["arn:aws:s3:::prefix-dummy-${local.environment}-database-curated-data-bucket", "arn:aws:s3:::dummy-${local.environment}-curated-data-logs"]
    }
  ]
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;h3&gt;
  
  
  Critical data and critical sensitive data
&lt;/h3&gt;

&lt;p&gt;This data brings the most value to the business and helps your executives make decisions using a data-driven approach. These backups will additionally be copied to the central account.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;


module "aws_backup_critical" {
  source     = "lgallard/backup/aws"
  vault_name = "${local.service_name}-s3-backup-critical-data"
  plan_name  = "${local.service_name}-s3-backup-critical-data-plan"
  notifications = {
    sns_topic_arn       = data.aws_sns_topic.failed_backups.arn
    backup_vault_events = ["BACKUP_JOB_FAILED"]
  }

  rules = [
    {
      name              = "${local.service_name}-s3-backup-critical-data-rule"
      schedule          = "cron(5 2 * * ? *)"
      copy_actions = [
        {
          lifecycle = {
            cold_storage_after = 90
            delete_after       = 1825
          },
          destination_vault_arn = var.central_s3_vault_arn
        },
      ]
      start_window             = 60
      completion_window        = 180
      enable_continuous_backup = true
      lifecycle = {
        cold_storage_after = null
        delete_after       = 35
      }
    }
  ]
  selections = [
    {
      name      = "${local.service_name}-s3-critical-data-selection"
      resources = ["arn:aws:s3:::prefix-dummy-${local.environment}-database-critical-data-bucket", "arn:aws:s3:::dummy-${local.environment}-critical-data-logs"]
    }
  ]
}



&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Now that we have the backups, how can we restore objects? Let's have a look in the next section.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data restoration
&lt;/h3&gt;

&lt;p&gt;Go to AWS Backup -&amp;gt; Backup vaults -&amp;gt; select the recovery point that you want and click Actions -&amp;gt; Restore.&lt;/p&gt;
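&lt;p&gt;The same restore can also be started from code. A minimal boto3 sketch (the vault name and ARNs are placeholders) reads the recovery point's restore metadata and passes it to &lt;code&gt;start_restore_job&lt;/code&gt;:&lt;/p&gt;

```python
def build_restore_job_args(recovery_point_arn, iam_role_arn, metadata):
    """Assemble the kwargs for Backup.Client.start_restore_job."""
    return {
        "RecoveryPointArn": recovery_point_arn,
        "IamRoleArn": iam_role_arn,
        "Metadata": metadata,
    }

# Hedged usage (requires AWS credentials; vault name and ARNs are hypothetical):
# import boto3
# backup = boto3.client("backup")
# meta = backup.get_recovery_point_restore_metadata(
#     BackupVaultName="central-aws-vault-S3",
#     RecoveryPointArn=rp_arn,
# )["RestoreMetadata"]
# job = backup.start_restore_job(**build_restore_job_args(rp_arn, role_arn, meta))
```
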

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpu0f8p8pnzz4n2bjdlk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwpu0f8p8pnzz4n2bjdlk.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, ensuring the integrity and accessibility of data is a critical part of a solution architect's role, and AWS Backup proves to be an indispensable tool in this endeavour. The outlined disaster recovery scenarios, the use cases, and the step-by-step configuration of the central and workload accounts underscore the significance of AWS Backup in safeguarding data across various classifications. Whether dealing with standard data or critical sensitive data, the emphasis on backup, Object Lock and versioning ensures a robust data protection strategy. This approach not only addresses the need for backups but also covers restoration procedures, making AWS Backup an essential component for maintaining data resilience in the ever-evolving landscape of cloud computing.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Centralizing Cloudwatch observability - Past, Present and Future</title>
      <dc:creator>Martin Nanchev</dc:creator>
      <pubDate>Tue, 16 May 2023 16:13:40 +0000</pubDate>
      <link>https://dev.to/aws-builders/centralizing-cloudwatch-observability-past-present-and-future-3527</link>
      <guid>https://dev.to/aws-builders/centralizing-cloudwatch-observability-past-present-and-future-3527</guid>
      <description>&lt;h2&gt;
  
  
  What is observability?
&lt;/h2&gt;

&lt;p&gt;According to Wikipedia, observability is a measure of how well the internal states of a system can be inferred from knowledge of its external outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In simple words&lt;/strong&gt;, we measure how the internal Lego blocks of a system (AWS services) perform by using their outputs, or we try to monitor from the customer's perspective. CloudWatch plays a central role in AWS for monitoring, logging, alerting and auditing, but there is one catch: when you have 200 accounts, it is really difficult to develop an observability strategy at scale. &lt;/p&gt;

&lt;h2&gt;
  
  
  Past or cross account observability using Cloudwatch
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;Dark ages before OAM&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ol&gt;
&lt;li&gt;You had to go to every single account and check its dashboards and logs. &lt;/li&gt;
&lt;li&gt;There was always the alternative of streaming CloudWatch logs with Kinesis Data Firehose and storing them in S3; afterwards you create a schema from the S3 logs using Glue and query them via Athena. If the bucket was encrypted with KMS and you forgot to switch off decryption in the queries, this could lead to an unexpected rise in the bill.&lt;/li&gt;
&lt;li&gt;You could always implement OpenSearch to store the logs.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To summarise, cross-account observability was difficult without OAM (although you could use a cross-account role to put traces from ECS tasks into X-Ray).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Present - OAM
&lt;/h2&gt;

&lt;p&gt;Last year a new feature called Observability Access Manager (OAM) was presented. The idea is to create a sink account, which receives the CloudWatch logs, metrics and traces via sharing. Each source account then creates a link to the sink account to share its logs, metrics and traces.&lt;br&gt;
&lt;strong&gt;Costs:&lt;/strong&gt; sharing is free for logs and metrics, but not for traces; copies of the traces are billed. And there is the standard fee for creating dashboards, alarms, etc.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How this will look like if we have multiple accounts?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;There are two sink accounts to collect observability data - for production and non-production accounts&lt;/li&gt;
&lt;li&gt;There are four accounts to share data - development and qa, which share data with the monitoring non-production account. Production and staging, which share data with the monitoring production account.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;To show it in more graphical way:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1privegcbt44toh0uel.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1privegcbt44toh0uel.png" alt="Observability access manager account structure"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Everything sounds perfect, but can we automate the OAM sink and sources, that share observability data?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;First we need to create a sink in the central monitoring account using Terraform. This account will collect observability data from the sources:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
locals {
  resource_types = coalescelist(var.services,["AWS::Logs::LogGroup",
    "AWS::CloudWatch::Metric",
  "AWS::XRay::Trace"])
}

resource "aws_oam_sink" "sink" {
  name = var.name
  tags = var.tags
}


resource "aws_oam_sink_policy" "observability_data_sink_policy" {
  sink_identifier = aws_oam_sink.sink.id
  policy          = data.aws_iam_policy_document.sink_policy.json
}

data "aws_iam_policy_document" "sink_policy" {

  dynamic "statement" {
    for_each = length(var.account_list) &amp;gt; 0 ? [1] : []
    content {
      actions = ["oam:CreateLink", "oam:UpdateLink"]
      principals {
        type        = "AWS"
        identifiers = var.account_list
      }
      resources = ["*"]
      effect    = "Allow"
      condition {
        test     = "ForAllValues:StringEquals"
        values   = local.resource_types
        variable = "oam:ResourceTypes"
      }

    }
  }

  dynamic "statement" {
    for_each = length(var.org_unit_list) &amp;gt; 0 ? [1] : []
    content {
      actions = ["oam:CreateLink", "oam:UpdateLink"]
      principals {
        type        = "*"
        identifiers = ["*"]
      }
      resources = ["*"]
      effect    = "Allow"
      condition {
        test     = "ForAllValues:StringEquals"
        values   = local.resource_types
        variable = "oam:ResourceTypes"
      }
      condition {
        test     = "ForAnyValue:StringEquals"
        values   = var.org_unit_list
        variable = "aws:PrincipalOrgPaths"
      }
    }
  }
}


variable "name" {
  type        = string
  description = "The name of the observability access manager sink, which collects observability data - logs, metrics, traces"
  default     = "monitoring-sink"
}

variable "tags" {
  type        = map(string)
  description = "Tags to be added to the specific resource"
  default     = {}
}

variable "org_unit_list" {
  type        = list(string)
  description = "list of Organizational units"
  default = [
    "o-aaausawxze/ou-tv7w-dd1211vs",
  ]
}

variable "account_list" {
  description = "List of accounts"
  type        = list(string)
  default     = ["123456789102"]
}

variable "services" {
  type        = list(string)
  default     = []
  description = "List of services to be shared. Possible values are: AWS::Logs::LogGroup, AWS::CloudWatch::Metric, AWS::XRay::Trace"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The resource-based policy specifies which OUs or account IDs can act as sources and share data with the sink.&lt;/p&gt;
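&lt;p&gt;For illustration, the first (account-based) statement above renders to roughly the following IAM policy JSON, assuming the default variable values (the exact rendered output may differ slightly):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "123456789102" },
      "Action": ["oam:CreateLink", "oam:UpdateLink"],
      "Resource": "*",
      "Condition": {
        "ForAllValues:StringEquals": {
          "oam:ResourceTypes": [
            "AWS::Logs::LogGroup",
            "AWS::CloudWatch::Metric",
            "AWS::XRay::Trace"
          ]
        }
      }
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;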

&lt;p&gt;Once the sink is ready, we can output its ARN:&lt;br&gt;
&lt;code&gt;&lt;br&gt;
output "sink_arn" {&lt;br&gt;
  value = aws_oam_sink.sink.arn&lt;br&gt;
}&lt;br&gt;
&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The ARN is needed by the link (the source) to connect to the sink (the monitoring account). A terraform_remote_state data source can be used to read this output from the sink's Terraform configuration.&lt;br&gt;
A simple hardcoded example looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_oam_link" "source_to_sink" {
  label_template  = "$AccountName"
  resource_types  = local.resource_types
  sink_identifier = var.sink_identifier
  tags            = var.tags
}
locals {
  resource_types = coalescelist(var.services, ["AWS::Logs::LogGroup",
    "AWS::CloudWatch::Metric",
  "AWS::XRay::Trace"])
}

variable "sink_identifier" {
  description = "Sink identifier"
  default     = "arn:aws:oam:eu-west-1:123456789102:sink/ed278766-dc6f-4417-ae11-8bf09e9dc329"
  type        = string
}

variable "tags" {
  type        = map(string)
  description = "Tags to be added to the specific resource"
  default     = {}
}

variable "services" {
  type        = list(string)
  default     = []
  description = "List of services to be shared. Possible values are: AWS::Logs::LogGroup, AWS::CloudWatch::Metric, AWS::XRay::Trace"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
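&lt;p&gt;Instead of hardcoding the ARN, the terraform_remote_state data source mentioned earlier can read it from the sink configuration's state. A sketch, assuming the state lives in an S3 bucket — the bucket and key names here are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Read the sink configuration's outputs from its remote state
data "terraform_remote_state" "monitoring" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state" # placeholder bucket name
    key    = "monitoring/sink.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_oam_link" "source_to_sink" {
  label_template  = "$AccountName"
  resource_types  = local.resource_types
  sink_identifier = data.terraform_remote_state.monitoring.outputs.sink_arn
  tags            = var.tags
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;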



&lt;p&gt;What does the monitoring account look like after it receives the data?&lt;/p&gt;

&lt;p&gt;A standard CloudWatch dashboard might look like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk08brlgyx03v1mhanffw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fk08brlgyx03v1mhanffw.png" alt="Standared Cloudwatch dashboard"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If we view the same metrics cross-account from the central monitoring account, the dashboard looks identical, but a cross-account indicator appears on the right side of the metric, and each metric is labeled with its source account name:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tu4b5prjc4th6x7qa75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5tu4b5prjc4th6x7qa75.png" alt="Cross account metric"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All accounts that share data with the monitoring account (the sink) are visible in the CloudWatch Settings menu:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j4p9j7mecy2wgtd2i88.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4j4p9j7mecy2wgtd2i88.png" alt="Account sources"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Future
&lt;/h2&gt;

&lt;p&gt;The present already looks great: you can share observability data, build centralized dashboards in the monitoring account, and copy traces to the centralized account.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;More dynamic dashboards would be a welcome addition. At the moment I use a JSON template, which is then deployed via Terraform, but the catch is that every new resource has to be added by hand. Resource Explorer helps, yet it still lacks some of the features I would like to have.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;With generative AI waiting behind the door, I would also expect suggestions for which metrics matter — for Kafka, for example — and automatic dashboard proposals, although you can do the research yourself and define baseline metrics for monitoring the cluster and brokers. This is genuinely difficult, because AWS gives you building blocks to create an architecture -&amp;gt; you build it, you own it. An observability plan is needed before the workload lands in production.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
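&lt;p&gt;The JSON-template approach from the first point can be sketched as follows; the dashboard name, metric, and account ID are placeholders:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resource "aws_cloudwatch_dashboard" "central" {
  dashboard_name = "central-monitoring" # placeholder name
  dashboard_body = jsonencode({
    widgets = [
      {
        type   = "metric"
        x      = 0
        y      = 0
        width  = 12
        height = 6
        properties = {
          region = "eu-west-1"
          title  = "CPU by source account"
          # "accountId" selects the source account for cross-account metrics
          metrics = [
            ["AWS/EC2", "CPUUtilization", { accountId = "123456789102" }]
          ]
        }
      }
    ]
  })
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Every new widget still has to be added to this template by hand, which is exactly the maintenance burden described above.&lt;/p&gt;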

</description>
      <category>cloudwatch</category>
      <category>observability</category>
      <category>oam</category>
      <category>aws</category>
    </item>
  </channel>
</rss>
