<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Martín Rivadavia</title>
    <description>The latest articles on DEV Community by Martín Rivadavia (@rivadaviam).</description>
    <link>https://dev.to/rivadaviam</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F154854%2Fdcc1bbc5-6d05-435b-bc8d-055ce3ddaf57.png</url>
      <title>DEV Community: Martín Rivadavia</title>
      <link>https://dev.to/rivadaviam</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rivadaviam"/>
    <language>en</language>
    <item>
      <title>I Spent ~$363 in Two Days on Bedrock — Cheap Lesson or Expensive One?</title>
      <dc:creator>Martín Rivadavia</dc:creator>
      <pubDate>Tue, 24 Mar 2026 16:48:50 +0000</pubDate>
      <link>https://dev.to/rivadaviam/i-spent-363-in-two-days-on-bedrock-cheap-lesson-or-expensive-one-3l48</link>
      <guid>https://dev.to/rivadaviam/i-spent-363-in-two-days-on-bedrock-cheap-lesson-or-expensive-one-3l48</guid>
      <description>&lt;p&gt;The pricing page told me embeddings were "per thousand tokens." The &lt;strong&gt;bill&lt;/strong&gt; told me what that actually means when you sync a corpus more than once while you are still learning.&lt;/p&gt;

&lt;p&gt;This is the story of a &lt;strong&gt;two-day spike&lt;/strong&gt; on my AWS account — mostly &lt;strong&gt;Amazon Bedrock&lt;/strong&gt; — and what I took away from it. Spoiler: I do not think it was only money down the drain.&lt;/p&gt;




&lt;h2&gt;
  
  
  The receipt (literally)
&lt;/h2&gt;

&lt;p&gt;I pulled up &lt;strong&gt;Cost Explorer&lt;/strong&gt; after a weekend of experiments. Two consecutive days in December looked like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m04uwkccyx5hcdboj0p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2m04uwkccyx5hcdboj0p.png" alt="AWS billing breakdown: Bedrock and OpenSearch over two days" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Roughly:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Service&lt;/th&gt;
&lt;th&gt;Total (two days)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;All services (shown)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$363&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon Bedrock&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$353&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Amazon OpenSearch Service&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$7&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Most of that was &lt;strong&gt;not&lt;/strong&gt; me chatting with Claude in the playground. It was &lt;strong&gt;Knowledge Base work&lt;/strong&gt;: pointing Bedrock at data in S3, running &lt;strong&gt;ingestion / sync jobs&lt;/strong&gt;, and &lt;strong&gt;re-running&lt;/strong&gt; them while I changed chunking, parsers, and paths — before I had a clear picture of how &lt;strong&gt;embedding tokens&lt;/strong&gt; accumulate across &lt;strong&gt;retries&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;I had expected "some cost." I had not internalized how fast &lt;strong&gt;repeat syncs&lt;/strong&gt; turn into &lt;strong&gt;repeat embeddings&lt;/strong&gt; for the same underlying text.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I thought I was doing
&lt;/h2&gt;

&lt;p&gt;I was building a &lt;strong&gt;RAG-style pipeline&lt;/strong&gt; over &lt;strong&gt;technical PDFs&lt;/strong&gt;: extract text, get it into a &lt;strong&gt;Bedrock Knowledge Base&lt;/strong&gt;, run &lt;strong&gt;retrieval&lt;/strong&gt;, and eventually wire that into an application.&lt;/p&gt;

&lt;p&gt;That sounds linear. In practice my early loop looked more like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Upload or point at objects in S3.&lt;/li&gt;
&lt;li&gt;Start a sync.&lt;/li&gt;
&lt;li&gt;Notice something wrong — layout, chunk boundaries, metadata, or retrieval quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Change the preprocessing&lt;/strong&gt;, upload again, &lt;strong&gt;sync again&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Repeat steps 3 and 4 until I understood what "good" looked like.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Every pass felt like "fixing configuration." In billing terms, many passes looked like &lt;strong&gt;new embedding work&lt;/strong&gt;. The console does not always feel like spending money; &lt;strong&gt;Cost Explorer&lt;/strong&gt; does.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the invoice taught me (that the docs alone had not made stick)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;GenAI on AWS is not one line item.&lt;/strong&gt; For Knowledge Bases, the mental model that finally stuck:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Ingestion / embedding&lt;/strong&gt; is where large PDFs hurt — you pay for &lt;strong&gt;tokens processed into the vector store&lt;/strong&gt;, not for "having a PDF on disk."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Retrieval&lt;/strong&gt; and &lt;strong&gt;downstream model calls&lt;/strong&gt; are &lt;strong&gt;additional&lt;/strong&gt; meters. Separating "sync cost" from "query cost" matters when you debug.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Re-syncing the same corpus&lt;/strong&gt; while you tune chunking is not a free redo. You are often paying for &lt;strong&gt;another full pass&lt;/strong&gt; over the same content unless you have designed for &lt;strong&gt;incremental&lt;/strong&gt; or &lt;strong&gt;diff-aware&lt;/strong&gt; updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of that is a secret — it is all in the documentation. &lt;strong&gt;The bill was the tutorial that actually stuck.&lt;/strong&gt;&lt;/p&gt;
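&lt;p&gt;The "diff-aware" idea does not require anything exotic. Here is a sketch of what I mean in plain Python (the manifest file name and helper names are mine, not part of any AWS tooling): hash each source file locally, and only re-upload, and therefore re-embed, the files whose bytes actually changed.&lt;/p&gt;

```python
import hashlib
import json
from pathlib import Path

# Local record of what was last synced; the file name is arbitrary.
MANIFEST = Path(".sync-manifest.json")

def file_digest(path: Path) -> str:
    """SHA-256 of the file bytes, so timestamps and renames alone never re-embed."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(folder: Path) -> list:
    """Return only the files whose content differs from the last recorded sync."""
    seen = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    return [
        p for p in sorted(folder.glob("**/*"))
        if p.is_file() and seen.get(str(p)) != file_digest(p)
    ]

def record_sync(folder: Path) -> None:
    """Call after a successful ingestion so the next run skips unchanged content."""
    digests = {str(p): file_digest(p) for p in folder.glob("**/*") if p.is_file()}
    MANIFEST.write_text(json.dumps(digests, indent=2))
```

&lt;p&gt;Before a sync, upload only what &lt;code&gt;changed_files&lt;/code&gt; returns; after a successful sync, call &lt;code&gt;record_sync&lt;/code&gt; so the next iteration skips untouched documents.&lt;/p&gt;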




&lt;h2&gt;
  
  
  Cheap lesson or expensive one?
&lt;/h2&gt;

&lt;p&gt;If you judge only by the number on the screen, it was an &lt;strong&gt;expensive&lt;/strong&gt; lesson.&lt;/p&gt;

&lt;p&gt;If you judge by what I would pay for the same misunderstanding &lt;strong&gt;in production&lt;/strong&gt;, under deadline, with a team and customer trust on the line — &lt;strong&gt;learning it on my own lab account&lt;/strong&gt; starts to look &lt;strong&gt;cheap&lt;/strong&gt;. I paid once in dollars and once in humility. I would rather do that &lt;strong&gt;before&lt;/strong&gt; I ever optimize someone else's budget.&lt;/p&gt;

&lt;p&gt;So my answer: &lt;strong&gt;it was both.&lt;/strong&gt; Painful in the moment, &lt;strong&gt;valuable&lt;/strong&gt; in context — and only "wasteful" if I pretended nothing needed to change afterward.&lt;/p&gt;




&lt;h2&gt;
  
  
  What I changed in how I build
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;1. Measure before sync.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I stopped treating "upload and sync" as the first step. I added &lt;strong&gt;local&lt;/strong&gt; steps: approximate &lt;strong&gt;token counts&lt;/strong&gt;, &lt;strong&gt;preview chunk boundaries&lt;/strong&gt;, and sanity-check &lt;strong&gt;file sizes&lt;/strong&gt; &lt;em&gt;before&lt;/em&gt; triggering another full ingestion. That mindset is documented in &lt;strong&gt;&lt;a href="https://github.com/rivadaviam/aws-genai-cert-learning-journey/tree/main/labs/lab-02-kb-ingestion-basics" rel="noopener noreferrer"&gt;Lab 02: KB Ingestion — Foundations First&lt;/a&gt;&lt;/strong&gt; in the learning-journey repo — small CLI helpers, no cloud required for the first pass.&lt;/p&gt;
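&lt;p&gt;To make "measure before sync" concrete, this is the shape of the local helpers I mean. It is a sketch, not the Lab 02 code: the 4-characters-per-token ratio is a rough rule of thumb for English prose rather than the exact Bedrock tokenizer, and the chunk sizes are illustrative defaults.&lt;/p&gt;

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: about 4 characters per token for English prose."""
    return max(1, len(text) // 4)

def preview_chunks(text: str, chunk_chars: int = 1200, overlap: int = 200) -> list:
    """Fixed-size chunking with overlap, loosely mirroring a default KB strategy."""
    step = chunk_chars - overlap
    return [text[i:i + chunk_chars] for i in range(0, len(text), step)]

def report(text: str) -> str:
    """What one sync attempt will roughly cost in embedding tokens."""
    chunks = preview_chunks(text)
    total = sum(approx_tokens(c) for c in chunks)
    return f"{len(chunks)} chunks, about {total} embedding tokens per sync"
```

&lt;p&gt;Running &lt;code&gt;report&lt;/code&gt; over a corpus before touching the console gives you an order-of-magnitude token count per sync attempt, which is exactly the number the bill multiplies.&lt;/p&gt;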

&lt;p&gt;&lt;strong&gt;2. Separate "pipeline quality" from "vector store plumbing."&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
I invested in a clearer &lt;strong&gt;PDF → clean text → intentional chunks → S3&lt;/strong&gt; path in &lt;strong&gt;&lt;a href="https://github.com/rivadaviam/aws-pdf-rag-mr" rel="noopener noreferrer"&gt;aws-pdf-rag-mr&lt;/a&gt;&lt;/strong&gt; — the companion repo with the &lt;strong&gt;full pipeline code&lt;/strong&gt; (Terraform, Lambda processor, Bedrock KB, S3 Vectors) that matches what Lab 02 points you toward &lt;em&gt;after&lt;/em&gt; the foundations. That way Bedrock sees &lt;strong&gt;stable, deliberate&lt;/strong&gt; chunks instead of whatever the default path produced on raw PDFs while I was still iterating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Use the cloud bill as a design review.&lt;/strong&gt;&lt;br&gt;&lt;br&gt;
&lt;strong&gt;Billing alerts&lt;/strong&gt; and &lt;strong&gt;Cost Explorer&lt;/strong&gt; by service are now part of my &lt;strong&gt;definition of done&lt;/strong&gt; for experiments, not something I check when I get curious.&lt;/p&gt;
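&lt;p&gt;For the alerting half, the piece I now set up first is a CloudWatch alarm on the &lt;code&gt;EstimatedCharges&lt;/code&gt; billing metric. The sketch below only builds the parameters (the threshold and SNS topic ARN are placeholders); you would pass the result to boto3's &lt;code&gt;put_metric_alarm&lt;/code&gt; in &lt;code&gt;us-east-1&lt;/code&gt;, where billing metrics live.&lt;/p&gt;

```python
def billing_alarm_params(threshold_usd: float, sns_topic_arn: str) -> dict:
    """Parameters for a CloudWatch alarm on total estimated charges."""
    return {
        "AlarmName": f"spend-above-{threshold_usd:g}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # six hours; billing metrics update a few times a day
        "EvaluationPeriods": 1,
        "Threshold": threshold_usd,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

# Usage (assumes billing alerts are enabled on the account):
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(
#     **billing_alarm_params(50, "arn:aws:sns:us-east-1:123456789012:billing-alerts"))
```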




&lt;h2&gt;
  
  
  If you are about to do the same experiment
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Start with tiny files&lt;/strong&gt; — one short text, one sync, one retrieve — until the &lt;strong&gt;numbers&lt;/strong&gt; make sense.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Read embedding and KB pricing&lt;/strong&gt; as &lt;strong&gt;token math&lt;/strong&gt;, then estimate &lt;strong&gt;tokens × price × number of sync attempts&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Treat re-sync as a budget line&lt;/strong&gt;, not a config tweak.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document decisions&lt;/strong&gt; (I use ADRs) so the next you does not repeat the same loop "just one more time."&lt;/li&gt;
&lt;/ul&gt;
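&lt;p&gt;The token math in the second bullet fits in a few lines, which is the point: you can run it before the first sync. The numbers below are placeholders; check current Bedrock embeddings pricing for your model and region.&lt;/p&gt;

```python
def sync_cost(corpus_tokens: int, price_per_1k: float, attempts: int) -> float:
    """Every full re-sync re-embeds the whole corpus, so attempts multiply the bill."""
    return corpus_tokens / 1000 * price_per_1k * attempts

# Placeholder numbers: a 10M-token corpus at a hypothetical $0.0001 per 1K tokens.
one_pass = sync_cost(10_000_000, 0.0001, 1)     # about 1 USD
ten_tweaks = sync_cost(10_000_000, 0.0001, 10)  # about 10 USD
```

&lt;p&gt;The multiplier that bites is &lt;code&gt;attempts&lt;/code&gt;: ten "config tweaks" on a fixed corpus cost ten times one pass.&lt;/p&gt;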




&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;I did not write this to scare anyone away from Bedrock or Knowledge Bases. I still use them. I write it because &lt;strong&gt;the messy middle&lt;/strong&gt; — including an invoice that made me stare at the screen — is part of &lt;strong&gt;learning in public&lt;/strong&gt; honestly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation is not overhead. It is thinking made visible.&lt;/strong&gt; So is a cost breakdown when it forces you to redraw your architecture.&lt;/p&gt;

&lt;p&gt;If this story saved you one accidental &lt;strong&gt;full re-embedding&lt;/strong&gt; of a giant PDF, it was worth more than those two days of spend.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Repos:&lt;/strong&gt;  &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/rivadaviam/aws-genai-cert-learning-journey" rel="noopener noreferrer"&gt;aws-genai-cert-learning-journey&lt;/a&gt;&lt;/strong&gt; — learning path, ADRs, and &lt;strong&gt;Lab 02&lt;/strong&gt; docs (&lt;code&gt;labs/lab-02-kb-ingestion-basics/&lt;/code&gt;).
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;&lt;a href="https://github.com/rivadaviam/aws-pdf-rag-mr" rel="noopener noreferrer"&gt;aws-pdf-rag-mr&lt;/a&gt;&lt;/strong&gt; — &lt;strong&gt;implementation code&lt;/strong&gt; for the PDF → KB pipeline (the “after Lab 02” stack).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tags I'll use when I share:&lt;/strong&gt; &lt;code&gt;#BuildToLearn&lt;/code&gt; &lt;code&gt;#AWSGenAI&lt;/code&gt; &lt;code&gt;#AmazonBedrock&lt;/code&gt; &lt;code&gt;#LearnInPublic&lt;/code&gt; &lt;code&gt;#RAG&lt;/code&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you had a GenAI bill that taught you something the docs alone did not? I would genuinely like to hear what changed in your process afterward.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amazonbedrock</category>
      <category>generativeai</category>
      <category>learninpublic</category>
    </item>
    <item>
      <title>The Exam That Felt Like a Project Backlog: Passing AWS Certified Generative AI Developer – Professional</title>
      <dc:creator>Martín Rivadavia</dc:creator>
      <pubDate>Thu, 12 Mar 2026 18:05:35 +0000</pubDate>
      <link>https://dev.to/rivadaviam/the-exam-that-felt-like-a-project-backlog-passing-aws-certified-generative-ai-developer--2ljo</link>
      <guid>https://dev.to/rivadaviam/the-exam-that-felt-like-a-project-backlog-passing-aws-certified-generative-ai-developer--2ljo</guid>
      <description>&lt;p&gt;&lt;em&gt;This is part of my &lt;a href="https://dev.to/rivadaviam/the-build-to-learn-framework-how-a-near-disaster-taught-me-to-learn-in-public-c2e"&gt;Build-to-Learn with AWS GenAI&lt;/a&gt; series — where I share the messy middle of building production-grade GenAI systems on AWS, including the wrong turns.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Question 67. Beta exam. Eighty-five total questions. I almost lost it right there.&lt;/p&gt;

&lt;p&gt;Not my composure, exactly — more like my thread. The kind of focused engagement you need to hold through a complex orchestration scenario when your cognitive reserves have been running for two-plus hours. For a moment, I felt the concentration starting to slip. And then something unexpected happened instead: my brain stopped trying to answer the question and started designing it. &lt;em&gt;What if the routing layer worked this way? What if I extended this to handle the insurance claim pipeline I built?&lt;/em&gt; I caught myself, selected the answer, and kept moving. But the moment stayed with me.&lt;/p&gt;

&lt;p&gt;That involuntary drift wasn't a lapse in focus. It was evidence. Evidence that the way I'd prepared had taken root at the right level — not as recall, but as judgment. I almost lost my thread at question 67. What I found instead was proof that months of building had encoded something harder to shake than concentration. And at the Professional tier, that kind of judgment is the only thing being tested.&lt;/p&gt;

&lt;p&gt;I'd spent months preparing for this exam. More accurately: I'd spent months building, and the exam happened to follow.&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://www.credly.com/badges/7446afbf-7fa1-4450-be74-7e34e5454192" rel="noopener noreferrer"&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Frivadaviam%2Faws-genai-cert-learning-journey%2Fmain%2F_bmad-output%2Fblog-articles%2Fgenai-badge.png" alt="AWS Certified Generative AI Developer – Professional"&gt;
  &lt;/a&gt;
     
  &lt;a href="https://www.credly.com/earner/earned/badge/c4bf319f-af9f-468a-a8c3-127712f2e2f1" rel="noopener noreferrer"&gt;
    &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fraw.githubusercontent.com%2Frivadaviam%2Faws-genai-cert-learning-journey%2Fmain%2F_bmad-output%2Fblog-articles%2Fgenai-early-adopter-badge.png" alt="AWS Early Adopter"&gt;
  &lt;/a&gt;
&lt;/p&gt;

&lt;p&gt;
  &lt;a href="https://www.credly.com/badges/7446afbf-7fa1-4450-be74-7e34e5454192" rel="noopener noreferrer"&gt;AWS Certified Generative AI Developer – Professional&lt;/a&gt;
   · 
  &lt;a href="https://www.credly.com/earner/earned/badge/c4bf319f-af9f-468a-a8c3-127712f2e2f1" rel="noopener noreferrer"&gt;Early Adopter&lt;/a&gt;
&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Certification, Why Now
&lt;/h2&gt;

&lt;p&gt;The AWS Certified Generative AI Developer – Professional is new enough that when I signed up, there were no public pass rates to anchor expectations. No community wisdom about which topics showed up most. No data on what the difficulty curve felt like. I was taking it as a Beta, which meant I was part of the cohort helping AWS calibrate the exam itself — and it meant 85 questions instead of the standard 75.&lt;/p&gt;

&lt;p&gt;That extra 10 is not trivial. By question 75 on an exam of this depth, your cognitive reserves have been running for hours. The later questions tend to be the more architecturally complex ones — multi-service integrations, guardrail trade-offs, agent orchestration patterns that require you to hold several moving parts in your head simultaneously. Adding 10 questions at the back end is a meaningful stamina tax, not just a scheduling footnote.&lt;/p&gt;

&lt;p&gt;I signed up anyway, because it was the right next step in this series. If I'm building production GenAI systems on AWS and documenting every decision publicly, the Professional certification closes the loop. It says: I understand this domain deeply enough to reason through novel scenarios I've never seen before — not just reproduce patterns from documentation I've read.&lt;/p&gt;

&lt;p&gt;There was a bonus I didn't know about when I registered: Beta cohort participants who pass receive an Early Adopter badge from AWS — a second credential recognizing that you helped define the exam itself. I didn't sign up for that badge. I signed up because the exam was there and the timing was right. But it's an accurate description of what taking a Beta actually means: you go in without a map, and if you make it through, you helped draw the map for everyone who comes after.&lt;/p&gt;

&lt;p&gt;What I didn't fully appreciate until exam day was how much the work I'd already done would matter.&lt;/p&gt;




&lt;h2&gt;
  
  
  Starting With Skill Builder (And Why I Pivoted)
&lt;/h2&gt;

&lt;p&gt;My first instinct was &lt;a href="https://skillbuilder.aws" rel="noopener noreferrer"&gt;AWS Skill Builder&lt;/a&gt;. Official source. Authoritative. Comprehensive. The right instinct in principle.&lt;/p&gt;

&lt;p&gt;Here's what I didn't anticipate: the material is genuinely good, and it is genuinely heavy. Skill Builder covers the conceptual foundations of foundation models, the architecture of Amazon Bedrock, the nuances of RAG pipelines, evaluation frameworks, responsible AI — all of it with the depth you'd expect from the people who built the services. For someone building their knowledge base from scratch, it's excellent. For someone who's already been building with these services and needs to sharpen scenario reasoning, reading through it sequentially felt like re-reading a reference manual.&lt;/p&gt;

&lt;p&gt;I didn't abandon it. I still used Skill Builder as a reference layer — when a practice exam question surfaced a concept I couldn't fully explain, I went back to Skill Builder for the deep read. But it stopped being my primary study method about a month in, and I switched to two Udemy courses that fit how I actually learn.&lt;/p&gt;

&lt;p&gt;The first — &lt;a href="https://www.udemy.com/course/ultimate-aws-certified-generative-ai-developer-professional/" rel="noopener noreferrer"&gt;Ultimate AWS Certified Generative AI Developer Professional&lt;/a&gt; — gave me a more accessible conceptual ramp. Good structure, practical framing, the kind of explanations that build a mental model you can reason with rather than recall from. The second — &lt;a href="https://www.udemy.com/course/practice-exams-aws-certified-generative-ai-developer-pro/" rel="noopener noreferrer"&gt;Practice Exams AWS Certified Generative AI Developer Pro&lt;/a&gt; — I treated as a diagnostic tool rather than a score-chaser. Every wrong answer was a prompt for genuine investigation: not "why was I wrong" but "what would I build differently, and why?"&lt;/p&gt;

&lt;p&gt;The combination worked. But neither was the real differentiator.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Thing That Actually Mattered
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/rivadaviam/the-build-to-learn-framework-how-a-near-disaster-taught-me-to-learn-in-public-c2e"&gt;In my previous article&lt;/a&gt;, I wrote about how a near-disaster — an infinite loop that would have cost thousands in AWS bills — crystallized the Build-to-Learn Framework. The lab that came out of that experience was Lab 01: an automated insurance claim document processing pipeline built on Amazon Bedrock, Lambda, and S3.&lt;/p&gt;

&lt;p&gt;I built that lab months before finishing my certification prep. On purpose. That decision came with a risk I documented explicitly at the time:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ADR-002: Build Lab 01 Before Completing Certification Prep

Status: Accepted

Context:
  Conventional wisdom says finish studying before building.
  The Build-to-Learn thesis says building is the studying.
  Taking the Professional exam before completing all formal modules
  means entering with gaps in coverage.

Decision:
  Build Lab 01 now. Ship it. Write the ADR. Then return to formal study.

Rationale:
  Professional-level exams test scenario reasoning, not recall.
  Reasoning is built through decisions with real consequences.
  A production-adjacent lab creates those consequences in a way
  no module can replicate.

Consequences:
  - Risk: Formal knowledge gaps may surface on exam
  - Benefit: Implementation judgment built before exam; scenarios will feel familiar
  - Accepted tradeoff: depth of experience &amp;gt; breadth of coverage
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The consequences played out exactly as written. There were moments on the exam where I wished I'd read one more Skill Builder module. There were far more moments where I recognized a pattern because I'd lived it.&lt;/p&gt;

&lt;p&gt;The Build-to-Learn thesis is that building real systems before the exam encodes judgment, not just knowledge. And Professional-level exams don't test knowledge — they test judgment. They present scenarios you've never seen before and ask you to reason about what the right architecture would be, given a specific set of constraints and trade-offs. That kind of reasoning doesn't come from reading. It comes from having been in the situation.&lt;/p&gt;

&lt;p&gt;When a question presented a multi-agent orchestration scenario, I recognized the architectural patterns because I'd designed one. When a question asked about guardrail configuration trade-offs, I'd made those decisions in code and felt the consequences. When a question surfaced the cost implications of different model invocation strategies, I'd worried about that in production. The hours spent debugging Lambda triggers, writing Architecture Decision Records, and arguing with IAM policies translated directly into the ability to reason through novel scenarios under pressure.&lt;/p&gt;

&lt;p&gt;That's not a coincidence. That's the Framework, working.&lt;/p&gt;




&lt;h2&gt;
  
  
  What the Beta Exam Actually Feels Like
&lt;/h2&gt;

&lt;p&gt;No study guide can prepare you for question 75 on an 85-question exam.&lt;/p&gt;

&lt;p&gt;The experience is layered. The first 30 questions feel manageable — you're fresh, the cognitive load is still within capacity, and the scenarios are challenging but readable. By question 50, you're aware that you've been concentrating hard for a while. By question 70, your brain is doing something slightly different than it was at the start: it's working harder for the same output.&lt;/p&gt;

&lt;p&gt;The Beta format adds its own texture. There's no score at the end. You complete the exam, submit, and walk out not knowing. AWS collects results from the Beta cohort, calibrates the difficulty model against how candidates performed, and notifies you later. The wait is a specific kind of uncomfortable — you can't iterate on what you don't know, so you just wait.&lt;/p&gt;

&lt;p&gt;What I didn't expect was the moment at question 67 where my brain started drifting toward building rather than answering. It felt, in that moment, like distraction. In retrospect, it was a signal. The exam scenarios are written by people who think about these services the way builders think — as implementation decisions, not as trivia. When you've built with these tools, you speak the same language as the people writing the questions. The exam stops feeling like a test and starts feeling like a design discussion.&lt;/p&gt;

&lt;p&gt;I passed. The score came through later, and with it the Early Adopter badge issued to the Beta cohort who helped define the exam before it was publicly available. Being early meant no benchmarks, no pass-rate data, no community notes to fall back on. It meant the only preparation that counted was the preparation you'd actually done.&lt;/p&gt;

&lt;p&gt;The pass was less surprising than that moment at question 67 — because by then, the Framework had already proved its point. The Early Adopter badge just confirmed which cohort had been there first.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Study Stack, Honestly Ranked
&lt;/h2&gt;

&lt;p&gt;If you're preparing for this certification, here's what I'd actually recommend:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Build Something First
&lt;/h3&gt;

&lt;p&gt;Not a tutorial. Not a sandbox. A real system with real architecture decisions, real trade-offs, and documentation that forces you to articulate &lt;em&gt;why&lt;/em&gt; you made each choice. Write ADRs. Deploy it. Break it. Fix it. The judgment you build doing this is the single most transferable skill for a Professional-level exam.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Udemy Practice Exams — Scenario Reasoning Engine
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.udemy.com/course/practice-exams-aws-certified-generative-ai-developer-pro/" rel="noopener noreferrer"&gt;Practice Exams AWS Certified Generative AI Developer Pro&lt;/a&gt; — Use these to find your gaps, then investigate the gaps with genuine curiosity. The questions are scenario-based at the right level of complexity. Don't chase scores. Chase understanding.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Udemy Course — Conceptual Architecture
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.udemy.com/course/ultimate-aws-certified-generative-ai-developer-professional/" rel="noopener noreferrer"&gt;Ultimate AWS Certified Generative AI Developer Professional&lt;/a&gt; — Good for building the mental model. More accessible than Skill Builder for daily study. Use it to cover the conceptual landscape before stress-testing it with practice exams.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. AWS Skill Builder — Reference Layer
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://skillbuilder.aws" rel="noopener noreferrer"&gt;skillbuilder.aws&lt;/a&gt; — Use it for depth, not breadth. When something comes up in practice exams that you can't fully explain, this is where you go for the authoritative version. Don't try to read it front to back unless that's genuinely how you learn — treat it as a reference library.&lt;/p&gt;




&lt;h2&gt;
  
  
  What This Certification Actually Means
&lt;/h2&gt;

&lt;p&gt;The certification isn't the story. The certification is the receipt.&lt;/p&gt;

&lt;p&gt;What the receipt is for: months of building real systems, making architecture decisions with real stakes, documenting wrong turns, and learning in public. The exam measured the output of that process. It didn't create it.&lt;/p&gt;

&lt;p&gt;I've said before that certifications are starting points, not destinations. Passing the Professional tier for GenAI development confirms that the domain knowledge is there — but it also clarifies what the next layer of questions looks like. The questions my brain was designing during the exam aren't going away. They're the next labs.&lt;/p&gt;

&lt;p&gt;Build. Document. Share. Repeat.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;The GitHub repo for this series lives at &lt;a href="https://github.com/rivadaviam/aws-genai-cert-learning-journey" rel="noopener noreferrer"&gt;github.com/rivadaviam/aws-genai-cert-learning-journey&lt;/a&gt;. Lab 01 is there with full code, architecture diagrams, and ADRs.&lt;/p&gt;

&lt;p&gt;If you're preparing for this certification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Low commitment:&lt;/strong&gt; Star the repo and follow along as I build the next lab&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Medium commitment:&lt;/strong&gt; Try building your own version of Lab 01 before your exam. Document one decision with an ADR.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High commitment:&lt;/strong&gt; Share what you build publicly. The accountability changes everything.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The messy middle is where learning lives. Welcome to it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Have you taken this certification? What surprised you most about the exam? Drop it in the comments — I'd genuinely like to know.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;&lt;code&gt;#BuildToLearn&lt;/code&gt; &lt;code&gt;#AWSCommunity&lt;/code&gt; &lt;code&gt;#LearnInPublic&lt;/code&gt; &lt;code&gt;#AmazonBedrock&lt;/code&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amazonbedrock</category>
      <category>generativeai</category>
      <category>learninpublic</category>
    </item>
    <item>
      <title>The Build-to-Learn Framework: How a Near-Disaster Taught Me to Learn in Public</title>
      <dc:creator>Martín Rivadavia</dc:creator>
      <pubDate>Wed, 04 Feb 2026 22:20:47 +0000</pubDate>
      <link>https://dev.to/rivadaviam/the-build-to-learn-framework-how-a-near-disaster-taught-me-to-learn-in-public-c2e</link>
      <guid>https://dev.to/rivadaviam/the-build-to-learn-framework-how-a-near-disaster-taught-me-to-learn-in-public-c2e</guid>
      <description>&lt;p&gt;I almost created an infinite loop that would have cost me thousands in AWS bills.&lt;/p&gt;

&lt;p&gt;And I'm grateful it almost happened.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;When I decided to pursue the &lt;strong&gt;AWS Certified Generative AI Developer&lt;/strong&gt; certification, I made myself a promise: I wouldn't just study for an exam. I would build real systems, make real mistakes, and share everything publicly—including the wrong turns.&lt;/p&gt;

&lt;p&gt;The AWS Skill Builder course has these "Bonus Assignments" scattered throughout. Most people skip them or treat them as quick checkboxes.&lt;/p&gt;

&lt;p&gt;I saw something different: each one was an invitation to build something &lt;strong&gt;production-ready&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;My first assignment: an automated insurance claim processing pipeline using Amazon Bedrock.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jw01yuea3xn1w0q5xka.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0jw01yuea3xn1w0q5xka.png" alt="Architecture Diagram"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;User uploads to S3 → Lambda orchestrates 3-step Bedrock pipeline → Results saved to output bucket&lt;/em&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  The Near-Disaster
&lt;/h2&gt;

&lt;p&gt;My initial architecture was elegant. One S3 bucket, two prefixes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;/input&lt;/code&gt; for uploaded documents&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;/output&lt;/code&gt; for processed results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Clean. Simple. &lt;strong&gt;Catastrophic.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here's what happens with that design:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;1. Document lands in /input
2. Lambda triggers, processes document
3. Result written to /output (same bucket)
4. S3 event triggers Lambda again
5. Lambda processes the output file...
6. → Infinite loop. Infinite cost.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;I caught this because I was writing an &lt;strong&gt;Architecture Decision Record&lt;/strong&gt;—ADR-002—explaining my bucket strategy.&lt;/p&gt;

&lt;p&gt;The act of documenting forced me to think through the consequences. Mid-sentence, I realized what would happen in production.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# ADR-002: Separate S3 Buckets for Input and Output&lt;/span&gt;

&lt;span class="gu"&gt;## Decision&lt;/span&gt;
Use separate S3 buckets for input and output

&lt;span class="gu"&gt;## Consequences&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Prevents infinite loop scenarios
&lt;span class="p"&gt;-&lt;/span&gt; Better security isolation
&lt;span class="p"&gt;-&lt;/span&gt; Different lifecycle policies per bucket
&lt;span class="p"&gt;-&lt;/span&gt; Slight increase in bucket management complexity

&lt;span class="gu"&gt;## Why This Choice&lt;/span&gt;
Following AWS best practices prevents production issues.
The slight complexity increase is worth avoiding infinite loops.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;That ADR saved me real money. But more importantly, it taught me something:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Documentation isn't overhead. It's thinking made visible.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  The Build-to-Learn Framework
&lt;/h2&gt;

&lt;p&gt;That near-miss crystallized into a methodology I now follow for every lab: build real things, document every decision, and share the messy middle.&lt;/p&gt;

&lt;p&gt;Not tutorials. Not sandboxes. Production-grade systems with real constraints — the kind where a wrong architecture choice costs actual money. For every significant decision, I write an Architecture Decision Record capturing the &lt;em&gt;why&lt;/em&gt;, not just the &lt;em&gt;what&lt;/em&gt;. And when I share the work, I don't polish away the wrong turns. The trade-offs, the moments of doubt, the diagrams I scrapped at midnight — those are the parts that actually help someone else learn.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Implementation
&lt;/h2&gt;

&lt;p&gt;Instead of one monolithic prompt, I designed a &lt;strong&gt;three-step AI pipeline&lt;/strong&gt;, matching each step to the model best suited to it:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Step&lt;/th&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Why&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Document Understanding&lt;/td&gt;
&lt;td&gt;Claude 3.5 Sonnet&lt;/td&gt;
&lt;td&gt;Complex reasoning needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Information Extraction&lt;/td&gt;
&lt;td&gt;Claude 3.5 Sonnet&lt;/td&gt;
&lt;td&gt;Precision for structured JSON&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Summary Generation&lt;/td&gt;
&lt;td&gt;Claude 3 Haiku&lt;/td&gt;
&lt;td&gt;Cost-efficient for simple output&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
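&lt;p&gt;The routing itself stays trivially simple: a lookup keyed by pipeline step, fed from the same environment variables the Terraform further down wires into the Lambda. A sketch (the fallback IDs are the model versions used in this lab):&lt;/p&gt;

```python
import os

# Fallbacks match the values set in the Terraform environment block
MODEL_FOR_STEP = {
    "understanding": os.environ.get(
        "BEDROCK_MODEL_UNDERSTANDING",
        "anthropic.claude-3-5-sonnet-20240620-v1:0"),
    "extraction": os.environ.get(
        "BEDROCK_MODEL_EXTRACTION",
        "anthropic.claude-3-5-sonnet-20240620-v1:0"),
    "summary": os.environ.get(
        "BEDROCK_MODEL_SUMMARY",
        "anthropic.claude-3-haiku-20240307-v1:0"),
}
```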
&lt;h3&gt;
  
  
  Step 1: Document Understanding
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;process_document_understanding&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;template_manager&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Analyze and understand the document structure and content.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;template_manager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;document_understanding&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;document_text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;document_text&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;invoke_bedrock_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Low temperature for accuracy
&lt;/span&gt;        &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2000&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Why this step?&lt;/strong&gt; Complex documents need context before extraction. Understanding the document type (auto claim vs. health expense) improves downstream accuracy.&lt;/p&gt;
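&lt;p&gt;The &lt;code&gt;invoke_bedrock_model&lt;/code&gt; helper used throughout isn't shown above; a minimal sketch of what it might look like with boto3's &lt;code&gt;bedrock-runtime&lt;/code&gt; client (the request body follows the Anthropic Messages format Bedrock expects for Claude 3 models; retries and error handling omitted):&lt;/p&gt;

```python
import json

def build_claude_body(prompt, temperature, max_tokens):
    """Request body in the Anthropic Messages format Bedrock expects."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke_bedrock_model(model_id, prompt, temperature=0.1, max_tokens=2000):
    import boto3  # deferred so the body builder stays unit-testable offline
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=model_id,
        body=build_claude_body(prompt, temperature, max_tokens),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]  # first text block of the reply
```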
&lt;h3&gt;
  
  
  Step 2: Information Extraction
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;extract_information&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;document_text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;template_manager&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Extract structured information from the document.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;template_manager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;extract_info&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;document_text&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;document_text&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;invoke_bedrock_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Zero temperature for deterministic output
&lt;/span&gt;        &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1500&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Key insight:&lt;/strong&gt; Using &lt;code&gt;temperature=0.0&lt;/code&gt; keeps extraction as consistent and repeatable as the model allows. That matters when downstream systems depend on specific JSON fields being present.&lt;/p&gt;
&lt;h3&gt;
  
  
  Step 3: Summary Generation
&lt;/h3&gt;


&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_summary&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;extracted_info&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;template_manager&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Generate a concise summary of the claim.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;template_manager&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_prompt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;generate_summary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;extracted_info&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;extracted_info&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;invoke_bedrock_model&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mf"&gt;0.7&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# Higher temperature for natural language
&lt;/span&gt;        &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;500&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;&lt;strong&gt;Why Haiku here?&lt;/strong&gt; The heavy lifting is done. We're just formatting extracted data into prose. Claude 3 Haiku does this well at &lt;strong&gt;~10x lower cost&lt;/strong&gt; than Sonnet.&lt;/p&gt;

&lt;p&gt;This decision is captured in ADR-005:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# ADR-005: Use Claude 3 Haiku for Summary Generation&lt;/span&gt;

&lt;span class="gu"&gt;## Decision&lt;/span&gt;
Use Claude 3 Haiku for summary generation (not Sonnet)

&lt;span class="gu"&gt;## Rationale&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Summaries are the final step - context is already extracted
&lt;span class="p"&gt;-&lt;/span&gt; Haiku is ~10x cheaper than Sonnet
&lt;span class="p"&gt;-&lt;/span&gt; Quality is sufficient for 2-3 sentence summaries
&lt;span class="p"&gt;-&lt;/span&gt; Maintains on-demand throughput support

&lt;span class="gu"&gt;## Trade-off&lt;/span&gt;
Slight quality reduction for significant cost savings
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
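&lt;p&gt;The ~10x figure is easy to sanity-check. A rough estimate using indicative on-demand list prices in USD per million tokens (these change; confirm against the current Bedrock pricing page):&lt;/p&gt;

```python
# Indicative on-demand prices (USD per million tokens); assumptions, not gospel
SONNET_35 = {"input": 3.00, "output": 15.00}
HAIKU_3 = {"input": 0.25, "output": 1.25}

def call_cost(price, input_tokens, output_tokens):
    """Cost of a single model call at the given per-million-token rates."""
    return (input_tokens * price["input"]
            + output_tokens * price["output"]) / 1_000_000

# A typical summary step: ~1,500 tokens of extracted JSON in, ~500 tokens out
sonnet = call_cost(SONNET_35, 1500, 500)  # 0.012 USD
haiku = call_cost(HAIKU_3, 1500, 500)     # 0.001 USD
# Per call it is pennies either way; per million claims it is not
```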



&lt;h2&gt;
  
  
  Model Selection: The Certification Question
&lt;/h2&gt;

&lt;p&gt;If you're preparing for the AIF-C01 exam, model selection is a core exam topic, and this project gave me a real-world taste of the trade-offs involved.&lt;/p&gt;

&lt;p&gt;One decision surprised me. Claude 3.5 Sonnet v2 is newer and, on paper, "better" than v1. But when I dug into availability, I discovered that on-demand throughput isn't guaranteed in all regions and might require Provisioned Throughput. For a learning project that other developers will clone and deploy in their own accounts, availability matters more than marginal quality improvements. So I chose v1 — a stable, widely available model that anyone can spin up without provisioning headaches. It was a small decision, but exactly the kind of reasoning the certification expects you to articulate.&lt;/p&gt;


&lt;h2&gt;
  
  
  The Infrastructure
&lt;/h2&gt;

&lt;p&gt;The entire stack is defined in Terraform, which means anyone can deploy it with a single &lt;code&gt;terraform apply&lt;/code&gt;. Here's the Lambda at the heart of it:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight hcl"&gt;&lt;code&gt;&lt;span class="nx"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"aws_lambda_function"&lt;/span&gt; &lt;span class="s2"&gt;"processor"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;function_name&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"${var.project_name}-processor"&lt;/span&gt;
  &lt;span class="nx"&gt;runtime&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"python3.12"&lt;/span&gt;
  &lt;span class="nx"&gt;handler&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"lambda_handler.handler"&lt;/span&gt;
  &lt;span class="nx"&gt;timeout&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;   &lt;span class="c1"&gt;# 5 minutes&lt;/span&gt;
  &lt;span class="nx"&gt;memory_size&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;512&lt;/span&gt;   &lt;span class="c1"&gt;# MB&lt;/span&gt;

  &lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;variables&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;OUTPUT_BUCKET&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;aws_s3_bucket&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;output&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;id&lt;/span&gt;
      &lt;span class="nx"&gt;BEDROCK_MODEL_UNDERSTANDING&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"anthropic.claude-3-5-sonnet-20240620-v1:0"&lt;/span&gt;
      &lt;span class="nx"&gt;BEDROCK_MODEL_EXTRACTION&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"anthropic.claude-3-5-sonnet-20240620-v1:0"&lt;/span&gt;
      &lt;span class="nx"&gt;BEDROCK_MODEL_SUMMARY&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"anthropic.claude-3-haiku-20240307-v1:0"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;The numbers tell a story. A typical run — three sequential Bedrock calls — takes 30 to 90 seconds, but I set the timeout to 5 minutes to absorb cold starts, larger documents, and network latency. Memory sits at 512 MB: enough headroom for boto3 and JSON processing without overpaying for capacity the function will never touch. Both values came from testing, not guessing — another benefit of building before theorizing.&lt;/p&gt;
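&lt;p&gt;For completeness, the handler's outer shape: pull bucket and key out of the S3 notification, run the three steps, write to the output bucket named in the environment. A simplified sketch (record layout per the standard S3 event format; the pipeline call itself is elided):&lt;/p&gt;

```python
import json
import os
import urllib.parse

def handler(event, context):
    """Entry point wired to the input bucket's S3 notifications."""
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 events ('+' for spaces)
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # ...fetch s3://{bucket}/{key}, run the 3-step pipeline,
        # then put the result into the separate output bucket...
        processed.append({"source": f"s3://{bucket}/{key}",
                          "destination": os.environ["OUTPUT_BUCKET"]})
    return {"statusCode": 200, "body": json.dumps(processed)}
```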


&lt;h2&gt;
  
  
  What I Learned (Beyond the Tech)
&lt;/h2&gt;

&lt;p&gt;The biggest surprise wasn't technical — it was how much the act of &lt;em&gt;writing things down&lt;/em&gt; changed my engineering decisions. ADRs forced me to justify every choice, and several times I reversed course mid-sentence because explaining a decision out loud revealed the flaw in it. Documentation isn't a chore you do after the code works; it's a design tool you use while the code is still taking shape.&lt;/p&gt;

&lt;p&gt;Building locally first saved me real money, too. The system supports running the full pipeline on your own machine (&lt;code&gt;python main.py --input claim.txt --output result.json&lt;/code&gt;), so I processed dozens of test documents before a single Lambda invocation or Bedrock API call ever hit my AWS bill.&lt;/p&gt;
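&lt;p&gt;Nothing exotic is needed for that entry point; a sketch of how &lt;code&gt;main.py&lt;/code&gt; might parse those flags (the flag names come from the command above; everything else is assumption):&lt;/p&gt;

```python
import argparse

def parse_args(argv=None):
    """CLI for running the pipeline locally, no Lambda required."""
    parser = argparse.ArgumentParser(
        description="Process a claim document locally.")
    parser.add_argument("--input", required=True,
                        help="path to the claim document to process")
    parser.add_argument("--output", required=True,
                        help="where to write the resulting JSON")
    return parser.parse_args(argv)

# main.py would call parse_args(), read args.input, run the
# 3-step pipeline, and write the result to args.output
```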

&lt;p&gt;And here's what connected everything back to the certification: every core topic from the Skill Builder course — model selection criteria, prompt engineering, cost optimization, security best practices, serverless patterns — showed up organically in this one project. I didn't have to memorize them. I had to &lt;em&gt;use&lt;/em&gt; them, and that made them stick.&lt;/p&gt;


&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;This is just the beginning. I'm continuing to build through the AWS GenAI certification curriculum, turning each challenge into a production-ready implementation. More labs are coming — each one following the same Build-to-Learn principles: production-ready code you can actually deploy, every decision documented with ADRs, and the messy middle shared so you learn from my mistakes too.&lt;/p&gt;

&lt;p&gt;Want to follow along? Star the repo or &lt;a href="https://www.linkedin.com/in/martin-rivadavia/" rel="noopener noreferrer"&gt;connect with me on LinkedIn&lt;/a&gt; to get notified when the next lab drops.&lt;/p&gt;


&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;

&lt;p&gt;The entire project is open source. Clone it, read the ADRs, poke around the Terraform — and if you find a flaw in my implementation, &lt;a href="https://github.com/rivadaviam/aws-genai-cert-learning-journey/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt;. If you have a better approach, propose it.&lt;/p&gt;

&lt;div class="ltag-github-readme-tag"&gt;
  &lt;div class="readme-overview"&gt;
    &lt;h2&gt;
      &lt;img src="https://assets.dev.to/assets/github-logo-5a155e1f9a670af7944dd5e12375bc76ed542ea80224905ecaf878b9157cdefc.svg" alt="GitHub logo"&gt;
      &lt;a href="https://github.com/rivadaviam" rel="noopener noreferrer"&gt;
        rivadaviam
      &lt;/a&gt; / &lt;a href="https://github.com/rivadaviam/aws-genai-cert-learning-journey" rel="noopener noreferrer"&gt;
        aws-genai-cert-learning-journey
      &lt;/a&gt;
    &lt;/h2&gt;
    &lt;h3&gt;
      Public learning journey for the AWS Generative AI Certification. Hands-on labs, bonus assignments, and real-world experiments designed to help others learn and build with AWS GenAI services.
    &lt;/h3&gt;
  &lt;/div&gt;
  &lt;div class="ltag-github-body"&gt;
    
&lt;div id="readme" class="md"&gt;
&lt;div class="markdown-heading"&gt;
&lt;h1 class="heading-element"&gt;AWS Generative AI Learning Journey&lt;/h1&gt;
&lt;/div&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Learn in public, build in production.&lt;/strong&gt; This repository is a hands-on journey through AWS Generative AI services, focusing on Amazon Bedrock and real-world use cases. Each lab is production-ready, well-documented, and designed to teach you not just &lt;em&gt;what&lt;/em&gt; to build, but &lt;em&gt;why&lt;/em&gt; we made these choices.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Why This Journey Exists&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;Generative AI is transforming how we build applications, but learning it can feel overwhelming. This repository breaks down complex AWS AI services into practical, reproducible labs. You'll build real systems, understand architectural trade-offs, and learn from decisions documented in Architecture Decision Records (ADRs).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What makes this different:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Production-ready code&lt;/strong&gt; - Not just demos, but real implementations&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Decision transparency&lt;/strong&gt; - Every architectural choice is documented with rationale&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Learn from mistakes&lt;/strong&gt; - We share what worked, what didn't, and why&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Community-first&lt;/strong&gt; - Built to help others learn and contribute&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="markdown-heading"&gt;
&lt;h2 class="heading-element"&gt;Certification Context&lt;/h2&gt;
&lt;/div&gt;

&lt;p&gt;This repository…&lt;/p&gt;
&lt;/div&gt;


&lt;/div&gt;
&lt;br&gt;
  &lt;div class="gh-btn-container"&gt;&lt;a class="gh-btn" href="https://github.com/rivadaviam/aws-genai-cert-learning-journey" rel="noopener noreferrer"&gt;View on GitHub&lt;/a&gt;&lt;/div&gt;
&lt;br&gt;
&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone and explore&lt;/span&gt;
git clone https://github.com/rivadaviam/aws-genai-cert-learning-journey.git
&lt;span class="nb"&gt;cd &lt;/span&gt;labs/lab-01-claims-doc-processing

&lt;span class="c"&gt;# Read the full guide&lt;/span&gt;
&lt;span class="nb"&gt;cat &lt;/span&gt;README.md

&lt;span class="c"&gt;# Deploy with Terraform&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;infra/terraform
terraform init &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; terraform apply
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  The Invitation
&lt;/h2&gt;

&lt;p&gt;If you're preparing for a certification, don't just study. Build something real. Document your decisions. Share your journey. And if you build something using this framework, tell me about it — I want to see what you create.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Certifications prove you can learn.&lt;br&gt;
Projects prove you can build.&lt;br&gt;
Documentation proves you can teach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Build-to-Learn Framework asks: why not do all three?&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;&lt;em&gt;Have questions? &lt;a href="https://www.linkedin.com/in/martin-rivadavia/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt; or drop a comment below.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build. Document. Share. Repeat.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;#BuildToLearn&lt;/code&gt; &lt;code&gt;#AWSCommunity&lt;/code&gt; &lt;code&gt;#LearnInPublic&lt;/code&gt;&lt;/p&gt;

</description>
      <category>aws</category>
      <category>amazonbedrock</category>
      <category>generativeai</category>
      <category>learninpublic</category>
    </item>
  </channel>
</rss>
