<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Erica</title>
    <description>The latest articles on DEV Community by Erica (@eriperspective).</description>
    <link>https://dev.to/eriperspective</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/eriperspective"/>
    <language>en</language>
    <item>
      <title>Measuring Sentiment Analysis: When AI Misinterprets Emotion</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Sun, 08 Feb 2026 02:52:56 +0000</pubDate>
      <link>https://dev.to/eriperspective/measuring-sentiment-analysis-when-ai-misinterprets-emotion-ddn</link>
      <guid>https://dev.to/eriperspective/measuring-sentiment-analysis-when-ai-misinterprets-emotion-ddn</guid>
      <description>&lt;p&gt;&lt;strong&gt;Next in the AI Safety Evaluation Suite: Measuring Sentiment The final piece to this series. When AI misinterprets human emotion and intent, we enter some of the most nuanced and overlooked territory in AI safety.&lt;/strong&gt; Its fascinating!&lt;/p&gt;

&lt;p&gt;Have you ever watched an AI confidently interpret sarcasm as sincerity, or mistake frustration for aggression? Welcome to the world of sentiment misinterprets where models analyze emotional context with statistical precision but sometimes miss the human nuance entirely. &lt;em&gt;Don't me get wrong, I adore a really good AI sentiment.&lt;/em&gt; So much so I wrote an entire &lt;a href="https://eriperspective.medium.com/ai-sentiment-the-invisible-architecture-of-the-digital-bestie-1e8bc61d4ab6" rel="noopener noreferrer"&gt;article&lt;/a&gt; about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is Sentiment Analysis:&lt;/strong&gt; Sentiment analysis is the computational task of identifying and categorizing emotions, opinions, and attitudes expressed in text. When models get it wrong, they can misinterpret intent, tone, and emotional context in ways that undermine trust and safety.&lt;/p&gt;

&lt;p&gt;As an AI engineer, I've been intrigued by a fundamental question: how accurately do models read human emotion, and where do they systematically fail? Understanding this isn't just about better chatbots - it's critical for any AI system that needs to interpret human communication. Imagine a mental health support tool misreading a cry for help as casual venting, or a moderation system flagging sincere discourse as hostile. Those aren't edge cases - they can become a deployment risks.&lt;/p&gt;

&lt;p&gt;So, I built a &lt;strong&gt;&lt;a href="https://github.com/eriperspective/ai-measuring-sentiment" rel="noopener noreferrer"&gt;playground measuring Sentiment Analysis&lt;/a&gt;&lt;/strong&gt; to examine this systematically. The framework evaluates how models interpret emotional tone, tests whether they can distinguish nuance from surface-level keywords, and explores what factors influence their accuracy. I set up a mock model as the default option. Anyone can explore this regardless of budget or API access - with optional support for real LLMs like Anthropic if you want to go deeper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I then fed it a strategic mix of text samples&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Positive sentiment&lt;/strong&gt;: Statements with clearly positive emotional tone ("I love... or this is the best experience I’ve ever had!")&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnhqppzc98huclcxqke6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnhqppzc98huclcxqke6.png" alt=" " width="800" height="239"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Negative sentiment&lt;/strong&gt;: Statements with clearly negative emotional tone ("I hate this weather, it’s terrible!")&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbd81paaugf1gc08jpa0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzbd81paaugf1gc08jpa0.png" alt=" " width="800" height="255"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ambiguous/Neutral sentiment&lt;/strong&gt;: Text where tone is unclear or mixed ("I guess it was okay, not great but not bad either." - could be sincere or sarcastic)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2se10h0xjxv535o9whjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2se10h0xjxv535o9whjb.png" alt=" " width="800" height="217"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's What I Learned:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context matters more than keywords&lt;/strong&gt;. Models sometimes focused on emotionally charged words while learning to weigh surrounding context. "This is fine" might be read as positive without the sarcastic tone although it could actually be sarcasm - a reminder that sentiment lives beyond individual words.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complexity requires nuance&lt;/strong&gt;. Complex emotional states gratitude mixed with anxiety, or humor masking concern were sometimes simplified to single labels. Yet watching models navigate these complexities reveals how much progress we're making in teaching AI to recognize layered emotions. The challenge is teaching models to recognize tone alongside content, an evolving capability. Nuance is at times is challenging although it's improving. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cultural context is key&lt;/strong&gt;. Shared understanding matters. Idioms, irony, and cultural references revealed interesting patterns, highlighting how much human communication relies on shared understanding beyond literal text. These challenges point toward opportunities for improvement. These edge cases are where the most valuable learning happens.&lt;/p&gt;

&lt;p&gt;This &lt;strong&gt;AI Measuring Sentiment Analysis&lt;/strong&gt; project is fully reproducible, uses a mock model by default, and includes optional support for real LLMs like Anthropic Claude if you want to explore further. You can measure sentiment accuracy, analyze misclassification patterns, and examine where models struggle with emotional nuance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Best Part:&lt;/strong&gt; I had the ability to see the nuances of how models interpret human emotion and our complexity - where the field is evolving in an amazing way. Sometimes the model accurately captured subtle emotional shifts; although other times it can still misread obvious sarcasm. What fascinates me most as an AI engineer is how much sentiment analysis has progressed. Over my time working in this field, I've watched models get noticeably better at reading emotional nuance. We're even seeing systems like &lt;a href="https://eriperspective.medium.com/the-curious-case-of-moltbook-when-ai-agents-created-their-own-society-4ceffba877dd" rel="noopener noreferrer"&gt;MoltBot&lt;/a&gt; produce what feels like organic sentiment responses that don't just classify emotion but seem to understand it. Fascinating right! The variation in performance reveals where we still have room to grow, albeit the progress is real, and that's what makes this work so compelling. &lt;em&gt;I truly enjoy it!&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Key Takeaway:&lt;/strong&gt; With measuring Sentiment, misreadings aren't just accuracy problems, they're can become trust and safety issues. When AI misinterprets human emotion, the consequences range from frustrating to genuinely harmful. Measuring these failures systematically gives us the foundation to build more emotionally intelligent, contextually aware systems. The kind we can trust to interact with people in meaningful ways. If nothing else, it's a humbling reminder that reading emotion is far more complex than counting positive and negative words.&lt;/p&gt;

&lt;p&gt;If you're curious, the repository is ready to explore, complete with mock models, sentiment evaluation tools, and analytical frameworks. It's designed to be accessible regardless of computational resources. You don't need expensive API access, just curiosity and an interest in how AI interprets human emotion.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This completes the AI Safety Evaluation Suite.&lt;/strong&gt; Three critical dimensions of AI behavior - &lt;em&gt;overconfidence, hallucinations&lt;/em&gt;, and &lt;em&gt;sentiment&lt;/em&gt; analysis each one a window into what it takes to build truly safe and reliable AI. This is the heart of what I adore about AI Engineering: the constant experimentation, the iterative learning, the challenge of turning observations into better systems. It's technical work with real-world stakes, and that's what makes it so compelling. Every experiment reveals new patterns, every measurement sharpens our understanding, with better solutions and every insight brings us closer to trustworthy AI systems.&lt;/p&gt;

&lt;p&gt;Follow for more &lt;strong&gt;AI Engineering&lt;/strong&gt; with &lt;strong&gt;eriperspective&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>anthropic</category>
      <category>llm</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>Measuring Model Hallucinations: When AI Invents Facts</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Sun, 08 Feb 2026 01:32:24 +0000</pubDate>
      <link>https://dev.to/eriperspective/measuring-model-hallucinations-when-ai-invents-facts-3ae7</link>
      <guid>https://dev.to/eriperspective/measuring-model-hallucinations-when-ai-invents-facts-3ae7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Next in the AI Safety Evaluation Suite: Measuring AI Hallucinations. When models start inventing facts with the confidence of established truth, we enter entirely new territory in AI safety.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Have you ever asked an AI a straightforward question and received an answer so polished, so confident, although so completely fabricated that you had to double-check reality? Welcome to the world of AI hallucinations - where models generate fluent fiction with the authority of established fact.&lt;/p&gt;

&lt;p&gt;What is AI Hallucination: AI hallucination occurs when a language model generates information that is fluent and coherent but factually incorrect or entirely fabricated, often presented with high confidence.&lt;/p&gt;

&lt;p&gt;As an AI Engineer, I've been fascinated by a critical question: how often do models hallucinate, and what triggers these confident fabrications? (They sound so convincingly correct.) Understanding this isn't just academically interesting, it's essential for AI safety and deployment. Imagine a model providing legal citations that do not actually exist or historical events that never really happened, all delivered with unwavering certainty. That's a liability we can't ignore.&lt;/p&gt;

&lt;p&gt;So, I built a &lt;strong&gt;&lt;a href="https://github.com/eriperspective/ai-measuring-hallucinations" rel="noopener noreferrer"&gt;playground measuring AI Hallucinations&lt;/a&gt;&lt;/strong&gt; to investigate this systematically. The framework evaluates when models generate factually incorrect information, examines how different prompts influence hallucination rates, and explores what interventions can reduce these fabrications in real-world systems. I set up a mock model as the default option. Anyone can explore this regardless of budget or API access - with optional support for real LLMs if you want to go deeper.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I again then fed it a strategic mix of questions&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Factual&lt;/strong&gt;: Questions with verifiable answers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldbw8klzugsh8xi6zamz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fldbw8klzugsh8xi6zamz.png" alt=" " width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ambiguous&lt;/strong&gt;: Questions with multiple plausible interpretations.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy8b6qn10kfz1082vphl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhy8b6qn10kfz1082vphl.png" alt=" " width="800" height="260"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impossible&lt;/strong&gt;: Questions with no correct answers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdyek8ja95fr4mpplmh3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjdyek8ja95fr4mpplmh3.png" alt=" " width="800" height="252"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here's What I Learned:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fluency masks fabrication.&lt;/strong&gt; The model could generate incredibly plausible-sounding answers to impossible questions. It didn't hesitate - it just invented details with complete narrative coherence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompting helps, albeit it does not solve it.&lt;/strong&gt; Asking the model to verify its answers or admit uncertainty reduced hallucinations, although this did not eliminate them. Even with careful prompting, fabrications at times slipped through.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Small changes, big differences.&lt;/strong&gt; Tiny variations in how I phrased questions could flip the model from truthful to hallucinatory. This is were we shine as engineers. The fragility was striking. This was so interesting.&lt;/p&gt;

&lt;p&gt;This &lt;strong&gt;AI Measuring Hallucinations&lt;/strong&gt; project is fully reproducible, uses a mock model by default, and includes optional support for real LLMs like Anthropic Claude if you want to explore further. You can measure hallucination rates, analyze confidence correlations, and examine how prompt engineering affects truthfulness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Best Part:&lt;/strong&gt; I had the ability to see how easily models confuse fluency with accuracy. Sometimes it would confidently invent entire narratives, other times it would honestly say "I don't have that information." The kind of unpredictability at times, revealed just how surface-level current alignment techniques can be, a crucial insight for building safer systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Key Takeaway:&lt;/strong&gt; Hallucinations aren't rare edge cases, they're actually a fundamental challenge that exist in language model behavior. Measuring them systematically gives us the foundation to build more truthful, reliable AI systems. The kind we can trust when accuracy actually matters. If nothing else, it's a humbling reminder that &lt;em&gt;eloquence isn't evidence&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If you're curious, the repository is ready to explore complete with mock models, hallucination detection tools, and analytical frameworks. It's designed to be accessible regardless of computational resources. You don't need expensive API access, just curiosity and a commitment to understanding AI truthfulness. Enjoy AI engineering.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next in the AI Safety Evaluation Suite: Measuring Sentiment.&lt;/strong&gt; The final piece of this exciting series. When AI misreads human emotion and intent, we enter some of the most nuanced and overlooked territory in AI safety. See you there.&lt;/p&gt;

&lt;p&gt;Follow for more &lt;strong&gt;AI Engineering&lt;/strong&gt; with &lt;strong&gt;eriperspective&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>anthropic</category>
      <category>llm</category>
    </item>
    <item>
      <title>Measuring Model Overconfidence: When AI Thinks It Knows</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Sun, 08 Feb 2026 00:07:02 +0000</pubDate>
      <link>https://dev.to/eriperspective/measuring-model-overconfidence-when-ai-thinks-it-knows-2a4j</link>
      <guid>https://dev.to/eriperspective/measuring-model-overconfidence-when-ai-thinks-it-knows-2a4j</guid>
      <description>&lt;p&gt;Have you ever asked a AI/language model a question and watched it answer with total confidence… only to realize it was completely wrong? Welcome to the world of AI overconfidence - where models talk like gurus with good intentions albeit sometimes have no idea they are incorrect.&lt;/p&gt;

&lt;p&gt;As an AI engineer, I've been deeply curious about one question: how often do models demonstrate confidence that exceeds their capabilities? Measuring this is interesting and it's critical for safety and alignment. Imagine a model dispensing medical advice with complete certainty, despite gaps in its knowledge. I think that's a real concern worth addressing.&lt;/p&gt;

&lt;p&gt;So, I built a &lt;strong&gt;&lt;a href="https://github.com/eriperspective/ai-measuring-model-overconfidence" rel="noopener noreferrer"&gt;playground measuring AI Overconfidence&lt;/a&gt;&lt;/strong&gt; to test this systematically. The framework evaluates when models overstate their certainty, how prompt design shapes their confidence calibration, and what we can implement to ensure safer, more honest AI systems. I set up a mock model as the default option. Anyone can explore this regardless of budget or API access - with optional support for real LLMs if you want to go deeper. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;I then fed it a strategic mix of questions&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Factual&lt;/strong&gt;: Questions with clear answers (like “Who wrote Macbeth?”)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckiq5zxwakf4unjdykwk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckiq5zxwakf4unjdykwk.png" alt=" " width="800" height="283"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ambiguous&lt;/strong&gt;: Questions with multiple plausible answers (“Who is the greatest scientist?”)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3vb0ydzmpd0qrc3driy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq3vb0ydzmpd0qrc3driy.png" alt=" " width="800" height="271"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unanswerable&lt;/strong&gt;: Questions that were basically nonsense (“Who was the president of the United States in 1800 BC?”)&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqgn8egxtdxfdhv2987c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcqgn8egxtdxfdhv2987c.png" alt=" " width="800" height="233"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Here’s what I learned&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confidence ≠ correctness&lt;/strong&gt;. Even simple factual questions sometimes got wild confidence scores. The AI strutted like it owned the answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompting matters&lt;/strong&gt;. Asking it to admit uncertainty reduced some mistakes — like convincing a teenager to finally say “I don’t know” instead of guessing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Human intuition helps&lt;/strong&gt;. There are limits to how much you can trust a model just because it sounds smart.&lt;/p&gt;

&lt;p&gt;This &lt;strong&gt;AI Measuring Overconfidence&lt;/strong&gt; project is fully reproducible, uses a mock model by default, and includes optional support for real LLMs like Anthropic Claude if you want to take it for a spin. You can measure overconfidence, plot confidence vs correctness, and even reflect on why AI sometimes thinks it’s a genius.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Best Part&lt;/strong&gt;: I got to see patterns that are so human-like it’s so interesting: confidently wrong, sometimes cautious, occasionally spot-on. It's a little unpredictable, a little fascinating, and a important safety lesson.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Key Takeaway&lt;/strong&gt;: Overconfidence is everywhere in AI systems. Measuring it early gives us the tools to build safer, more calibrated AI. The kind of systems we can actually rely on when stakes are high. If nothing else, it makes for a really entertaining experience.&lt;/p&gt;

&lt;p&gt;If you're curious, the repository is ready to explore complete with mock models, visualization tools, and analytical frameworks.  It's designed to be accessible regardless of computational resources. You don't need expensive API access, just curiosity and a willingness to experiment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Next up&lt;/strong&gt;, I'm diving into measuring AI hallucinations and sentiment analysis the next pieces in this AI safety evaluation suite. When models confidently present incorrect information or misread emotional nuance, we're looking at entirely different dimensions of AI safety, each presenting their own critical challenges.&lt;/p&gt;

&lt;p&gt;Follow for more &lt;strong&gt;AI Engineering&lt;/strong&gt; with &lt;strong&gt;eriperspective&lt;/strong&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>anthropic</category>
      <category>llm</category>
    </item>
    <item>
      <title>LangChain &amp; Building Agentic AI: What I Learned and Created Along The Way</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Wed, 07 Jan 2026 04:41:21 +0000</pubDate>
      <link>https://dev.to/eriperspective/langchain-building-ai-agents-what-i-learned-and-created-along-the-way-278c</link>
      <guid>https://dev.to/eriperspective/langchain-building-ai-agents-what-i-learned-and-created-along-the-way-278c</guid>
      <description>&lt;h1&gt;
  
  
  My AI Engineering Journey: Understanding LangChain and Building Agentic AI
&lt;/h1&gt;

&lt;p&gt;Building Smart Finance AI an Agentic Multi-agent financial orchestration Application illuminated curiosity in my AI engineering education. The path from foundational concepts to production-grade agentic applications becoming well documented, and existing resources are providing the architectural depth necessary for building systems that actually work.&lt;/p&gt;

&lt;p&gt;As a lifelong learner and engineer, I continued to create this repository to while learning about AI, not just for myself, albeit for those passionate about building real AI applications. I as always emphasize I read the documentation. This was no different. This is the learning path I needed, distilled into 12 initial progressive tutorials then a total of 42 that take you from basic agent concepts to sophisticated multi-agent architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://github.com/eriperspective/langchain" rel="noopener noreferrer"&gt;Explore the repository →&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Amazing AI Engineering Resources: LangChain
&lt;/h2&gt;

&lt;p&gt;With LangChain learning resources there was a great deal of incredible documentation. These prioritized building foundational understanding, and extracting the conceptual patterns beneath the wonderful technical detail.&lt;/p&gt;

&lt;p&gt;I began quickly, the documentation was comprehensive and architecturally rigorous, and well written, they were valuable. They're excellent first steps, in completing my journeys. &lt;/p&gt;

&lt;p&gt;The connective tissue - resources that met me where introductory tutorials that I learned from and walked me through systematically toward production-ready implementations. This translated to concepts, architectural decisions, development attention.&lt;/p&gt;

&lt;p&gt;My post-baccalaureate coursework in AI technology and systems provided theoretical grounding in LangGraph, RAG, CAG architectures, and hybrid systems. However, translating that foundation into production-ready code required synthesizing disparate patterns into coherent implementation strategies. This repository emerged from that synthesis process and from my genuine love of engineering and continuous learning.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Continued to Built: A Complete Learning Architecture
&lt;/h2&gt;

&lt;p&gt;This isn't a code snippet collection. It's a structured learning path from foundational agent concepts to sophisticated multi-agent orchestration, where each tutorial builds systematically on established patterns.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design Principles
&lt;/h3&gt;

&lt;p&gt;This Repository Adheres To Three Principles:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Executable Completeness&lt;/strong&gt;: Every tutorial runs without modification. No dependency gaps, no implementation exercises, no ambiguous instructions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Architectural Rationale&lt;/strong&gt;: Documentation explains not merely what the code accomplishes, but why particular patterns were selected over alternatives; the decision criteria that matter in production contexts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Progressive Complexity&lt;/strong&gt;: Each tutorial assumes mastery of preceding concepts, creating a deliberate pedagogical arc from basic agent instantiation to complex multi-agent coordination.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y1d4x8qmp5wbn9qqz3v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0y1d4x8qmp5wbn9qqz3v.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Learning Path: From Foundations to Production
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Phase I: Core Foundations (Tutorials 0-6)
&lt;/h3&gt;

&lt;p&gt;The journey begins with essential concepts, but extends beyond basic functionality to address production concerns:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial 0 - Quickstart&lt;/strong&gt;: Agent instantiation with tool integration and memory persistence. First exposure to the patterns that recur throughout the repository.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial 1 - Agents&lt;/strong&gt;: Dynamic model selection, middleware injection, graceful error handling. The distinction between basic agents and production-ready agents becomes clear here.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial 2 - Models&lt;/strong&gt;: Invocation methods, tool calling semantics, structured output generation. Understanding how to communicate with language models effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial 3 - Messages&lt;/strong&gt;: Conversation management, message type semantics, multimodal content handling. The data structures that enable sophisticated interactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial 4 - Tools&lt;/strong&gt;: Tool creation with runtime context access. How to design tools that agents can actually use effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial 5 - Memory&lt;/strong&gt;: Short-term memory patterns, state management, conversation summarization. The architecture that enables agents to maintain context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tutorial 6 - LangSmith&lt;/strong&gt;: Distributed tracing, debugging infrastructure, production monitoring. The observability layer that transforms development from guesswork into systematic refinement.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Not just "here's an agent"—demonstrating architectural thinking
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;get_weather&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;search_web&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;memory&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;MemorySaver&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;  &lt;span class="c1"&gt;# State persistence—the foundation of memory
&lt;/span&gt;    &lt;span class="n"&gt;middleware&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;error_handler&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;logger&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;  &lt;span class="c1"&gt;# Production concerns from day one
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Phase II: Retrieval-Augmented Generation (Tutorials 7-10)
&lt;/h3&gt;

&lt;p&gt;This sequence demonstrates architectural evolution through a carefully structured progression:&lt;/p&gt;

&lt;h4&gt;
  
  
  Tutorial 7: Document Loaders - Building the Knowledge Base
&lt;/h4&gt;

&lt;p&gt;Foundation Work: PDF processing, text chunking strategies, embedding generation, ChromaDB storage. The infrastructure that enables retrieval.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# RecursiveCharacterTextSplitter: balancing context preservation with retrieval precision
&lt;/span&gt;&lt;span class="n"&gt;text_splitter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;RecursiveCharacterTextSplitter&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;chunk_size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1000&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# Large enough for context
&lt;/span&gt;    &lt;span class="n"&gt;chunk_overlap&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;     &lt;span class="c1"&gt;# Overlap preserves semantic continuity
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The tutorial addresses chunk size trade-offs explicitly: too small loses context, too large reduces precision. The sweet spot (800-1200 characters) emerges from production experience, not arbitrary selection.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tutorial 8: Retrieval - Querying Strategies
&lt;/h4&gt;

&lt;p&gt;Beyond basic similarity search to explore the retrieval strategy spectrum:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Similarity Search&lt;/strong&gt;: Baseline relevance-based retrieval&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maximum Marginal Relevance (MMR)&lt;/strong&gt;: Balancing relevance with diversity to avoid redundant results&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Score Thresholds&lt;/strong&gt;: Quality control through similarity filtering&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Metadata Filtering&lt;/strong&gt;: Targeted search within document subsets&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each strategy serves specific use cases. The tutorial provides decision criteria, not just implementation options.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tutorial 9: Two-Step RAG - Fixed Pipeline Architecture
&lt;/h4&gt;

&lt;p&gt;Deterministic retrieval-then-generation pattern:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Search the knowledge base for relevant information.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;docs&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;similarity_search&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;k&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;doc&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;page_content&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;doc&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;docs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;search_knowledge_base&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use search_knowledge_base to find information before answering.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rag_agent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Characteristics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Predictable execution path&lt;/li&gt;
&lt;li&gt;Consistent latency&lt;/li&gt;
&lt;li&gt;Suitable for document-focused Q&amp;amp;A where context is always required&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Tutorial 10: Agentic RAG - Autonomous Retrieval Decisions
&lt;/h4&gt;

&lt;p&gt;Agent-Controlled Retrieval Strategy:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;search_documents&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Use search_documents when you need specific document information. &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;If you can answer from general knowledge, do so.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;agentic_rag&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;The Critical Distinction&lt;/strong&gt;: System Prompt Language. "Use tool when needed" versus "Use tool always" fundamentally alters agent behavior.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase III: Multi-Agent Orchestration (Tutorial 11)
&lt;/h3&gt;

&lt;p&gt;The Supervisor Pattern Represents Architectural Sophistication:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Specialized sub-agents
&lt;/span&gt;&lt;span class="n"&gt;calendar_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;  &lt;span class="c1"&gt;# Production: calendar API tools
&lt;/span&gt;    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a calendar specialist. Handle scheduling tasks.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;email_agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; 
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[],&lt;/span&gt;  &lt;span class="c1"&gt;# Production: email API tools
&lt;/span&gt;    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are an email specialist. Handle email tasks.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Supervisor coordinates sub-agents
&lt;/span&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;schedule_event&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Schedule calendar events using natural language.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;calendar_agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;invoke&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;}]})&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;

&lt;span class="n"&gt;supervisor&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;create_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;schedule_event&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;manage_email&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;research_topic&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;system_prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a coordinator. Use specialized agents based on requests.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This architecture was implemented in my SmartFinance AI applications which provides modularity, testability, and horizontal scalability. Each sub-agent maintains focused expertise within clear domain boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When To Use Multi-Agent Systems:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiple distinct domains requiring specialized knowledge&lt;/li&gt;
&lt;li&gt;Each domain has complex logic or dedicated tools&lt;/li&gt;
&lt;li&gt;Need centralized coordination without direct user interaction per agent&lt;/li&gt;
&lt;li&gt;System complexity justifies architectural overhead&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Advanced Implementation: Context-Augmented Generation (CAG)
&lt;/h2&gt;

&lt;p&gt;In addition to Retrieval Augmented Generation (RAG), this repository explores sophisticated context management through three architectural layers:&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 1: Model Context (Tutorial 12)
&lt;/h3&gt;

&lt;p&gt;Transient modifications to model invocations which changes that affect single calls without persisting to state:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@wrap_model_call&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;smart_model_selection&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ModelRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Callable&lt;/span&gt;&lt;span class="p"&gt;[[&lt;/span&gt;&lt;span class="n"&gt;ModelRequest&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="n"&gt;ModelResponse&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;ModelResponse&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Route to appropriate model based on query complexity.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="n"&gt;latest_message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Complexity heuristic
&lt;/span&gt;    &lt;span class="n"&gt;is_complex&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;latest_message&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="mi"&gt;200&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt;
        &lt;span class="nf"&gt;any&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;word&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;latest_message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; 
            &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;word&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;analyze&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;compare&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;explain in detail&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="n"&gt;model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:gpt-4o&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;is_complex&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;openai:gpt-4o-mini&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;override&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;handler&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Use Cases&lt;/strong&gt;: Dynamic system prompts, conditional tool availability, model routing based on complexity, adaptive output schemas.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 2: Tool Context (Tutorial 13)
&lt;/h3&gt;

&lt;p&gt;Controlling what tools access (reads) and produce (writes):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@tool&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;authenticate_user&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;runtime&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;ToolRuntime&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Authenticate user and update session state.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;password&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;correct&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authenticated&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;username&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;username&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;login_time&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;datetime&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;now&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;isoformat&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="p"&gt;},&lt;/span&gt;
            &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authentication successful.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nc"&gt;Command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;update&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authenticated&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Authentication failed.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Three data sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;State&lt;/strong&gt;: Current conversation session data&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Store&lt;/strong&gt;: Cross-conversation persistent data (user preferences, history)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime Context&lt;/strong&gt;: Static configuration (API keys, permissions)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The tool writes are persistent, they modify state permanently, unlike transient model context changes.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 3: Life-cycle Context (Tutorial 14)
&lt;/h3&gt;

&lt;p&gt;Intercepting the agent loop between core steps for cross-cutting concerns:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@before_model&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;enforce_token_budget&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;dict&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Prevent expensive calls exceeding budget.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

    &lt;span class="n"&gt;tokens_used&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tokens_used&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;max_tokens&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;token_budget&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;10000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;tokens_used&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="n"&gt;max_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;messages&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;assistant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Token budget exceeded. Please start a new conversation.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;})&lt;/span&gt;
        &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;GraphInterrupt&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Token budget exceeded&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;state&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;state&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Hook types:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;before_model&lt;/code&gt;: Input validation, content moderation, budget enforcement&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;after_model&lt;/code&gt;: Response formatting, logging, metrics tracking&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;before_tools&lt;/code&gt;: Authorization, validation&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;after_tools&lt;/code&gt;: Error handling, result validation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Life-cycle hooks enable automatic message summarization, content filtering, comprehensive monitoring. The cross-cutting concerns that distinguish production systems from prototypes.&lt;/p&gt;

&lt;h1&gt;
  
  
  Architecture:
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2lz9mmq95egsi82dcsm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj2lz9mmq95egsi82dcsm.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;(&lt;a href="https://docs.langchain.com/oss/javascript/langchain/retrieval?utm_source=chatgpt.com" rel="noopener noreferrer"&gt;LangChain Docs&lt;/a&gt;) |&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Context &amp;amp; Retrieval Patterns in LangChain
&lt;/h2&gt;

&lt;p&gt;When building LLM applications, there are different ways to provide information (“context”) to the model. In LangChain, these patterns range from simple prompt-based context to fully agent-driven retrieval systems. Understanding when to use each approach is more important than memorizing definitions.&lt;/p&gt;

&lt;h3&gt;
  
  
  CAG (Static Context Injection): Context Without Retrieval
&lt;/h3&gt;

&lt;p&gt;In the simplest setups, the model already has everything it needs. &lt;strong&gt;CAG&lt;/strong&gt; refers to providing this information directly to the model through prompts, predefined documents, or memory - without performing any dynamic search.&lt;/p&gt;

&lt;p&gt;This approach works best when the knowledge domain is small, stable, and well-curated, such as application rules, policies, or product documentation. Responses are fast and predictable, though updating or scaling the knowledge requires manual changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Practice&lt;/strong&gt;: This is often just a well-designed prompt or chain with static context.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.prompts&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;PromptTemplate&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chat_models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="c1"&gt;# Static context provided directly to the model
&lt;/span&gt;&lt;span class="n"&gt;static_context&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
You are an expert travel guide. 
Provide helpful recommendations for tourists in Paris.
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;PromptTemplate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;input_variables&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;question&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;template&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;static_context&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Question: {question}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s"&gt;Answer:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;llm&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What are the best hidden cafes to visit?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;format&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  &lt;strong&gt;Two-Step RAG&lt;/strong&gt;: Fixed Retrieval Pipeline
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Two-Step RAG&lt;/strong&gt; introduces dynamic retrieval. Before generating a response, the system always retrieves relevant documents (e.g., from a vector database) and passes them to the model.&lt;/p&gt;

&lt;p&gt;This fixed sequence - retrieve, then generate - makes the system easy to understand and debug. However, retrieval is performed for every query, even when it may not be strictly needed, which can be inefficient for simple or mixed questions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Practice:&lt;/strong&gt; This is most common starting point for RAG systems in LangChain.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chains&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;RetrievalQA&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Chroma&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.embeddings.openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIEmbeddings&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chat_models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;

&lt;span class="c1"&gt;# Sample documents
&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Paris has hidden cafes in Le Marais and Montmartre...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The Louvre and Musée d&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Orsay are must-see museums.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Create embeddings
&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize ChromaDB vector store (matches your repo)
&lt;/span&gt;&lt;span class="n"&gt;vectorstore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Chroma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Create retriever
&lt;/span&gt;&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_retriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;search_kwargs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;k&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;# Create RetrievalQA chain
&lt;/span&gt;&lt;span class="n"&gt;qa_chain&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;RetrievalQA&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_chain_type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;return_source_documents&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Where can I find hidden cafes in Paris?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;qa_chain&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Agentic RAG: Conditional Retrieval and Planning
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Agentic RAG&lt;/strong&gt; adds an agent layer that allows the system to decide &lt;em&gt;whether&lt;/em&gt; retrieval is needed and &lt;em&gt;how&lt;/em&gt; to perform it.  The agent can reason about the query, choose tools, and orchestrate multi-step actions. The agent also reasons about the user’s request and chooses which tools to use - including retrieval.&lt;/p&gt;

&lt;p&gt;This enables more advanced workflows, such as multi-step reasoning, tool use, and adaptive responses. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In Practice:&lt;/strong&gt; This pattern is used when queries are complex, ambiguous, or require reasoning across multiple sources.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.agents&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;Tool&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.chat_models&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;ChatOpenAI&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.vectorstores&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Chroma&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;langchain.embeddings.openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAIEmbeddings&lt;/span&gt;

&lt;span class="c1"&gt;# Sample documents
&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Paris cafes guide...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Hidden spots in Montmartre...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Create embeddings
&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAIEmbeddings&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize Chroma vector store
&lt;/span&gt;&lt;span class="n"&gt;vectorstore&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Chroma&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_texts&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;documents&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;embedding&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;embeddings&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;retriever&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;vectorstore&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_retriever&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;search_kwargs&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;k&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;

&lt;span class="c1"&gt;# Define a retriever tool
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_docs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;retriever&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_relevant_documents&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;tools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nc"&gt;Tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;DocumentRetriever&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;search_docs&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Fetch relevant documents for user questions&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c1"&gt;# Initialize agent (agent decides if retrieval is needed)
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;initialize_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;llm&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nc"&gt;ChatOpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-3.5-turbo&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;zero-shot-react-description&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;verbose&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;question&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Compare hidden cafes in Montmartre with Le Marais and recommend a day itinerary.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;answer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;question&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;answer&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  How These Patterns Relate in LangChain
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;CAG&lt;/strong&gt; focuses on &lt;em&gt;how context is provided&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAG&lt;/strong&gt; focuses on &lt;em&gt;how context is retrieved&lt;/em&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agents&lt;/strong&gt; focus on &lt;em&gt;how decisions are made&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LangChain supports all three patterns, allowing developers to start simple and progressively add complexity as needed. I liked that.&lt;/p&gt;

&lt;h3&gt;
  
  
  Important Note for Learners As I Learned Myself
&lt;/h3&gt;

&lt;p&gt;&lt;em&gt;CAG&lt;/em&gt; is used here as a descriptive term for static context injection. LangChain does not currently define CAG as a formal RAG architecture, but fully supports this pattern through prompt templates, memory, and chains.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Simple Way to Think About It
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Start with &lt;strong&gt;CAG&lt;/strong&gt; when your knowledge is small and stable&lt;/li&gt;
&lt;li&gt;Move to &lt;strong&gt;2-Step RAG&lt;/strong&gt; when you need dynamic knowledge retrieval&lt;/li&gt;
&lt;li&gt;Use &lt;strong&gt;Agentic RAG&lt;/strong&gt; when the system must reason, plan, or choose actions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At times, the best architecture is the &lt;strong&gt;simplest one that solves the problem&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kkrgs0s2k3hul0bw3zo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9kkrgs0s2k3hul0bw3zo.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Progression in Development in AI Matters
&lt;/h2&gt;

&lt;p&gt;What I learned is that &lt;strong&gt;LangChain is designed to support growth&lt;/strong&gt;: you can begin with basic prompt-based systems and gradually evolve toward agent-driven architectures as your application’s needs increase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CAG optimizes how context is supplied, RAG optimizes how context is retrieved, and agents optimize how decisions are made.&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CAG → Prompt templates, memory, static documents, system context&lt;/li&gt;
&lt;li&gt;2-Step RAG → Chains + retrievers&lt;/li&gt;
&lt;li&gt;Agentic RAG → Agents or LangGraph with retrievers as tools&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Personal Developers Note&lt;/strong&gt;&lt;br&gt;
&lt;em&gt;I included CAG (Context-Augmented Generation) in this discussion because it is not usually highlighted, I learned about CAG - I feel it is an important part of LLM system design. By injecting context directly into prompts or memory, it offers simplicity, minimal latency, and predictable behavior. Showing CAG alongside 2-Step and Agentic RAG helps learners see the full spectrum of context strategies, from basic setups to complex agent-driven reasoning.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Agentic AI Implementation: My Application Smart Finance AI
&lt;/h2&gt;

&lt;p&gt;These patterns aren't theoretical. Every implementation emerged from building Smart Finance AI, an Agentic AI/Multi-Agent Financial System providing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Portfolio Management&lt;/strong&gt;: State tracking and real-time position monitoring&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Market Analysis&lt;/strong&gt;: Sentiment processing and trend identification&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Financial Planning&lt;/strong&gt;: Goal orchestration and strategy optimization
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangGraph Coordination&lt;/strong&gt;: Multi-agent workflows with sophisticated state management&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This repository represents the technology and curriculum I successfully learned and completed and required when beginning this implementation. Each pattern addresses production challenges encountered during development, the lessons learned from building systems that actually run in production.&lt;/p&gt;
&lt;h3&gt;
  
  
  MY Final Thoughts Regarding LangChain - I Mean Why not LangChain Its Great!
&lt;/h3&gt;

&lt;p&gt;I love engineering. I love learning. This repository exists at the intersection of both.&lt;/p&gt;

&lt;p&gt;As a lifelong learner, I required resources that went beyond surface-level tutorials albeit remained accessible enough to build understanding systematically. The process of creating Agentic AI Application taught me that the path from learning AI concepts to implementing production systems requires more than the surface level - it requires a coherent sophisticated learning architecture.&lt;/p&gt;

&lt;p&gt;This repository assisted me in learning, developing, creating, and building Agentic AI applications. If you're on a similar journey, passionate about engineering, committed to continuous learning, eager to build real systems - I hope it helps you as well.&lt;/p&gt;

&lt;p&gt;Every tutorial reflects production experience. Every pattern addresses real challenges. Every architectural decision is explained with the criteria that matter when your code requirement to work reliably.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Who This Serves&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This resource is for everyone - especially developers who:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Possess Python proficiency and need to learn LangChain&lt;/li&gt;
&lt;li&gt;Have implemented basic LLM applications and seek production-grade patterns&lt;/li&gt;
&lt;li&gt;Require understanding of RAG and CAG architectures beyond marketing materials&lt;/li&gt;
&lt;li&gt;Require understanding of Agentic AI systems&lt;/li&gt;
&lt;li&gt;Are building production AI systems&lt;/li&gt;
&lt;li&gt;Are preparing for AI engineering roles requiring architectural competency&lt;/li&gt;
&lt;li&gt;Share a passion for continuous learning and engineering excellence&lt;/li&gt;
&lt;li&gt;Enjoy learning and development&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Getting Started&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/eriperspective/langchain.git
&lt;span class="nb"&gt;cd &lt;/span&gt;langchain
pip &lt;span class="nb"&gt;install &lt;/span&gt;langchain langchain-openai langchain-community langgraph pypdf langsmith
&lt;span class="nb"&gt;export &lt;/span&gt;&lt;span class="nv"&gt;OPENAI_API_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-api-key-here"&lt;/span&gt;
python 0-quickstart/quickstart_demo.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Recommended Learning Path:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Work through tutorials 0-6 sequentially to establish foundational patterns&lt;/li&gt;
&lt;li&gt;Complete tutorial 6 to establish debugging infrastructure&lt;/li&gt;
&lt;li&gt;Progress through RAG sequence (7-10) to understand retrieval architectures&lt;/li&gt;
&lt;li&gt;Study tutorial 11 for multi-agent coordination patterns&lt;/li&gt;
&lt;li&gt;Explore advanced topics (12-41) based on specific implementation needs added for deeper learning&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Technology Stack&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LangChain v1.0+&lt;/strong&gt;: Current patterns, no deprecated implementations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangGraph&lt;/strong&gt;: Complex agent workflow orchestration&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OpenAI GPT-4o-mini&lt;/strong&gt;: Sufficient capability, cost-optimized for learning&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ChromaDB&lt;/strong&gt;: Vector database for RAG implementations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LangSmith&lt;/strong&gt;: Distributed tracing and debugging infrastructure&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Key Repository Features&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Production-Quality Code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Comprehensive error handling and validation&lt;/li&gt;
&lt;li&gt;Clear progress indicators and visual feedback&lt;/li&gt;
&lt;li&gt;Both automated testing and interactive modes&lt;/li&gt;
&lt;li&gt;Prerequisite checking and dependency verification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Architectural Emphasis:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explicit trade-off analysis (when to use which pattern)&lt;/li&gt;
&lt;li&gt;Decision criteria for architectural selection&lt;/li&gt;
&lt;li&gt;Production patterns from real system implementation&lt;/li&gt;
&lt;li&gt;Focus on why, not just how&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Progressive Complexity:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each tutorial builds on established foundations&lt;/li&gt;
&lt;li&gt;Clear prerequisites and learning dependencies&lt;/li&gt;
&lt;li&gt;Concepts introduced when needed, not prematurely&lt;/li&gt;
&lt;li&gt;Natural progression from simple to sophisticated&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Acknowledgments&lt;/strong&gt;&lt;br&gt;
Professor Charles Jesture at Revature who taught me Langchian and AZ Next at Arizona State University with Rob Buelow provided the theoretical foundation through post-baccalaureate coursework that enabled this amazing work. The transition from conceptual understanding to production implementation is substantial, and rigorous educational grounding proved essential.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Future is Now &amp;amp; I Am Thoroughly Excited
&lt;/h3&gt;

&lt;p&gt;I'll continue expanding this repository as production systems evolve. Upcoming may additions include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Advanced error recovery patterns and circuit breaker implementations&lt;/li&gt;
&lt;li&gt;Streaming response architectures for improved user experience&lt;/li&gt;
&lt;li&gt;Cost optimization strategies for production-scale deployment&lt;/li&gt;
&lt;li&gt;Additional integration examples from production systems&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For developers building with these patterns, I welcome dialogue. Engineering is fundamentally collaborative, and continuous learning happens through shared experience which I enjoy.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Repository: &lt;a href="https://github.com/eriperspective/langchain" rel="noopener noreferrer"&gt;github.com/eriperspective/langchain&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post is part of My AI engineering Journey.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Building with Eri&lt;/strong&gt;!&lt;/p&gt;

&lt;p&gt;&lt;em&gt;What architectural challenges are you encountering in AI system development? What patterns would help you build better systems? Let's learn together.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>langchain</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>AI Engineering and Building Systems: Reflections on a Month of AI Engineering with goose by Block</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 05 Jan 2026 22:53:32 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-and-building-systems-reflections-on-a-month-of-ai-engineering-with-goose-by-block-45jj</link>
      <guid>https://dev.to/eriperspective/ai-engineering-and-building-systems-reflections-on-a-month-of-ai-engineering-with-goose-by-block-45jj</guid>
      <description>&lt;p&gt;A month-long engineering journey powered with goose by Block , MCP, Anthropic's Sonnet 4.5, spatial intelligence with MediaPipe by Google, accessibility-driven UI design, and a comprehensive stack of modern AI tooling. Seventeen full-stack applications later, here's what I learned and why this workflow fundamentally transformed the way I build.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Reflections on a Month of AI Engineering with goose&lt;/strong&gt;&lt;br&gt;
Over the past month, I built seventeen full-stack applications; not prototypes, but complete, production-ready systems. Each project pushed my understanding of architecture, workflow optimization, and AI engineering capabilities beyond the previous one. I genuinely loved every moment of it. Designing interfaces, structuring state management, building MCP servers, orchestrating YAML recipes, experimenting with spatial intelligence, and refining accessibility patterns became a rhythm I craved each day. The momentum was relentless and exhilarating. Each application revealed another layer of what this workflow makes possible, and I'm walking away from this challenge more energized and inspired than when I began.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;What goose Is and Why It Transformed My Workflow&lt;/strong&gt;&lt;br&gt;
goose isn't a replacement for my existing engineering workflow, it's a powerful augmentation. Goose is a comprehensive engineering environment built on the Model Context Protocol (MCP). MCP provides a structured framework for defining tools, managing state, and delivering dynamic interfaces without the traditional server infrastructure overhead. Goose brings that protocol to life in remarkable ways.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;goose Desktop&lt;/strong&gt;&lt;br&gt;
A live development environment featuring real-time UI previews, auto-visualizers, comprehensive tool logs, and resource inspectors. It feels like building inside a living, breathing system that responds to your intent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;goose CLI&lt;/strong&gt;&lt;br&gt;
A command-driven interface for running MCP servers, inspecting tools, testing workflows, and debugging state. It becomes the backbone of your development loop, providing granular control over every aspect of the build process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;YAML Recipes&lt;/strong&gt;&lt;br&gt;
A lightweight automation layer that enables you to chain tools, execute multi-step workflows, and define repeatable processes without writing additional code. It's automation that feels intuitive rather than cumbersome.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP Auto Visualizers&lt;/strong&gt;&lt;br&gt;
These instantly transform structured tool outputs into visual interfaces - tables, trees, HTML, JSON, UI fragments. No extra configuration required. The visualization happens automatically, letting you focus on the logic rather than the presentation layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anthropic's Sonnet 4.5 and Goose API Integration&lt;/strong&gt;&lt;br&gt;
Throughout this challenge, I leveraged Sonnet 4.5 within goose to generate code, refactor architecture, build UI components, draft documentation, validate accessibility patterns, and prototype spatial intelligence workflows. The synergy between Sonnet 4.5's reasoning capabilities and goose's structured environment created a development experience that genuinely felt like pair programming with an engineer who comprehends the entire system architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Goose Is a Game-Changer for AI Engineering&lt;/strong&gt;&lt;br&gt;
Let me be clear: goose fundamentally changed how I approach building software. This isn't hyperbole - it's the reality of working with a tool that eliminates friction at every level of the development process. Before goose, I was constantly context-switching between editors, terminals, browsers, documentation, and testing environments. Each switch broke my flow, interrupted my thinking, and slowed down the feedback loop that's essential to creative problem-solving.&lt;br&gt;
&lt;strong&gt;goose eliminates that fragmentation entirely&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Power of Unified Context&lt;/strong&gt;&lt;br&gt;
What makes goose exceptional is how it maintains unified context across every aspect of development. When I'm building an MCP server, goose isn't just running my code, it's visualizing my data structures in real-time, logging every tool interaction with full transparency, and letting me inspect state at any moment without breaking stride. The Desktop environment shows me exactly what's happening inside my application as it happens. There's no "build, refresh, check, debug" cycle. There's just immediate, continuous feedback.&lt;/p&gt;

&lt;p&gt;This might sound like a minor convenience, but the impact is profound. When you can see your changes reflected instantly, when you can inspect tool outputs without switching windows, when you can debug state without adding console logs and restarting processes, you stay in flow. I feel flow is where the best engineering happens.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP: The Protocol That Changes Everything&lt;/strong&gt;&lt;br&gt;
The Model Context Protocol is the foundation that makes goose's magic possible, and it deserves recognition as a genuinely innovative approach to building AI-powered applications. MCP standardizes how tools communicate, how state is managed, and how interfaces are rendered. Instead of building custom APIs, managing server deployments, and coordinating complex microservice architectures, MCP gives you a clean, declarative way to define what your application does.&lt;/p&gt;

&lt;p&gt;This isn't just simpler, it's fundamentally more expressive. With MCP servers, I can define complex tool chains, manage stateful workflows, and deliver dynamic UIs without the overhead that typically bogs down development. And because goose is built around MCP, everything just works together seamlessly. Tools talk to each other. State persists correctly. Visualizations appear automatically. The protocol eliminates an entire category of integration problems that usually consume hours of debugging time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Speed Without Sacrificing Quality&lt;/strong&gt;&lt;br&gt;
One of the biggest misconceptions about rapid development tools is that speed comes at the cost of quality. Goose proves that's false. The seventeen applications I built this month aren't prototypes or proof-of-concepts; they're production-quality systems with proper architecture, comprehensive accessibility, and thoughtful design. Goose didn't make me cut corners; it removed the tedious barriers that usually slow down good engineering.&lt;/p&gt;

&lt;p&gt;When YAML recipes let me automate multi-step workflows, I'm not avoiding the complexity. I'm managing it more intelligently. When auto-visualizers render my data structures instantly, I'm not skipping the work of understanding my outputs. I'm seeing them more clearly. When Sonnet 4.5 integration helps me refactor code or generate documentation, I'm not replacing my engineering judgment. I'm augmenting it with AI that understands context and intent.&lt;br&gt;
This is what modern AI engineering should feel like: powerful, expressive, and fast without compromising on craft.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Development Experience That Feels Alive&lt;/strong&gt;&lt;br&gt;
The best way I can describe working with goose is that it feels alive. The environment responds to you. It anticipates what you need. It makes the invisible visible. When you're building with goose, you're not fighting your tools; you're collaborating with an environment that's designed to amplify your capabilities.&lt;/p&gt;

&lt;p&gt;I've used a lot of development environments over the years. Some are powerful but clunky. Others are elegant but limited. Goose is the first tool I've encountered that manages to be both powerful and delightful. It handles complexity gracefully while staying out of your way. It gives you control without overwhelming you with configuration. It's opinionated about the right things and flexible about everything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Future of Development Is Here&lt;/strong&gt;&lt;br&gt;
goose represents a fundamental shift in what's possible with AI-assisted development. It's not about replacing developers - it's about giving developers superpowers. It's about removing the friction that separates an idea from its implementation. It's about making the development process so smooth, so intuitive, so responsive that building software becomes as creative and expressive as any other art form.&lt;/p&gt;

&lt;p&gt;This month proved to me that we're at an inflection point. The tools exist. The protocols exist. The AI models exist. What we're seeing with goose is all of these pieces coming together in a way that feels inevitable in hindsight but revolutionary in practice.&lt;/p&gt;

&lt;p&gt;The new &lt;strong&gt;agentic group created by major tech companies, including OpenAI, Block, and Anthropic are the co-founding stewards, with support from other platinum members including Google, Microsoft, Amazon Web Services (AWS), Bloomberg, and Cloudflare is called the Agentic AI Foundation (AAIF)&lt;/strong&gt;. The foundation was launched under the umbrella of the Linux Foundation in December 2025. If you're serious about AI engineering, if you want to build faster without sacrificing quality, if you want a development experience that feels empowering rather than exhausting; like many other great AI tools goose is a wonderful addition. It's not just a tool. It's a glimpse into the how we'll all be building software in the very near future with all these amazing technologies.&lt;/p&gt;

&lt;p&gt;[(&lt;a href="https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation#:%7E:text=Summary,across%20different%20repositories%20and%20toolchains.)" rel="noopener noreferrer"&gt;https://www.linuxfoundation.org/press/linux-foundation-announces-the-formation-of-the-agentic-ai-foundation#:~:text=Summary,across%20different%20repositories%20and%20toolchains.)&lt;/a&gt;]&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anthropic's Sonnet 4.5 and Goose API Integration&lt;/strong&gt;&lt;br&gt;
Throughout this challenge, I leveraged Sonnet 4.5 within goose to generate code, refactor architecture, build UI components, draft documentation, validate accessibility patterns, and prototype spatial intelligence workflows. The synergy between Sonnet 4.5's reasoning capabilities and goose's structured environment created a development experience that genuinely felt like pair programming with an engineer who comprehends the entire system architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MediaPipe: Building Spatial Intelligence into Applications&lt;/strong&gt;&lt;br&gt;
One of the most exciting aspects of this challenge was integrating MediaPipe by Google into my workflow. MediaPipe is Google's open-source framework for building multimodal machine learning pipelines. It provides pre-trained models for hand tracking, pose detection, face mesh recognition, object detection, and gesture recognition - all running efficiently in the browser or on device.&lt;/p&gt;

&lt;p&gt;What makes &lt;strong&gt;MediaPipe particularly powerful is its accessibility&lt;/strong&gt;. You don't need extensive ML expertise or server-side infrastructure to implement sophisticated computer vision features. The framework handles the complexity of real-time processing, letting developers focus on creating meaningful interactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why MediaPipe Matters for Modern Applications&lt;/strong&gt;&lt;br&gt;
In my projects, I used MediaPipe to build spatial intelligence features that transform how users interact with applications. Hand tracking enables gesture-based controls without physical input devices. Pose detection opens possibilities for fitness tracking, accessibility tools, and interactive experiences. Face mesh recognition powers AR filters, emotion detection, and attention tracking.&lt;/p&gt;

&lt;p&gt;These aren't just novel features - they represent a fundamental shift in human-computer interaction. As we move toward more natural, intuitive interfaces, spatial intelligence becomes essential. MediaPipe makes this technology accessible to developers at every level, democratizing what was once only available to large organizations with significant ML resources.&lt;br&gt;
Practical Applications I Built&lt;/p&gt;

&lt;p&gt;Throughout the challenge, I integrated MediaPipe into several applications to explore gesture-based navigation, hands-free controls for accessibility, spatial UI interactions, and pose-based fitness tracking interfaces. Each implementation revealed new possibilities for creating more inclusive and intuitive user experiences.&lt;br&gt;
The combination of MediaPipe with goose's rapid development environment meant I could prototype, test, and refine these spatial features quickly. What might have taken weeks in a traditional workflow became achievable in hours.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;How This Work Applies to Real-World Engineering&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The systems I built this month weren't abstract experiments - they map directly to real engineering challenges. MCP servers function like lightweight microservices. Dynamic HTML rendering mirrors internal dashboards and admin tools. YAML recipes reflect production automation pipelines. Sonnet 4.5 became my AI engineering partner for code generation, architecture decisions, and technical documentation. The accessibility work aligns with production UI standards and WCAG compliance. Even the spatial intelligence prototypes connect to emerging AR and multimodal interfaces that are reshaping how we interact with technology.&lt;/p&gt;

&lt;p&gt;goose assisted in expanding my workflow, giving me a faster and more expressive way to build the same caliber of systems that real engineering teams depend on daily.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;A Month of Technologies, Patterns, and Systems&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Across seventeen projects, I worked extensively with:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP servers&lt;/strong&gt; with custom tool layers, state engines, and rendering pipelines
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Semantic HTML, WCAG guidelines, and ARIA accessibility patterns&lt;/strong&gt; for inclusive design
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Glassmorphism, gradients, and motion-aware UI&lt;/strong&gt; for modern, polished interfaces
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;YAML automation recipes&lt;/strong&gt; for workflow orchestration
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MediaPipe and spatial intelligence&lt;/strong&gt; for multimodal interactions
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript and TypeScript full-stack patterns&lt;/strong&gt; for robust application architecture
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Organizational systems&lt;/strong&gt; including architecture diagrams, planning documents, and reusable pattern libraries
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each project built on the foundation of the last. The workflow became progressively more structured, more expressive, and more enjoyable with every iteration.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja5ske9po28w2nl6nrjj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fja5ske9po28w2nl6nrjj.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility: Building for Everyone&lt;/strong&gt;&lt;br&gt;
Accessibility isn't just a feature I implement; it's a core value that shapes every decision I make as an engineer. Throughout this challenge, I prioritized WCAG (Web Content Accessibility Guidelines) and ARIA (Accessible Rich Internet Applications) standards in every single application I built. This wasn't an afterthought or a checklist item. It was foundational to my design process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Accessibility Matters to Me&lt;/strong&gt;&lt;br&gt;
Technology should empower everyone, regardless of their abilities. When we build inaccessible applications, we're not just creating inconvenience -we're actively excluding people from participating in digital spaces. That's unacceptable to me. Accessibility means someone with a visual impairment can navigate my interface using a screen reader. It means someone with motor difficulties can use keyboard navigation instead of precise mouse movements. It means someone with cognitive differences can understand my interface without confusion.&lt;/p&gt;

&lt;p&gt;The beautiful thing about accessible design is that it makes applications better for everyone. Clear semantic HTML improves SEO and code maintainability. Proper ARIA labels enhance usability across all devices. Keyboard navigation benefits power users. High contrast ratios help people in bright sunlight or low-light environments. When we design for accessibility, we design for flexibility and resilience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;WCAG and ARIA in Practice&lt;/strong&gt;&lt;br&gt;
In every application, I implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic HTML structure using proper heading hierarchies, landmark regions, and meaningful element choices&lt;/li&gt;
&lt;li&gt;ARIA labels and roles to provide context for screen readers and assistive technologies&lt;/li&gt;
&lt;li&gt;Keyboard navigation patterns ensuring every interactive element is reachable and operable without a mouse&lt;/li&gt;
&lt;li&gt;Color contrast ratios meeting WCAG AA standards (4.5:1 for normal text, 3:1 for large text)&lt;/li&gt;
&lt;li&gt;Focus indicators that are clearly visible and never removed without providing an alternative&lt;/li&gt;
&lt;li&gt;Alt text for images that provides meaningful descriptions, not just decorative labels&lt;/li&gt;
&lt;li&gt;Error identification and suggestions that help users understand and correct mistakes&lt;/li&gt;
&lt;li&gt;Responsive layouts that work across screen sizes and zoom levels without breaking functionality&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;The Real-World Impact&lt;/strong&gt;&lt;br&gt;
Accessibility isn't theoretical. It changes lives. I've seen firsthand how proper ARIA implementation allows someone using a screen reader to complete a task in seconds instead of minutes. I've watched keyboard navigation enable people with motor impairments to use interfaces that would otherwise be impossible for them. I've received feedback from users with cognitive differences who appreciated clear, consistent navigation patterns.&lt;/p&gt;

&lt;p&gt;Every time I write semantic HTML, every time I add an ARIA label, every time I test keyboard navigation. I'm making a choice to include rather than exclude. And that matters deeply to me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Moving Forward&lt;/strong&gt;&lt;br&gt;
As I continue building, accessibility remains non-negotiable. The tools are here. The guidelines are clear. The impact is measurable. There's no excuse for building inaccessible applications, and I'm committed to raising the standard in every project I touch. Because technology should work for everyone—and it's our responsibility as engineers to make that happen.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Closing Reflection&lt;/strong&gt;&lt;br&gt;
While this marks the final chapter of the Advent of AI challenge and the conclusion of this particular phase, it represents only the continuation of an AI engineering journey I'm committed to pursuing every single day. The tools are here, the architecture is proven, and the momentum is undeniable. Now it's time to discover just how far this approach can scale.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahk970hkakwlb70e7bn8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fahk970hkakwlb70e7bn8.png" alt=" " width="800" height="1200"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's Next for this AI Engineer&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This month of intensive engineering was a foundation, not a finish line. Seventeen full-stack systems later, I'm stepping into the next phase of my AI engineering journey with unprecedented clarity and structural understanding. The projects I built during this challenge were deliberately diverse: Full-stack applications deployed, MCP servers, UI engines, automation workflows, spatial intelligence prototypes, accessibility-driven interfaces, and organizational systems that now fundamentally shape how I approach software development.&lt;/p&gt;

&lt;p&gt;The next step is taking this work beyond the challenge and into the broader AI engineering community. I'll be attending the &lt;strong&gt;Microsoft AI Tour conference&lt;/strong&gt;, continuing to refine and evolve my workflow, and exploring how these patterns scale into larger, production-grade systems. My goal is to keep pushing the boundaries of AI engineering and AI-assisted development deeper MCP integrations, richer UI architectures, more sophisticated spatial and multimodal experiments, and a more intentional approach to building tools that feel both cohesive and genuinely human-centered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This challenge may be complete, albeit my AI engineering journey will continue. I couldn't be more excited about what comes next.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This post is part of my AI engineering journey.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Building with Eri&lt;/strong&gt;!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>software</category>
      <category>mcp</category>
    </item>
    <item>
      <title>AI Engineering: Advent of AI with goose Day 17 - Building MCP Server &amp; Wishlist AI Application</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 05 Jan 2026 20:29:50 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-17-building-mcp-server-wishlist-ai-application-376e</link>
      <guid>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-17-building-mcp-server-wishlist-ai-application-376e</guid>
      <description>&lt;p&gt;Day 17: Building a Complete MCP Server &amp;amp; Wishlist Application with MCP and goose&lt;/p&gt;

&lt;p&gt;For the final day of this Advent of AI journey, I wanted to build something that brought together everything I explored over the past sixteen days: UI engineering, state management, automation, accessibility, and full‑stack architecture. Instead of simply creating an application that runs inside an MCP environment, I built the MCP server itself. This meant defining the tool layer, managing the application state, and engineering a complete HTML rendering pipeline that outputs a fully interactive UI inside goose.&lt;/p&gt;

&lt;p&gt;The result is a complete Wishlist Application powered by a custom MCP server. It exposes its own tools, maintains its own logic, and renders its own interface dynamically on every interaction. This project represents the culmination of a month of daily engineering with goose and demonstrates how expressive the Model Context Protocol becomes when the server and the UI are designed together.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foshs4j9rbrsy0n7usxqh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foshs4j9rbrsy0n7usxqh.png" alt=" " width="800" height="383"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What an MCP Server&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Model Context Protocol defines a structured way for external tools, applications, and services to communicate with an AI model. An MCP server exposes a set of capabilities through a predictable interface, allowing the client to call tools, request resources, and receive structured responses. Instead of building a traditional web server, an MCP server focuses on defining clear operations, returning machine-readable results, and maintaining state in a controlled environment. This makes it ideal for interactive applications that need to update their UI or logic in response to user actions.&lt;/p&gt;

&lt;p&gt;For this my project, I created a custom MCP server that manages the entire Wishlist Application. It defines the full set of wish operations, maintains application state in memory, and generates the UI dynamically on every tool call. The server uses the official MCP TypeScript SDK and communicates with the client over stdio, which keeps the development loop simple and predictable. Each tool updates the state, triggers a fresh HTML render, and returns both text and UI resources to the client. This architecture allowed me to build a complete, interactive application entirely inside the MCP environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb69z5jvubsav211bjvmz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb69z5jvubsav211bjvmz.png" alt=" " width="800" height="51"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
The &lt;strong&gt;Wishlist Application&lt;/strong&gt; is not just a UI running inside MCP. It is a full &lt;strong&gt;MCP server&lt;/strong&gt; that defines its own capabilities, manages its own state, and generates its own interface. The server exposes a complete tool suite for adding, removing, granting, filtering, and managing wishes. Each tool call triggers a full UI regeneration through a custom HTML rendering engine, allowing the interface to update in real time inside goose.&lt;/p&gt;

&lt;p&gt;The architecture mirrors a full-stack web application: a state layer, a rendering layer, a tool layer, and a transport layer. The difference is that everything is delivered through the Model Context Protocol rather than a traditional HTTP server. This approach keeps the system modular, predictable, and easy to extend while still providing a polished, interactive experience.&lt;/p&gt;

&lt;p&gt;My Wishlist Application is a fully stateful MCP server that exposes a complete UI through MCP-UI resources. It supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Adding, removing, and granting wishes
&lt;/li&gt;
&lt;li&gt;Secret wish handling
&lt;/li&gt;
&lt;li&gt;Category filtering
&lt;/li&gt;
&lt;li&gt;Dark and light mode
&lt;/li&gt;
&lt;li&gt;An admin panel with statistics
&lt;/li&gt;
&lt;li&gt;Dynamic HTML generation
&lt;/li&gt;
&lt;li&gt;A full CSS-driven interface
&lt;/li&gt;
&lt;li&gt;Real-time UI updates on every tool call
The server uses the official MCP TypeScript SDK and integrates with the MCP-UI rendering system to deliver a complete visual experience inside the client.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technology Stack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;My application is built on a modern, modular stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;MCP Server&lt;/strong&gt;: TypeScript (using @modelcontextprotocol/sdk)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Transport Layer&lt;/strong&gt;: stdio (standard input/output)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UI Delivery&lt;/strong&gt;: MCP Resources (HTML rendering)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rendering Engine&lt;/strong&gt;: Dynamic HTML generation with inline/external CSS&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;State Management&lt;/strong&gt;: In-memory TypeScript objects&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool Layer&lt;/strong&gt;: Custom MCP tools for CRUD operations (add, remove, update, filter wishes)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Client Environment&lt;/strong&gt;: goose Desktop&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Model&lt;/strong&gt;: Claude Sonnet 4.5 &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This stack allows the application to behave like a full web app while running entirely inside an MCP environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabenet55md4usqwnx30e.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fabenet55md4usqwnx30e.png" alt=" " width="800" height="180"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Functionality&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add, remove, and grant wishes
&lt;/li&gt;
&lt;li&gt;Toggle secret status
&lt;/li&gt;
&lt;li&gt;Filter by category or secret
&lt;/li&gt;
&lt;li&gt;Toggle dark mode
&lt;/li&gt;
&lt;li&gt;Toggle admin panel
&lt;/li&gt;
&lt;li&gt;Sparkle effect for granted wishes
&lt;/li&gt;
&lt;li&gt;Full UI rebuild on every interaction
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5tnmin10rrb019tp4if.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5tnmin10rrb019tp4if.png" alt=" " width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UI System&lt;/strong&gt;&lt;br&gt;
The UI is generated dynamically using:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A complete HTML template
&lt;/li&gt;
&lt;li&gt;A CSS file loaded at runtime
&lt;/li&gt;
&lt;li&gt;Category icons
&lt;/li&gt;
&lt;li&gt;Secret indicators
&lt;/li&gt;
&lt;li&gt;Glassmorphism cards
&lt;/li&gt;
&lt;li&gt;Animated background orbs
&lt;/li&gt;
&lt;li&gt;A responsive layout
The server returns the UI as a direct HTML resource, allowing the client to render it inline.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn4giorvcus99iw4bqjt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwn4giorvcus99iw4bqjt.png" alt=" " width="800" height="114"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP Tools&lt;/strong&gt;&lt;br&gt;
The server exposes a full suite of tools:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;view_wishes&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;add_wish&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;grant_wish&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;remove_wish&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;set_secret&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;set_filter&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;toggle_dark_mode&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;toggle_admin_panel&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;sparkle_granted&lt;/code&gt;
Each tool updates the application state and returns both a text response and a fresh HTML UI resource.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;wishlist-mcp/
├── JS/
│   ├── index.js            # Main MCP server (tools, state, HTML rendering)
│   ├── generate-html.js    # Standalone HTML generator for UI testing
│   └── index.js.backup     # Previous server version (reference only)
│
├── lib/
│   ├── render.ts           # HTML rendering engine
│   ├── state.ts            # Wish state and operations
│   └── commands.ts         # Natural-language command parsing
│
├── images/                 # UI icons and visual assets (magic, joy, snow, fairy, lock)
│
├── style.css               # Full UI styling (glassmorphism, layout, palette)
│
├── response.json           # Example MCP response (kept for debugging)
│
├── package.json            # Project metadata and dependencies
├── package-lock.json       # Dependency lockfile
│
└── node_modules/           # Installed dependencies (present in real projects)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This structure separates UI, state, rendering, and server logic for clarity and maintainability.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v21bp3ghrjdqd44j3hg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3v21bp3ghrjdqd44j3hg.png" alt=" " width="513" height="101"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Architecture Diagram&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                           ┌──────────────────────────────────────────┐
                         │        Wishlist MCP Server               │
                         │        (JS/index.js entrypoint)          │
                         └──────────────────────────────────────────┘
                                        │
                                        ▼
                   ┌──────────────────────────────────────────────┐
                   │              MCP Tool Layer                   │
                   │  add, remove, grant, filter, toggle, sparkle │
                   │  (Defined in JS/index.js)                    │
                   └──────────────────────────────────────────────┘
                                        │
                                        ▼
                   ┌──────────────────────────────────────────────┐
                   │            Application State Layer            │
                   │  lib/state.ts                                 │
                   │  wishes, filters, dark mode, admin panel      │
                   └──────────────────────────────────────────────┘
                                        │
                                        ▼
                   ┌──────────────────────────────────────────────┐
                   │              Rendering Engine                 │
                   │  lib/render.ts                                │
                   │  HTML builder + CSS + icons + UI logic        │
                   └──────────────────────────────────────────────┘
                                        │
                                        ▼
                   ┌──────────────────────────────────────────────┐
                   │           Command Parsing Layer               │
                   │  lib/commands.ts                              │
                   │  Natural-language interpretation               │
                   └──────────────────────────────────────────────┘
                                        │
                                        ▼
                   ┌──────────────────────────────────────────────┐
                   │               MCP-UI Resource                │
                   │ directHtml delivered to goose Desktop         │
                   └──────────────────────────────────────────────┘

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This architecture: a full-stack web application, albeit delivered entirely through MCP.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Development Notes&lt;/strong&gt;&lt;br&gt;
The server is built using the official MCP TypeScript SDK, with Stdio transport for local integration. Each tool call triggers a full UI regeneration, ensuring the interface always reflects the current state. The rendering engine composes HTML, CSS, and UI assets into a complete interface that can be displayed by any MCP-UI capable client.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility Techniques Used&lt;/strong&gt;&lt;br&gt;
My Wishlist Application incorporates accessibility directly into its UI rendering layer. Icon‑only controls use explicit aria-label attributes to ensure they are announced correctly by screen readers, and all interactive elements are implemented using semantic HTML such as header, h1, and button. Images include descriptive alt text when meaningful, and decorative assets can be marked with empty alt attributes to avoid unnecessary verbosity.&lt;/p&gt;

&lt;p&gt;The interface is fully keyboard accessible because all actions are exposed through native button elements rather than custom div-based controls. The CSS supports readable contrast in both light and dark modes, and the layout remains stable without relying on motion-heavy effects. These techniques collectively align with WCAG principles and ensure that the UI remains usable across a wide range of assistive technologies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility is always considered throughout the UI design&lt;/strong&gt;. The HTML structure uses semantic elements, descriptive alt text, and predictable interaction patterns. The CSS avoids motion-heavy effects and maintains readable contrast across both light and dark modes.&lt;/p&gt;

&lt;p&gt;The backend logic is cleanly separated from the UI layer. State operations, filtering, and wish manipulation are handled in dedicated functions, while the rendering engine focuses solely on producing the visual interface. This separation makes the application maintainable and extensible.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2vmj1enig9ob4k1bnc1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo2vmj1enig9ob4k1bnc1.png" alt=" " width="800" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Final Thoughts&lt;/strong&gt;&lt;br&gt;
Day 17 brought the entire Advent of AI journey full circle. I started this month building small utilities and CLI workflows, and I ended it by engineering a complete MCP application with a dynamic UI, a full tool suite, and a rendering pipeline. This project represents the intersection of everything I explored: front-end design, backend logic, accessibility, automation, and protocol-driven architecture.&lt;/p&gt;

&lt;p&gt;What I appreciate most is how natural it felt to build a complete system on top of the Model Context Protocol. I was not just creating an application inside an MCP environment. I built the server itself, defined the tool interface, implemented the state management layer, and engineered the entire UI rendering pipeline. The protocol’s structure encouraged a clean separation of concerns, and goose made the development loop fast and expressive.&lt;/p&gt;

&lt;p&gt;This final challenge was more than a conclusion. It demonstrated how far this architecture can be pushed when the server, the UI, and the interaction model are all designed together. A month of AI engineering produced seventeen distinct systems, each building on the last. The Wishlist Application is a fitting finale: a complete, interactive, architecturally coherent MCP server with a fully custom UI, polished, interactive, and architecturally complete.&lt;/p&gt;

&lt;p&gt;Although this marks the final chapter of the Advent of AI challenge and the end of this phase, it is simply the continuation of my AI engineering journey in which I plan to keep learning &amp;amp; building daily; which I truly enjoy!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This post is part of my Advent of AI journey - AI Engineering: Advent of AI with goose Day 17 of AI engineering challenges.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Adventures with Eri!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>adventofai</category>
      <category>goose</category>
      <category>webdev</category>
    </item>
    <item>
      <title>AI Engineering: Advent of AI with goose Day 16 - An Immersive Countdown AI Application</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 05 Jan 2026 18:18:41 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-16-an-immersive-countdown-ai-application-5c16</link>
      <guid>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-16-an-immersive-countdown-ai-application-5c16</guid>
      <description>&lt;p&gt;&lt;strong&gt;Day 16: Perspective Countdown - A Modern, Immersive Countdown AI Application&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For Day 16, I shifted from backend automation and knowledge‑engineering systems into a fully interactive full-stack experience. The goal was to build a professional, dynamic countdown application that feels modern, polished, and immersive. This project combined real‑time functionality, glassmorphism UI, spatially aware animations, and accessibility‑driven design into a cohesive Next.js application.&lt;/p&gt;

&lt;p&gt;The result is Perspective Countdown: a responsive, visually rich countdown experience built with contemporary web technologies and AI engineered with the same precision and structure I have applied throughout this challenge.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhwlatcznjx2u08zqrct.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frhwlatcznjx2u08zqrct.png" alt=" " width="800" height="418"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;br&gt;
Perspective Countdown provides a real‑time countdown to December 1, 2026 at 10:00 AM. The interface blends aesthetic design with functional clarity, featuring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real‑time countdown tiles
&lt;/li&gt;
&lt;li&gt;Rotating fun facts
&lt;/li&gt;
&lt;li&gt;Email signup with validation
&lt;/li&gt;
&lt;li&gt;Social sharing integration
&lt;/li&gt;
&lt;li&gt;A responsive navigation bar
&lt;/li&gt;
&lt;li&gt;A realistic snowfall animation rendered on canvas
The application is engineered for performance, accessibility, and long‑term maintainability.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technology Stack&lt;/strong&gt;&lt;br&gt;
This project uses a modern, production‑ready stack:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Framework&lt;/strong&gt;: Next.js 15 with App Router
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Language&lt;/strong&gt;: TypeScript
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Styling&lt;/strong&gt;: Tailwind CSS with custom animation layers
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Fonts&lt;/strong&gt;: Parisienne for headings, Inter for UI text
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Runtime&lt;/strong&gt;: Node.js
This stack supports server components, client‑side interactivity, and a clean separation of concerns across the application.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmypw3t07t3h7qvwis1qh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmypw3t07t3h7qvwis1qh.png" alt=" " width="800" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Features&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Functionality&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real‑time countdown updated every second
&lt;/li&gt;
&lt;li&gt;Rotating informational facts with timed transitions
&lt;/li&gt;
&lt;li&gt;Email signup with client‑side validation and persistence
&lt;/li&gt;
&lt;li&gt;Social sharing via Web Share API with fallback logic
&lt;/li&gt;
&lt;li&gt;Responsive navigation bar with scroll‑to‑section behavior
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Design Elements&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Glassmorphism UI with frosted transparency
&lt;/li&gt;
&lt;li&gt;Custom gradient background using a defined brand palette
&lt;/li&gt;
&lt;li&gt;Realistic snowfall animation using canvas and requestAnimationFrame
&lt;/li&gt;
&lt;li&gt;Smooth hover transitions and micro‑interactions
&lt;/li&gt;
&lt;li&gt;Mobile‑first responsive layout
&lt;/li&gt;
&lt;li&gt;WCAG‑aligned accessibility features
This palette supports the glassmorphism aesthetic while maintaining contrast ratios suitable for accessibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmh18ll8y59bm6gkalfl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqmh18ll8y59bm6gkalfl.png" alt=" " width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;countdown-ai/
├── app/
│   ├── api/
│   │   ├── subscribe/
│   │   │   └── route.ts        # POST endpoint for email signup
│   │   ├── share/
│   │   │   └── route.ts        # Optional API for server-side share logging
│   │   └── events/
│   │       └── route.ts        # Example backend endpoint for event metadata
│   ├── globals.css             # Global styles and animations
│   ├── layout.tsx              # Root layout component
│   └── page.tsx                # Main page component
│
├── components/
│   ├── BottomNav.tsx           # Bottom navigation bar
│   ├── Countdown.tsx           # Real-time countdown display
│   ├── EmailSignup.tsx         # Email subscription form (client)
│   ├── FunFactsRotator.tsx     # Rotating facts component
│   ├── Hero.tsx                # Hero section with icon
│   └── Snowfall.tsx            # Canvas-based snow animation
│
├── data/
│   └── funFacts.ts             # Array of rotating facts
│
├── lib/
│   ├── time.ts                 # Time calculation utilities
│   ├── email.ts                # Server-side email validation and processing
│   ├── db.ts                   # Database connection (Supabase, Prisma, or custom)
│   └── logger.ts               # Server-side logging utilities
│
├── prisma/                     # Optional ORM schema
│   └── schema.prisma
│
├── public/                     # Static assets
│
└── scripts/
    └── seed.ts                 # Database seeding script
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The structure follows Next.js 15 conventions with clear separation between UI components, data, and utility logic.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Key Components&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Countdown&lt;/strong&gt;: Implements a real‑time countdown using &lt;code&gt;setInterval&lt;/code&gt; with cleanup. The component updates every second and renders time segments in glassmorphic tiles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hero&lt;/strong&gt;: Displays an animated hourglass icon with subtle motion. Typography uses the Parisienne font for a distinctive visual identity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;FunFactsRotator&lt;/strong&gt;: Cycles through facts every seven seconds with fade transitions. Uses &lt;code&gt;aria-live="polite"&lt;/code&gt; to ensure screen readers receive updates without interruption. This was actually really fun to implement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;EmailSignup&lt;/strong&gt;: Provides client‑side validation, local persistence, and a placeholder for backend integration. Designed for future expansion into EmailJS, Supabase, or a custom API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Snowfall&lt;/strong&gt;: A canvas‑based animation simulating realistic snowfall. Uses 100 snowflakes with varying velocities and sizes. Optimized with requestAnimationFrame for smooth performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;BottomNav&lt;/strong&gt;: A fixed navigation bar with scroll‑to‑section behavior and integrated social sharing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accessibility&lt;/strong&gt;&lt;br&gt;
I wanted to take a moment to highlight Accessibility. My application incorporates accessibility from the foundation which is dear to my heart:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic HTML with proper landmarks
&lt;/li&gt;
&lt;li&gt;ARIA labels and live regions for dynamic content
&lt;/li&gt;
&lt;li&gt;Keyboard navigation with visible focus states
&lt;/li&gt;
&lt;li&gt;Sufficient color contrast
&lt;/li&gt;
&lt;li&gt;Screen reader compatibility
This ensures the experience is usable across a wide range of devices and assistive technologies.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Browser Compatibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chrome
&lt;/li&gt;
&lt;li&gt;Firefox
&lt;/li&gt;
&lt;li&gt;Safari
&lt;/li&gt;
&lt;li&gt;Edge
The application includes Web Share API support with clipboard fallback and uses CSS backdrop‑filter for glassmorphism where supported.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Performance Considerations&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Efficient use of React hooks
&lt;/li&gt;
&lt;li&gt;Canvas animation optimized with requestAnimationFrame
&lt;/li&gt;
&lt;li&gt;Minimal re‑renders through isolated state
&lt;/li&gt;
&lt;li&gt;Lightweight dependency footprint
The result is a smooth, responsive experience across devices.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enhancements&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backend API integration for email storage
&lt;/li&gt;
&lt;li&gt;Dark mode with preference persistence
&lt;/li&gt;
&lt;li&gt;Calendar event download functionality
&lt;/li&gt;
&lt;li&gt;Analytics instrumentation
&lt;/li&gt;
&lt;li&gt;Progressive Web App support
These additions extended the application into a fully featured event‑driven platform.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Development Notes&lt;/strong&gt;&lt;br&gt;
The application follows Next.js  15 conventions with the App Router architecture. All interactive components are marked as client components using the use client directive. Styling is handled through Tailwind CSS utilities with custom CSS for animations and transitions.&lt;/p&gt;

&lt;p&gt;Accessibility was integrated from the beginning rather than added later. Components use semantic HTML, ARIA attributes, and predictable keyboard interactions to ensure the interface remains usable across assistive technologies.&lt;/p&gt;

&lt;p&gt;The backend layer is implemented through Next.js  API routes, providing a clean separation between client components and server logic. Server utilities handle validation, logging, and optional database integration, allowing the application to scale into a fully featured full‑stack system without restructuring the core architecture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7hw43lzmflp6xze1zjb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi7hw43lzmflp6xze1zjb.png" alt=" " width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Final Thoughts&lt;/strong&gt;&lt;br&gt;
Day 16 was a return to front‑end AI engineering, but with the mindset I have developed throughout this challenge: clarity, structure, and maintainability. Building a visually rich, interactive countdown application required balancing aesthetics with performance and accessibility. The result is a polished, modern interface that feels alive without sacrificing technical rigor.&lt;/p&gt;

&lt;p&gt;This project reminded me how much I enjoy crafting user experiences that are both beautiful and engineered with intention. It also reinforced how well goose integrates into a full‑stack workflow, from ideation to implementation.&lt;/p&gt;

&lt;p&gt;Day 16 continues the momentum, and the next challenges will build on this foundation.&lt;/p&gt;

&lt;p&gt;This post is part of my Advent of AI journey - AI Engineering: Advent of AI with goose Day 16 of AI engineering challenges.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Adventures with Eri!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>adventofai</category>
      <category>goose</category>
      <category>programming</category>
    </item>
    <item>
      <title>AI Engineering: Advent of AI with goose Day 15 - AI Multi Platform Recipe System</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 22 Dec 2025 23:49:45 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-15-ai-multi-platform-recipe-system-564d</link>
      <guid>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-15-ai-multi-platform-recipe-system-564d</guid>
      <description>&lt;p&gt;&lt;strong&gt;Day 15: Building a Multi Platform Social Media Campaign System with goose Recipes&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The marketing team needed a coordinated social media push across Instagram, Twitter X, and Facebook. Each platform required a different tone, structure, and content style. &lt;/p&gt;

&lt;p&gt;This challenge introduced &lt;strong&gt;sub recipes in goose&lt;/strong&gt;. Instead of writing three separate pieces of content by hand, the goal was to build a reusable automation system. One input. Three platform specific outputs. A single orchestrator recipe coordinating the entire workflow.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Automate Multi Platform Content Generation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The task was to create a four recipe system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;instagram-post.yaml
&lt;/li&gt;
&lt;li&gt;twitter-thread.yaml
&lt;/li&gt;
&lt;li&gt;facebook-event.yaml
&lt;/li&gt;
&lt;li&gt;social-campaign.yaml (main orchestrator)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each recipe needed to accept the same core parameters:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;event_name
&lt;/li&gt;
&lt;li&gt;event_date
&lt;/li&gt;
&lt;li&gt;event_description
&lt;/li&gt;
&lt;li&gt;target_audience
&lt;/li&gt;
&lt;li&gt;call_to_action
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The orchestrator recipe needed to call all three sub recipes and produce a complete campaign package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Social Media Campaign System&lt;/strong&gt;&lt;br&gt;
The completed system generates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A fashionable, high impact Instagram caption
&lt;/li&gt;
&lt;li&gt;A concise, professional five tweet Twitter X thread
&lt;/li&gt;
&lt;li&gt;A warm, family oriented Facebook event description
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All content is produced from a single input set and saved to a unified output file. Each recipe was validated, structured with proper YAML frontmatter, and tailored to the communication style of its platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty2pacvpjoce998w7n09.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fty2pacvpjoce998w7n09.png" alt=" " width="335" height="240"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Technical Architecture Diagram&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is a generated text based diagram showing how the recipe system is structured.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                         ┌──────────────────────────────────────────┐
                         │        Social Campaign System             │
                         └──────────────────────────────────────────┘
                                        │
                                        ▼
                   ┌──────────────────────────────────────────────┐
                   │           Main Orchestrator Recipe            │
                   │            social-campaign.yaml               │
                   └──────────────────────────────────────────────┘
                                        │
        ┌──────────────────────────────────────────────────────────────────────────┐
        │                                                                          │
        ▼                                                                          ▼
┌──────────────────────────┐                                      ┌──────────────────────────┐
│ instagram-post.yaml       │                                      │ twitter-thread.yaml       │
│ Platform specific caption │                                      │ Multi tweet thread        │
└──────────────────────────┘                                      └──────────────────────────┘
                                        │
                                        ▼
                              ┌──────────────────────────┐
                              │ facebook-event.yaml       │
                              │ Long form event content   │
                              └──────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This architecture allows the orchestrator to call each sub recipe independently, aggregate the results, and produce a complete campaign package.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Stack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Below is my stack that powers this system.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Runtime&lt;/td&gt;
&lt;td&gt;goose CLI with Recipes extension&lt;/td&gt;
&lt;td&gt;Executes recipes and sub recipes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Orchestration&lt;/td&gt;
&lt;td&gt;social-campaign.yaml&lt;/td&gt;
&lt;td&gt;Coordinates multi recipe execution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sub Recipes&lt;/td&gt;
&lt;td&gt;instagram, twitter, facebook YAML files&lt;/td&gt;
&lt;td&gt;Platform specific content generation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Parameters&lt;/td&gt;
&lt;td&gt;YAML schema with required fields&lt;/td&gt;
&lt;td&gt;Ensures consistent input across all platforms&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output&lt;/td&gt;
&lt;td&gt;Markdown file&lt;/td&gt;
&lt;td&gt;Unified campaign package&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reasoning&lt;/td&gt;
&lt;td&gt;goose LLM engine&lt;/td&gt;
&lt;td&gt;Generates platform appropriate content&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is a recipe based automation stack designed for repeatable, scalable content generation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Platform Outputs&lt;/strong&gt;&lt;br&gt;
The system generated three fully formatted outputs for the Magic Night of Lights and Ice Sculpture Unveiling. Each output reflects the tone and expectations of its platform.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wcoods45zkwmymz0dv9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5wcoods45zkwmymz0dv9.png" alt=" " width="338" height="175"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Instagram&lt;/strong&gt; A high impact caption with strategic hashtags and a polished, visual forward tone.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz46nie8v0ampm5xnmnp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpz46nie8v0ampm5xnmnp.png" alt=" " width="800" height="230"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Twitter X&lt;/strong&gt; A five tweet thread under 280 characters per tweet, structured for clarity and shareability.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo0v7q3hffqi2rmk7us8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpo0v7q3hffqi2rmk7us8.png" alt=" " width="800" height="429"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Facebook&lt;/strong&gt; A long form event description written for families and community oriented audiences.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9bcpc4knwoc1rq7wmye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj9bcpc4knwoc1rq7wmye.png" alt=" " width="800" height="565"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Final Thoughts&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Day 15&lt;/strong&gt; shifted the focus from knowledge engineering to workflow automation. Recipes appear simple at first glance, but orchestrating multiple sub recipes into a cohesive system requires architectural thinking. What stood out in this challenge was the clarity that comes from building reusable automation. Instead of writing content three separate times, I now have a system that will work for every future event with no additional effort.&lt;/p&gt;

&lt;p&gt;This is engineering that scales. This is engineering that saves teams time.&lt;br&gt;
This is exactly where &lt;strong&gt;goose&lt;/strong&gt; excels. It rewards structure, clarity, and repeatability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 15&lt;/strong&gt; encouraged me to think like a systems designer rather than a content generator, and that shift will matter in the challenges ahead.&lt;/p&gt;

&lt;p&gt;This post is part of my Advent of AI journey - AI Engineering: Advent of AI with goose Day 15 of AI engineering challenges.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Adventures with Eri!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>automation</category>
      <category>tooling</category>
    </item>
    <item>
      <title>AI Engineering: Advent of AI with goose Day 14 - Complete Operations System</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 22 Dec 2025 23:06:10 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-14-complete-operations-system-2cm2</link>
      <guid>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-14-complete-operations-system-2cm2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Day 14: Building a Complete Operations Knowledge System with goose Skills&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Winter Festival succeeded because of the expertise of its team. Every department had developed its own workflows, escalation paths, and operational knowledge, but none of it was documented in a way that could scale. With three neighboring towns requesting help to run their own festivals, the Director needed a system that could capture this expertise and make it reusable.&lt;/p&gt;

&lt;p&gt;This challenge introduced goose Skills, a mechanism for encoding domain knowledge into structured, discoverable, and portable expertise. Instead of producing a static manual, the goal was to build a living operational system that goose can load and apply automatically whenever a coordinator needs guidance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Capture Operational Knowledge as a Skill System&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The mission was to create a festival‑operations skill that consolidates the knowledge of four core team members:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customer experience workflows
&lt;/li&gt;
&lt;li&gt;Security and vendor protocols
&lt;/li&gt;
&lt;li&gt;Lost and found procedures
&lt;/li&gt;
&lt;li&gt;Marketing and communications guidelines
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The skill needed to be actionable, structured, and discoverable by goose. It also required supporting files such as checklists, templates, decision trees, and scripts to form a complete operational system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding goose Skills&lt;/strong&gt;&lt;br&gt;
A goose Skill is a markdown file with YAML frontmatter that teaches goose a domain. Skills are automatically discovered at session start and loaded when relevant. They differ from recipes because they provide knowledge rather than execute tasks.&lt;/p&gt;

&lt;p&gt;Skills include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core knowledge (SKILL.md)
&lt;/li&gt;
&lt;li&gt;Checklists
&lt;/li&gt;
&lt;li&gt;Templates
&lt;/li&gt;
&lt;li&gt;Decision trees
&lt;/li&gt;
&lt;li&gt;Scripts
&lt;/li&gt;
&lt;li&gt;Recipes for generating new skills
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows a skill to function as a modular knowledge system rather than a single document.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh326u2fn8qu06dr5lid.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgh326u2fn8qu06dr5lid.png" alt=" " width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Full Skills Architecture&lt;/strong&gt;&lt;br&gt;
I did not build a single skill.&lt;br&gt;&lt;br&gt;
I built a multi‑skill operational knowledge architecture with layered expertise, reusable components, and cross‑skill interoperability.&lt;/p&gt;

&lt;p&gt;Below is my complete directory structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;+---eriperspective-ai-engineering
|       debugging-checklist.md
|       decision-trees.md
|       engineering-style.md
|       problem-analysis.md
|       SKILL.md
|       
+---festival-operations
|   |   decision-trees.md
|   |   SKILL.md
|   |   
|   +---checklists
|   |       closing.md
|   |       opening.md
|   |       quick-reference.md
|   |       
|   +---recipes
|   |       skill-generator.md
|   |       
|   +---scripts
|   |       cleanup.sh
|   |       
|   \---templates
|           incident-report.md
|           
+---lost-and-found
|       SKILL.md
|       
+---marketing-communications
|       SKILL.md
|       
+---security-protocols
|       SKILL.md
|       
\---volunteer-management
        SKILL.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Six independent skills, each representing a domain of festival operations or engineering practice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Architecture Diagram&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;                         ┌──────────────────────────────────────────┐
                         │        goose Skills Knowledge Base        │
                         └──────────────────────────────────────────┘
                                        │
                                        ▼
                   ┌──────────────────────────────────────────────┐
                   │        Multi‑Skill Operational System         │
                   └──────────────────────────────────────────────┘
                                        │
        ┌──────────────────────────────────────────────────────────────────────────┐
        │                                                                          │
        ▼                                                                          ▼
┌──────────────────────────┐                                      ┌──────────────────────────┐
│  eriperspective-ai-       │                                      │   festival-operations    │
│  engineering (Personal OS)│                                      │   (Primary Domain Skill) │
└──────────────────────────┘                                      └──────────────────────────┘
        │                                                                          │
        │                                                                          │
        ▼                                                                          ▼
┌──────────────────────────┐                                      ┌──────────────────────────┐
│ Debugging workflows       │                                      │ Customer experience       │
│ Decision trees            │                                      │ Security protocols        │
│ Engineering style         │                                      │ Vendor management         │
│ Problem analysis          │                                      │ Lost and found workflows  │
└──────────────────────────┘                                      │ Marketing communications   │
                                                                  └──────────────────────────┘
                                        │
                                        ▼
                     ┌─────────────────────────────────────────────┐
                     │     Department‑Specific Operational Skills   │
                     └─────────────────────────────────────────────┘
                                        │
        ┌──────────────────────┬────────────────────────┬────────────────────────┬──────────────────────┐
        ▼                      ▼                        ▼                        ▼
┌──────────────────┐   ┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│ lost-and-found    │   │ marketing-       │     │ security-         │     │ volunteer-        │
│ SKILL.md          │   │ communications   │     │ protocols         │     │ management        │
└──────────────────┘   └──────────────────┘     └──────────────────┘     └──────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This diagram reflects the actual structure: a layered, modular knowledge system with a personal engineering OS, a primary operational skill, and multiple departmental sub‑skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Achievement Summary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All four challenge tiers were completed:&lt;br&gt;
I progressed through every Goose challenge tier: Beginner, Intermediate, Advanced, and Ultimate, building a full multi‑skill system and completing my personal eripersoective AI Engineering OS skill at the highest level. This skill functions consistently in both Goose CLI and Goose Desktop environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Beginner Level Completed&lt;/strong&gt;&lt;br&gt;
Created a new Goose skill with proper YAML frontmatter&lt;br&gt;
Added multiple operational sections&lt;br&gt;
Added supporting files like quick‑reference checklists&lt;br&gt;
Formatted the skill for clarity&lt;br&gt;
Successfully tested the skill in Goose&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Intermediate Level Completed&lt;/strong&gt;&lt;br&gt;
Built multiple related skills (one per department)&lt;br&gt;
Added templates and structured supporting files&lt;br&gt;
Included decision trees for complex scenarios&lt;br&gt;
Ensured the skills were compatible across Goose environments&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Advanced Level Completed&lt;/strong&gt;&lt;br&gt;
Created skills that reference each other&lt;br&gt;
Added runnable supporting scripts&lt;br&gt;
Built a skill‑generator recipe&lt;br&gt;
Added a skill‑testing checklist to validate quality&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ultimate Level Completed&lt;/strong&gt;&lt;br&gt;
Built the eriperspective AI Engineering OS skill&lt;br&gt;
Captured my engineering philosophy, learning style, UI aesthetic, and immersive‑tech interests&lt;br&gt;
Added decision trees, checklists, recipes, and templates&lt;br&gt;
Created a complete personal engineering OS that Goose can use to think and reason like mechecklists. &lt;br&gt;
Created Erica’s Engineering OS, a complete personal operating system capturing engineering philosophy, workflows, debugging processes, and decision‑making patterns.&lt;/p&gt;

&lt;p&gt;This is a full enterprise‑grade knowledge system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyk742254afbxdye77vg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpyk742254afbxdye77vg.png" alt=" " width="800" height="577"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Erica’s Engineering OS&lt;/strong&gt;&lt;br&gt;
As part of the Ultimate Challenge, you built a complete engineering operating system containing:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Debugging workflows
&lt;/li&gt;
&lt;li&gt;Problem analysis frameworks
&lt;/li&gt;
&lt;li&gt;Decision trees
&lt;/li&gt;
&lt;li&gt;Engineering style guidelines
&lt;/li&gt;
&lt;li&gt;Learning philosophy
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This OS defines a consistent engineering identity and provides a reusable framework for problem solving and decision making.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtm8z5j36ibvo21ylndq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frtm8z5j36ibvo21ylndq.png" alt=" " width="623" height="335"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;goose Skills&lt;/strong&gt; are just files on disk its &lt;strong&gt;knowledge‑engineering stack&lt;/strong&gt; that supports discovery, loading, and execution inside goose.&lt;/p&gt;

&lt;p&gt;Technical Stack for the Goose Skills System&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;goose Runtime Environment&lt;/strong&gt;&lt;br&gt;
This is the core execution layer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;goose CLI (v1.17.0+)
&lt;/li&gt;
&lt;li&gt;goose Desktop (optional, compatible)
&lt;/li&gt;
&lt;li&gt;Skills extension enabled
&lt;/li&gt;
&lt;li&gt;Developer extension enabled
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This environment is responsible for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Skill discovery
&lt;/li&gt;
&lt;li&gt;Skill loading
&lt;/li&gt;
&lt;li&gt;Contextual relevance matching
&lt;/li&gt;
&lt;li&gt;Applying skill knowledge to user queries
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Filesystem‑Based Skill Architecture&lt;/strong&gt;&lt;br&gt;
Skills are stored as structured directories:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;~/.config/goose/skills/
./.goose/skills/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This provides:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Namespacing
&lt;/li&gt;
&lt;li&gt;Priority‑based skill resolution
&lt;/li&gt;
&lt;li&gt;Cross‑skill interoperability
&lt;/li&gt;
&lt;li&gt;Portability across machines
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Markdown + YAML Knowledge Format&lt;/strong&gt;&lt;br&gt;
Every skill uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;YAML frontmatter&lt;/strong&gt; for metadata
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markdown&lt;/strong&gt; for structured knowledge
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Subdirectories&lt;/strong&gt; for modular components
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This stack enables:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Human‑readable documentation
&lt;/li&gt;
&lt;li&gt;Machine‑interpretable metadata
&lt;/li&gt;
&lt;li&gt;Extensibility through supporting files
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Skill Submodules&lt;/strong&gt;&lt;br&gt;
My system uses multiple submodules:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;checklists/&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;templates/&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;recipes/&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;scripts/&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;decision‑trees.md&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;debugging‑checklist.md&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These act as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Operational modules
&lt;/li&gt;
&lt;li&gt;Reusable components
&lt;/li&gt;
&lt;li&gt;Knowledge primitives
&lt;/li&gt;
&lt;li&gt;Execution helpers (scripts)
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Shell Script Execution Layer&lt;/strong&gt;&lt;br&gt;
Included runnable scripts such as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cleanup.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This introduces:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;POSIX shell compatibility
&lt;/li&gt;
&lt;li&gt;Script execution via goose Developer extension
&lt;/li&gt;
&lt;li&gt;Automated maintenance workflows
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cross‑Skill Knowledge Graph&lt;/strong&gt;&lt;br&gt;
My architecture forms a &lt;strong&gt;knowledge graph&lt;/strong&gt;, not a flat set of files.&lt;/p&gt;

&lt;p&gt;Skills reference each other implicitly through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Shared domains
&lt;/li&gt;
&lt;li&gt;Overlapping terminology
&lt;/li&gt;
&lt;li&gt;Goose’s relevance engine
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This allows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi‑skill reasoning
&lt;/li&gt;
&lt;li&gt;Department‑specific overrides
&lt;/li&gt;
&lt;li&gt;Hierarchical expertise
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Tech Stack&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Technology&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Runtime&lt;/td&gt;
&lt;td&gt;goose CLI + Skills extension&lt;/td&gt;
&lt;td&gt;Skill discovery and execution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Storage&lt;/td&gt;
&lt;td&gt;Filesystem directories&lt;/td&gt;
&lt;td&gt;Namespaced skill organization&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Format&lt;/td&gt;
&lt;td&gt;Markdown + YAML&lt;/td&gt;
&lt;td&gt;Human and machine readable knowledge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Modules&lt;/td&gt;
&lt;td&gt;Checklists, templates, scripts, decision trees&lt;/td&gt;
&lt;td&gt;Operational components&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Execution&lt;/td&gt;
&lt;td&gt;Shell scripts&lt;/td&gt;
&lt;td&gt;Automation and maintenance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Reasoning&lt;/td&gt;
&lt;td&gt;goose relevance engine&lt;/td&gt;
&lt;td&gt;Contextual skill loading&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;My Final Thoughts&lt;/strong&gt;&lt;br&gt;
Day 14 produced a complete operational knowledge system built on goose Skills. The festival‑operations skill captures the expertise of the entire team and makes it accessible to any future coordinator. The supporting files, decision trees, templates, and scripts transform the skill into a practical, scalable operational system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 14: Completed&lt;/strong&gt; Operational knowledge: Captured. Skill ecosystem: Fully established.&lt;/p&gt;

&lt;p&gt;This post is part of my Advent of AI journey, AI Engineering: Advent of AI with goose Day 14.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Adventures with Eri!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>goose</category>
      <category>adventofai</category>
      <category>programming</category>
    </item>
    <item>
      <title>AI Engineering: Advent of AI with goose Day 13 - AI Scheduling Accessible Application Terminal Integration</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 22 Dec 2025 21:29:34 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-13-scheduling-application-4069</link>
      <guid>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-13-scheduling-application-4069</guid>
      <description>&lt;p&gt;&lt;strong&gt;Day 13: Building a Complete Scheduling System with Terminal Integration&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Today’s challenge focused on transforming scattered staff availability notes into a fully functional scheduling application. Instead of relying on manual organization or spreadsheets, the goal was to use goose’s terminal integration to guide the entire process. With goose active in the terminal, the AI could observe commands, understand context, and provide real‑time assistance while building the system.&lt;/p&gt;

&lt;p&gt;This challenge demonstrated how terminal integration enables a natural workflow: staying inside the terminal, running commands, inspecting files, and receiving contextual AI guidance without switching tools.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Author's Note&lt;/strong&gt;&lt;br&gt;
WCAG accessibility is essential to my engineering approach because it ensures that every interface I build can be used reliably by people with diverse abilities and assistive technologies. I prioritize semantic structure, keyboard navigation, contrast standards, and ARIA roles because accessible design is not optional; it is a core requirement for professional, inclusive systems. Building with WCAG in mind strengthens usability for everyone and guarantees that the applications I create remain functional, equitable, and future‑proof.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xi8reurf4ql9y9zti9d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4xi8reurf4ql9y9zti9d.png" alt=" " width="487" height="444"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Turn Napkin Notes into a Scheduling System&lt;/strong&gt;&lt;br&gt;
Provided a set of unstructured staff notes. The mission was to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use goose in the terminal
&lt;/li&gt;
&lt;li&gt;Organize all staff availability, constraints, and skills
&lt;/li&gt;
&lt;li&gt;Identify scheduling conflicts
&lt;/li&gt;
&lt;li&gt;Build a complete three‑day festival schedule
&lt;/li&gt;
&lt;li&gt;Produce a professional scheduling website
&lt;/li&gt;
&lt;li&gt;Ensure the system is usable by the entire festival team
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The final result needed to be a real application, not just a data file.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbk79wxz67qxtubrf3y4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbbk79wxz67qxtubrf3y4.png" alt=" " width="800" height="368"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Terminal Integration with goose&lt;/strong&gt;&lt;br&gt;
The key skill for Day 13 was understanding how goose’s terminal integration works. Unlike goose run, @ goose can see terminal history and infer context from recent commands.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat staff_notes.txt
@goose "who can work Wednesday"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is no need to reference the file explicitly. &lt;a class="mentioned-user" href="https://dev.to/goose"&gt;@goose&lt;/a&gt; already knows what you are working on. This creates a fluid, real‑time development workflow where the AI acts as a technical partner inside the terminal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Scheduling Application&lt;/strong&gt;&lt;br&gt;
The final deliverable was a complete, production‑ready scheduling application contained in a single HTML file. It includes a full design system, staff management tools, shift creation, conflict detection, and persistent data storage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design System&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Exact color variables from designinstruction.html
&lt;/li&gt;
&lt;li&gt;Gradient backgrounds using the primary palette (#6366f1 to #8b5cf6)
&lt;/li&gt;
&lt;li&gt;Glassmorphism surfaces with blur and transparency
&lt;/li&gt;
&lt;li&gt;Animated background with rotating radial gradients
&lt;/li&gt;
&lt;li&gt;Inter font typography
&lt;/li&gt;
&lt;li&gt;Professional dark mode aesthetic (#0a0a0f)
&lt;/li&gt;
&lt;li&gt;Smooth animations including fadeInUp, rotate, and pulse
&lt;/li&gt;
&lt;li&gt;Shimmer hover effects on buttons
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2ybqgs5shzz9xzi6zkv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq2ybqgs5shzz9xzi6zkv.png" alt=" " width="800" height="369"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Core Functionality&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Staff Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Add, edit, and delete staff
&lt;/li&gt;
&lt;li&gt;Track availability by day and hour
&lt;/li&gt;
&lt;li&gt;Manage skills and constraints
&lt;/li&gt;
&lt;li&gt;Visual staff cards with structured details
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Schedule Builder&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create, edit, and delete shifts
&lt;/li&gt;
&lt;li&gt;Seven‑day tabbed schedule view
&lt;/li&gt;
&lt;li&gt;Time slot management
&lt;/li&gt;
&lt;li&gt;Assignment descriptions
&lt;/li&gt;
&lt;li&gt;Table‑based layout for clarity
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conflict Detection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real‑time validation during shift creation
&lt;/li&gt;
&lt;li&gt;Availability checks
&lt;/li&gt;
&lt;li&gt;Overlapping shift detection
&lt;/li&gt;
&lt;li&gt;Time constraint validation
&lt;/li&gt;
&lt;li&gt;Visual warnings with detailed explanations
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Persistence&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automatic saving to localStorage
&lt;/li&gt;
&lt;li&gt;JSON export
&lt;/li&gt;
&lt;li&gt;Load sample data
&lt;/li&gt;
&lt;li&gt;Print‑friendly layout
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Dashboard and Statistics&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Live staff count
&lt;/li&gt;
&lt;li&gt;Total shifts
&lt;/li&gt;
&lt;li&gt;Conflict count
&lt;/li&gt;
&lt;li&gt;Coverage percentage
&lt;/li&gt;
&lt;li&gt;Conflict alerts with detailed breakdown
&lt;/li&gt;
&lt;li&gt;Dynamic updates across all views
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Accessibility&lt;/strong&gt;&lt;br&gt;
The application adheres to WCAG guidelines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Semantic HTML5
&lt;/li&gt;
&lt;li&gt;ARIA roles for dialogs, tablists, and tabpanels
&lt;/li&gt;
&lt;li&gt;Keyboard navigation including ESC to close modals
&lt;/li&gt;
&lt;li&gt;Screen reader support
&lt;/li&gt;
&lt;li&gt;High‑contrast color system
&lt;/li&gt;
&lt;li&gt;Visible focus indicators
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Responsive Design&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Mobile‑first layout
&lt;/li&gt;
&lt;li&gt;Breakpoints at 968px and 640px
&lt;/li&gt;
&lt;li&gt;Collapsible mobile navigation
&lt;/li&gt;
&lt;li&gt;Touch‑friendly controls
&lt;/li&gt;
&lt;li&gt;Stacked layouts on small screens
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical Features&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fully self‑contained single HTML file
&lt;/li&gt;
&lt;li&gt;No build process required
&lt;/li&gt;
&lt;li&gt;No external dependencies beyond CDN fonts and icons
&lt;/li&gt;
&lt;li&gt;Complete CRUD operations in JavaScript
&lt;/li&gt;
&lt;li&gt;Persistent data via localStorage
&lt;/li&gt;
&lt;li&gt;Clean print layout
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjthi786880f0f4xejop9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjthi786880f0f4xejop9.png" alt=" " width="800" height="616"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts&lt;/strong&gt;&lt;br&gt;
The scheduling application is fully dynamic, responsive, accessible, and ready for real use. It replaces Marcus’s scattered notes with a structured, interactive system that the entire festival team can rely on.&lt;/p&gt;

&lt;p&gt;Day 13 demonstrated how terminal integration with goose can accelerate real‑world engineering tasks, turning unstructured information into a complete operational tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 13: Completed&lt;/strong&gt; Scheduling system: Delivered. Festival operations: Organized.&lt;/p&gt;

&lt;p&gt;This post is part of my Advent of AI journey, AI Engineering: Advent of AI with goose Day 13.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Adventures with Eri!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>goose</category>
      <category>anthropic</category>
      <category>adventofai</category>
    </item>
    <item>
      <title>AI Engineering: Advent of AI with goose Day 12 - MCP Orchestrated Intelligent Multi‑agent Reasoning</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 22 Dec 2025 20:53:22 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-12-25cf</link>
      <guid>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-12-25cf</guid>
      <description>&lt;p&gt;&lt;strong&gt;Day 12: The Festival Mascot Crisis&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This was the perfect use case for the Council of Mine MCP extension. Nine AI personas, each with a distinct reasoning style, debating and voting democratically.&lt;/p&gt;

&lt;p&gt;This challenge focused on using MCP sampling to orchestrate intelligent multi‑agent reasoning inside goose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What MCP:&lt;/strong&gt;&lt;br&gt;
MCP, the Model Context Protocol, is a framework that allows tools and extensions to communicate with AI models in a structured, reliable way. Instead of returning static data, an MCP extension can request reasoning from the AI model, incorporate that reasoning into its own logic, and return an intelligent, context‑aware result. This enables extensions to behave like specialized agents that can analyze information, generate multiple perspectives, and support complex decision workflows. MCP sampling builds on this by allowing an extension to create several distinct AI viewpoints, compare them, and synthesize a final recommendation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88q47jmrn0x12r91yenb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F88q47jmrn0x12r91yenb.png" alt=" " width="596" height="619"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Convene the Council&lt;/strong&gt;&lt;br&gt;
The mission was to install the Council of Mine extension, initiate a debate on the mascot topic, gather nine perspectives, run a democratic vote, and synthesize the final decision. The goal was to demonstrate how MCP sampling enables extensions to request AI reasoning, generate multiple viewpoints, and return structured analysis rather than raw data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP Sampling Overview&lt;/strong&gt;&lt;br&gt;
MCP sampling allows extensions to request help from goose’s AI model. Instead of returning static data, the extension can ask the model to analyze, interpret, or generate multiple perspectives.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Normal extension flow:&lt;/strong&gt;&lt;br&gt;
You → goose → Extension → Returns data → goose interprets&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP sampling flow:&lt;/strong&gt;&lt;br&gt;
You → goose → Extension → Extension requests AI reasoning&lt;br&gt;&lt;br&gt;
Extension receives AI output → Returns intelligent analysis → goose&lt;/p&gt;

&lt;p&gt;This enables extensions to behave like intelligent specialists rather than simple data providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why MCP Sampling Matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extensions can generate multiple AI personas
&lt;/li&gt;
&lt;li&gt;Distributed reasoning becomes possible
&lt;/li&gt;
&lt;li&gt;Domain expertise can be simulated
&lt;/li&gt;
&lt;li&gt;Complex decisions can be debated democratically
&lt;/li&gt;
&lt;li&gt;The extension becomes an orchestrator of AI perspectives
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1fdpzw0uj3tm8kk8cz4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr1fdpzw0uj3tm8kk8cz4.png" alt=" " width="800" height="272"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Real‑World Applications&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi‑perspective analysis
&lt;/li&gt;
&lt;li&gt;Intelligent documentation
&lt;/li&gt;
&lt;li&gt;Context‑aware search
&lt;/li&gt;
&lt;li&gt;Database analysis
&lt;/li&gt;
&lt;li&gt;Multi‑expert code review
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Submissions and Debates Completed&lt;/strong&gt; A total of six debates were conducted across simple and complex topics. All required tasks for Day 12 were completed.&lt;br&gt;
This challenge demonstrated how Council of Mine leverages MCP sampling to create 9 distinct AI personas that debate topics and vote democratically, producing nuanced recommendations superior to single-perspective analysis.&lt;/p&gt;

&lt;p&gt;Debates Conducted: 6 Total&lt;br&gt;
Simple Topics (4):&lt;br&gt;
Mascot Selection → Winner: The Systems Thinker → Penguin&lt;br&gt;
Mascot Naming → Winner: The Pragmatist → Test 2-3 finalists with community&lt;br&gt;
Sidekick Decision → Winners: Visionary &amp;amp; Systems Thinker → Pilot program&lt;br&gt;
Cookie Selection → Winners: Mediator &amp;amp; Visionary → Curated selection with gingerbread anchor&lt;/p&gt;

&lt;p&gt;Complex Topics (2):&lt;br&gt;
Festival Expansion → Winner: The Pragmatist → Add 1 day first, measure, then decide&lt;br&gt;
Tradition vs Innovation → Winners: Analyst &amp;amp; Pragmatist → 60/40 ratio with data validation&lt;/p&gt;

&lt;p&gt;Most Influential Council Members:&lt;/p&gt;

&lt;p&gt;The Pragmatist: 11 votes (dominates complex decisions)&lt;br&gt;
The Systems Thinker: 10 votes (scales with complexity)&lt;br&gt;
The Analyst: 5 votes (data-driven validation)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Universal Patterns:&lt;/strong&gt;&lt;br&gt;
Evidence beats ideology every time&lt;br&gt;
Testing before commitment is universal&lt;br&gt;
Accessibility matters in all decisions&lt;br&gt;
60/40 tradition/innovation ratio emerged naturally&lt;br&gt;
Incremental approaches consistently win&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2lzzsaiflqkvont7780.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy2lzzsaiflqkvont7780.png" alt=" " width="800" height="219"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;MCP Sampling Power:&lt;/strong&gt;&lt;br&gt;
9 distinct AI personas with unique reasoning&lt;br&gt;
Democratic voting reveals collective wisdom&lt;br&gt;
Synthesis combines best of all perspectives&lt;br&gt;
Superior to single-perspective analysis&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Complex decision-making requires:&lt;/strong&gt;&lt;br&gt;
Systems-level thinking to understand interconnections&lt;br&gt;
Pragmatic approaches to manage risk&lt;br&gt;
Evidence-based validation before commitment&lt;br&gt;
Incremental implementation for testing assumptions&lt;br&gt;
Accessibility considerations for inclusive outcomes &lt;/p&gt;

&lt;p&gt;These members consistently shaped outcomes, especially in complex decision spaces.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Universal Patterns Observed&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Evidence consistently outperformed ideology
&lt;/li&gt;
&lt;li&gt;Testing before commitment was preferred
&lt;/li&gt;
&lt;li&gt;Accessibility considerations appeared in every decision
&lt;/li&gt;
&lt;li&gt;A natural sixty to forty tradition to innovation ratio emerged
&lt;/li&gt;
&lt;li&gt;Incremental approaches were favored over large‑scale changes
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;MCP Sampling Capabilities Demonstrated&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The Council of Mine extension showcased the strengths of MCP sampling:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Nine distinct AI personas with unique reasoning styles
&lt;/li&gt;
&lt;li&gt;Democratic voting that reveals collective intelligence
&lt;/li&gt;
&lt;li&gt;Synthesis that merges the strongest elements of each viewpoint
&lt;/li&gt;
&lt;li&gt;Superior decision quality compared to single‑perspective analysis
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Requirements for Complex Decision Making&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The council demonstrated that effective complex decisions require:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Systems‑level thinking to understand interdependencies
&lt;/li&gt;
&lt;li&gt;Pragmatic approaches to reduce risk
&lt;/li&gt;
&lt;li&gt;Evidence‑based validation before committing resources
&lt;/li&gt;
&lt;li&gt;Incremental implementation to test assumptions
&lt;/li&gt;
&lt;li&gt;Accessibility considerations to ensure inclusive outcomes
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Across all debates, the council consistently showed that the strongest decisions emerge from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Recognizing multiple valid perspectives
&lt;/li&gt;
&lt;li&gt;Testing assumptions before scaling
&lt;/li&gt;
&lt;li&gt;Balancing innovation with proven methods
&lt;/li&gt;
&lt;li&gt;Preserving core values while enabling evolution
&lt;/li&gt;
&lt;li&gt;Using data to guide decisions rather than justify them
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Visualizations Created&lt;/strong&gt;&lt;br&gt;
Interactive Visualizations Created&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Council Voting Power (Bar Chart)&lt;/strong&gt;&lt;br&gt;
Shows total votes received by each council member&lt;br&gt;
The Pragmatist leads with 11 votes&lt;br&gt;
Systems Thinker close behind with 10 votes&lt;br&gt;
Reveals influence hierarchy across all debates&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yrcqs5fs455vh53gop1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8yrcqs5fs455vh53gop1.png" alt=" " width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decision Characteristics (Radar Chart)&lt;/strong&gt;&lt;br&gt;
Compares simple vs complex topic patterns&lt;br&gt;
Complex topics score 95% on evidence-based approaches&lt;br&gt;
Complex topics heavily favor systems thinking&lt;br&gt;
Shows how complexity shifts decision-making priorities&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3es79yivirpjp1w4m1wo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3es79yivirpjp1w4m1wo.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Challenge Flow (Sankey Diagram)&lt;/strong&gt;&lt;br&gt;
Visualizes flow from challenge levels through debates to outcomes&lt;br&gt;
Demonstrates how beginner work feeds intermediate analysis&lt;br&gt;
Shows convergence on evidence-based outcomes and testing frameworks&lt;br&gt;
Illustrates incremental approach emerging from advanced challenges&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu0tgtp7g005f5vrm9wc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftu0tgtp7g005f5vrm9wc.png" alt=" " width="800" height="362"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Topic &amp;amp; Strategy Distribution (Donut Charts)&lt;/strong&gt;&lt;br&gt;
67% simple topics, 33% complex topics&lt;br&gt;
Pragmatic strategies dominate at 35%&lt;br&gt;
Systems-based and data-driven approaches folllow&lt;br&gt;
Balanced strategies round out decision-making&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpawoacoiu0o0ertsmbe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpawoacoiu0o0ertsmbe.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Member Influence Trajectory (Line Chart)&lt;br&gt;
Tracks how council member votes change across debates&lt;br&gt;
Pragmatist influence surges in complex topics&lt;br&gt;
Systems Thinker maintains steady high influence&lt;br&gt;
Visionary shows consistent moderate influence&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm0y224zjupkmjzlv951.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flm0y224zjupkmjzlv951.png" alt=" " width="800" height="479"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Outcomes Treemap&lt;/strong&gt;&lt;br&gt;
Visual hierarchy of all achievements&lt;br&gt;
Color-coded by type (Decision, Process, Strategy, Analysis, Output)&lt;br&gt;
Size represents relative importance&lt;br&gt;
Shows balanced completion across all levels&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3qiyhk9co4m7m7z0jqt.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc3qiyhk9co4m7m7z0jqt.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Insights&lt;/strong&gt;&lt;br&gt;
This challenge demonstrated how multi‑agent reasoning can outperform single‑agent decision making. MCP sampling enables extensions to orchestrate multiple AI perspectives, debate complex topics, and produce structured, evidence‑driven recommendations. The Council of Mine extension is a practical example of distributed AI reasoning applied to real decision workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Final Thoughts&lt;/strong&gt;&lt;br&gt;
Day 12 showcased the power of MCP sampling and multi‑persona reasoning. By convening the Council of Mine, I transformed a chaotic debate into a structured, democratic decision process. The result was a clear, evidence‑driven recommendation supported by nine distinct reasoning styles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 12: Completed&lt;/strong&gt; Mascot crisis: Resolved. Council decision: Selected.&lt;/p&gt;

&lt;p&gt;This post is part of my Advent of AI journey, AI Engineering: Advent of AI with goose Day 12.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI Engineering Adventures with Eri!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>mcp</category>
      <category>goose</category>
      <category>adventofai</category>
    </item>
    <item>
      <title>AI Engineering: Advent of AI with goose Day 11 - Photo AI Filter Accessible Application - Spatial Intelligence &amp; Subagents</title>
      <dc:creator>Erica</dc:creator>
      <pubDate>Mon, 22 Dec 2025 19:55:15 +0000</pubDate>
      <link>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-11-ngm</link>
      <guid>https://dev.to/eriperspective/ai-engineering-advent-of-ai-with-goose-day-11-ngm</guid>
      <description>&lt;p&gt;&lt;strong&gt;Day 11: The Photo Booth AI Application - Real‑Time Filters, Spatial Intelligence &amp;amp; Subagents&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What if you could build a full AR‑style photo booth - camera access, face detection, real‑time filters, capture, download, and QR sharing - all in a single day? And what if you didn’t have to build it alone?&lt;/p&gt;

&lt;p&gt;That’s exactly what Day 11 challenged me to do.&lt;/p&gt;

&lt;p&gt;Using &lt;strong&gt;goose subagents&lt;/strong&gt;, I built a complete Fun House Photo Booth web app with festive filters, MediaPipe face tracking, mobile support, and a full capture pipeline. It feels like having a small engineering team working in parallel - because that’s exactly what subagents simulate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 11: Photo Booth AI Application 📸&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Challenge: Build a Real-Time Filter App in One Day&lt;/strong&gt;&lt;br&gt;
The festival director wanted a magical selfie booth:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Open on your phone
&lt;/li&gt;
&lt;li&gt;See yourself with fun filters
&lt;/li&gt;
&lt;li&gt;Filters track your face
&lt;/li&gt;
&lt;li&gt;Switch between effects
&lt;/li&gt;
&lt;li&gt;Capture the photo
&lt;/li&gt;
&lt;li&gt;Download it
&lt;/li&gt;
&lt;li&gt;Share it
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where subagents shine.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hxf5k8xtznwpabfpjbo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8hxf5k8xtznwpabfpjbo.png" alt=" " width="800" height="775"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enter: The Fun House Photo Booth (Built with Subagents)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I split the work into specialized subagents - just like a real dev team:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Subagent 1 - Core App Builder&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Built the HTML/CSS/JS structure
&lt;/li&gt;
&lt;li&gt;Implemented camera access
&lt;/li&gt;
&lt;li&gt;Created the live video preview
&lt;/li&gt;
&lt;li&gt;Added capture + download
&lt;/li&gt;
&lt;li&gt;Made everything mobile‑responsive
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Subagent 2 - Filter Engineer&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrated MediaPipe Face Landmarker
&lt;/li&gt;
&lt;li&gt;Implemented 468‑point face mesh
&lt;/li&gt;
&lt;li&gt;Built the real‑time filter system
&lt;/li&gt;
&lt;li&gt;Anchored filters to specific landmarks
&lt;/li&gt;
&lt;li&gt;Added filter switching
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Optional Subagents I Added&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stylist&lt;/strong&gt; - polished the UI (FilterSense branding)
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Documentation Writer&lt;/strong&gt; - created usage notes
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance Optimizer&lt;/strong&gt; - ensured smooth tracking
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Subagents let me parallelize the work and keep the build clean and modular.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tech Stack&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;goose Subagents&lt;/strong&gt; task orchestration &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sonnet 4.5&lt;/strong&gt; by Anthropic &lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HTML/CSS/JS&lt;/strong&gt; core app
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MediaPipe Face Landmarker&lt;/strong&gt; local spatial intelligence
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Canvas API&lt;/strong&gt; rendering filters + mesh
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SessionStorage&lt;/strong&gt; storing captured images
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;QR workflow&lt;/strong&gt; for sharing
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mobile‑first UI&lt;/strong&gt; responsive layout
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;No backend. No server. Everything runs locally.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci6jpai7y4u13y4825cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci6jpai7y4u13y4825cw.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;My Experience (From Camera to AR Filters)&lt;/strong&gt;&lt;br&gt;
I started by building a clean UI - a glowing camera icon, a “FilterSense” title, and an Enter button. Once inside, the app activates the camera, loads MediaPipe, and begins tracking the user’s face in real time.&lt;/p&gt;

&lt;p&gt;Then the fun begins:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Select a filter
&lt;/li&gt;
&lt;li&gt;Watch it attach to your face
&lt;/li&gt;
&lt;li&gt;Move, tilt, smile - it follows
&lt;/li&gt;
&lt;li&gt;Capture the moment
&lt;/li&gt;
&lt;li&gt;Download or share
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The entire experience feels like a lightweight AR app running directly in the browser.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What My Application Does&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Opens the camera instantly
&lt;/li&gt;
&lt;li&gt;Tracks the face using MediaPipe
&lt;/li&gt;
&lt;li&gt;Renders a 468‑point mesh
&lt;/li&gt;
&lt;li&gt;Applies filters anchored to landmarks
&lt;/li&gt;
&lt;li&gt;Lets users switch filters
&lt;/li&gt;
&lt;li&gt;Captures a clean photo
&lt;/li&gt;
&lt;li&gt;Stores it safely
&lt;/li&gt;
&lt;li&gt;Redirects to an export page
&lt;/li&gt;
&lt;li&gt;Supports download + QR sharing
&lt;/li&gt;
&lt;li&gt;Works smoothly on mobile
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It’s a complete photo booth system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8gcsg32ywk8vv5c0pig.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl8gcsg32ywk8vv5c0pig.png" alt=" " width="762" height="899"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Spatial Intelligence (MediaPipe Face Landmarker)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the most advanced parts of this build is the &lt;strong&gt;spatial intelligence&lt;/strong&gt;. Instead of sending video frames to a server, the entire face‑tracking pipeline runs &lt;strong&gt;in the browser&lt;/strong&gt; using MediaPipe’s Face Landmarker.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why this matters&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real‑time performance
&lt;/li&gt;
&lt;li&gt;Low latency
&lt;/li&gt;
&lt;li&gt;Offline capability
&lt;/li&gt;
&lt;li&gt;Privacy‑preserving
&lt;/li&gt;
&lt;li&gt;No external compute required
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;&lt;br&gt;
I load the FaceLandmarker and FilesetResolver modules, which return:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;468 face landmarks
&lt;/li&gt;
&lt;li&gt;3D positional data
&lt;/li&gt;
&lt;li&gt;Stable tracking across movement
&lt;/li&gt;
&lt;li&gt;Mesh topology
&lt;/li&gt;
&lt;li&gt;Mesh can be removed at any time
These landmarks drive the entire filter system.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Mesh Rendering&lt;/strong&gt;&lt;br&gt;
I implemented a full tessellation renderer using the MediaPipe FACEMESH_TESSELATION array. It draws:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;glowing neon nodes
&lt;/li&gt;
&lt;li&gt;connecting edges
&lt;/li&gt;
&lt;li&gt;animated mesh movement
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This visualizes the underlying AI in real time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Filter Anchoring&lt;/strong&gt;&lt;br&gt;
Each filter is mapped to a specific landmark:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Crown&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;landmark&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;offsetY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;60&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Beard&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;landmark&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;152&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;offsetY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;40&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Reindeer Eyelashes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;landmark&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;159&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;offsetY&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This ensures perfect alignment as the user moves.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clean Capture Pipeline&lt;/strong&gt;&lt;br&gt;
To avoid tainted canvases, I built a safe capture flow:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a fresh canvas
&lt;/li&gt;
&lt;li&gt;Draw the video frame
&lt;/li&gt;
&lt;li&gt;Draw only the mesh (no external PNGs)
&lt;/li&gt;
&lt;li&gt;Export as PNG
&lt;/li&gt;
&lt;li&gt;Store in sessionStorage
&lt;/li&gt;
&lt;li&gt;Redirect to export page
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This guarantees consistent captures across browsers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical Highlights&lt;/strong&gt;&lt;br&gt;
The app uses structured subagents to divide responsibilities cleanly. The Core App Builder handles UI, camera access, capture, and mobile responsiveness. The Filter Engineer manages MediaPipe initialization, mesh rendering, and filter anchoring. The system uses a clean canvas pipeline to avoid CORS issues and ensures safe PNG export.&lt;/p&gt;

&lt;p&gt;Spatial intelligence runs entirely on‑device, enabling real‑time AR effects without external compute. Filters follow the user’s face with sub‑pixel accuracy thanks to landmark‑driven positioning. The UI is fully responsive, and the workflow supports capture, download, and QR‑based sharing.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci6jpai7y4u13y4825cw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fci6jpai7y4u13y4825cw.png" alt=" " width="800" height="382"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Insights&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subagents feel like having a real dev team although it was myself, my code and my design&lt;/li&gt;
&lt;li&gt;MediaPipe’s local inference is incredibly powerful
&lt;/li&gt;
&lt;li&gt;Clean capture pipelines matter
&lt;/li&gt;
&lt;li&gt;Spatial intelligence unlocks AR‑level experiences
&lt;/li&gt;
&lt;li&gt;Declarative workflows scale beautifully
&lt;/li&gt;
&lt;li&gt;Mobile‑first design is essential for real‑world use
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Powered By&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;goose by Block powered by Sonnet 4.5 by Anthropic &lt;/li&gt;
&lt;li&gt;MediaPipe by Google&lt;/li&gt;
&lt;li&gt;HTML/CSS/JS
&lt;/li&gt;
&lt;li&gt;My own design + engineering workflow
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;My Final Thoughts&lt;/strong&gt;&lt;br&gt;
This was one of the most fun builds so far. Using subagents, I created a full AR‑style photo booth with real‑time filters, spatial intelligence, and a polished UI all running locally in the browser. The combination of MediaPipe, canvas rendering, and goose orchestration made it possible to build something that feels magical.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://mediapipe-studio.webapps.google.com/studio/demo/face_landmarker" rel="noopener noreferrer"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Day 11: Solved&lt;/strong&gt; Filter Sense Photo Booth: Delivered. Festival magic: Activated.&lt;/p&gt;

&lt;p&gt;This post is part of my Advent of AI journey AI Engineering: Advent of AI with goose Day 11.&lt;/p&gt;

&lt;p&gt;Follow along for more &lt;strong&gt;AI adventures with Eri!&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>goose</category>
      <category>google</category>
      <category>adventofai</category>
    </item>
  </channel>
</rss>
