<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stephen McCullough </title>
    <description>The latest articles on DEV Community by Stephen McCullough  (@swmcc).</description>
    <link>https://dev.to/swmcc</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F478990%2Fdbf358eb-fbc8-4d03-99ad-e77a87efb07a.png</url>
      <title>DEV Community: Stephen McCullough </title>
      <link>https://dev.to/swmcc</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/swmcc"/>
    <language>en</language>
    <item>
      <title>Indexatron Update: Context-Aware Analysis with Local Vision Models</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Mon, 06 Apr 2026 01:36:02 +0000</pubDate>
      <link>https://dev.to/swmcc/indexatron-update-context-aware-analysis-with-local-vision-models-57l8</link>
      <guid>https://dev.to/swmcc/indexatron-update-context-aware-analysis-with-local-vision-models-57l8</guid>
      <description>&lt;p&gt;&lt;em&gt;An update to &lt;a href="https://dev.to/writing/indexatron-local-llm-photo-analysis/"&gt;the original Indexatron experiment&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;From Proof of Concept to Production&lt;/h2&gt;

&lt;p&gt;The original Indexatron answered a simple question: can local LLMs extract meaningful metadata from family photos? The answer was yes, but with caveats.&lt;/p&gt;

&lt;p&gt;Feeding a vision model an image with zero context is like asking someone to describe a photo with no knowledge of the subjects, the era, or the occasion. The AI sees pixels. It doesn't see &lt;em&gt;your&lt;/em&gt; grandmother's wedding or &lt;em&gt;your&lt;/em&gt; family's Christmas traditions.&lt;/p&gt;

&lt;p&gt;This update explores what happens when you bridge that gap by injecting domain knowledge into the analysis pipeline and transforming a generic image classifier into something that understands your specific archive.&lt;/p&gt;

&lt;h2&gt;The Problem with Context-Free Analysis&lt;/h2&gt;

&lt;p&gt;My first runs were disappointing. The AI would look at a 1974 photo titled "Auntie Wilma, Mum, Dad and Uncle Sam" and completely ignore that context. It knew there were people in the photo. It had no idea &lt;em&gt;who&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Worse, I tried Llama 3.2 Vision (the newer, supposedly better model) and got output like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"categories"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"family"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"children"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"family"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"children"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"family"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"children"&lt;/span&gt;&lt;span class="err"&gt;...&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model had entered a repetition loop, producing thousands of repeated tags. It also generated oddly phrased descriptions that weren't suitable for a family website. Not ideal.&lt;/p&gt;
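&lt;p&gt;A cheap uniqueness check catches this failure mode before the output goes anywhere. A sketch (the 0.5 threshold is an arbitrary choice of mine, not a value from Indexatron):&lt;/p&gt;

```python
# Flag outputs whose category list is mostly duplicates before trusting
# anything else in the response. Threshold is illustrative only.
def looks_like_repetition_loop(categories, threshold=0.5):
    if not categories:
        return False
    unique_ratio = len(set(categories)) / len(categories)
    return threshold > unique_ratio  # True when the list is mostly repeats
```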

&lt;h2&gt;The Solution: Context-Aware Prompting&lt;/h2&gt;

&lt;p&gt;Instead of asking "what's in this photo?", I started telling the AI what it should already know:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IMPORTANT: This photo includes Edmund McCullough, Isobel McCullough.
Use these REAL names in the 'people' array.

IMPORTANT: This photo is from 1974-08-14 (1970s).
Use this as the era decade with 'high' confidence.

This photo is from the album: "Old 35mm Slides"
Caption says: "Auntie Wilma, Mum, Dad and Uncle Sam"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The results improved dramatically. The AI now had guardrails.&lt;/p&gt;

&lt;h2&gt;Prompt Injection: Teaching the AI Your Domain&lt;/h2&gt;

&lt;p&gt;The real breakthrough was realising I could inject domain knowledge into the prompt itself. The AI doesn't know my family, but I can &lt;em&gt;tell&lt;/em&gt; it.&lt;/p&gt;

&lt;h3&gt;Alias Resolution&lt;/h3&gt;

&lt;p&gt;Every family has nicknames. Rather than expecting the AI to understand "Mamie" means my Mum, I built an alias resolver that runs &lt;em&gt;before&lt;/em&gt; the prompt is constructed:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;FAMILY_ALIASES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;nickname&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Real Name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="c1"&gt;# ... your family's nicknames
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The system scans titles, captions, and gallery names for known aliases and injects the real names into the prompt. The AI then uses these names in its output.&lt;/p&gt;
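&lt;p&gt;The resolver itself is tiny. A minimal sketch, with the alias table filled in from the examples in this post rather than the actual Indexatron mapping:&lt;/p&gt;

```python
# Hedged sketch of the alias resolver; entries are illustrative.
import re

FAMILY_ALIASES = {
    "mamie": "Isobel McCullough",
    "mum": "Isobel McCullough",
    "dad": "Edmund McCullough",
}

def resolve_aliases(text):
    """Return real names for any known aliases appearing in the text."""
    found = []
    for alias, real_name in FAMILY_ALIASES.items():
        # Word-boundary match so short aliases don't fire inside other words
        if re.search(rf"\b{re.escape(alias)}\b", text, re.IGNORECASE):
            if real_name not in found:
                found.append(real_name)
    return found
```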

&lt;h3&gt;Metadata as Directives&lt;/h3&gt;

&lt;p&gt;The prompt isn't just "analyse this photo". It's structured with explicit directives:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;IMPORTANT: This photo includes [extracted names]. Use these names in the 'people' array.
IMPORTANT: This photo is from [date] ([decade]). Use this era with 'high' confidence.
This photo is from the album: "[gallery name]" - [gallery description]
Title: "[photo title]"
Caption: "[photo caption]"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This transforms the AI from a generic image analyser into something that understands &lt;em&gt;your&lt;/em&gt; photos. It knows who might be in the frame before it even looks.&lt;/p&gt;
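&lt;p&gt;Assembling those directives is plain string-building. A hedged sketch, assuming a simple metadata dict (the field names are my illustration, not the real schema):&lt;/p&gt;

```python
# Illustrative prompt assembly; "people", "date_taken", "album" and
# "caption" are assumed field names, not the actual Indexatron schema.
def build_prompt(metadata: dict) -> str:
    lines = []
    if metadata.get("people"):
        names = ", ".join(metadata["people"])
        lines.append(
            f"IMPORTANT: This photo includes {names}. "
            "Use these REAL names in the 'people' array."
        )
    if metadata.get("date_taken"):
        decade = f"{metadata['date_taken'][:3]}0s"  # "1974-..." -> "1970s"
        lines.append(
            f"IMPORTANT: This photo is from {metadata['date_taken']} ({decade}). "
            "Use this as the era decade with 'high' confidence."
        )
    if metadata.get("album"):
        lines.append(f'This photo is from the album: "{metadata["album"]}"')
    if metadata.get("caption"):
        lines.append(f'Caption says: "{metadata["caption"]}"')
    lines.append("Analyse the photo and respond with JSON only.")
    return "\n".join(lines)
```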

&lt;h2&gt;Model Comparison: LLaVA 7b vs Llama 3.2 Vision&lt;/h2&gt;

&lt;p&gt;I tested both models extensively. Llama 3.2 Vision is newer, larger (7.8GB vs 4.7GB), and benchmarks suggest it should outperform LLaVA on vision tasks. Reality proved more nuanced.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;LLaVA 7b&lt;/th&gt;
&lt;th&gt;Llama 3.2 Vision&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Speed&lt;/td&gt;
&lt;td&gt;~27s per image&lt;/td&gt;
&lt;td&gt;~60s+ per image&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;JSON Output&lt;/td&gt;
&lt;td&gt;Mostly valid, occasional truncation&lt;/td&gt;
&lt;td&gt;More verbose, sometimes malformed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structured Output&lt;/td&gt;
&lt;td&gt;Follows schema reliably&lt;/td&gt;
&lt;td&gt;Occasionally enters repetition loops&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context Adherence&lt;/td&gt;
&lt;td&gt;Good with explicit prompts&lt;/td&gt;
&lt;td&gt;Variable, may need different prompting&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;LLaVA 7b's tighter, more predictable outputs made it better suited for this structured extraction pipeline. Llama 3.2 Vision's additional capabilities (stronger reasoning, better multi-turn dialogue) might shine in conversational or open-ended analysis tasks.&lt;/p&gt;

&lt;p&gt;The lesson isn't that newer models are worse. It's that benchmarks don't tell the whole story. For constrained, schema-driven outputs on a local machine, the smaller model proved more reliable. Your mileage may vary with different prompting approaches or GPU acceleration.&lt;/p&gt;

&lt;h2&gt;Performance Optimisations&lt;/h2&gt;

&lt;p&gt;Several changes made the pipeline faster:&lt;/p&gt;

&lt;h3&gt;Image Resizing&lt;/h3&gt;

&lt;p&gt;Vision models don't need 4000x3000 pixel photos to understand what's in them. Resizing to 1024px max before analysis cut processing time without affecting quality:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;max_dim&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1024&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;max_dim&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;thumbnail&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;max_dim&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_dim&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Resampling&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;LANCZOS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Using Pre-Generated Variants&lt;/h3&gt;

&lt;p&gt;The Rails app already generates 1024px WebP variants for web display. Why download the original when a smaller version exists?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="ss"&gt;image_url: &lt;/span&gt;&lt;span class="n"&gt;upload_variant_url&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;upload&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="ss"&gt;:medium&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# 1024px, not :large (2048px)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;WebP to JPG Conversion&lt;/h3&gt;

&lt;p&gt;LLaVA crashes on WebP images (segfault). Converting to JPG before analysis fixed this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;output_path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;suffix&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;.webp&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;Image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_path&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;jpg_path&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;JPEG&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;quality&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;85&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;Era Override: Trust the Metadata&lt;/h2&gt;

&lt;p&gt;The AI guesses photo era from visual cues: clothing, image quality, colour palette. It's often wrong. But I &lt;em&gt;have&lt;/em&gt; the actual date for many photos from EXIF data or manual entry.&lt;/p&gt;

&lt;p&gt;Rather than trust the AI's guess, I override it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;date_taken&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;decade&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;extract_decade&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;date_taken&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="n"&gt;analysis_data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;era&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;decade&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;decade&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;confidence&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;high&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;reasoning&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;From actual date: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;metadata&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;date_taken&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Now a photo from 1974 is correctly tagged as 1970s with high confidence, regardless of what the AI thought.&lt;/p&gt;
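&lt;p&gt;The &lt;code&gt;extract_decade&lt;/code&gt; helper is the only moving part. One plausible implementation, assuming dates arrive as ISO &lt;code&gt;YYYY-MM-DD&lt;/code&gt; strings:&lt;/p&gt;

```python
# A plausible sketch of the extract_decade helper referenced above;
# assumes ISO "YYYY-MM-DD" input.
def extract_decade(date_taken: str) -> str:
    year = int(date_taken[:4])
    return f"{(year // 10) * 10}s"  # 1974 -> "1970s"
```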

&lt;h2&gt;Safety Filters&lt;/h2&gt;

&lt;p&gt;After the Llama 3.2 Vision incident, I added safeguards:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;BLOCKED_TERMS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;inappropriate&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;terms&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;filtered&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;here&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="n"&gt;MAX_CATEGORIES&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt;  &lt;span class="c1"&gt;# Prevents runaway repetition
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Terms that shouldn't appear in a family photo context get filtered. Category arrays get capped to prevent repetition loops. The AI can still hallucinate, but it can't flood my family website with unsuitable content.&lt;/p&gt;
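&lt;p&gt;The filtering pass can be sketched like this (the blocked terms are placeholders, as above):&lt;/p&gt;

```python
# Sketch of the post-filtering step: drop blocked terms, dedupe, and
# cap the array. BLOCKED_TERMS here is a placeholder set.
BLOCKED_TERMS = {"inappropriate", "terms", "filtered", "here"}
MAX_CATEGORIES = 20  # prevents runaway repetition

def sanitise_categories(categories):
    kept = []
    seen = set()
    for category in categories:
        term = category.strip().lower()
        if term in BLOCKED_TERMS or term in seen:
            continue  # drop unsuitable terms and duplicates
        seen.add(term)
        kept.append(category.strip())
        if len(kept) >= MAX_CATEGORIES:
            break  # cap the array to stop repetition loops
    return kept
```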

&lt;h2&gt;A Real Example&lt;/h2&gt;

&lt;p&gt;Here's an actual processing run:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="err"&gt;Analyzing:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;YsbAlt.jpg&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Context:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Title:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Auntie&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Wilma,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Mum,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Dad&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;and&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Uncle&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Sam,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Date:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1974-08-14&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Waiting&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;for&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;llava:&lt;/span&gt;&lt;span class="mi"&gt;7&lt;/span&gt;&lt;span class="err"&gt;b...&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Response:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;511&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;chars&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;in&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mf"&gt;27.0&lt;/span&gt;&lt;span class="err"&gt;s&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;LLaVA&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;Response:&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"description"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Wedding group photo with bride and groom and guests"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"location"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"setting"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"outdoor"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"park"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"people"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="nl"&gt;"name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Auntie Wilma, Mum, Dad and Uncle Sam"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"estimated_age"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"adults"&lt;/span&gt;&lt;span class="p"&gt;}],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"categories"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"wedding"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"celebration"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"family"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"special occasion"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1970s fashion"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"era"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nl"&gt;"decade"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"1974"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="nl"&gt;"confidence"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"medium"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mood"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"happy"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;

&lt;/span&gt;&lt;span class="err"&gt;Overrode&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;era&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;with&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;actual&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;date:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;1970&lt;/span&gt;&lt;span class="err"&gt;s&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Generated&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;768&lt;/span&gt;&lt;span class="err"&gt;-dimensional&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;embedding&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="err"&gt;Posted&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;to&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="err"&gt;API&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI saw the context, identified it as a wedding photo, and the era was corrected to use the actual date. Not perfect (it put all four names in one person object) but usable.&lt;/p&gt;
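&lt;p&gt;That last wrinkle is fixable in post-processing. A hedged sketch of splitting a combined name string into separate person entries (not something Indexatron does yet):&lt;/p&gt;

```python
# Hypothetical fix: split "Auntie Wilma, Mum, Dad and Uncle Sam" into
# one person object per name, keeping the shared fields.
import re

def split_people(person):
    names = re.split(r",\s*|\s+and\s+", person["name"])
    return [
        {"name": name.strip(), "estimated_age": person.get("estimated_age")}
        for name in names
        if name.strip()
    ]
```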

&lt;h2&gt;What's Next&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/swmcc/the-mcculllughs.org/issues/99" rel="noopener noreferrer"&gt;search API issue&lt;/a&gt; is queued up. Once implemented, I'll be able to:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight http"&gt;&lt;code&gt;&lt;span class="err"&gt;GET /api/uploads/search?person=Isobel+McCullough
GET /api/uploads/search?category=wedding&amp;amp;decade=1970s
GET /api/uploads/search?location=beach
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Semantic search using the embeddings is also on the roadmap: find photos &lt;em&gt;similar to&lt;/em&gt; a given photo, regardless of tags.&lt;/p&gt;
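&lt;p&gt;Conceptually that's a nearest-neighbour lookup over the stored vectors. A minimal pure-Python sketch using cosine similarity (a real deployment would push this into the database with a vector index rather than scan in application code):&lt;/p&gt;

```python
# Brute-force similarity search over stored 768-dimensional embeddings.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    denom = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / denom if denom else 0.0

def most_similar(query_embedding, library, top_k=5):
    """library is a list of (photo_id, embedding) pairs."""
    scored = sorted(
        ((cosine_similarity(query_embedding, emb), pid) for pid, emb in library),
        reverse=True,
    )
    return [pid for _score, pid in scored[:top_k]]
```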

&lt;h2&gt;Lessons Learned&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Context transforms capability.&lt;/strong&gt; The same model produces dramatically different results when given domain knowledge. Don't ask AI to guess what you already know. Inject it into the prompt.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Model selection is task-specific.&lt;/strong&gt; Benchmarks measure general capability, not fitness for your specific use case. Test with your actual data and requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Hybrid approaches win.&lt;/strong&gt; Let AI do what it's good at (visual analysis) while overriding it with ground truth where available (dates, known names). The best results come from human knowledge augmented by machine perception.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Defensive programming matters.&lt;/strong&gt; Vision models can produce unexpected outputs: malformed JSON, repetition loops, unsuitable content. Build robust parsing and filtering from day one.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Optimise the right layer.&lt;/strong&gt; Resizing images and using pre-generated variants had more impact on performance than model tweaking. Sometimes the boring optimisations are the most effective.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;The Code&lt;/h2&gt;

&lt;p&gt;Both projects are on GitHub:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/swmcc/indexatron" rel="noopener noreferrer"&gt;indexatron&lt;/a&gt; - The Python analysis service&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/swmcc/the-mcculllughs.org" rel="noopener noreferrer"&gt;the-mcculloughs.org&lt;/a&gt; - The Rails family photo site&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Local vision models aren't magic. They're tools. The magic happens when you give them the context to understand what they're looking at.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>ollama</category>
      <category>llm</category>
    </item>
    <item>
      <title>A Self-Hosted Image Sharing Pipeline</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:46:43 +0000</pubDate>
      <link>https://dev.to/swmcc/a-self-hosted-image-sharing-pipeline-5bb3</link>
      <guid>https://dev.to/swmcc/a-self-hosted-image-sharing-pipeline-5bb3</guid>
      <description>&lt;p&gt;Image URLs break. You paste a screenshot into Teams, share the link, and six months later it's gone. Corporate firewalls block Imgur. Third-party services sunset features. The URLs you thought were permanent quietly rot.&lt;/p&gt;

&lt;p&gt;I built &lt;a href="https://github.com/swmcc/jotter" rel="noopener noreferrer"&gt;Jotter&lt;/a&gt; to fix this — a Rails app that handles image uploads and returns short, stable URLs I control.&lt;/p&gt;

&lt;h2&gt;The Flow&lt;/h2&gt;

&lt;p&gt;Drop an image onto a macOS droplet (or run a CLI command), get a short URL on your clipboard. That's it.&lt;/p&gt;

&lt;p&gt;The upload endpoint accepts multipart form data and base64 JSON (for iOS Shortcuts):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create&lt;/span&gt;
  &lt;span class="n"&gt;album&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;current_user&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;albums&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find_or_create_by!&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;title: &lt;/span&gt;&lt;span class="s2"&gt;"Uploads"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;photo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;album&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;photos&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;build&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;user: &lt;/span&gt;&lt;span class="n"&gt;current_user&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

  &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:image_base64&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;present?&lt;/span&gt;
    &lt;span class="n"&gt;decoded&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;decode64&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:image_base64&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
    &lt;span class="c1"&gt;# attach from decoded bytes...&lt;/span&gt;
  &lt;span class="k"&gt;else&lt;/span&gt;
    &lt;span class="n"&gt;photo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;attach&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="ss"&gt;:image&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Authentication uses bearer tokens — &lt;code&gt;SecureRandom.hex(32)&lt;/code&gt;. CSRF verification is skipped for JSON requests with a valid token, so scripts and native apps don't need to fuss with form authenticity tokens.&lt;/p&gt;

&lt;h2&gt;Short URLs&lt;/h2&gt;

&lt;p&gt;Each photo gets a 6-character alphanumeric code with collision detection:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_short_code&lt;/span&gt;
  &lt;span class="kp"&gt;loop&lt;/span&gt; &lt;span class="k"&gt;do&lt;/span&gt;
    &lt;span class="nb"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;short_code&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;SecureRandom&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;alphanumeric&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;6&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;break&lt;/span&gt; &lt;span class="k"&gt;unless&lt;/span&gt; &lt;span class="no"&gt;Photo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;exists?&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;short_code: &lt;/span&gt;&lt;span class="n"&gt;short_code&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;62 characters, 6 positions — roughly 56 billion combinations. Collisions won't be a problem for a personal tool, but the loop handles them gracefully anyway.&lt;/p&gt;
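&lt;p&gt;Those figures are easy to verify, sketched in Python here (the photo count is a made-up illustration for a personal archive):&lt;/p&gt;

```python
# Sanity-checking the numbers above: 62 alphanumerics over 6 positions,
# plus a birthday-style estimate of collision risk.
combinations = 62 ** 6  # 56,800,235,584 -- roughly 56 billion codes

photos = 10_000  # a generous personal archive (illustrative figure)
pairs = photos * (photos - 1) // 2
# Probability that at least one pair of random codes collides:
collision_prob = 1 - (1 - 1 / combinations) ** pairs
```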

&lt;p&gt;The short URL controller serves the actual image blob with &lt;code&gt;disposition: :inline&lt;/code&gt;, so Slack and Twitter unfurl it properly without any OpenGraph gymnastics.&lt;/p&gt;

&lt;h2&gt;
  
  
  The CLI Glue
&lt;/h2&gt;

&lt;p&gt;A small bash script ties it together:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;response&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JOTTER_URL&lt;/span&gt;&lt;span class="s2"&gt;/u.json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Authorization: Bearer &lt;/span&gt;&lt;span class="nv"&gt;$JOTTER_TOKEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-F&lt;/span&gt; &lt;span class="s2"&gt;"image=@&lt;/span&gt;&lt;span class="nv"&gt;$file&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="nv"&gt;short_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$response&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.photo.short_url'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$short_url&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | pbcopy
osascript &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="s2"&gt;"display notification &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="nv"&gt;$short_url&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt; with title &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;Jotter&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There's also a compiled AppleScript droplet for drag-and-drop, plus an iOS Shortcut that base64-encodes photos from the share sheet.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background Variants
&lt;/h2&gt;

&lt;p&gt;Uploads return immediately. A Solid Queue job generates three variants — thumbnail (200px), medium (800px), large (1600px) — so the response feels instant even on larger files.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight ruby"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ProcessPhotoJob&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="no"&gt;ApplicationJob&lt;/span&gt;
  &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;perform&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;photo_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;photo&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="no"&gt;Photo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;find&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;photo_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;photo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;variant&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:thumbnail&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;processed&lt;/span&gt;
    &lt;span class="n"&gt;photo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;variant&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:medium&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;processed&lt;/span&gt;
    &lt;span class="n"&gt;photo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;image&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;variant&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="ss"&gt;:large&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;processed&lt;/span&gt;
  &lt;span class="k"&gt;end&lt;/span&gt;
&lt;span class="k"&gt;end&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Worth It?
&lt;/h2&gt;

&lt;p&gt;The whole thing runs on a single VPS. Every screenshot I share now has a URL I own, that won't expire, that isn't blocked by corporate proxies, and that I can move wherever I like. The friction went from "upload somewhere, copy link, hope it lasts" to "drop file, paste link."&lt;/p&gt;

&lt;p&gt;Sometimes the best tool is the one you run yourself.&lt;/p&gt;

</description>
      <category>rails</category>
      <category>ruby</category>
      <category>automation</category>
      <category>macos</category>
    </item>
    <item>
      <title>Maintaining Open Source in the AI Era</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:45:25 +0000</pubDate>
      <link>https://dev.to/swmcc/maintaining-open-source-in-the-ai-era-361n</link>
      <guid>https://dev.to/swmcc/maintaining-open-source-in-the-ai-era-361n</guid>
      <description>&lt;p&gt;I've been maintaining a handful of open source packages lately: &lt;a href="https://pypi.org/project/mailview/" rel="noopener noreferrer"&gt;mailview&lt;/a&gt;, &lt;a href="https://pypi.org/project/mailjunky/" rel="noopener noreferrer"&gt;mailjunky&lt;/a&gt; (in both Python and Ruby), and recently dusted off an old Ruby gem called &lt;a href="https://rubygems.org/gems/tvdb_api/" rel="noopener noreferrer"&gt;tvdb_api&lt;/a&gt;. The experience has been illuminating - not just about package management, but about how AI is changing open source development in ways I'm still processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Packages
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;mailview&lt;/strong&gt; started because I missed &lt;a href="https://github.com/ryanb/letter_opener" rel="noopener noreferrer"&gt;letter_opener&lt;/a&gt; from the Ruby world. When you're developing a web application, you don't want emails actually being sent - you want to inspect them locally. In Rails, letter_opener handles this beautifully. In Python? The options were less elegant. So I built mailview: add the middleware to your FastAPI or ASGI app, and every outgoing email gets captured and displayed in a clean browser UI at &lt;code&gt;/_mail&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;mailjunky&lt;/strong&gt; is the SDK for a transactional email service I use. I wrote both Python and Ruby versions because I work across both ecosystems and wanted a consistent interface. The Python version powers the email notifications in &lt;a href="https://whatisonthe.tv" rel="noopener noreferrer"&gt;whatisonthe.tv&lt;/a&gt;, sending watchlist digests and update notifications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;tvdb_api&lt;/strong&gt; is older. I wrote it years ago when I needed to fetch TV show metadata from TheTVDB. Recently I came back to it and found... well, it had aged poorly.&lt;/p&gt;

&lt;h2&gt;
  
  
  When Code Goes Stale
&lt;/h2&gt;

&lt;p&gt;Opening tvdb_api after several years was humbling. The code still worked, technically, but it was written for a different era of Ruby. No keyword arguments where they'd make sense. Inconsistent error handling. Dependencies that had moved on. The API it wrapped had evolved through multiple versions.&lt;/p&gt;

&lt;p&gt;This is the reality of open source maintenance that doesn't get discussed enough. You release something, people use it, and then life happens. You move to different projects. The ecosystem evolves. What was idiomatic becomes dated.&lt;/p&gt;

&lt;p&gt;I spent a weekend modernising tvdb_api. Keyword arguments throughout. Proper exception hierarchies. Updated API support. Modern testing practices. The gem that emerged was recognisably the same tool but felt contemporary rather than archaeological.&lt;/p&gt;

&lt;p&gt;The irony isn't lost on me that I did this modernisation with Claude's help. More on that shortly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ruby vs Python: A Tale of Two Package Managers
&lt;/h2&gt;

&lt;p&gt;Here's where things get interesting. Publishing to RubyGems versus PyPI in 2026 is a study in contrasts.&lt;/p&gt;

&lt;p&gt;RubyGems feels... creaky. The authentication flow is clunky. The web interface looks dated. I've hit mysterious failures that resolved themselves without explanation. The documentation assumes knowledge that new maintainers might not have. It works, but it feels like infrastructure that's been maintained rather than evolved.&lt;/p&gt;

&lt;p&gt;PyPI, meanwhile, has embraced &lt;a href="https://docs.pypi.org/trusted-publishers/" rel="noopener noreferrer"&gt;Trusted Publishing&lt;/a&gt;. You configure your GitHub repository as a trusted publisher, and your GitHub Actions workflow can publish packages without storing API tokens as secrets. The authentication happens via OpenID Connect - GitHub attests to the identity of the workflow, PyPI trusts that attestation.&lt;/p&gt;

&lt;p&gt;The practical difference is significant:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# PyPI with Trusted Publishing - no secrets needed&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Publish to PyPI&lt;/span&gt;
  &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pypa/gh-action-pypi-publish@release/v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Compare this to RubyGems, where you're still managing API keys, storing them as secrets, and hoping you've configured the authentication correctly.&lt;/p&gt;

&lt;p&gt;For mailview and mailjunky-python, I set up Trusted Publishing and the release process is now: tag a release, push the tag, and GitHub Actions handles the rest. For the Ruby packages, there's more ceremony involved, and I've had releases fail for reasons that weren't immediately clear.&lt;/p&gt;

&lt;p&gt;I don't want to overstate this - RubyGems works and millions of packages depend on it. But PyPI's investment in modern authentication patterns has made the maintainer experience noticeably better.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Changed Everything (And I'm Not Sure How I Feel About It)
&lt;/h2&gt;

&lt;p&gt;Here's where I need to be honest about something uncomfortable.&lt;/p&gt;

&lt;p&gt;All of these packages were built with significant AI assistance. Claude helped write the initial implementations. It helped with test coverage. It modernised tvdb_api's Ruby patterns. It debugged edge cases I would have spent hours tracking down.&lt;/p&gt;

&lt;p&gt;The productivity gains are real. What might have taken weeks of evening and weekend work happened in days. Features that I might have cut for time made it in. Test coverage that I might have skipped got written.&lt;/p&gt;

&lt;p&gt;But.&lt;/p&gt;

&lt;p&gt;I look at open source differently now, and I'm not sure it's for the better.&lt;/p&gt;

&lt;p&gt;When I review pull requests on other projects, I find myself wondering: did a human think through this change, or did an AI generate it and a human click approve? When I encounter a bug in a library, I wonder: was this edge case missed because the AI-generated tests didn't think to cover it?&lt;/p&gt;

&lt;p&gt;The social contract of open source has always been implicit: someone cared enough about this problem to spend their limited time solving it. That investment of human attention was a signal of quality, of thoughtfulness. It's why we trusted small libraries maintained by individuals - someone was paying attention.&lt;/p&gt;

&lt;p&gt;AI disrupts this calculus. I can now create a package in an afternoon that would have taken weeks. But has the thoughtfulness kept pace with the speed? I've tried to maintain the same standards I would have without AI assistance, but I'm not certain I've succeeded. And I definitely can't evaluate whether other maintainers have.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Testing Paradox
&lt;/h2&gt;

&lt;p&gt;This concern crystallises around testing. AI is excellent at generating tests. Given a function, Claude will produce comprehensive test cases covering happy paths, edge cases, error conditions. The coverage numbers look great.&lt;/p&gt;

&lt;p&gt;But test generation isn't the same as test thinking. When I write tests manually, I'm forced to think about how the code will be used. What assumptions am I making? What could go wrong? What would a user reasonably expect?&lt;/p&gt;

&lt;p&gt;AI-generated tests cover the code that exists. Human-written tests often reveal the code that should exist. There's a subtle but important difference.&lt;/p&gt;

&lt;p&gt;I've tried to mitigate this by reviewing AI-generated tests carefully and adding cases that emerge from my understanding of how the package will be used in practice. But I'd be lying if I said every test got that level of attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I've Learned
&lt;/h2&gt;

&lt;p&gt;A few things have become clearer through this experience:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintenance is the hard part.&lt;/strong&gt; Writing the initial code is the easy bit. Keeping it current, fixing bugs, responding to issues, updating dependencies - that's where open source lives or dies. AI helps with maintenance too, but it doesn't solve the fundamental problem of limited human attention.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modern tooling matters.&lt;/strong&gt; PyPI's Trusted Publishing removed a category of release friction entirely. When releasing is easy, releases happen more often. When it's painful, packages go stale. This is boring infrastructure work, but it has real effects on ecosystem health.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI is a force multiplier, not a replacement.&lt;/strong&gt; The packages I'm happiest with are ones where I used AI to handle the mechanical work while staying engaged with the design decisions. The ones where I let AI drive too much feel... hollow, somehow. Technically correct but lacking in soul.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transparency matters.&lt;/strong&gt; I've started noting in my projects when AI was used significantly in development. Not as a warning, but as context. Users can decide for themselves what that means.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Leaves Me
&lt;/h2&gt;

&lt;p&gt;I'm going to keep maintaining these packages. People use them, and I use them myself. The experience has made me more thoughtful about what I'm building and why.&lt;/p&gt;

&lt;p&gt;But I watch the open source ecosystem with more concern than I used to. The barriers to creating packages have dropped dramatically. That's good for getting solutions out there quickly. I'm less sure it's good for the long-term health of software we all depend on.&lt;/p&gt;

&lt;p&gt;Maybe I'm worrying about nothing. Maybe AI-assisted development will become so normal that these concerns seem quaint. Maybe the tooling will evolve to help us distinguish thoughtful work from generated slop.&lt;/p&gt;

&lt;p&gt;For now, I'm trying to hold myself to the standards I had before AI assistance was available, while being honest that the assistance exists. It's an imperfect balance, but it's the best I've found.&lt;/p&gt;

&lt;p&gt;The packages are on PyPI and RubyGems if you want to use them. They work. I've tested them. Claude helped.&lt;/p&gt;

&lt;p&gt;Make of that what you will.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>ruby</category>
      <category>python</category>
    </item>
    <item>
      <title>The LLM Is the New Parser</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:43:53 +0000</pubDate>
      <link>https://dev.to/swmcc/the-llm-is-the-new-parser-82p</link>
      <guid>https://dev.to/swmcc/the-llm-is-the-new-parser-82p</guid>
      <description>&lt;p&gt;I spent the early 2000s writing parsers. HTML scrapers with regex that would make you cry. XML deserializers that handled seventeen flavours of "valid". CSV readers that knew a comma inside quotes wasn't a delimiter.&lt;/p&gt;

&lt;p&gt;The pattern was always the same: the world gives you garbage, you write defensive code to extract meaning.&lt;/p&gt;

&lt;p&gt;Then APIs won. JSON with schemas. Type-safe clients. The parsing era ended. We'd civilised the machines.&lt;/p&gt;

&lt;p&gt;Now I'm building &lt;a href="https://github.com/swmcc/indexatron" rel="noopener noreferrer"&gt;Indexatron&lt;/a&gt;, a local LLM pipeline for analysing family photos. LLaVA looks at an image, I ask for JSON, and I get... this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br&gt;
json&lt;br&gt;
{&lt;br&gt;
  "description": "A dog sitting on a wooden floor",&lt;br&gt;
  "categories": ["dog"],&lt;br&gt;
  "people": [&lt;br&gt;
    {"estimated_age": "Beer is an alcoholic beverage"}&lt;br&gt;
  ]&lt;br&gt;
}&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;&lt;br&gt;
python&lt;/p&gt;

&lt;p&gt;The model wrapped JSON in markdown code fences. It put beer in the &lt;code&gt;people&lt;/code&gt; array with an age field containing a Wikipedia definition. Sometimes the braces don't balance. Sometimes it returns YAML when you asked for JSON.&lt;/p&gt;

&lt;p&gt;Sound familiar?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We're back to parsing unreliable output.&lt;/strong&gt; The only difference is the garbage now comes from a neural network instead of a web server. The defensive patterns are identical:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Strip markdown code blocks
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startswith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;```

&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;

```&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startswith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;

&lt;span class="c1"&gt;# Balance braces
&lt;/span&gt;&lt;span class="n"&gt;open_braces&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="n"&gt;close_braces&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;count&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;open_braces&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;close_braces&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;open_braces&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;close_braces&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Parse and pray
&lt;/span&gt;&lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;loads&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;JSONDecodeError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;raw&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;parsed&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Twenty years of progress and I'm back to "try to parse it, catch the exception, return something usable anyway."&lt;/p&gt;
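&lt;p&gt;Stitched together, those defensive steps amount to a small helper. This is a sketch rather than Indexatron's actual code (the &lt;code&gt;FENCE&lt;/code&gt; indirection is just to keep literal backticks out of the snippet):&lt;/p&gt;

```python
import json

FENCE = chr(96) * 3   # a three-backtick markdown fence, built indirectly

def parse_llm_response(response):
    """Best-effort extraction of a JSON object from model output."""
    text = response.strip()
    # Strip a markdown code fence, with or without a language tag
    if text.startswith(FENCE):
        text = text.split(FENCE)[1]
        if text.startswith("json"):
            text = text[4:]
    text = text.strip()
    # Balance braces
    missing = text.count("{") - text.count("}")
    if missing > 0:
        text += "}" * missing
    # Parse and pray
    try:
        return {"data": json.loads(text), "parsed": True}
    except json.JSONDecodeError:
        return {"raw": response, "parsed": False}
```

&lt;p&gt;A fenced, brace-dropped response still parses; anything truly hopeless comes back flagged with the raw text attached so nothing is silently lost.&lt;/p&gt;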

&lt;p&gt;The irony isn't lost on me. We built trillion-parameter models that can write poetry and explain quantum physics, but they can't reliably close a curly brace. The solution? Wrap them in the same defensive parsing code we wrote for Internet Explorer's HTML in 2003.&lt;/p&gt;

&lt;p&gt;The LLM is the new parser. It turns unstructured data (images, documents, audio) into semi-structured output that you then parse into actually-structured data.&lt;/p&gt;

&lt;p&gt;The more things change.&lt;/p&gt;

</description>
      <category>llm</category>
      <category>ollama</category>
      <category>parsing</category>
      <category>ai</category>
    </item>
    <item>
      <title>Lazy Loading Cache for whatisonthe.tv</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:43:06 +0000</pubDate>
      <link>https://dev.to/swmcc/lazy-loading-cache-for-whatisonthetv-58j1</link>
      <guid>https://dev.to/swmcc/lazy-loading-cache-for-whatisonthetv-58j1</guid>
      <description>&lt;p&gt;Working on &lt;a href="https://whatisonthe.tv" rel="noopener noreferrer"&gt;whatisonthe.tv&lt;/a&gt;, I needed a caching pattern for film and star metadata lookups. The app pulls data from external APIs (TMDb for cast lists, film details) but only when the data doesn't exist in the database. A background worker fetches from the API and saves it to the database, avoiding repeated expensive API calls. The cache sits in front of the database to speed up reads.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;The app queries film and star metadata frequently - same films, same actors, multiple users browsing. Without caching:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repeated database reads slow things down&lt;/li&gt;
&lt;li&gt;API rate limits get hit quickly&lt;/li&gt;
&lt;li&gt;External API calls are expensive (latency and cost)&lt;/li&gt;
&lt;li&gt;Preloading everything wastes resources on data no one requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The flow without caching:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User requests film metadata&lt;/li&gt;
&lt;li&gt;Check database&lt;/li&gt;
&lt;li&gt;If not in database, queue background worker&lt;/li&gt;
&lt;li&gt;Worker hits API, saves to database&lt;/li&gt;
&lt;li&gt;Return data to user&lt;/li&gt;
&lt;li&gt;Next user requesting same film hits database again (slow)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How Lazy Loading Cache Works
&lt;/h2&gt;

&lt;p&gt;The cache sits in front of the database and only the database. API lookups happen separately via background workers.&lt;/p&gt;

&lt;p&gt;The flow with caching:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;User requests film metadata&lt;/li&gt;
&lt;li&gt;Check the cache first&lt;/li&gt;
&lt;li&gt;If in cache and fresh, return it immediately (fast)&lt;/li&gt;
&lt;li&gt;If not in cache, check the database&lt;/li&gt;
&lt;li&gt;If in database, store it in cache and return it&lt;/li&gt;
&lt;li&gt;If not in database, queue background worker to fetch from API&lt;/li&gt;
&lt;li&gt;Worker fetches from API, saves to database&lt;/li&gt;
&lt;li&gt;Next request will find it in cache or database (no API call needed)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The cache grows based on real usage. Popular films stay in cache, obscure ones get cached on first database hit. The background worker only runs when data is completely missing, avoiding expensive API calls for already-known films.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation in Python
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;workers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;enqueue_fetch_film&lt;/span&gt;

&lt;span class="n"&gt;CACHE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;
&lt;span class="n"&gt;TTL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;300&lt;/span&gt;  &lt;span class="c1"&gt;# 5 minutes for film/star metadata
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_film&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;film_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;cache_key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;film:&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;film_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;time&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="c1"&gt;# Check cache first
&lt;/span&gt;    &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;CACHE&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;TTL&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

    &lt;span class="c1"&gt;# Cache miss - check database
&lt;/span&gt;    &lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;query_film&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;film_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Found in database - cache it
&lt;/span&gt;        &lt;span class="n"&gt;CACHE&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;cache_key&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;data&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;ts&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;now&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;

    &lt;span class="c1"&gt;# Not in database - queue background worker to fetch from API
&lt;/span&gt;    &lt;span class="nf"&gt;enqueue_fetch_film&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;film_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;  &lt;span class="c1"&gt;# or return placeholder/loading state
&lt;/span&gt;
&lt;span class="c1"&gt;# Background worker (runs separately)
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;fetch_film_worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;film_id&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Hit external API (TMDb, etc)
&lt;/span&gt;    &lt;span class="n"&gt;api_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;tmdb_api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get_film&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;film_id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Save to database
&lt;/span&gt;    &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;save_film&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;film_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;api_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Next request will find it in database and cache it
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For whatisonthe.tv, this meant:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Popular films stay cached (no database or API hits)&lt;/li&gt;
&lt;li&gt;First request for a film hits database only&lt;/li&gt;
&lt;li&gt;Missing films trigger one API call via worker&lt;/li&gt;
&lt;li&gt;All subsequent requests are cached&lt;/li&gt;
&lt;li&gt;Conserves API rate-limit headroom and reduces costs&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Use It
&lt;/h2&gt;

&lt;p&gt;This pattern works well when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;External API calls are expensive (rate limits, latency, cost)&lt;/li&gt;
&lt;li&gt;Reads are more common than writes&lt;/li&gt;
&lt;li&gt;Slightly stale data is acceptable (film metadata doesn't change frequently)&lt;/li&gt;
&lt;li&gt;You want a cache that maintains itself&lt;/li&gt;
&lt;li&gt;You want to avoid extra infrastructure (Redis, Memcached)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key benefit: API lookups only happen once per film. The database stores it permanently, and the cache speeds up repeated reads. If the cache is wiped (server restart, deployment), everything keeps working. It just warms up again based on traffic.&lt;/p&gt;
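
&lt;p&gt;Pulled together, the read path looks roughly like this (a minimal sketch; &lt;code&gt;db&lt;/code&gt; and &lt;code&gt;enqueue_fetch_film&lt;/code&gt; stand in for the real database layer and worker queue):&lt;/p&gt;

```python
import time

TTL = 3600     # illustrative TTL; tune per workload
CACHE = {}     # film_id -> {'data': ..., 'ts': insert time}

def get_film(film_id, db, enqueue_fetch_film):
    """Read path sketch: cache, then database, then background API fetch."""
    now = time.time()
    entry = CACHE.get(film_id)
    if entry is not None:
        if now - entry['ts'] > TTL:
            CACHE.pop(film_id, None)   # expired entry: drop and fall through
        else:
            return entry['data']       # cache hit

    data = db.get_film(film_id)        # database hit
    if data is not None:
        CACHE[film_id] = {'data': data, 'ts': now}   # warm the cache
        return data

    enqueue_fetch_film(film_id)        # miss everywhere: queue an API fetch
    return None                        # caller renders a loading state
```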

&lt;h2&gt;
  
  
  Trade-offs
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Self-maintaining - no explicit invalidation logic&lt;/li&gt;
&lt;li&gt;Minimal infrastructure&lt;/li&gt;
&lt;li&gt;Database remains authoritative&lt;/li&gt;
&lt;li&gt;Graceful degradation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Disadvantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;First request after TTL expiry is slow (cache miss)&lt;/li&gt;
&lt;li&gt;Memory grows unbounded without eviction policy&lt;/li&gt;
&lt;li&gt;Not suitable for distributed systems (in-memory only)&lt;/li&gt;
&lt;/ul&gt;
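
&lt;p&gt;The unbounded-growth point is fixable without new infrastructure; a bounded LRU with TTL is enough. A minimal sketch (the size and TTL values are illustrative):&lt;/p&gt;

```python
import time
from collections import OrderedDict

MAX_ENTRIES = 1000     # illustrative bound on cache size
TTL_SECONDS = 3600     # illustrative TTL

CACHE = OrderedDict()  # key -> {'data': ..., 'ts': insert time}

def cache_get(key):
    entry = CACHE.get(key)
    if entry is None:
        return None
    if time.time() - entry['ts'] > TTL_SECONDS:
        del CACHE[key]              # expired: evict eagerly
        return None
    CACHE.move_to_end(key)          # mark as most recently used
    return entry['data']

def cache_set(key, data):
    CACHE[key] = {'data': data, 'ts': time.time()}
    CACHE.move_to_end(key)
    if len(CACHE) > MAX_ENTRIES:
        CACHE.popitem(last=False)   # evict the least recently used entry
```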

&lt;p&gt;For whatisonthe.tv, an in-memory cache with occasional misses is fine. If it scales beyond a single instance, Redis with the same pattern would work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;A lazy-loading cache combined with database-backed storage and background workers is a safe, predictable pattern. API lookups happen only once per entity, the database stores the result permanently, and the cache speeds up repeated reads.&lt;/p&gt;

&lt;p&gt;For whatisonthe.tv, this means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One API call per film (ever)&lt;/li&gt;
&lt;li&gt;Database hits for first request after cache expiry&lt;/li&gt;
&lt;li&gt;Cache hits for everything else&lt;/li&gt;
&lt;li&gt;No wasted API calls&lt;/li&gt;
&lt;li&gt;No rate limit issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It's the right level of sophistication for a side project - solves the problem without introducing new ones.&lt;/p&gt;

</description>
      <category>caching</category>
      <category>architecture</category>
      <category>python</category>
    </item>
    <item>
      <title>Git Worktree for Multiple Branches</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:42:56 +0000</pubDate>
      <link>https://dev.to/swmcc/git-worktree-for-multiple-branches-4ak6</link>
      <guid>https://dev.to/swmcc/git-worktree-for-multiple-branches-4ak6</guid>
      <description>&lt;p&gt;Just learned about &lt;code&gt;git worktree&lt;/code&gt; and it's genuinely useful.&lt;/p&gt;

&lt;p&gt;Problem: Need to work on a feature branch whilst also reviewing a PR or checking main. Usually that means stashing changes and switching branches.&lt;/p&gt;

&lt;p&gt;Solution: Multiple working directories for the same repo.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Create a new worktree for a branch&lt;/span&gt;
git worktree add ../myrepo-feature feature-branch

&lt;span class="c"&gt;# Now you have:&lt;/span&gt;
&lt;span class="c"&gt;# ~/code/myrepo       (main branch)&lt;/span&gt;
&lt;span class="c"&gt;# ~/code/myrepo-feature (feature branch)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Can have both open in different editors, run different dev servers, etc. Much cleaner than branch switching or having multiple clones.&lt;/p&gt;
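
&lt;p&gt;Worktrees can also create the branch as they're added, via &lt;code&gt;-b&lt;/code&gt;. A self-contained sketch (repo, branch, and directory names are examples):&lt;/p&gt;

```shell
# Throwaway repo for demonstration
cd "$(mktemp -d)"
git init -q myrepo
cd myrepo
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "init"

# -b creates the new branch as the worktree is added
git worktree add -b hotfix ../myrepo-hotfix

git worktree list    # shows both working directories
```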

&lt;p&gt;Clean up when done:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git worktree remove ../myrepo-feature
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;List all worktrees:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git worktree list
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Simple but effective.&lt;/p&gt;

</description>
      <category>git</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Cypress Component Isolation Issues</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:42:34 +0000</pubDate>
      <link>https://dev.to/swmcc/cypress-component-isolation-issues-54bm</link>
      <guid>https://dev.to/swmcc/cypress-component-isolation-issues-54bm</guid>
      <description>&lt;p&gt;Working on a personal project, I hit a frustrating limitation with Cypress: component state bleeding between tests.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem
&lt;/h2&gt;

&lt;p&gt;Cypress mounts components in the same browser context across tests. Even with &lt;code&gt;beforeEach&lt;/code&gt; cleanup, some state persists:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event listeners accumulate&lt;/li&gt;
&lt;li&gt;CSS-in-JS styles duplicate&lt;/li&gt;
&lt;li&gt;Global window objects leak between tests&lt;/li&gt;
&lt;li&gt;Memory usage grows linearly with test count&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example that failed intermittently:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;User form&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nf"&gt;beforeEach&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;mount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;UserForm&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;validates email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-testid="email"]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;invalid&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-testid="error"]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;should&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;contain&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Invalid email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;

  &lt;span class="nf"&gt;it&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;submits successfully&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-testid="email"]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;type&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;valid@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;[data-testid="submit"]&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="c1"&gt;// Fails intermittently - previous test's error message still in DOM&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Why This Happens
&lt;/h2&gt;

&lt;p&gt;Cypress reuses the browser instance. Components unmount, but:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The iframe stays alive&lt;/li&gt;
&lt;li&gt;JavaScript heap isn't cleared&lt;/li&gt;
&lt;li&gt;Event listeners require explicit cleanup&lt;/li&gt;
&lt;li&gt;Third-party libraries may not clean up properly&lt;/li&gt;
&lt;/ul&gt;
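
&lt;p&gt;A stripped-down illustration of the listener problem, with &lt;code&gt;EventTarget&lt;/code&gt; standing in for the reused iframe window:&lt;/p&gt;

```javascript
// Sketch: listeners added on a shared context accumulate across tests
// unless each one is explicitly removed on unmount.
const sharedContext = new EventTarget(); // stands in for the reused iframe window
let calls = 0;

function mountComponent() {
  // a fresh closure per mount, so addEventListener cannot deduplicate them
  sharedContext.addEventListener('submit', () => { calls += 1; });
}

mountComponent(); // test 1 mounts
mountComponent(); // test 2 mounts in the SAME context

sharedContext.dispatchEvent(new Event('submit'));
console.log(calls); // 2 - the first test's handler leaked and fired again
```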

&lt;h2&gt;
  
  
  The Workaround
&lt;/h2&gt;

&lt;p&gt;Force a full page reload (and with it a clean remount) between tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;afterEach&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;cy&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;window&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;then&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="nx"&gt;win&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;win&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;location&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;reload&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="p"&gt;})&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This works but adds ~500ms per test. With 200+ component tests, that's 100 seconds of wasted time.&lt;/p&gt;

&lt;h2&gt;
  
  
  Playwright Does This Better
&lt;/h2&gt;

&lt;p&gt;Playwright's component testing uses isolated browser contexts per test. Each test gets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Fresh browser context&lt;/li&gt;
&lt;li&gt;Clean JavaScript heap&lt;/li&gt;
&lt;li&gt;No state leakage&lt;/li&gt;
&lt;li&gt;Parallel execution by default&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same test in Playwright:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="nf"&gt;test&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;submits successfully&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;mount&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;component&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;mount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;UserForm&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByTestId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;email&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;fill&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;valid@example.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;component&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getByTestId&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;submit&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;click&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
  &lt;span class="c1"&gt;// Always works - completely isolated from previous test&lt;/span&gt;
&lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;No manual cleanup. No intermittent failures. No performance workaround.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migration Considerations
&lt;/h2&gt;

&lt;p&gt;Switching to Playwright would mean:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rewriting 200+ Cypress tests (different API)&lt;/li&gt;
&lt;li&gt;Learning new assertion patterns&lt;/li&gt;
&lt;li&gt;Different debugging workflow&lt;/li&gt;
&lt;li&gt;But: genuine test isolation and faster execution&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The isolation model is compelling. Cypress is great for E2E, but for component testing, Playwright's architecture is superior.&lt;/p&gt;

&lt;p&gt;Might be time to migrate.&lt;/p&gt;

</description>
      <category>cypress</category>
      <category>testing</category>
      <category>playwright</category>
      <category>e2e</category>
    </item>
    <item>
      <title>Working with Claude: A Senior Developer's Honest Take</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:42:12 +0000</pubDate>
      <link>https://dev.to/swmcc/working-with-claude-a-senior-developers-honest-take-2kkj</link>
      <guid>https://dev.to/swmcc/working-with-claude-a-senior-developers-honest-take-2kkj</guid>
      <description>&lt;p&gt;I've been using Claude Code as part of my daily development workflow for several months now. This isn't a breathless endorsement or a dismissive rejection. It's an honest assessment from someone who's been writing software professionally for over two decades.&lt;/p&gt;

&lt;h2&gt;
  
  
  It's a Tool. Treat It Like One.
&lt;/h2&gt;

&lt;p&gt;Claude is a tool. A genuinely impressive one, but still a tool. It sits in the same category as my text editor, my terminal, and my version control system. I find this framing helpful - not to diminish what it can do, but to approach it practically.&lt;/p&gt;

&lt;p&gt;Some of the hype around AI assistants oversells what they are. Junior developers aren't obsolete. Senior developers aren't being replaced. What's actually happening is more interesting: certain categories of work have become dramatically faster, and that changes what's practical to attempt.&lt;/p&gt;

&lt;p&gt;AI assistants are particularly good at boilerplate, repetitive transformations, exploring unfamiliar codebases, and acting as a thinking partner. They still need human judgement for architecture decisions and business context. That's not a criticism - it's just understanding where the tool excels.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Workflow
&lt;/h2&gt;

&lt;p&gt;My setup has evolved over these months. I've finally moved away from tmux after years of muscle memory. Ghostty with native splits handles my terminal needs now, and honestly, it's simpler. One less abstraction layer, one less thing to configure.&lt;/p&gt;

&lt;p&gt;A typical session looks like this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Left pane&lt;/strong&gt;: Claude Code running in the terminal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Right pane&lt;/strong&gt;: Shell for running tests, git operations, server logs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Editor&lt;/strong&gt;: Neovim in a separate window (some habits don't change)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Claude handles the tedious bits. Writing test scaffolding. Generating boilerplate for new modules. Explaining what some legacy code does before I refactor it. Drafting commit messages (which I usually edit - it's verbose by default, but that's easy to fix).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Game Changers
&lt;/h2&gt;

&lt;p&gt;Three features have fundamentally changed how I work: planning mode, skills, and tasks. These aren't just conveniences - they've shifted how I approach problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Planning Mode
&lt;/h3&gt;

&lt;p&gt;Before I discovered planning mode, I'd sometimes watch Claude charge off implementing something before I'd fully thought through the approach. Now, for anything non-trivial, I start in planning mode.&lt;/p&gt;

&lt;p&gt;The workflow is: describe what I want to achieve, let Claude explore the codebase, and then review a structured plan before any code gets written. This catches architectural issues early. It surfaces questions I hadn't considered. It means we're aligned on approach before investing time in implementation.&lt;/p&gt;

&lt;p&gt;For complex features, I'll spend fifteen minutes in planning mode refining the approach. That investment pays back tenfold by avoiding wrong turns and rework. It's like having a technical design review built into the workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Skills
&lt;/h3&gt;

&lt;p&gt;Skills are reusable prompts that encode workflow knowledge. I've built custom skills for my common tasks: code review, security audit, test generation, and more. Instead of explaining what I want each time, I invoke a skill and it applies consistent standards.&lt;/p&gt;

&lt;p&gt;My &lt;code&gt;/review&lt;/code&gt; skill knows how to examine git changes against the patterns I care about. My &lt;code&gt;/audit&lt;/code&gt; skill applies security checks specific to my tech stack. These aren't just time savers - they're consistency enforcers. The review I get at 6pm after a long day is the same quality as the one at 9am.&lt;/p&gt;

&lt;p&gt;The ability to chain skills is powerful too. I can run security audit, code review, and test coverage analysis in parallel, then synthesise the results. What used to be a morning's work happens while I grab a coffee.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tasks
&lt;/h3&gt;

&lt;p&gt;The task system changed how I approach larger pieces of work. When I'm implementing a feature that spans multiple files or requires several steps, I ask Claude to break it down into tasks.&lt;/p&gt;

&lt;p&gt;Each task becomes a tracked unit of work. I can see what's done, what's in progress, and what's blocked. Claude updates task status as it works, so I always know where we are. When I step away and come back, the context is preserved in the task list.&lt;/p&gt;

&lt;p&gt;For a recent project, I had a twelve-task implementation plan. Being able to work through it systematically, with clear progress tracking, made a complex change manageable. It's project management integrated directly into the coding workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Helps
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Exploration&lt;/strong&gt;: When I'm dropped into an unfamiliar codebase, Claude can trace through call paths faster than I can grep. "Where does this function get called from?" is a question I used to spend twenty minutes on. Now it's seconds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Boilerplate&lt;/strong&gt;: Writing the fourteenth variation of a CRUD endpoint is mind-numbing. Claude handles it, I review and adjust. The code isn't clever, but it doesn't need to be.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documentation&lt;/strong&gt;: Drafting docstrings, README sections, or API documentation. I edit to match my voice, but having a solid starting point beats a blank page.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thinking partner&lt;/strong&gt;: Sometimes I explain a problem to Claude and realise the solution myself while typing. Other times, it suggests an approach I hadn't considered. Either way, it's valuable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning unfamiliar libraries&lt;/strong&gt;: When I need to use an API I haven't touched before, Claude provides working starting points. Better than Stack Overflow answers from 2019 that may or may not work with current versions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Human Judgement Matters
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Architecture&lt;/strong&gt;: Claude can suggest architectures, and the suggestions are reasonable starting points. But the "right" architecture depends on team size, existing infrastructure, deployment targets, and factors that require human judgement to weigh.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business logic&lt;/strong&gt;: When the correct behaviour depends on understanding why the business works a certain way, I need to provide that context. Claude works with what it can see in the code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security-critical code&lt;/strong&gt;: I review generated code carefully when it touches authentication, authorisation, or sensitive data. Trust but verify.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Biggest Paradigm Shift I've Seen
&lt;/h2&gt;

&lt;p&gt;I've been doing this long enough to have opinions about paradigm shifts. I remember when version control went from "nice to have" to essential. I watched the industry move from on-premise to cloud. I've seen languages rise and fall.&lt;/p&gt;

&lt;p&gt;This is different. Not because AI is magic - it isn't. But because it changes the economics of certain tasks. Things that weren't worth doing because the effort exceeded the benefit are now tractable.&lt;/p&gt;

&lt;p&gt;Writing tests for legacy code without tests? Used to be a multi-day investment that teams avoided. Now it's a few hours of guided generation and review. Documenting that internal tool nobody wants to maintain? Tedious but now practical. Exploring a new codebase before making changes? Used to take days of reading. Now I have a knowledgeable guide.&lt;/p&gt;

&lt;p&gt;The shift isn't "AI writes code for you". It's "the effort required for certain tasks dropped significantly". That changes which projects are viable and which maintenance tasks actually get done.&lt;/p&gt;

&lt;h2&gt;
  
  
  Human in the Loop
&lt;/h2&gt;

&lt;p&gt;Every piece of generated code gets reviewed. Every suggestion gets evaluated against what I know about the system. I modify a fair amount of what Claude produces - sometimes because it's wrong, more often because I want it slightly different.&lt;/p&gt;

&lt;p&gt;This isn't a complaint about Claude. It's the nature of working with any tool that doesn't have your full context. Claude doesn't know that this codebase has a quirk where that function behaves differently than its signature suggests. It doesn't know that the team agreed to avoid that pattern last month. It doesn't know that this feature will be deprecated next quarter.&lt;/p&gt;

&lt;p&gt;The value is in the collaboration. Claude moves fast on the mechanical work. I provide context, make judgement calls, and catch issues that require institutional knowledge. Neither of us would be as effective alone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Advice
&lt;/h2&gt;

&lt;p&gt;If you're considering integrating AI tools into your workflow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invest in planning mode.&lt;/strong&gt; It's tempting to skip straight to implementation, but the upfront investment in planning pays dividends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build skills for your common workflows.&lt;/strong&gt; The time spent encoding your standards into reusable skills comes back quickly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use tasks for complex work.&lt;/strong&gt; Breaking work into tracked tasks provides structure and makes progress visible.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learn the tool's patterns.&lt;/strong&gt; Claude has tendencies - certain stylistic preferences, default approaches. Knowing these helps you guide it effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay engaged.&lt;/strong&gt; The fundamentals still matter. Understanding algorithms, system design, debugging techniques - these inform how you evaluate and direct what the AI produces.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Claude Code has genuinely improved my productivity. The combination of planning mode, skills, and tasks has created a workflow that's more than the sum of its parts. It hasn't replaced thinking, judgement, or experience. It's amplified them.&lt;/p&gt;

&lt;p&gt;This is the most significant shift in how I work since I started using version control. Not because the AI is doing my job - it isn't. Because it's handling the mechanical work well enough that I can focus on the parts that actually require experience and judgement.&lt;/p&gt;

&lt;p&gt;It's a tool. A good one. And like any good tool, it rewards learning how to use it well.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>workflow</category>
      <category>developertools</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Indexatron: Teaching Local LLMs to See Family Photos</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:41:26 +0000</pubDate>
      <link>https://dev.to/swmcc/indexatron-teaching-local-llms-to-see-family-photos-3npa</link>
      <guid>https://dev.to/swmcc/indexatron-teaching-local-llms-to-see-family-photos-3npa</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Status:&lt;/strong&gt; ✅ SUCCESS&lt;br&gt;
&lt;strong&gt;Hypothesis:&lt;/strong&gt; Local LLMs can analyse family photos with useful metadata extraction&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I've been building &lt;a href="https://the-mcculloughs.org" rel="noopener noreferrer"&gt;the-mcculloughs.org&lt;/a&gt; - a &lt;a href="https://dev.to/projects/the-mcculloughs-org"&gt;family photo sharing app&lt;/a&gt;. The Rails side handles uploads, galleries, and all the usual stuff. But I wanted semantic search - not just "photos from 2015" but "photos at the beach" or "pictures with grandma."&lt;/p&gt;

&lt;p&gt;The cloud APIs exist. But uploading decades of family photos to someone else's servers? Hard pass.&lt;/p&gt;

&lt;p&gt;Time for a science experiment - two apps working together.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Experiment
&lt;/h2&gt;

&lt;p&gt;I called it &lt;a href="https://github.com/swmcc/indexatron" rel="noopener noreferrer"&gt;&lt;strong&gt;Indexatron&lt;/strong&gt;&lt;/a&gt; 🤖&lt;/p&gt;

&lt;p&gt;The goal: prove that Ollama running locally with LLaVA:7b and nomic-embed-text can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Analyse photos&lt;/strong&gt; - Extract descriptions, detect people/objects, estimate era&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Generate embeddings&lt;/strong&gt; - Create 768-dimensional vectors for similarity search&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process batches&lt;/strong&gt; - Handle multiple images with progress tracking&lt;/li&gt;
&lt;/ol&gt;
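
&lt;p&gt;For context, the analysis request to Ollama's &lt;code&gt;/api/generate&lt;/code&gt; endpoint is roughly this shape (a sketch; the prompt text is illustrative, and &lt;code&gt;llava:7b&lt;/code&gt; must already be pulled):&lt;/p&gt;

```python
import base64

def build_llava_request(image_bytes):
    """Request body for Ollama's /api/generate endpoint (POSTed as JSON)."""
    return {
        "model": "llava:7b",
        "prompt": "Describe this photo. Respond with JSON metadata only.",
        "images": [base64.b64encode(image_bytes).decode()],  # API takes base64
        "stream": False,   # return one complete response, not a token stream
    }
```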

&lt;h2&gt;
  
  
  Test Results
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Images Processed&lt;/td&gt;
&lt;td&gt;3/3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failed&lt;/td&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Total Time&lt;/td&gt;
&lt;td&gt;40.82s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Avg Time/Image&lt;/td&gt;
&lt;td&gt;~13.6s&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Sample Outputs
&lt;/h3&gt;

&lt;h4&gt;
  
  
  🐕 family_photo_03.jpg
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description:&lt;/strong&gt; "A tan-coloured Labrador Retriever is sitting on a wooden floor indoors"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Categories:&lt;/strong&gt; &lt;code&gt;["dog"]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mood:&lt;/strong&gt; calm&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processing Time:&lt;/strong&gt; 14.73s&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  🍺 family_photo_02.jpg
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description:&lt;/strong&gt; "A photo of a bottle of beer and a glass with frothy white head on top, placed on a table at a restaurant"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Location:&lt;/strong&gt; Indoor restaurant&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Objects Detected:&lt;/strong&gt; Beer bottle (Kingfisher brand), glass with beer&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Categories:&lt;/strong&gt; &lt;code&gt;["beer", "restaurant"]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processing Time:&lt;/strong&gt; 14.2s&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  👔 family_photo_01.jpg
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Description:&lt;/strong&gt; "A man standing in an indoor conference room during a wedding reception"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Era Detected:&lt;/strong&gt; 2010s (medium confidence)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Person:&lt;/strong&gt; Male guest, 30s, wearing suit and tie&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Categories:&lt;/strong&gt; &lt;code&gt;["wedding"]&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Processing Time:&lt;/strong&gt; 11.89s&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Worked Well
&lt;/h2&gt;

&lt;h3&gt;
  
  
  LLaVA Vision Analysis
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Correctly identified subjects (dog, beer, person)&lt;/li&gt;
&lt;li&gt;Detected specific brands (Kingfisher)&lt;/li&gt;
&lt;li&gt;Estimated era from visual cues&lt;/li&gt;
&lt;li&gt;Provided useful mood/atmosphere descriptions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Embedding Generation
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;768-dimensional embeddings generated for all images&lt;/li&gt;
&lt;li&gt;Based on analysis descriptions (semantic meaning)&lt;/li&gt;
&lt;li&gt;Ready for similarity search when needed&lt;/li&gt;
&lt;/ul&gt;
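
&lt;p&gt;Once embeddings exist, similarity search is just vector comparison; cosine similarity is the usual measure (a minimal sketch, in practice pgvector or similar does this in the database):&lt;/p&gt;

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```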

&lt;h3&gt;
  
  
  Batch Processing
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Progress bar with Rich library&lt;/li&gt;
&lt;li&gt;Skip existing functionality&lt;/li&gt;
&lt;li&gt;Combined JSON output&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Quirks &amp;amp; Learnings
&lt;/h2&gt;

&lt;h3&gt;
  
  
  JSON Parsing Required Repair
&lt;/h3&gt;

&lt;p&gt;LLaVA doesn't always output clean JSON. The analyser needed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code block stripping&lt;/li&gt;
&lt;li&gt;Brace balancing&lt;/li&gt;
&lt;li&gt;Type coercion for nested objects&lt;/li&gt;
&lt;/ul&gt;
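
&lt;p&gt;A minimal sketch of that repair step (Indexatron's exact heuristics may differ, and the brace counting here ignores braces inside string values):&lt;/p&gt;

```python
import json
import re

def repair_model_json(raw):
    """Best-effort repair of model output into parseable JSON."""
    # Strip Markdown code fences the model sometimes wraps around its answer
    text = re.sub(r"`{3}(?:json)?", "", raw).strip()
    # Drop any chatter before the first opening brace
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object in model output")
    text = text[start:]
    # Balance braces: append closers for any the model left unclosed
    missing = text.count("{") - text.count("}")
    if missing > 0:
        text += "}" * missing
    return json.loads(text)
```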

&lt;h3&gt;
  
  
  Model Hallucinations
&lt;/h3&gt;

&lt;p&gt;Some amusing observations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The dog photo mentioned "clothing" and "fashion trends for pets" (the dog had no clothes)&lt;/li&gt;
&lt;li&gt;The beer was classified in the &lt;code&gt;people&lt;/code&gt; array, with &lt;code&gt;estimated_age: "Beer is an alcoholic beverage"&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These quirks don't break the system - robust parsing handles them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Processing Time
&lt;/h3&gt;

&lt;p&gt;~13.6 seconds per image is acceptable for batch processing. Real-time analysis would need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A smaller model (llava:7b is already the smallest LLaVA variant, so this means a different model family)&lt;/li&gt;
&lt;li&gt;GPU acceleration&lt;/li&gt;
&lt;li&gt;Or async processing with user feedback&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Development Approach
&lt;/h2&gt;

&lt;p&gt;This was parallel development across two codebases - with very different approaches for each.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Boring Bits: AI Agents for CRUD
&lt;/h3&gt;

&lt;p&gt;The Rails API work? It's not exciting. Setting up API endpoints, adding pgvector, writing migrations, CRUD operations - I've done this hundreds of times. It's necessary scaffolding, but it's not where I want to spend my brain cycles.&lt;/p&gt;

&lt;p&gt;So I let AI agents handle it. Claude Code with custom agents for code review, test writing, and documentation. The agents handled the boilerplate while I reviewed and approved. This is exactly what AI assistance is good for - augmenting the repetitive work so you can focus on what matters.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Interesting Bits: README-Driven Development
&lt;/h3&gt;

&lt;p&gt;Indexatron was different. This was an experiment - I needed to understand every piece, make deliberate choices, and document as I went. For this, I used README-driven development:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Write the README first&lt;/strong&gt; - Document what the code should do before writing it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One branch per milestone&lt;/strong&gt; - Each branch proves one thing works&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Merge only when it works&lt;/strong&gt; - No moving on until the milestone is complete&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI for documentation&lt;/strong&gt; - Let agents help write up the results&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;README-driven development forces you to think through the design before coding. It's slower, but you end up with working code &lt;em&gt;and&lt;/em&gt; documentation. Perfect for experiments where you need to prove something works.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development Progress
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Indexatron (Python) - The Experiment
&lt;/h3&gt;

&lt;p&gt;README-driven development with one branch per milestone:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;PR&lt;/th&gt;
&lt;th&gt;Milestone&lt;/th&gt;
&lt;th&gt;What It Proved&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/indexatron/pull/5" rel="noopener noreferrer"&gt;#5&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Project Setup&lt;/td&gt;
&lt;td&gt;Foundation ready&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/indexatron/pull/1" rel="noopener noreferrer"&gt;#1&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Ollama Connection&lt;/td&gt;
&lt;td&gt;Local LLM runtime accessible&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/indexatron/pull/2" rel="noopener noreferrer"&gt;#2&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Image Analysis&lt;/td&gt;
&lt;td&gt;LLaVA extracts useful metadata&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/indexatron/pull/3" rel="noopener noreferrer"&gt;#3&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Embeddings&lt;/td&gt;
&lt;td&gt;768-dim vectors for similarity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/indexatron/pull/4" rel="noopener noreferrer"&gt;#4&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Batch Processing&lt;/td&gt;
&lt;td&gt;Scalable to many images&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each branch had to work before moving on. Prove it, merge it, move on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Rails App - The Integration (Agent-Assisted)
&lt;/h3&gt;

&lt;p&gt;While I focused on Indexatron, AI agents handled the Rails infrastructure:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;PR&lt;/th&gt;
&lt;th&gt;Feature&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/the-mcculloughs.org/pull/60" rel="noopener noreferrer"&gt;#60&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;AI Photo Analysis API with pgvector&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Standard API endpoint, database migration, pgvector setup - all the CRUD that's been done a thousand times before. The agents wrote the code, I reviewed it, tests passed, merged. That's the right division of labour: agents handle the predictable, humans handle the novel.&lt;/p&gt;

&lt;h2&gt;
  
  
  Technical Stack
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Ollama (local runtime)
├── llava:7b (~4.7GB) - Vision analysis
└── nomic-embed-text (~274MB) - Embeddings

Python 3.11+
├── ollama - API client
├── pydantic - Data validation
├── pillow - Image handling
└── rich - Console output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Next Steps
&lt;/h2&gt;

&lt;p&gt;This proves the concept works. Future integration:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Rails API&lt;/strong&gt; - Add endpoint for on-demand analysis&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Database Storage&lt;/strong&gt; - Save embeddings in PostgreSQL (pgvector)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Similarity Search&lt;/strong&gt; - Find "photos like this one"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Face Recognition&lt;/strong&gt; - Cluster photos by person (future model)&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;🤖 &lt;strong&gt;The robots can see our photos.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Local LLMs provide a privacy-preserving alternative to cloud APIs for photo analysis. The quality is good enough for family photo organisation, and the 768-dimensional embeddings enable future similarity search features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Repository&lt;/th&gt;
&lt;th&gt;Description&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/indexatron" rel="noopener noreferrer"&gt;swmcc/indexatron&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Python service for local LLM photo analysis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;a href="https://github.com/swmcc/the-mcculloughs.org" rel="noopener noreferrer"&gt;swmcc/the-mcculloughs.org&lt;/a&gt;&lt;/td&gt;
&lt;td&gt;Rails family photo sharing app&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Full experiment results:&lt;/strong&gt; &lt;a href="https://github.com/swmcc/indexatron/blob/main/RESULTS.md" rel="noopener noreferrer"&gt;RESULTS.md&lt;/a&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Built with Ollama, LLaVA, and a healthy scepticism of cloud APIs.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>ollama</category>
      <category>llm</category>
    </item>
    <item>
      <title>Building a Personal Site with Astro</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:41:15 +0000</pubDate>
      <link>https://dev.to/swmcc/building-a-personal-site-with-astro-30oc</link>
      <guid>https://dev.to/swmcc/building-a-personal-site-with-astro-30oc</guid>
      <description>&lt;p&gt;After years of having various iterations of personal websites (and letting them languish), I decided to rebuild from scratch with Astro. Here's why and what I learned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Astro?
&lt;/h2&gt;

&lt;p&gt;I've built sites with Next.js, Gatsby, and plain HTML/CSS over the years. Each has its strengths, but for a personal site that's primarily static content, Astro hits a sweet spot:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Speed by default&lt;/strong&gt; - Ships minimal JavaScript unless you explicitly need it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content-focused&lt;/strong&gt; - Built-in support for Markdown and content collections&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexible&lt;/strong&gt; - Can use React, Vue, or other frameworks when needed&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple deployment&lt;/strong&gt; - Static output works anywhere&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Setup
&lt;/h2&gt;

&lt;p&gt;The project structure is straightforward:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;src&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;content&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;     &lt;span class="c1"&gt;// Markdown content&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;layouts&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;     &lt;span class="c1"&gt;// Page layouts&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;├──&lt;/span&gt; &lt;span class="nx"&gt;components&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;  &lt;span class="c1"&gt;// Reusable components&lt;/span&gt;
&lt;span class="err"&gt;│&lt;/span&gt;   &lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="nx"&gt;pages&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;       &lt;span class="c1"&gt;// Routes&lt;/span&gt;
&lt;span class="err"&gt;└──&lt;/span&gt; &lt;span class="k"&gt;public&lt;/span&gt;&lt;span class="sr"&gt;/          /&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="nx"&gt;Static&lt;/span&gt; &lt;span class="nx"&gt;assets&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Content collections make it easy to define schemas for different types of content (blog posts, notes, etc.) and get TypeScript validation for free.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Like
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Content workflow is excellent.&lt;/strong&gt; Write Markdown, commit, push, done. No CMS, no database, no complexity.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Performance is genuinely good.&lt;/strong&gt; The homepage loads in under a second on a 3G connection. That's without any optimisation beyond Astro's defaults.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dark mode was trivial.&lt;/strong&gt; A bit of CSS custom properties and localStorage. No heavy theme provider or context needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Could Be Better
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TypeScript strictness can be overzealous.&lt;/strong&gt; Sometimes the content collection types get in the way when you know what you're doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The docs assume familiarity with build tools.&lt;/strong&gt; If you're coming from a simple HTML/CSS background, the learning curve might be steeper than necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons Learned
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Start with content.&lt;/strong&gt; I wrote the About and Now pages before styling anything. Helped clarify what the site actually needed to do.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Resist adding features.&lt;/strong&gt; My first draft had tags, categories, search, and analytics. Stripped it all back. Can always add later if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Automate deployment early.&lt;/strong&gt; Set up GitHub Actions from day one. Removes friction from publishing.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Would I Recommend It?
&lt;/h2&gt;

&lt;p&gt;For a personal site focused on writing? Absolutely. For a complex web app? Probably not the right tool.&lt;/p&gt;

&lt;p&gt;Astro excels at what it's designed for: content-heavy sites that should be fast and simple to maintain. That's exactly what I needed.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you're interested in the implementation details, the source code is &lt;a href="https://github.com/swmcc/swmcc.github.io" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>astro</category>
      <category>webdev</category>
      <category>meta</category>
    </item>
    <item>
      <title>TypeScript Conditional Types for API Responses</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:40:15 +0000</pubDate>
      <link>https://dev.to/swmcc/typescript-conditional-types-for-api-responses-2ha5</link>
      <guid>https://dev.to/swmcc/typescript-conditional-types-for-api-responses-2ha5</guid>
      <description>&lt;p&gt;Quick pattern I keep using for API responses that can be either successful or error states:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;type&lt;/span&gt; &lt;span class="nx"&gt;ApiResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;T&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;error&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;string&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="c1"&gt;// Helper to narrow the type&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;isSuccess&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ApiResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;T&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt; &lt;span class="k"&gt;is&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="na"&gt;success&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="nl"&gt;data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;T&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nx"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;success&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Usage&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ApiResponse&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;User&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetchUser&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;isSuccess&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// TypeScript knows result.data exists here&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;name&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// And knows result.error exists here&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The type guard makes the discriminated union much nicer to work with. No need for optional chaining or non-null assertions.&lt;/p&gt;

</description>
      <category>typescript</category>
      <category>types</category>
    </item>
    <item>
      <title>Dropping Down to Raw ASGI</title>
      <dc:creator>Stephen McCullough </dc:creator>
      <pubDate>Wed, 01 Apr 2026 22:39:49 +0000</pubDate>
      <link>https://dev.to/swmcc/dropping-down-to-raw-asgi-1bhp</link>
      <guid>https://dev.to/swmcc/dropping-down-to-raw-asgi-1bhp</guid>
      <description>&lt;p&gt;Building &lt;a href="https://github.com/swmcc/mailview" rel="noopener noreferrer"&gt;mailview&lt;/a&gt;, &lt;code&gt;Mount&lt;/code&gt; looked like the obvious choice for attaching routes at &lt;code&gt;/_mail&lt;/code&gt;. It wasn't.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with Static Mounting
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;Mount&lt;/code&gt; ties routing to application structure:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;routes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="nc"&gt;Mount&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/_mail&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mailview_app&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But mailview shouldn't exist in production. It captures emails, useful in development, a liability anywhere else. With &lt;code&gt;Mount&lt;/code&gt;, you either include the routes or you don't. Conditional mounting means conditional route definitions, which leaks environment logic into your route table.&lt;/p&gt;

&lt;p&gt;That's the deeper issue: &lt;code&gt;Mount&lt;/code&gt; conflates &lt;em&gt;what paths exist&lt;/em&gt; with &lt;em&gt;what behaviour runs&lt;/em&gt;. Those are separate concerns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Middleware Separates Routing from Runtime
&lt;/h2&gt;

&lt;p&gt;Raw ASGI middleware moves the decision to runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__call__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;receive&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;send&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;enabled&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;app&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;receive&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;send&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;

    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_is_mailview_path&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]):&lt;/span&gt;
        &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;_mailview_app&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;receive&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;send&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;

    &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;app&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;scope&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;receive&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;send&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The enable/disable logic lives in the middleware, not the route table. Add the middleware unconditionally; it handles the rest. In production, it's a single boolean check that passes everything through.&lt;/p&gt;

&lt;p&gt;The payoff isn't just cleaner code. When disabled:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No routes registered&lt;/strong&gt;, nothing to accidentally expose&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No OpenAPI pollution&lt;/strong&gt;, &lt;code&gt;/_mail&lt;/code&gt; doesn't appear in your schema&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;No security surface&lt;/strong&gt;, the endpoints don't exist, not just "protected"&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  What Surprised Me
&lt;/h2&gt;

&lt;p&gt;Coming from Ruby's Rack, I expected more ceremony. Rack middleware is similar, &lt;code&gt;call(env)&lt;/code&gt; returns &lt;code&gt;[status, headers, body]&lt;/code&gt;, but the response is synchronous and the contract is more rigid.&lt;/p&gt;

&lt;p&gt;ASGI's receive/send pattern felt odd at first. You're not returning a response; you're calling &lt;code&gt;send&lt;/code&gt; with message dicts. But it means you can stream, intercept partway through, do things that Rack makes awkward.&lt;/p&gt;

&lt;p&gt;The other surprise: how little code it takes. The entire middleware is 40 lines, half of that docstrings and type hints. I expected to miss Starlette's conveniences more than I did.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Boundary Bug Worth Remembering
&lt;/h2&gt;

&lt;p&gt;One subtlety that bit me:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Wrong, matches /_mail-archive, /_mailbox, etc.
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startswith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/_mail&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;

&lt;span class="c1"&gt;# Right, exact match or child paths only
&lt;/span&gt;&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/_mail&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;or&lt;/span&gt; &lt;span class="n"&gt;path&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;startswith&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/_mail/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Obvious in hindsight. Easy to miss when you're pattern-matching paths.&lt;/p&gt;
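&lt;p&gt;Putting the pieces together, the whole pattern fits in one self-contained sketch. The names here are illustrative, not mailview's actual API:&lt;/p&gt;

```python
async def main_app(scope, receive, send):
    # Stand-in for the real application
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"text/plain")]})
    await send({"type": "http.response.body", "body": b"main app"})

class PrefixInterceptMiddleware:
    """Intercept one path prefix; pass everything else through untouched."""

    def __init__(self, app, prefix, handler, enabled=True):
        self.app = app
        self.prefix = prefix
        self.handler = handler
        self.enabled = enabled

    async def __call__(self, scope, receive, send):
        if self.enabled and scope["type"] == "http":
            path = scope["path"]
            # Exact match or child paths only: avoids the /_mailbox bug
            if path == self.prefix or path.startswith(self.prefix + "/"):
                await self.handler(scope, receive, send)
                return
        await self.app(scope, receive, send)
```

&lt;p&gt;Because the disabled path is a single boolean check, the middleware can be added unconditionally and costs effectively nothing in production.&lt;/p&gt;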

&lt;h2&gt;
  
  
  When to Drop Down
&lt;/h2&gt;

&lt;p&gt;I'd reach for raw ASGI middleware again when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The sub-app needs conditional activation based on environment&lt;/li&gt;
&lt;li&gt;You want zero footprint when disabled, no routes, no schema, no surface&lt;/li&gt;
&lt;li&gt;The logic is simple enough that Starlette's abstractions add more than they save&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For anything more complex, authentication, request modification, response transformation, I'd stick with Starlette's &lt;code&gt;BaseHTTPMiddleware&lt;/code&gt;. But for "intercept these paths, let everything else through," raw ASGI is cleaner than I expected.&lt;/p&gt;

</description>
      <category>python</category>
      <category>asgi</category>
      <category>fastapi</category>
    </item>
  </channel>
</rss>
