<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: J.S_Falcon</title>
    <description>The latest articles on DEV Community by J.S_Falcon (@_d3709cf9e80fc6babbff).</description>
    <link>https://dev.to/_d3709cf9e80fc6babbff</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3898219%2F048d871b-c38b-4948-87a5-cc7602c5b123.webp</url>
      <title>DEV Community: J.S_Falcon</title>
      <link>https://dev.to/_d3709cf9e80fc6babbff</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/_d3709cf9e80fc6babbff"/>
    <language>en</language>
    <item>
      <title>SaaS Hacking with an AI Director: Build a Sample, Hand It Off to Specialists</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Thu, 14 May 2026 21:24:47 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/why-trivial-tech-travels-across-industries-the-5-layer-diagonal-axis-engineer-framework-n2p</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/why-trivial-tech-travels-across-industries-the-5-layer-diagonal-axis-engineer-framework-n2p</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A renewal notice arrived for our attendance SaaS: 300 people, ¥200/person/month, ¥720,000 annually.&lt;/li&gt;
&lt;li&gt;The number was acceptable. The problem was fit, not cost — two requirements (actual travel-cost reconciliation; network-based in-office determination) fell outside the standard SaaS spec.&lt;/li&gt;
&lt;li&gt;I rebuilt the system in Google Apps Script (GAS) in a few hours, with AI as the coding resource. &lt;strong&gt;I call this human role an "AI Director."&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;Beyond cost: agility (same-day requirement-to-system), data ownership (lock-in avoidance), and a new development flow — build a working sample, then hand it off to a SIer for production lift.&lt;/li&gt;
&lt;li&gt;Bonus thoughts on where programmer knowledge stays valuable (specialist domains, post-AI quality review). A gold-rush analogy.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Chapter 1: Requirements — Why the SaaS Stopped Fitting
&lt;/h2&gt;

&lt;p&gt;A generic SaaS is designed around business processes shared by many companies. That's a sound design philosophy. But for an organization carrying company-specific logic, the fit breaks down: the business no longer matches what the SaaS provides.&lt;/p&gt;

&lt;p&gt;Two requirements fell outside the standard spec in our case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Travel-expense reconciliation against actuals&lt;/strong&gt;: Specific rules on commuter-pass routes and per-day route determination. The SaaS's default aggregation couldn't handle it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Attendance determination by network&lt;/strong&gt;: We wanted to decide "remote vs. in-office" by whether the punch came through the corporate network. Location-based determination wasn't precise enough.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A generic SaaS offers two ways to cover such requirements: pay for a custom development option, or fall back on a CSV export plus post-processing workaround. The former inflates cost. The latter produces a kind of inverse vendor lock-in: your operations get locked into manual workflows that only you understand.&lt;/p&gt;

&lt;p&gt;The decision was simple. Hire AI as an external coding resource and rebuild the system in GAS to fit our requirements.&lt;/p&gt;

&lt;p&gt;From implementation start to operational verification, it took a few hours. The AI wrote the code. Humans handled requirements and specification adjustment. &lt;strong&gt;In this article, I'll call this human role an "AI Director."&lt;/strong&gt; I use the term in the sense of an individual practitioner directing AI as a coding resource — not the executive "Director of AI" role established in larger organizations. This role overlaps significantly with what's framed as a "Citizen Developer using AI" in adjacent discussions; I emphasize the &lt;strong&gt;directing relationship with the AI&lt;/strong&gt; rather than the citizen vs. professional dichotomy.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chapter 2: Breaking Through Physical Constraints (Ver.1 to Ver.4)
&lt;/h2&gt;

&lt;p&gt;During development, the location-acquisition approach switched four times. Each switch was a discovered constraint and a design decision to work around it.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ver.1: HTML5 Geolocation
&lt;/h3&gt;

&lt;p&gt;GPS on smartphones gives reasonable precision. But punches mostly come from PCs, where Geolocation falls back to Wi-Fi triangulation or IP-based estimation. The errors landed in the kilometer range.&lt;/p&gt;

&lt;p&gt;Not enough precision for in-office determination. Rejected.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ver.2: IP-Based Estimation
&lt;/h3&gt;

&lt;p&gt;I mimicked the fallback approach used by many SaaS products. Call an external API to estimate geographic location from client IP, then convert to an address string.&lt;/p&gt;

&lt;p&gt;The errors here were also large. Carrier routing causes "home" to surface as "near Tokyo Station" in many cases. Not viable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ver.3: Google Maps Reverse Geocoding (Built into GAS)
&lt;/h3&gt;

&lt;p&gt;GAS includes a Maps service. &lt;code&gt;Maps.newGeocoder().reverseGeocode(lat, lng)&lt;/code&gt; performs reverse geocoding at no cost. This removed the need for an external API key for lat/lng-to-address conversion.&lt;/p&gt;

&lt;p&gt;Precision issues remained, but on cost, this had a clear edge over the SaaS.&lt;/p&gt;
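
&lt;p&gt;To make the shape of this concrete, here is a minimal sketch (my illustrative helper, not the system's actual code). The GAS call itself only runs inside Apps Script, so it appears as a comment; the response follows the Google Geocoding API shape, and the helper just pulls a display address out of it defensively:&lt;/p&gt;

```javascript
// In GAS, the reverse-geocode call looks like:
//   const res = Maps.newGeocoder().reverseGeocode(35.681, 139.767);
// Assumption: the response follows the Google Geocoding API shape
// (status, results[].formatted_address). Helper name and fallback
// string are illustrative.
function pickAddress(res) {
  if (!res || res.status !== 'OK') return '(address unavailable)';
  if (!res.results || res.results.length === 0) return '(address unavailable)';
  // First result is the most specific address match.
  return res.results[0].formatted_address;
}
```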

&lt;h3&gt;
  
  
  Ver.4: Master Matching as Proof of Attendance
&lt;/h3&gt;

&lt;p&gt;The final solution wasn't to improve location precision. It was to change the determination axis itself.&lt;/p&gt;

&lt;p&gt;When a punch comes through the corporate network, the global IP is the company router's fixed IP. &lt;strong&gt;Match this IP against a master of corporate-site addresses. If it matches, "in-office" is confirmed.&lt;/strong&gt; Location precision becomes irrelevant to the question.&lt;/p&gt;

&lt;p&gt;This reduces the problem to a deterministic IP-master lookup, so positional error cannot arise by construction. For the narrow purpose of attendance determination, IP matching answers the requirement more accurately than Geolocation.&lt;/p&gt;
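
&lt;p&gt;The determination logic is small enough to sketch in a few lines of plain JavaScript (IPs are placeholders from the documentation ranges; names are illustrative, not the production code):&lt;/p&gt;

```javascript
// Master of corporate-site fixed global IPs (placeholder addresses
// from the RFC 5737 documentation ranges, not real sites).
const SITE_MASTER = {
  '203.0.113.10': 'Tokyo HQ',
  '198.51.100.7': 'Osaka Branch'
};

// Deterministic lookup: if the punch arrived via a corporate router,
// its global IP matches the master and "in-office" is confirmed.
function classifyPunch(globalIp) {
  const site = SITE_MASTER[globalIp];
  if (site) {
    return { status: 'in-office', site: site };
  }
  // No match: treat as remote and fall back to address recording.
  return { status: 'remote', site: null };
}
```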

&lt;h2&gt;
  
  
  Chapter 3: What the System Does
&lt;/h2&gt;

&lt;p&gt;The final form has:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One-tap punch&lt;/strong&gt;: Browser, one button, clock-in/out recorded.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic location recording&lt;/strong&gt;: IP master confirms "in-office"; otherwise Reverse Geocoding records the address string.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated monthly aggregation&lt;/strong&gt;: Per-person attendance-day counts, computed from corporate-site master matching.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data ownership&lt;/strong&gt;: Punch data accumulates in our own Google Spreadsheet. No export operation. Even after cancelling the SaaS, the data remains.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The last point matters for vendor lock-in avoidance. With a SaaS, cancellation typically means losing access to historical data, or being constrained to CSV dumps. Internal development places the data structure itself under our control.&lt;/p&gt;
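
&lt;p&gt;The monthly aggregation is equally plain. A hedged sketch of the counting step (field names are illustrative; the real system reads rows from the spreadsheet): count distinct in-office days per person, so multiple punches on the same day are not double-counted:&lt;/p&gt;

```javascript
// Count in-office attendance days per person from punch rows like
// { person, date, status }. Field names are illustrative.
function countOfficeDays(punches) {
  const days = {};  // person -> Set of dates with an in-office punch
  for (const p of punches) {
    if (p.status !== 'in-office') continue;
    if (!days[p.person]) days[p.person] = new Set();
    days[p.person].add(p.date);  // Set deduplicates clock-in/clock-out
  }
  const counts = {};
  for (const person of Object.keys(days)) {
    counts[person] = days[person].size;
  }
  return counts;
}
```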

&lt;h2&gt;
  
  
  Chapter 4: AI's Asymmetric Leverage
&lt;/h2&gt;

&lt;p&gt;Most AI-coding-support articles frame productivity as "the programmer becomes N× faster."&lt;/p&gt;

&lt;p&gt;That's a vertical-axis story. Within the same role, throughput rises. Faster typing, faster code review, faster refactoring.&lt;/p&gt;

&lt;p&gt;What happened here was a different direction of leverage.&lt;/p&gt;

&lt;p&gt;When a person with domain knowledge uses AI, a business system that previously went through "business side → engineer outsourcing → implementation → review" can be built directly, skipping the intermediate steps.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;User&lt;/th&gt;
&lt;th&gt;How AI is used&lt;/th&gt;
&lt;th&gt;Effect&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Programmer&lt;/td&gt;
&lt;td&gt;Throughput improvement in the same role&lt;/td&gt;
&lt;td&gt;N× productivity along the vertical axis&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain expert&lt;/td&gt;
&lt;td&gt;Skipping intermediate steps, reaching implementation directly&lt;/td&gt;
&lt;td&gt;"Couldn't implement" becomes "can implement"; non-linear arrival&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The second form of leverage is asymmetric in market terms. Programmer productivity gains accelerate competition among programmers. The domain expert's AI use crosses the boundary of professional categories entirely.&lt;/p&gt;

&lt;p&gt;Acceleration along the vertical axis, versus crossing the boundary. These are leverages of different types. &lt;strong&gt;The AI Director embodies the latter.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Chapter 5: Using the Sample as a Deliverable — The Specialist Handoff Option
&lt;/h2&gt;

&lt;p&gt;First, pack all the attendance requirements into a sample system that works end-to-end for at least one user. From there, you could spend time building login management, change-request approval flow, database management, and other production-grade elements yourself.&lt;/p&gt;

&lt;p&gt;But at this point, handing the work off to a SIer (system integrator) is a viable option.&lt;/p&gt;

&lt;p&gt;You have everything the business wants captured, a working sample, and the GAS code. This amounts to having completed requirements definition, part of the basic design, functional design, and detailed design.&lt;/p&gt;

&lt;p&gt;If you simply ask the SIer to turn this into a system that all employees can use, the cost should be significantly lower than traditional outsourcing.&lt;/p&gt;

&lt;p&gt;Further, if a proper SIer delivers a production-quality system along with specifications and system-design documents as deliverables, the only remaining changes are scale adjustments (employee-count changes) and rare-case incorporation. By &lt;strong&gt;feeding the specifications and system design back into AI as the current environment&lt;/strong&gt;, you may be able to handle those changes yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build the logic yourself, let specialists raise it to production quality, and own the deliverables.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The current AI isn't a magical do-anything tool — it's an extension of your own thinking. Recognize that your own limit is AI's limit. When a problem doesn't yield in 30 minutes, list it as an open issue. Then have others solve it. Or have them review and suggest a path forward.&lt;/p&gt;

&lt;p&gt;Conventional system development, with AI in the middle reducing the number of open issues: that is where I see future system development heading.&lt;/p&gt;

&lt;h2&gt;
  
  
  Chapter 6: Conclusion
&lt;/h2&gt;

&lt;p&gt;Saving ¥720,000 annually isn't the primary outcome here.&lt;/p&gt;

&lt;p&gt;The primary outcome is the ability to rewrite the system the moment business requirements shift. Instead of waiting for the SaaS vendor to add features, the system follows on the day you write the requirements. This agility isn't fully captured by cost calculations.&lt;/p&gt;

&lt;p&gt;In the AI coding era, the market value of "being able to write code" is in relative decline. What rises is the ability to define what should be built, instruct the AI correctly, and verify the output.&lt;/p&gt;

&lt;p&gt;It's less a new profession than the arrival of an era in which people who already hold business knowledge can — by having AI as an implementation layer — restructure their own business domain as a system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The ability to operate as an AI Director — defining requirements, having AI build, and integrating into operations — fits as one of the basic survival strategies in today's business environment.&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Bonus: Where Programmer Knowledge Stays Valuable
&lt;/h2&gt;

&lt;p&gt;This article leans toward the AI Director perspective. Let me also organize patterns where programmer knowledge stays valuable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Specialization in Areas Where AI Falls Short
&lt;/h3&gt;

&lt;p&gt;Real-time control, embedded systems, security, formal verification, mission-critical domains. Running AI-written code without verification isn't socially acceptable in these areas. Fields requiring depth of specialist knowledge and robustness remain programmer territory.&lt;/p&gt;

&lt;p&gt;This isn't a domain an AI Director can cross into. It's a direction in which programmers become purely stronger.&lt;/p&gt;

&lt;h3&gt;
  
  
  Monetizing Specialist Knowledge
&lt;/h3&gt;

&lt;p&gt;Separately, I had AI draft a contract and asked a lawyer for the final review. With 2-3 comments, it was done for ¥5,000.&lt;/p&gt;

&lt;p&gt;This is one illustration of how specialists sit in the AI era. Even when AI does the drafting, specialist review retains its value — legal responsibility, hallucination mitigation, expert judgment.&lt;/p&gt;

&lt;p&gt;The same structure applies to programmers. The sample that an AI Director builds needs SIer/programmer review at the stage of lifting it to production quality. The role shifts from "time spent writing code from scratch" to "time spent evaluating code and raising it to production quality."&lt;/p&gt;

&lt;p&gt;In an era where production has become easy, you don't have to stay on the production side. You can move to the side that raises quality, or builds the systems. A gold-rush analogy fits: rather than swinging a pickaxe to dig for gold, you can be the one selling pickaxes, or refining the gold.&lt;/p&gt;

&lt;p&gt;AI now covers the areas that have historically consumed programmers' time. The opened-up time can be redirected toward deepening knowledge of methodology and algorithms. Let AI handle production. Move to the side that critiques as a specialist. This may be the model pattern for programmers and, by extension, all specialists.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>googleappsscript</category>
      <category>saas</category>
    </item>
    <item>
      <title>Why 'Trivial' Tech Travels Across Industries: The 5-Layer Diagonal-Axis Engineer Framework</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Sun, 10 May 2026 06:55:29 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/why-trivial-tech-travels-across-industries-the-5-layer-diagonal-axis-engineer-framework-5fcn</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/why-trivial-tech-travels-across-industries-the-5-layer-diagonal-axis-engineer-framework-5fcn</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I'm a non-coding engineer (operations background) who builds tools through AI collaboration.&lt;/li&gt;
&lt;li&gt;This post argues that &lt;strong&gt;non-coding engineers — not programmers — are positioned as AI's biggest beneficiaries&lt;/strong&gt; in the current era.&lt;/li&gt;
&lt;li&gt;The diagonal-axis engineer thrives via 5 layers: dimensional crossover, value asymmetry, handover ability, ownership avoidance (AI-to-AI handover spec), and diagram-as-source.&lt;/li&gt;
&lt;li&gt;"Trivial" tech (pandas + Excel batch files) wins in business contexts because operators can pick it up and adapt it.&lt;/li&gt;
&lt;li&gt;The market value of the AI era boils down to &lt;strong&gt;the ability to travel across industries&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  1. An Ode to "Trivial" Tech, Continued from This Morning
&lt;/h2&gt;

&lt;p&gt;This morning I wrote about solving the "can't send business data to LLMs" problem with one line of pandas + Faker. (See the earlier post: &lt;a href="https://dev.to/_d3709cf9e80fc6babbff/how-i-built-a-masking-tool-without-showing-ai-any-real-data-column-wise-shuffling-as-the-scaffold-1og1"&gt;How I Built a Masking Tool Without Showing AI Any Real Data&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;In the programmer world, that's:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"pandas? Yeah, sure."&lt;/li&gt;
&lt;li&gt;"Faker? Old hat."&lt;/li&gt;
&lt;li&gt;"Column-wise shuffling? Standard de-identification."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;— completely &lt;strong&gt;trivial tech&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;But in the non-engineer world:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"This eliminates 8 hours of work per month."&lt;br&gt;
"I can discuss it with AI without sending real data."&lt;br&gt;
"I have a scaffold for the masking tool now."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;is the response. The same tech becomes either "trivial" or "revolutionary" depending on the evaluation axis.&lt;/p&gt;

&lt;p&gt;This &lt;strong&gt;value asymmetry&lt;/strong&gt; is no accident — it's a structural feature of the AI collaboration era. The thesis of this post:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Non-coding engineers, not programmers, are positioned to become the biggest beneficiaries of the AI era. This is realized by arming themselves with the 5-layer structure of the diagonal axis.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  2. Vertical, Horizontal, Diagonal: Three Patterns of AI Use
&lt;/h2&gt;

&lt;p&gt;First, let's organize 4 positions on 2 axes (coding ability × domain knowledge):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;quadrantChart
    title Position of the Diagonal-Axis Engineer
    x-axis "Coding Low" --&amp;gt; "Coding High"
    y-axis "Domain Low" --&amp;gt; "Domain High"
    quadrant-1 "Diagonal-Axis Engineer"
    quadrant-2 "Domain Owner"
    quadrant-3 "Non-Coding Engineer"
    quadrant-4 "Programmer"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The movement from the bottom-left "Non-Coding Engineer" to the top-right "Diagonal-Axis Engineer" is the theme of this post. Building on this matrix, let's look at three patterns of AI use:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Axis&lt;/th&gt;
&lt;th&gt;User&lt;/th&gt;
&lt;th&gt;AI's Role&lt;/th&gt;
&lt;th&gt;Effect&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Vertical (same-dimension acceleration)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Programmer&lt;/td&gt;
&lt;td&gt;Speed-up of same skill&lt;/td&gt;
&lt;td&gt;2x faster typing / refactoring&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Horizontal (same-dimension expansion)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Programmer&lt;/td&gt;
&lt;td&gt;Skill width expansion&lt;/td&gt;
&lt;td&gt;Frontend specialist learning DB design&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Diagonal (dimension crossover)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Non-coding engineer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Compensating missing capabilities&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;"Can design but can't code" → reaches implementation independently&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Vertical + Horizontal = linear expansion that doubles the slope of the graph.&lt;br&gt;
Diagonal = a non-linear shift where the line itself diverges (paradigm shift).&lt;/p&gt;

&lt;p&gt;The leverage effect is &lt;strong&gt;Diagonal &amp;gt;&amp;gt;&amp;gt; Vertical + Horizontal&lt;/strong&gt;. 2x speedup vs 0 → 1 crossover — the latter is overwhelmingly larger.&lt;/p&gt;

&lt;p&gt;The diagram of how to reach the diagonal-axis engineer position:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;graph LR
    ZERO["Non-Coding Engineer&amp;lt;br&amp;gt;Coding:× Domain:×"]
    DOM["Domain Owner&amp;lt;br&amp;gt;Coding:× Domain:○"]
    PROG["Programmer&amp;lt;br&amp;gt;Coding:○ Domain:×"]
    DIAG["Diagonal-Axis Engineer&amp;lt;br&amp;gt;Implementation via AI&amp;lt;br&amp;gt;Domain:○"]

    ZERO --&amp;gt;|Field Experience| DOM
    ZERO --&amp;gt;|Vertical: Coding Study| PROG
    DOM ==&amp;gt;|Diagonal: AI Collaboration - Short Path| DIAG
    PROG -.-&amp;gt;|Diagonal: Domain Acquisition - Slow Path| DIAG

    style DIAG fill:#9f9,stroke:#393,stroke-width:3px
    style DOM fill:#ff9
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The thick line shows the short path (Domain Owner via AI collaboration). The dashed line shows the slow path (Programmer acquiring domain knowledge). The defining feature of the AI collaboration era: the latter requires time and motivation, while the former has its technical gap filled by AI.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. The 5-Layer Structure of the Diagonal-Axis Engineer
&lt;/h2&gt;

&lt;p&gt;"Can't code but can reach implementation via AI" alone isn't enough armor. Winning in the market requires all 5 layers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;#&lt;/th&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Effect&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Dimension Crossover&lt;/td&gt;
&lt;td&gt;Reach implementation without coding&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Value Asymmetry&lt;/td&gt;
&lt;td&gt;Trivial tech matters in business contexts&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;Handover Ability&lt;/td&gt;
&lt;td&gt;Simple implementations spread across industries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Ownership Avoidance&lt;/td&gt;
&lt;td&gt;AI-generated specs let maintenance also depend on AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;Diagram-as-Source&lt;/td&gt;
&lt;td&gt;Common language between AI and humans&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The 5-layer dependency:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;graph TD
    L1["Layer 1 Dimension Crossover&amp;lt;br&amp;gt;Reach implementation without coding"]
    L2["Layer 2 Value Asymmetry&amp;lt;br&amp;gt;Trivial tech matters in business"]
    L3["Layer 3 Handover Ability&amp;lt;br&amp;gt;Spreads across industries"]
    L4["Layer 4 Ownership Avoidance&amp;lt;br&amp;gt;AI specs let maintenance depend on AI"]
    L5["Layer 5 Diagram-as-Source&amp;lt;br&amp;gt;Common language between AI and humans"]

    L1 --&amp;gt; L2 --&amp;gt; L3 --&amp;gt; L4 --&amp;gt; L5
    L5 -.-&amp;gt; NEW["New Paradigm&amp;lt;br&amp;gt;Diagram is truth, code is AI's derivative"]

    style L1 fill:#fef
    style L5 fill:#fc9
    style NEW fill:#9f9,stroke:#393,stroke-width:3px
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Layer 1 "implementation reach" is the starting point; the higher you stack, the stronger your market position. Past Layer 5, you exit into a new development paradigm.&lt;/p&gt;

&lt;p&gt;Let me walk through each layer.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 1: Dimension Crossover
&lt;/h3&gt;

&lt;p&gt;For someone who can design but can't code, AI is the &lt;strong&gt;implementation means&lt;/strong&gt; itself. Areas they previously had to delegate to others, they can now reach independently. &lt;strong&gt;0 → 1 crossover&lt;/strong&gt; happens.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 2: Value Asymmetry
&lt;/h3&gt;

&lt;p&gt;The opening point. Implementations the programmer world rates as "trivial" or "low-grade" land as business impact in the non-engineer world. The same tech reverses value depending on the evaluation axis.&lt;/p&gt;

&lt;p&gt;In other words, &lt;strong&gt;the more "low-grade" tech is judged to be, the wider its market lands in business contexts&lt;/strong&gt;. Programmers tend to lock themselves into a world that competes on technical sophistication, ending up clinging to a narrow market.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 3: Handover Ability
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Implementation Complexity&lt;/th&gt;
&lt;th&gt;Required Personnel&lt;/th&gt;
&lt;th&gt;Handover Ability&lt;/th&gt;
&lt;th&gt;Industry Reach&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;High / Complex&lt;/td&gt;
&lt;td&gt;High-skill programmer&lt;/td&gt;
&lt;td&gt;Low (locked-in)&lt;/td&gt;
&lt;td&gt;Trapped in 1 industry&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;General engineer&lt;/td&gt;
&lt;td&gt;Medium&lt;/td&gt;
&lt;td&gt;Mid-scale deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Simple (batch + Excel)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Operator&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;High&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Industry-wide&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The simpler the implementation, the more it reaches the field and becomes the gateway to the industry. The artifacts a vibe coder produces structurally land at "simple implementations" (since the coder doesn't read the code, they can't make it complex). Asking AI to "build it as a double-clickable batch file" → it converges naturally to moderate complexity → field handover ability emerges as a side effect.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 4: Ownership Avoidance
&lt;/h3&gt;

&lt;p&gt;Even with simple implementations and field handover ability, maintenance is a separate problem. AI collaboration structurally avoids the &lt;strong&gt;ownership trap&lt;/strong&gt; that programmers have struggled with for a hundred years.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is an AI-generated spec&lt;/strong&gt;: A handover document for &lt;strong&gt;another account's AI&lt;/strong&gt; (a different Claude / Gemini etc.) to read and become capable of modifying / maintaining the code. Even if the original vibe coder leaves, a new maintainer can bring in a different AI and ask "please maintain this code" — and the system lives on. This isn't the traditional "human → human" handover; it's an &lt;strong&gt;"AI → AI" handover&lt;/strong&gt; by design.&lt;/p&gt;

&lt;p&gt;This builds a 2-layer maintenance structure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Layer 1: Simple implementation → operator can touch it directly&lt;/li&gt;
&lt;li&gt;Layer 4: AI-generated spec → new maintainer / AI can take over&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The revolutionary point of AI specs is being &lt;strong&gt;interactive, not static&lt;/strong&gt;. The new maintainer's AI can be asked "how does this system work?". The system can be understood via AI without reading the code.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 5: Diagram-as-Source
&lt;/h3&gt;

&lt;p&gt;Layer 4 secures the "AI → AI" handover from one maintainer to the next, but &lt;strong&gt;for in-progress collaboration, where humans and AI must share an understanding mid-build&lt;/strong&gt;, one more layer is needed.&lt;/p&gt;

&lt;p&gt;So we make diagrammatic specs (Mermaid flowcharts / logic trees) the &lt;strong&gt;source of truth&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;Natural language spec = soft (volatile, interpretation drifts)&lt;br&gt;
&lt;strong&gt;Diagrammatic spec = hard (the structure itself is the source, humans and AI look at the same diagram)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Humans and AI can &lt;strong&gt;converse in the same diagram language&lt;/strong&gt;. Instructions like "change the Yes branch of this gate" land instantly. Visual patterns make dead-code paths and unused nodes obvious.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Prerequisites — Without a Verification Eye, the Diagonal Axis Becomes an AI-Dependent Caricature
&lt;/h2&gt;

&lt;p&gt;The 5-layer structure says "you can win without coding," but this is easy to misread.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;th&gt;Necessity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ability to write code&lt;/td&gt;
&lt;td&gt;Not required&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ability to structure the system as a whole&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Required&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verification eye for output validity&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Required&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ability to cross-reference industry terminology (survey-driven)&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Required&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If these are missing, you can't write good prompts for AI, can't verify results, and end up mass-producing buggy deliverables.&lt;/p&gt;

&lt;p&gt;In other words, &lt;strong&gt;"Domain Owner using AI ≠ Diagonal-Axis Engineer."&lt;/strong&gt; Between them sits a filter of these 3 elements; those who can't cross it remain at "AI-dependent caricature" level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vibe coder = a non-programmer with a verification eye&lt;/strong&gt; — that's the proper redefinition.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Devil's Advocate
&lt;/h2&gt;

&lt;p&gt;The 5-layer structure isn't bulletproof:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI-generated spec reliability&lt;/strong&gt;: AI may write things that are factually wrong, with no verification possible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Commercial LLM dependency&lt;/strong&gt;: service shutdown or model change can collapse reproducibility&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Diagram literacy prerequisite&lt;/strong&gt;: diagrammatic specs require "the ability to read them"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Short-term vs long-term&lt;/strong&gt;: as programmers extend into the diagonal axis, the advantage erodes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each requires individual countermeasures (human review / local LLM redundancy / coexisting static documents), but none of these overturn the post's core thesis.&lt;/p&gt;

&lt;h3&gt;
  
  
  Responses to Anticipated Pushback
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;"Can't anyone become a diagonal-axis engineer just by using AI as a domain owner?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→ No. Without the 3 elements shown in Section 4 (structuring ability / verification eye / survey ability), you become a "vibe coder without a verification eye" who blindly trusts AI output and mass-produces buggy deliverables. &lt;strong&gt;Domain Owner + AI ≠ Diagonal-Axis Engineer&lt;/strong&gt;. There's a prerequisite filter between them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Won't programmers eventually become diagonal-axis engineers by acquiring domain knowledge?"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;→ Yes, but &lt;strong&gt;it takes time&lt;/strong&gt;. For programmers to acquire specific business domain knowledge, on-the-job training + experience accumulation + dialogue with the business side are required, and this is dramatically slower than AI collaboration's immediate effect (the technical gap filled by AI). The dashed line in the path diagram represents this time gap. The defining feature of the AI collaboration era is this &lt;strong&gt;asymmetry: domain knowledge acquisition still takes time, while the technical gap is instantly filled by AI&lt;/strong&gt;. That's why domain-owner-originated diagonal-axis engineers hold first-mover advantage.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. My Case — Operations × Cross-Domain Double Cross
&lt;/h2&gt;

&lt;p&gt;In my case, I sit at the top-right of the matrix by acquiring business knowledge through on-the-job experience first, then having AI write the code. Engineers with system construction or operations experience can usually map out routine business flows themselves, and reaching the diagonal-axis position only requires consulting AI for the parts that can be made more efficient.&lt;/p&gt;

&lt;p&gt;I come from an operations background, and my stance is to carry operational discipline (flow establishment / efficiency / monitoring / handover) into business improvements in other domains.&lt;/p&gt;

&lt;p&gt;That is:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;Structure&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1 (Diagonal Axis)&lt;/td&gt;
&lt;td&gt;Can't code, but reach implementation via AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2 (Domain Cross)&lt;/td&gt;
&lt;td&gt;Operations expertise × cross-domain business flow&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Diagonal Axis × Domain Cross = double leverage&lt;/strong&gt;. AI collaboration compensates for the diagonal-axis weakness (can't code), and operations expertise brings a fresh perspective into other domains.&lt;/p&gt;

&lt;p&gt;This is the core market value of the cross-domain engineer. The reason I feel "I can travel across industries" lies in this double cross.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. Conclusion — A New Development Paradigm
&lt;/h2&gt;

&lt;p&gt;Programmer era: &lt;strong&gt;Code is truth, documentation is auxiliary&lt;/strong&gt;&lt;br&gt;
Vibe coder era: &lt;strong&gt;Diagram is truth, code is AI's derivative, natural language is auxiliary&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And the market value of the AI collaboration era boils down to the &lt;strong&gt;ability to travel across industries&lt;/strong&gt;. Tech that draws "is that all?" responses is what becomes the gateway to industry adoption.&lt;/p&gt;

&lt;p&gt;An ode to "trivial" tech, you might call it.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>productivity</category>
      <category>engineering</category>
    </item>
    <item>
      <title>How I Built a Masking Tool Without Showing AI Any Real Data: Column-wise Shuffling as the Scaffold</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Sun, 10 May 2026 00:27:53 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/how-i-built-a-masking-tool-without-showing-ai-any-real-data-column-wise-shuffling-as-the-scaffold-1og1</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/how-i-built-a-masking-tool-without-showing-ai-any-real-data-column-wise-shuffling-as-the-scaffold-1og1</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I never write code or send real data to LLMs — but I built a complete data-masking tool through AI collaboration.&lt;/li&gt;
&lt;li&gt;The technique: column-wise independent shuffling (Japan PPC's official anonymization method) plus Faker replacement.&lt;/li&gt;
&lt;li&gt;Four phases: send column names → run shuffling batch → manually craft sample CSV → send sample for Faker batch + structural review.&lt;/li&gt;
&lt;li&gt;Key discipline: survey naive ideas in industry terminology before having AI implement — that alone compresses code 10x.&lt;/li&gt;
&lt;li&gt;The output is a tool I trigger by double-click. I never read the Python.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. The "Can't Send to LLM" Wall
&lt;/h2&gt;

&lt;p&gt;Across my field notes, I've kept saying the same things:&lt;br&gt;
"Don't send business data to LLMs."&lt;br&gt;
"Only sanitized samples go to AI."&lt;/p&gt;

&lt;p&gt;But &lt;strong&gt;how exactly do I sanitize the data?&lt;/strong&gt;&lt;br&gt;
That methodology has never been spelled out. So here it is —&lt;br&gt;
a self-asked, self-answered post.&lt;/p&gt;




&lt;p&gt;I wanted to build a new masking tool. I wanted to discuss it&lt;br&gt;
with Claude or Gemini, showing real data and asking&lt;br&gt;
"how would you mask this column?"&lt;/p&gt;

&lt;p&gt;But the rule is firm: no business data goes to LLMs.&lt;/p&gt;

&lt;p&gt;Just describing the logic verbally doesn't land —&lt;br&gt;
LLMs need to see the data shape.&lt;br&gt;
Hand-crafting fake data is torture (you have to reproduce&lt;br&gt;
empty-cell patterns, spelling variants, full-width/half-width&lt;br&gt;
character mixes, and so on).&lt;/p&gt;

&lt;p&gt;What I needed: &lt;strong&gt;data that looks real but can't identify anyone&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Naive Idea: Column-by-Column Shuffle
&lt;/h2&gt;

&lt;p&gt;My first idea was simple:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"What if I shuffle each column independently?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you shuffle each column on its own:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Each value remains real (format perfectly preserved)&lt;/li&gt;
&lt;li&gt;Row-level combinations are destroyed (records can't be reconstructed)&lt;/li&gt;
&lt;li&gt;Per-column statistical properties are preserved (distributions intact)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For 100 customer rows, shuffle the name column, address column,&lt;br&gt;
and amount column separately. The combination&lt;br&gt;
"John Smith / 123 Main St / $12,345" disappears,&lt;br&gt;
but each value still exists somewhere.&lt;/p&gt;

&lt;p&gt;That should make individual identification impossible.&lt;/p&gt;
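&lt;p&gt;As a sketch (illustrative column names, not the tool's actual code), the whole idea fits in one pandas expression:&lt;/p&gt;

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "name": ["John Smith", "Jane Doe", "Bob Lee"],
    "address": ["123 Main St", "9 Oak Ave", "7 Elm Rd"],
    "amount": [12345, 980, 4521],
})

# Shuffle every column independently: each column's values and
# distribution survive, but row-level combinations do not.
masked = df.apply(lambda col: np.random.permutation(col.to_numpy()))
```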

&lt;p&gt;But before implementing, I &lt;strong&gt;surveyed&lt;/strong&gt; first.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. The Survey Reveals: Industry Standard
&lt;/h2&gt;

&lt;p&gt;"Naive idea → immediate implementation" is forbidden discipline&lt;br&gt;
(see my earlier field guide on&lt;br&gt;
&lt;a href="https://dev.to/_d3709cf9e80fc6babbff/what-operations-discipline-brings-to-ai-assisted-coding-a-cross-domain-field-guide-2067"&gt;ops discipline in AI-assisted coding&lt;/a&gt;).&lt;br&gt;
Translate the naive idea into industry terminology, then search.&lt;/p&gt;

&lt;p&gt;Searching "column-wise shuffle + anonymization + technical term":&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Column-wise Independent Shuffling&lt;/strong&gt;&lt;br&gt;
A de-identification technique offered by Oracle Data Safe,&lt;br&gt;
Talend, Tonic.ai, and others.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;And surprisingly, Japan codifies it too:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Japan's Personal Information Protection Commission (PPC) lists&lt;br&gt;
"shuffling" explicitly in its anonymization guidelines:&lt;br&gt;
"Probabilistically swapping records constituting the personal&lt;br&gt;
information database among themselves."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So my naive idea was literally PPC's official method.&lt;br&gt;
Survey complete. Time to implement.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. AI Collaboration in Four Phases
&lt;/h2&gt;

&lt;p&gt;A premise I should make explicit — &lt;strong&gt;I don't write a single line of code.&lt;/strong&gt;&lt;br&gt;
As a vibe coder, I have AI write it for me.&lt;/p&gt;

&lt;p&gt;But the rule "no business data to LLMs" applies, so I can't just send&lt;br&gt;
the real data and say "shuffle this please."&lt;br&gt;
So I do it in four phases.&lt;/p&gt;




&lt;h3&gt;
  
  
  Phase 1: Send Only Column Names → Get a Tool Built
&lt;/h3&gt;

&lt;p&gt;I can't send the real data, but &lt;strong&gt;I can send the column names&lt;/strong&gt;&lt;br&gt;
(structure, not PII).&lt;/p&gt;

&lt;p&gt;Prompt to LLM:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Schema: customerID / name / address / building name / company / amount&lt;br&gt;
Requirement: Shuffle each column independently, destroy row combinations&lt;br&gt;
Build it as a batch file (.bat) that runs on double-click&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The LLM produced a batch file + internal script + input/output folders&lt;br&gt;
as a complete bundle. What lands on my desk: &lt;strong&gt;a tool that runs on double-click.&lt;/strong&gt;&lt;br&gt;
I don't read the Python inside.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 2: Verify Operation → One Bug Surfaces
&lt;/h3&gt;

&lt;p&gt;I drop real data into the input folder, double-click the batch file,&lt;br&gt;
open the output CSV in Excel.&lt;/p&gt;

&lt;p&gt;Something's off. The shuffle is supposedly happening, but row-level&lt;br&gt;
combinations look intact — each row resembles its original ordering.&lt;/p&gt;

&lt;p&gt;I report to the LLM:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The double-click ran fine, but the output CSV doesn't look shuffled.&lt;br&gt;
Each row resembles the original order.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The LLM's instant reply:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The internal seed is shared across all columns. We need a different&lt;br&gt;
seed per column. Fixing.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I receive the fixed batch file, double-click → combinations are now&lt;br&gt;
destroyed. OK.&lt;/p&gt;

&lt;p&gt;What looked correct on paper failed in practice.&lt;br&gt;
The AI confidently said "looks right on paper" too,&lt;br&gt;
so &lt;strong&gt;practical verification is the human's role&lt;/strong&gt;.&lt;/p&gt;
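&lt;p&gt;The failure is easy to reproduce. If the script builds a fresh generator with the same seed for every column, each column receives the identical permutation and every original row survives intact; letting one generator advance across columns fixes it. A minimal sketch of the assumed bug, not the generated script itself:&lt;/p&gt;

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"name": list("ABCD"), "amount": [10, 20, 30, 40]})

# Buggy: same seed re-created per column -&gt; identical permutation
# for every column -&gt; row combinations are preserved, not destroyed.
buggy = df.apply(lambda col: np.random.default_rng(42).permutation(col.to_numpy()))

# Fixed: one generator advances across columns -&gt; each column
# gets an independent permutation.
rng = np.random.default_rng(42)
fixed = df.apply(lambda col: rng.permutation(col.to_numpy()))
```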

&lt;h3&gt;
  
  
  Phase 3: Build the Sample CSV
&lt;/h3&gt;

&lt;p&gt;From the shuffled output, I pull just 10 rows and manually replace the&lt;br&gt;
surnames and building names with arbitrary characters in Excel.&lt;br&gt;
This erases the last traces of real data.&lt;/p&gt;

&lt;p&gt;The sample CSV now has only the column structure and shape of data —&lt;br&gt;
no real-data trace remains. &lt;strong&gt;Only at this point does it become material&lt;br&gt;
I can send to the LLM.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 4: Send the Sample CSV → Get the Faker Batch Built
&lt;/h3&gt;

&lt;p&gt;I send the sample CSV to the LLM with a follow-up request:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Based on this sample, add a Faker-based replacement step for&lt;br&gt;
name / address / building / company. Same batch file should handle it.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The LLM integrated Faker (&lt;code&gt;ja_JP&lt;/code&gt; locale, but the same applies in any&lt;br&gt;
locale) and, for fields Faker doesn't support (e.g., apartment building&lt;br&gt;
names like "Alpha Omega Place"), wrote a custom generator using&lt;br&gt;
katakana + suffixes (producing names like "Nikikenawatower").&lt;/p&gt;

&lt;p&gt;While reading the sample, the LLM also notices:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Your "product" keyword rule for Faker-replacement is over-matching:&lt;br&gt;
"productID", "productStock", "productCategory" are getting hit too.&lt;br&gt;
Switch to a two-stage detection (include + exclude keywords).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This wasn't a perspective I would have spotted alone.&lt;br&gt;
&lt;strong&gt;I use AI twice&lt;/strong&gt; — once for the shuffling batch (built from column&lt;br&gt;
names alone), and once for the Faker batch + structural review (built&lt;br&gt;
from the sample CSV).&lt;/p&gt;

&lt;p&gt;The LLM rewrote the matching logic from "keyword-match → apply" into&lt;br&gt;
"keyword-match → exclusion-check → apply" before producing the Faker&lt;br&gt;
batch. Double-click the new batch file → Faker processing completes&lt;br&gt;
without any over-match. Done.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Four-Phase Role Split
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Phase&lt;/th&gt;
&lt;th&gt;What I do&lt;/th&gt;
&lt;th&gt;What AI does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Phase 1: Build shuffling batch&lt;/td&gt;
&lt;td&gt;Send column names as prompt&lt;/td&gt;
&lt;td&gt;Build the complete batch tool&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase 2: Verify operation → Fix bug&lt;/td&gt;
&lt;td&gt;Click / verify in Excel / report&lt;/td&gt;
&lt;td&gt;Identify bug cause and fix&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase 3: Build sample CSV&lt;/td&gt;
&lt;td&gt;Pull 10 rows / manually edit surnames and building names&lt;/td&gt;
&lt;td&gt;(not involved)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Phase 4: Build Faker batch&lt;/td&gt;
&lt;td&gt;Send sample CSV to LLM / click / verify&lt;/td&gt;
&lt;td&gt;Build Faker batch + structural review (resolve over-match)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I never read the Python. I never send real data to LLMs.&lt;br&gt;
&lt;strong&gt;Double-click → open in Excel → report to AI.&lt;/strong&gt; Four phases through&lt;br&gt;
this loop and the tool is finished.&lt;/p&gt;

&lt;p&gt;This is what AI collaboration looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Legal Positioning (Internal Use OK, Outsourcing Gets Tricky)
&lt;/h2&gt;

&lt;p&gt;A brief touch on the legal positioning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal use&lt;/strong&gt; (LLM discussion / internal analysis) is generally fine.&lt;br&gt;
The scrambled output is unidentifiable enough to substantially reduce&lt;br&gt;
privacy risk in most jurisdictions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Handling client data as a contractor&lt;/strong&gt; is where it gets tricky.&lt;br&gt;
The framing differs by jurisdiction:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Japan&lt;/strong&gt; classifies this as "entrusted processing" under
Article 27(5)(i) of the Personal Information Protection Act,
an exception to third-party transfer rules.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;EU/UK&lt;/strong&gt; treats it as a Data Processor / Data Controller
relationship under GDPR, with a Data Processing Agreement (DPA)
under Article 28 specifying the processing scope.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;US&lt;/strong&gt; uses HIPAA's Business Associate Agreement (BAA) for
healthcare data, or contractual data-handling clauses for
general PII.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The common pattern: &lt;strong&gt;contract language determines compliance&lt;/strong&gt;.&lt;br&gt;
Whichever jurisdiction you operate in, have your legal team review&lt;br&gt;
the "sanitization purpose and scope of use" clauses explicitly.&lt;/p&gt;

&lt;p&gt;In short: contracts get complicated, so legal review is recommended&lt;br&gt;
for contract work. I won't go deeper than that here.&lt;/p&gt;

&lt;h2&gt;
  
  
  6. Closing
&lt;/h2&gt;

&lt;p&gt;What the AI collaboration era needs is a scaffold tool that converts&lt;br&gt;
&lt;strong&gt;"data you can't share"&lt;/strong&gt; into &lt;strong&gt;"samples you can share."&lt;/strong&gt;&lt;br&gt;
That tool fits in one line of pandas plus Faker.&lt;/p&gt;

&lt;p&gt;And the post's thesis:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Don't have AI implement your naive idea immediately —&lt;br&gt;
survey it in industry terminology first, then implement.&lt;/strong&gt;&lt;br&gt;
That discipline compresses code volume by 10x.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If I hadn't known that "column-wise shuffle" is PPC's official&lt;br&gt;
"shuffling" method, I would have asked the LLM to "generate random&lt;br&gt;
names with Faker, build a consistency dictionary, maintain referential&lt;br&gt;
integrity, ..." — full from-scratch implementation.&lt;br&gt;
In reality, one line of pandas was enough.&lt;/p&gt;

&lt;p&gt;Survey-driven discipline. In the AI collaboration era, what matters&lt;br&gt;
on the human side is &lt;strong&gt;the ability to cross-reference industry knowledge.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>privacy</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why I Run Two AIs Against Each Other: An Ops Engineer's View on AI Governance</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Sat, 02 May 2026 01:04:53 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/why-i-run-two-ais-against-each-other-an-ops-engineers-view-on-ai-governance-30km</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/why-i-run-two-ais-against-each-other-an-ops-engineers-view-on-ai-governance-30km</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I run two different AIs (Claude and Gemini) against each other, with myself as a human router carrying messages between them. No auto-orchestration framework.&lt;/li&gt;
&lt;li&gt;The setup is a complementary view to Dev-centric AI automation tools, not a replacement. Right tool for the right job.&lt;/li&gt;
&lt;li&gt;Operations background suggested treating this as a two-layer design: internal diversity (prompts within one model) and external diversity (cross-vendor models). Both layers contribute, neither alone is enough.&lt;/li&gt;
&lt;li&gt;Five practices and five caveats below — drawn from one engineer's one-month operation, framed as a hypothesis, not a recipe.&lt;/li&gt;
&lt;li&gt;Each part of this series stands alone — Part 1 is the entity resolution case study, Part 2 is the AI collaboration patterns, this is the architecture-level view.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. The Problem: When Multi-AI Becomes an Echo Chamber
&lt;/h2&gt;

&lt;p&gt;Multi-agent AI setups have been everywhere for the last year. AutoGen, CrewAI, LangGraph, and others let you spin up several agents with role prompts (planner, reviewer, executor) and let them talk to each other. Useful, fast, automated.&lt;/p&gt;

&lt;p&gt;There's a quiet failure mode, though. When all the agents share the same underlying model, "multi-agent" can drift into "multi-prompt-on-one-model." The agents end up reasoning from the same training distribution, hitting the same blind spots, agreeing too quickly. You get the appearance of diverse perspectives without actual diversity.&lt;/p&gt;

&lt;p&gt;I noticed this on my own setup. I was using Claude Code for development and asking the same Claude to play "devil's advocate" against its own proposals. Most of the time the devil's advocate was thoughtful, but on hard questions it tended to go quiet — the model couldn't really argue against itself when the alternatives lived in the same prompt context.&lt;/p&gt;

&lt;p&gt;That's where the cross-vendor experiment started.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. An Operations View: Internal vs External Diversity
&lt;/h2&gt;

&lt;p&gt;I come from an operations / systems engineering background. When I look at an AI workflow, I tend to ask the questions an SRE would ask: where are the single points of failure? What happens on Day 2? What does the audit trail look like?&lt;/p&gt;

&lt;p&gt;Through that lens, multi-AI setups have two layers of diversity, and they're not interchangeable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal diversity&lt;/strong&gt; is what you get from prompts inside a single model. "You are now arguing the opposite case." "Critique this design as a security reviewer." "List three reasons to reject this proposal." The model switches voices but keeps the same underlying reasoning substrate. Useful, cheap, fast.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;External diversity&lt;/strong&gt; is what you get from a different vendor's model. Claude and Gemini don't share weights. Their training data overlaps but isn't identical. Their bias profiles differ. When Claude proposes a design and Gemini critiques it, the critique comes from a different statistical posture, not a different mood from the same speaker.&lt;/p&gt;

&lt;p&gt;These are not equivalent. They're substantially comparable for the surface task (producing diverse views), but they differ on operational properties:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Property&lt;/th&gt;
&lt;th&gt;Internal diversity&lt;/th&gt;
&lt;th&gt;External diversity&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Audit independence&lt;/td&gt;
&lt;td&gt;Single context log&lt;/td&gt;
&lt;td&gt;Independent logs per model&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Vendor lock-in&lt;/td&gt;
&lt;td&gt;High (one vendor's model)&lt;/td&gt;
&lt;td&gt;Low (cross-vendor)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Failure isolation&lt;/td&gt;
&lt;td&gt;Context corruption affects all roles&lt;/td&gt;
&lt;td&gt;Independent failure domains&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Real parallelism&lt;/td&gt;
&lt;td&gt;Sequential within one context&lt;/td&gt;
&lt;td&gt;True parallel calls possible&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;So the design I ended up with is two-layer: internal diversity inside each model, plus external diversity across vendors. The two layers compound. Neither alone reproduces what both together produce.&lt;/p&gt;

&lt;p&gt;The transport between models, in my setup, is email. Claude proposes; I copy the proposal into a script that emails it to a Gemini-readable inbox; Gemini reads, critiques, and responds; I paste the response back to Claude. Slow on purpose. We'll come back to that.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1: Five Practices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Practice 1 — Force adversarial framing into prompts explicitly
&lt;/h3&gt;

&lt;p&gt;When you ask a model "what do you think of this?" the default answer is usually a polite agreement. That's not what cross-review is for.&lt;/p&gt;

&lt;p&gt;In every cross-vendor exchange, I include explicit phrasing that requires the responder to argue against the proposal. The wording I actually use, more or less verbatim:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Identify the weakest link in this design.&lt;br&gt;
Give three reasons to reject this proposal.&lt;br&gt;
Where would this hypothesis fail?&lt;br&gt;
If you find yourself agreeing, name the assumption you are least sure about.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Without that prompt-level push, the second AI tends to mirror the first, which is the failure mode the whole setup is supposed to prevent.&lt;/p&gt;

&lt;p&gt;This sounds obvious until you watch a model respond to a soft prompt and realize how much friction the polite default introduces.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practice 2 — Adopt an audit-first protocol (Seq / Re style) with hard-stop on gap
&lt;/h3&gt;

&lt;p&gt;Every message between Claude and Gemini in my system carries a header: &lt;code&gt;[MAAR-Session: &amp;lt;topic&amp;gt; | Seq: N | Re: M]&lt;/code&gt;. Sequence number from the sender, reply-to number for the message being answered. Borrowed straight from TCP and email threading.&lt;/p&gt;

&lt;p&gt;The point is not the format. The point is that any message is self-documenting about which conversation thread it belongs to, which message it answers, and what came before. When something goes wrong — a misrouted reply, a missing message, an out-of-order paste — the gap is visible.&lt;/p&gt;

&lt;p&gt;When the gap shows up, the protocol response is &lt;strong&gt;hard-stop&lt;/strong&gt;: pause the conversation, request retransmission, do not paper over the missing piece. That's the operations-side instinct showing up — a missing log line stops the change window, period. "Best-effort continue" is the failure mode that produces silent data loss in production; it's no better in AI workflows.&lt;/p&gt;

&lt;p&gt;In a fully automated multi-agent system, this kind of bookkeeping is implicit in the framework. In a human-routed system, you need it explicitly. The cost is small (a few characters in each header). The benefit is the audit trail you actually get, plus the explicit stop signal when reality drifts from the protocol.&lt;/p&gt;
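&lt;p&gt;Gap detection on those headers is mechanical. A sketch under my own assumptions (the header format is from this post; the parsing is illustrative, not a published tool):&lt;/p&gt;

```python
import re

SEQ = re.compile(r"Seq: (\d+)")

def missing_seqs(headers):
    """Return sequence numbers absent from a thread; any hit means hard-stop."""
    seqs = sorted(int(SEQ.search(h).group(1)) for h in headers)
    return [n for n in range(seqs[0], seqs[-1] + 1) if n not in set(seqs)]
```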

&lt;h3&gt;
  
  
  Practice 3 — Use vendor-neutral persona prompts
&lt;/h3&gt;

&lt;p&gt;The prompts I send to Claude and Gemini for the same role (e.g., "critique this design") are kept as close to identical as possible. Same wording, same structure, same evaluation criteria. Differences in the responses come from differences in the models, not from differences in how I asked.&lt;/p&gt;

&lt;p&gt;This matters because it lets me actually compare the two outputs. If Claude pushes back hard and Gemini agrees, I know that's a real signal about the proposal — not an artifact of having asked Gemini in a softer way.&lt;/p&gt;

&lt;p&gt;The temptation is to tune each prompt to the strengths of the model. Resist it. Vendor-neutral prompts give you a comparable signal across vendors, which is the whole point.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practice 4 — Switch between polite mode and adversarial mode
&lt;/h3&gt;

&lt;p&gt;Most of my cross-reviews run in what I call "polite mode": measured language, restrained framing, "please consider" phrasing. That's the right default for normal review.&lt;/p&gt;

&lt;p&gt;But sometimes the second AI agrees with everything I say. That's when I switch to "adversarial mode" deliberately: explicit framing that this is a thought experiment, instructions to drop politeness, demand for a forced disagreement, even (carefully) raising questions about whether the model has biases that explain its agreement.&lt;/p&gt;

&lt;p&gt;The mode switch is intentional, time-boxed, and announced inside the prompt. It's not the default — overdoing it produces performative dissent (more on that in the caveats). But used sparingly, it's the mechanism that breaks the over-agreement spiral when it shows up.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practice 5 — Hold to hypothesis discipline (sample-size humility)
&lt;/h3&gt;

&lt;p&gt;This whole setup is N=1. One engineer, one month, one workflow. That doesn't make it wrong — but it doesn't make it a recipe either.&lt;/p&gt;

&lt;p&gt;In every external description of this setup, including the article you're reading, I try to keep the framing as "individual observation, presented as a hypothesis." Avoid words like &lt;em&gt;paradigm&lt;/em&gt;, &lt;em&gt;grand theory&lt;/em&gt;, &lt;em&gt;the right way&lt;/em&gt;. The discipline is partly about honesty (these claims aren't tested at scale yet) and partly about staying open to evidence that breaks the model.&lt;/p&gt;

&lt;p&gt;If the hypothesis is right, it'll show its strength against contrary cases over time. If it's wrong, the framing makes it easier to walk it back without losing face.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 2: Five Caveats
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Caveat 1 — Sample size of one
&lt;/h3&gt;

&lt;p&gt;This is one engineer's one-month experiment. Patterns described as "practices" here might fail to generalize, might depend on my specific workflow or domain, might not survive contact with a different setup. Read the practices as hypotheses, not recommendations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Caveat 2 — Right tool for the right job
&lt;/h3&gt;

&lt;p&gt;Auto-orchestration frameworks (AutoGen, CrewAI, LangGraph) exist for good reasons. They're faster, cheaper, and better-suited to many use cases. The human-routed setup described here is a complement, not a replacement. If your task fits an autonomous loop, use one. The two-layer design is most useful where audit independence and cross-vendor signal matter more than throughput.&lt;/p&gt;

&lt;h3&gt;
  
  
  Caveat 3 — Internal and external diversity are not fully equivalent
&lt;/h3&gt;

&lt;p&gt;Internal diversity (prompt-based persona switching) covers a substantial portion of what external diversity provides — but not all of it. Audit independence, vendor lock-in resistance, failure isolation, and real parallelism are properties that internal diversity simply cannot match. Claiming "we got the same effect with one model" is a category error.&lt;/p&gt;

&lt;h3&gt;
  
  
  Caveat 4 — Performative dissent risk
&lt;/h3&gt;

&lt;p&gt;If you push the second model into adversarial mode too often, it learns to manufacture disagreement. You get pushback that's syntactically critical but substantively empty. The mode switch only works because it's the exception, not the default. Used as a routine technique, it produces noise instead of signal.&lt;/p&gt;

&lt;h3&gt;
  
  
  Caveat 5 — Maintenance overhead is real
&lt;/h3&gt;

&lt;p&gt;Audit trails, mode switching, header conventions, hypothesis framing — this is more discipline than a casual workflow. The overhead is justified for high-stakes decisions and design reviews, less so for everyday tasks. If you adopt the practices, calibrate which ones are worth the cost in your context.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;Three things I've taken from running this for a month:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Internal and external diversity compose.&lt;/strong&gt; Either alone is a partial defense against single-model bias. Together they cover more of the failure surface than the sum of the parts would suggest.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The transport doesn't have to be fancy.&lt;/strong&gt; Email and copy-paste are slow on purpose. If that reads as primitive — that's the design. The friction isn't from inability to automate the transport; it's a deliberate cognitive checkpoint between the two models. Slowness is the feature: it gives me time to actually read the response before forwarding it, and it forces the audit trail to be a thing I look at, not a thing the framework hides from me.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discipline beats automation for governance work.&lt;/strong&gt; Auto-orchestration is faster. Auto-orchestration is also harder to audit, harder to debug, and harder to explain to a compliance reviewer. For governance-shaped tasks, the slow path wins on the metrics that matter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The whole construction is a hypothesis. I'd trade it for a tool that does the same job better tomorrow.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Part 1 of this series — &lt;a href="https://dev.to/_d3709cf9e80fc6babbff/beating-250000-mental-comparisons-a-cross-domain-engineers-entity-resolution-case-study-42n4"&gt;the entity resolution case study&lt;/a&gt; — is the concrete build that prompted this whole reflection.&lt;/p&gt;

&lt;p&gt;Part 2 — the AI collaboration patterns from an operations lens — sits alongside this one and covers the session-time discipline.&lt;/p&gt;

&lt;p&gt;A future part will cover the protocol design itself: the Seq/Re headers, the TTL-based session control, the gap detection and hard-stop rules. That's where the human-routed design earns its keep.&lt;/p&gt;

&lt;p&gt;Comments welcome — particularly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cross-vendor multi-AI patterns you've tried, and what surprised you.&lt;/li&gt;
&lt;li&gt;Cases where internal diversity &lt;em&gt;was&lt;/em&gt; enough (counter-evidence to the two-layer claim).&lt;/li&gt;
&lt;li&gt;Audit and governance experiences with auto-orchestration frameworks (where they shone, where they didn't).&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>discuss</category>
      <category>watercooler</category>
    </item>
    <item>
      <title>What Operations Discipline Brings to AI-Assisted Coding: A Cross-Domain Field Guide</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Wed, 29 Apr 2026 13:13:13 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/what-operations-discipline-brings-to-ai-assisted-coding-a-cross-domain-field-guide-2067</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/what-operations-discipline-brings-to-ai-assisted-coding-a-cross-domain-field-guide-2067</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;I moved from operations / systems engineering into the software side via AI collaboration. Part 1 of this series (&lt;a href="https://dev.to/_d3709cf9e80fc6babbff/beating-250000-mental-comparisons-a-cross-domain-engineers-entity-resolution-case-study-42n4"&gt;the entity resolution case study&lt;/a&gt;) is the build; this is the methodology.&lt;/li&gt;
&lt;li&gt;Five practices and five anti-patterns, filtered through an ops lens — but the lessons generalize.&lt;/li&gt;
&lt;li&gt;Not "AI tips you've heard." Patterns that fall out naturally if you treat AI sessions like config reviews, runbooks, and validation procedures.&lt;/li&gt;
&lt;li&gt;Each piece is paired with a real misstep I made building Part 1.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Each part of this series stands alone.&lt;/strong&gt; Read in any order.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why an "Operations Discipline" Lens
&lt;/h2&gt;

&lt;p&gt;Operations engineers spend their careers internalizing four habits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Plan before you build&lt;/strong&gt; — designs, runbooks, change requests.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Verify before you declare done&lt;/strong&gt; — validation procedures, post-change checks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Document state&lt;/strong&gt; — configs, design docs, postmortems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Suspect numbers&lt;/strong&gt; — every monitoring datapoint hides an artifact.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These habits transfer directly to working with AI coding assistants. The disciplines you learned debugging routers, filing change requests, and reviewing configs are the same ones that prevent AI sessions from sliding off the rails.&lt;/p&gt;

&lt;p&gt;I'm framing this through ops because that's the lens I learned from. Most of these patterns generalize beyond ops — software engineers, data engineers, and SREs will recognize them. The ops version just happens to package them tightly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Part 1: Five Practices
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Practice 1 — Treat your CLAUDE.md (or system prompt) as a design-spec preamble
&lt;/h3&gt;

&lt;p&gt;In ops, every change procedure has a preamble: prerequisites, scope, rollback steps, validation checks. Same energy in AI work.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt; is Claude Code's persistent instruction file. (Other assistants have equivalents — system prompts, custom instructions, etc.) Use it the way you'd use a runbook preamble:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Operating principles&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Always plan before implementing.
&lt;span class="p"&gt;-&lt;/span&gt; Confirm ambiguous instructions before coding.
&lt;span class="p"&gt;-&lt;/span&gt; Always provide a counter-argument when proposing a design.
&lt;span class="p"&gt;-&lt;/span&gt; Never report a metric without showing how it was measured.
&lt;span class="p"&gt;-&lt;/span&gt; Distinguish "should work" from "actually verified to work."
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once written, every future session inherits these rules. You stop re-explaining yourself. This is the same template-then-reuse pattern that saves you from rewriting a runbook for every change window.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practice 2 — Demand a Devil's Advocate, every time
&lt;/h3&gt;

&lt;p&gt;Design reviews exist because group-think kills production systems. Force the AI to argue against itself in every proposal.&lt;/p&gt;

&lt;p&gt;Three asks I bake into every meaningful design conversation:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;em&gt;What's the worst-case failure mode of this design?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;What use case did you not consider?&lt;/em&gt;&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Give me three reasons to reject this design.&lt;/em&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bake this requirement into your &lt;code&gt;CLAUDE.md&lt;/code&gt; and you stop seeing pure agreement. An AI that only agrees with you is a single point of failure.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practice 3 — Force ambiguous instructions to be confirmed before implementation
&lt;/h3&gt;

&lt;p&gt;In ops requirements gathering, "implement loose specs" is a known disaster pattern. The same is true for AI sessions, where ambiguity gets resolved silently — and usually wrong.&lt;/p&gt;

&lt;p&gt;Real example from Part 1: I said "treat the ID and the display name as a pair, match if either is present." The AI interpreted that as two independent search keys. Half the matcher had to be rebuilt.&lt;/p&gt;

&lt;p&gt;Lesson, written into &lt;code&gt;CLAUDE.md&lt;/code&gt;: &lt;em&gt;if an instruction has two valid readings, ask which one I mean before writing code.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This is the same habit as a senior network engineer asking "do you mean inbound or outbound?" before touching the firewall.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practice 4 — Separate "theoretical evaluation" from "real-world evaluation"
&lt;/h3&gt;

&lt;p&gt;Ops engineers know the gap between "the spec says it works" and "I've watched the LED light up." The same gap exists in AI work, and it's wider than you'd think.&lt;/p&gt;

&lt;p&gt;Real example from Part 1: the AI claimed about 99.2% recall based on past-data pattern analysis. I asked for an actual run on the real dataset. The actual recall came back at 55%.&lt;/p&gt;

&lt;p&gt;The lesson is not "the AI lied." The lesson is that &lt;em&gt;pattern-analysis predictions are not the same as a real execution result.&lt;/em&gt; Every claim that sounds like a measurement deserves the question: &lt;em&gt;was this measured, or estimated?&lt;/em&gt; If estimated, label it that way and move on; if measured, show the run.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practice 5 — Have the AI write its own verification scripts
&lt;/h3&gt;

&lt;p&gt;If the AI says "this code achieves 99% recall," ask it to write the script that measures that recall. Then run it.&lt;/p&gt;

&lt;p&gt;This converts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A claim → a script.&lt;/li&gt;
&lt;li&gt;A script → an audit trail.&lt;/li&gt;
&lt;li&gt;An audit trail → reproducibility.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is the same pattern as runbooks: a change procedure and a validation procedure, always paired. The validation script becomes a permanent artifact you can hand to the next person — or to your future self when something regresses.&lt;/p&gt;
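&lt;p&gt;As a sketch of what the "claim → script" conversion can look like: a minimal recall check against rows a human already marked. The function name, row IDs, and input shape here are invented for illustration; this is not the production script.&lt;/p&gt;

```python
# Hypothetical sketch: turn a recall claim into a measured number.
# Inputs are row IDs: what the tool flagged vs. what a human marked
# as true errors on the same historical data. Names are illustrative.

def measure_recall(flagged_ids, truth_ids):
    """Recall = true errors the tool caught / all true errors."""
    truth = set(truth_ids)
    if not truth:
        raise ValueError("no ground-truth errors to measure against")
    caught = truth.intersection(flagged_ids)
    return len(caught) / len(truth)

# Replay against data the humans already reconciled:
recall = measure_recall(flagged_ids=["r1", "r2", "r5"],
                        truth_ids=["r1", "r2", "r3", "r5"])
print(f"recall: {recall:.1%}")  # recall: 75.0%
```

&lt;p&gt;Once a script like this exists, "what's the recall?" stops being a conversation and becomes a command.&lt;/p&gt;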

&lt;h2&gt;
  
  
  Part 2: Five Anti-Patterns
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Anti-Pattern 1 — "Just build me a tool"
&lt;/h3&gt;

&lt;p&gt;The AI equivalent of "fix the network." Without scope, the AI invents one. Worse, it pursues that invented scope confidently, so the wrong direction advances at full speed.&lt;/p&gt;

&lt;p&gt;Treat session start like requirements gathering: rough goal, key constraints, what's explicitly out of scope. Five minutes of scoping saves five hours of rework.&lt;/p&gt;

&lt;h3&gt;
  
  
  Anti-Pattern 2 — Trusting headline numbers without verifying composition
&lt;/h3&gt;

&lt;p&gt;"99% recall" sounds great until you discover it was measured on cherry-picked rows, with the test set leaking into training data, on a metric that doesn't reflect the actual user experience.&lt;/p&gt;

&lt;p&gt;Before reporting any number, ask:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How was this measured?&lt;/li&gt;
&lt;li&gt;On what data?&lt;/li&gt;
&lt;li&gt;Under what conditions?&lt;/li&gt;
&lt;li&gt;With what biases?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is the same suspicion you apply to a monitoring dashboard reporting zero alerts: &lt;em&gt;is the agent actually reporting, or is it dead?&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Anti-Pattern 3 — Throwing raw error text at the AI without context
&lt;/h3&gt;

&lt;p&gt;"It doesn't work" → "Why?"&lt;/p&gt;

&lt;p&gt;In ops you'd never debug a router by saying "it's down." You'd attach: configuration, status output, syslog excerpts, behavior of connected devices.&lt;/p&gt;

&lt;p&gt;Same here. The AI cannot infer your environment. Show the command, the actual output, the expected behavior, and the deviation. Treat each interaction like a bug report you'd file with a vendor.&lt;/p&gt;

&lt;h3&gt;
  
  
  Anti-Pattern 4 — Sending business data to an AI without compliance review
&lt;/h3&gt;

&lt;p&gt;Default assumption: any data you put into a prompt may be retained, indexed, or used in training, regardless of what the vendor's marketing copy says.&lt;/p&gt;

&lt;p&gt;The operational habit is straightforward — redact, mask, or synthesize. The same instinct that keeps you from posting customer IPs to Stack Overflow should stop you from pasting customer rows into a prompt.&lt;/p&gt;

&lt;p&gt;(Part 1 covers this pattern in depth as it applied to the entity resolution build. The short version: deterministic logic touches the data; the AI touches only code, design notes, and synthetic samples.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Anti-Pattern 5 — Stopping at "it works"
&lt;/h3&gt;

&lt;p&gt;"The code runs" is not the same as "I understand why it runs."&lt;/p&gt;

&lt;p&gt;The ops version of this is: &lt;em&gt;a configuration that worked once but I can't explain is a future incident.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Make the AI explain why the working solution actually works. If neither of you can defend the design after one cycle of follow-up questions, treat it as a yellow flag — not a green light. Ship explainable code; the unexplained kind owns you on the day it breaks.&lt;/p&gt;

&lt;h2&gt;
  
  
  Wrap-Up
&lt;/h2&gt;

&lt;p&gt;The pattern across all ten:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Apply ops discipline to AI sessions.&lt;/li&gt;
&lt;li&gt;Treat AI claims like vendor claims — verify them in your environment.&lt;/li&gt;
&lt;li&gt;Treat AI conversations like change windows — preamble, scope, verification, postmortem.&lt;/li&gt;
&lt;li&gt;Treat AI outputs like config diffs — explain them or reject them.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What I'm explicitly &lt;strong&gt;not&lt;/strong&gt; claiming:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;These are not unique to operations engineers. They generalize. They just happen to package tightly through the ops lens because the discipline is already there.&lt;/li&gt;
&lt;li&gt;These are not the only practices. Ten is a lossy compression. The set you'd build for your environment may differ in detail.&lt;/li&gt;
&lt;li&gt;These cover the &lt;strong&gt;build phase&lt;/strong&gt; of AI-assisted work — the session-time discipline. &lt;strong&gt;Day 2 operations&lt;/strong&gt; (monitoring AI-generated code in production, detecting silent drift, incident response when AI-assisted changes break) is its own discipline and deserves its own article. The patterns here are necessary but not sufficient for production AI usage.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The big idea: AI doesn't replace engineering judgment — it amplifies it. Amplifying lazy judgment produces more bad code, faster. Amplifying disciplined judgment produces clear, audited, defensible work.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;A future part of this series will cover &lt;strong&gt;how the design review for these articles actually happened&lt;/strong&gt; — a Multi-AI Adversarial Review (MAAR) loop where Claude and a second AI argued against each other under human routing. That's the meta-process behind both Part 1 and this one.&lt;/p&gt;

&lt;p&gt;If you came in via this article, &lt;a href="https://dev.to/_d3709cf9e80fc6babbff/beating-250000-mental-comparisons-a-cross-domain-engineers-entity-resolution-case-study-42n4"&gt;Part 1&lt;/a&gt; is the concrete build that produced these lessons.&lt;/p&gt;

&lt;p&gt;Comments welcome — particularly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The five practices or anti-patterns you'd add.&lt;/li&gt;
&lt;li&gt;Cross-domain engineering experiences (any technical background → another).&lt;/li&gt;
&lt;li&gt;Cases where ops discipline did &lt;em&gt;not&lt;/em&gt; transfer cleanly to AI work.&lt;/li&gt;
&lt;li&gt;Rollback strategies when an AI-assisted change corrupts your codebase or repo state.&lt;/li&gt;
&lt;li&gt;Day 2 operations practices for AI-generated code in production (monitoring, drift detection, incident response).&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>watercooler</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Beating 250,000 Mental Comparisons: A Cross-Domain Engineer's Entity Resolution Case Study</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Wed, 29 Apr 2026 11:13:41 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/beating-250000-mental-comparisons-a-cross-domain-engineers-entity-resolution-case-study-42n4</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/beating-250000-mental-comparisons-a-cross-domain-engineers-entity-resolution-case-study-42n4</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Operations/Systems engineer recently moved to the software side via AI collaboration.&lt;/li&gt;
&lt;li&gt;Built a domain-specific entity resolution tool in a handful of evening sessions with Claude Code.&lt;/li&gt;
&lt;li&gt;Caught about 99.2% of human-detected reconciliation errors when replayed against 8 weeks of historical data.&lt;/li&gt;
&lt;li&gt;Turned a "skilled-veterans-only" weekly task into something anyone on the team can run.&lt;/li&gt;
&lt;li&gt;Design retrofitted unexpectedly well to dual process theory, Gestalt psychology, and anchoring-bias defense.&lt;/li&gt;
&lt;li&gt;Source business records never reached an LLM. Deterministic pipeline + human review only.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. The Hidden Problem: When 500 × 500 Becomes a Cognitive Wall
&lt;/h2&gt;

&lt;p&gt;Many companies maintain the same business entities across multiple systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A retailer tracks SKUs in an internal master AND on Amazon / Rakuten / Shopify exports.&lt;/li&gt;
&lt;li&gt;A clinic carries patient records in both an EMR and an insurance billing system.&lt;/li&gt;
&lt;li&gt;A manufacturer holds internal inventory but also receives partner inventory feeds.&lt;/li&gt;
&lt;li&gt;An accounting team reconciles general ledger entries against bank statements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These pairs need periodic reconciliation. In the technical literature this is &lt;strong&gt;Entity Resolution&lt;/strong&gt; or &lt;strong&gt;Data Reconciliation&lt;/strong&gt; — a universal problem that nearly every mid-to-large business hits eventually.&lt;/p&gt;

&lt;p&gt;The case study here uses the &lt;strong&gt;retail SKU vs marketplace listing&lt;/strong&gt; framing. (The actual industry I work in is intentionally abstracted, but the structure transfers cleanly.) Two systems, ~500 rows each, weekly reconciliation. Skilled humans needed about 3 hours per week. Newcomers, half a day to a full day. Hidden detail: the small row count masks the real difficulty.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is 500 × 500 hard?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The 250,000 problem
&lt;/h4&gt;

&lt;p&gt;Manually reconciling 500 × 500 pairs forces a person to evaluate up to &lt;strong&gt;250,000 combinations&lt;/strong&gt; in their head. Not 1,000 — 250,000. Plus typo tolerance, format variation (full-width vs half-width, mixed scripts, abbreviations, punctuation), and partial matches. Each pairwise judgment is not O(1).&lt;/p&gt;

&lt;p&gt;Brute-forcing this is computationally similar to running a 1,000-node full-mesh ping check instead of a flat 1,000-node liveness check: O(n²) pair probes versus O(n), roughly 500 times the load at the same node count.&lt;/p&gt;
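&lt;p&gt;The arithmetic behind both comparisons is easy to make concrete. A count-only sketch, no real data involved:&lt;/p&gt;

```python
# Naive cross-system reconciliation: every pair is a potential judgment.
n = 500
pairs = n * n
print(pairs)  # 250000

# The network analogy: flat liveness sweep vs. full-mesh ping check.
nodes = 1000
liveness = nodes                      # one probe per node
full_mesh = nodes * (nodes - 1) // 2  # one probe per unordered pair
print(full_mesh // liveness)          # 499, roughly 500x the load
```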

&lt;h4&gt;
  
  
  Working memory overflow
&lt;/h4&gt;

&lt;p&gt;Miller's "magical number" puts our short-term memory at 7 ± 2 chunks (Miller, 1956). Hunting matches across 1,000 candidates with format drift continuously overflows working memory and pegs System 2 (slow thinking) for the entire session. The 3-hour exhaustion experienced by veterans isn't a complaint — it's a neurological inevitability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Short to do" doesn't equal "easy to do"&lt;/strong&gt; for cognitive labor.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reproducibility decay
&lt;/h4&gt;

&lt;p&gt;A one-off reconciliation can be brute-forced. But when the task repeats weekly across 10+ weeks, judgment drift becomes unavoidable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Last week I matched 'A Co.' and 'A. Company' as the same entity. This week I treated them as different."&lt;/li&gt;
&lt;li&gt;"Last week I tolerated typo X. This week I rejected it."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This drift is what really breaks data quality long-term. It's the same structural failure mode as "config review standards differ by reviewer" in infrastructure operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  The actual target
&lt;/h3&gt;

&lt;p&gt;So the real problem the tool solved was not "shorten 3 hours per week" but:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;250,000 judgments × 10 weeks of consistent reproducibility — a quality bar humans can't physically sustain — backed by a deterministic machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Plus removing the skill dependency. "Only one veteran can do this in 3 hours" is a single point of failure. After the tool: anyone could run it with consistent quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Background: Who I Am and What I Was Solving
&lt;/h2&gt;

&lt;p&gt;I'm an Operations/Systems engineer. Configuration, validation, runbook authoring, monitoring, troubleshooting — that side of the house. Software development was not my primary craft, though scripting was always part of the job.&lt;/p&gt;

&lt;p&gt;I'd recently moved into a new business domain (about 2 months in) and the tooling target system was something I'd only been touching for ~1 month. From the user side I'd seen the workflow longer, but not as a developer.&lt;/p&gt;

&lt;p&gt;Translation: design / validation / runbook discipline solid. Python and application development essentially unfamiliar.&lt;/p&gt;

&lt;p&gt;This article is &lt;strong&gt;not a "look what I shipped" piece&lt;/strong&gt;. It's a record of how operations-side disciplines transferred unchanged into AI-assisted software work in an unfamiliar domain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Who this article is for
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Reader&lt;/th&gt;
&lt;th&gt;Useful sections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Operations / SRE engineers exploring AI assistance&lt;/td&gt;
&lt;td&gt;Everything&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mid-career engineers moving across technical domains&lt;/td&gt;
&lt;td&gt;Background, Architecture, Cognitive Design&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Engineers new to AI-assisted development&lt;/td&gt;
&lt;td&gt;Architecture, Cognitive Design, PII&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Managers thinking about AI for their teams&lt;/td&gt;
&lt;td&gt;Results and the cognitive-load argument&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  3. PII / Compliance Considerations
&lt;/h2&gt;

&lt;p&gt;A question that always comes up in comments on entity-resolution articles: &lt;strong&gt;where does the data go?&lt;/strong&gt; Worth answering up front.&lt;/p&gt;

&lt;p&gt;In this implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source business records never reach any LLM.&lt;/strong&gt; Both input files (internal master + external system export) are read locally by a Python script.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Matching is fully deterministic.&lt;/strong&gt; Pandas, openpyxl, and &lt;code&gt;difflib.SequenceMatcher&lt;/code&gt; for similarity. No embedding API. No remote inference at runtime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The LLM's role is code-side, not data-side.&lt;/strong&gt; Claude Code helped write the matching logic, the validation scripts, the design review, and the documentation. None of the actual records were ever sent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For testing only&lt;/strong&gt;, masked synthetic data was used in prompts. Real names, amounts, and addresses were replaced with synthetic equivalents before any prompt left the local environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge cases stay with humans.&lt;/strong&gt; When the deterministic pipeline can't decide, it surfaces a flagged row for human review — not for LLM second opinion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation is intentional. The matching task is well-suited to deterministic logic. LLMs would only add cost, latency, and compliance exposure for no quality gain.&lt;/p&gt;

&lt;p&gt;If your team has even a soft "no business data into external AI" policy, this pattern is fully compatible.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Architecture: Two-Stage Matching + Cognitive Gates
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.11&lt;/li&gt;
&lt;li&gt;pandas + openpyxl (Excel I/O, color-coded output)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;difflib.SequenceMatcher&lt;/code&gt; for fuzzy similarity&lt;/li&gt;
&lt;li&gt;Rule-based throughout. No machine learning.&lt;/li&gt;
&lt;li&gt;~1,100 lines, single script.&lt;/li&gt;
&lt;/ul&gt;
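&lt;p&gt;For readers who haven't used it: &lt;code&gt;difflib.SequenceMatcher&lt;/code&gt; is standard-library, deterministic, and returns a ratio in [0, 1]. A quick illustration with the invented SKU strings from this article:&lt;/p&gt;

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Deterministic similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()

print(similarity("iPhone15", "iPhone 15 Pro Max"))  # 0.64
print(similarity("iPhone15", "iPhone15"))           # 1.0
```

&lt;p&gt;At 0.64, that pair clears the pipeline's 0.6 fuzzy-candidate threshold.&lt;/p&gt;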

&lt;h3&gt;
  
  
  Phases
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Phase 1: Match by exact stakeholder name (or alias group)
Phase 2: Cross-match by name similarity ≥ 0.6 (rescue typos)
Phase 3: Last-name-only + structural match (single-typo tolerance)
Phase 4: Duplicate-registration detection (same stakeholder + similarity ≥ 0.8)
Phase 5: Rescue rows with no stakeholder name (attribute match)
Phase 5.5: Attribute-mismatch pair rescue (identifier similarity ≥ 0.7, stage 2)
Phase 6: Row generation + color decision
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


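&lt;p&gt;Structurally, the phases behave like ordered passes over the still-unmatched rows: cheap exact matching first, expensive fuzzy rescues last. A skeleton of that control flow (the phase functions below are placeholders, not the real matchers):&lt;/p&gt;

```python
# Skeleton of a multi-phase pipeline: each phase sees only the rows
# every earlier phase failed to match. Phase functions are stand-ins.

def exact_name_phase(rows):
    return [r for r in rows if r["kind"] == "exact"]   # placeholder logic

def fuzzy_name_phase(rows):
    return [r for r in rows if r["kind"] == "fuzzy"]   # placeholder logic

PHASES = [
    ("1: exact stakeholder name", exact_name_phase),
    ("2: name similarity", fuzzy_name_phase),
]

def run_pipeline(rows):
    unmatched = list(rows)
    matched = {}
    for label, phase in PHASES:
        hits = phase(unmatched)
        matched[label] = hits
        unmatched = [r for r in unmatched if r not in hits]
    return matched, unmatched

rows = [{"id": 1, "kind": "exact"},
        {"id": 2, "kind": "fuzzy"},
        {"id": 3, "kind": "none"}]
matched, leftovers = run_pipeline(rows)
print(len(leftovers))  # 1 row survives to human review
```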

&lt;h3&gt;
  
  
  The score function (key gates)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;compute_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;row_b&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Hard gate: region must match — kills cross-region false positives
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;region_a&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;region_b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Hard gate: numeric attribute must be close enough
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value_a&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;value_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Identifier gate: row_b's identifier must be embeddable in row_a's identifier
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nf"&gt;is_identifier_match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;identifier_b&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Sub-identifier gate: anchoring-bias defense
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;sub_id&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;addr_a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Soft scoring (only after every hard gate passed)
&lt;/span&gt;    &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;identifier_match_score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;similarity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value_fallback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.6&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Why this shape?
&lt;/h3&gt;

&lt;p&gt;The retail SKU framing helps here. The same product on a marketplace might appear as &lt;code&gt;iPhone15&lt;/code&gt; in your master and &lt;code&gt;iPhone 15 Pro Max&lt;/code&gt; on the marketplace. Same item family, different surface form. Two key insights:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hard gates first.&lt;/strong&gt; "Different region" or "value difference &amp;gt; N" are absolute disqualifiers. Run them before any expensive similarity computation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Soft scoring last.&lt;/strong&gt; Once hard gates pass, compute similarity — but cap below 0.6 as "uncertain, surface to human."&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Why not ML / Vector DB / embeddings?
&lt;/h3&gt;

&lt;p&gt;Deterministic rule-based was chosen on purpose. Auditability was the requirement. When a flagged row is wrong, the operations team has to be able to trace exactly which gate fired and why. A black-box similarity score of 0.81 with no explanation cannot be reviewed, cannot be unit-tested, and cannot be defended in a compliance audit.&lt;/p&gt;

&lt;p&gt;ML is a fine choice when you have labeled training data, training infrastructure, and a continuous evaluation pipeline. None of these applied here. The operating constraint was: "anyone on the team should be able to read the code and know why it decided what it decided." That constraint forces deterministic logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Abstracted structure
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain-specific term&lt;/th&gt;
&lt;th&gt;Abstract concept&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Item / SKU&lt;/td&gt;
&lt;td&gt;Entity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stakeholder (vendor / agent)&lt;/td&gt;
&lt;td&gt;Stakeholder attribute&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price / Amount&lt;/td&gt;
&lt;td&gt;Primary numeric attribute&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Address / Location&lt;/td&gt;
&lt;td&gt;Identifier (multi-attribute)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Building / SKU name&lt;/td&gt;
&lt;td&gt;Auxiliary identifier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Detail number / barcode&lt;/td&gt;
&lt;td&gt;Sub-identifier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Format variation (kana/latin/case)&lt;/td&gt;
&lt;td&gt;Data quality issue&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain judgment&lt;/td&gt;
&lt;td&gt;Tacit knowledge&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is a universal "match entities across two systems with format drift" problem. The pattern reappears in EC, healthcare, HR, accounting, manufacturing, publishing — anywhere two systems represent the same business object differently.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Cognitive-Science Design Principles (the Twist)
&lt;/h2&gt;

&lt;p&gt;I didn't design this thinking about cognitive science. I built it, it worked, and only afterwards in a structured Gemini conversation did the underlying principles surface. The retrofit fits unsettlingly well.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.1 Dual process theory (Daniel Kahneman)
&lt;/h3&gt;

&lt;p&gt;The two phases map onto two thinking modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System 1 (fast) = Phases 1–5.&lt;/strong&gt; Fuzzy "is this roughly the same thing?" — similarity scores, identifier matching, attribute closeness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System 2 (slow) = &lt;code&gt;determine_color()&lt;/code&gt;.&lt;/strong&gt; Strict checks for value mismatch, format inconsistency, identifier mixing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Color-coded human review gets the System 1 fuzzy pass plus the System 2 strictness annotation, which is exactly the input shape humans need to make a final call.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.2 Gestalt psychology
&lt;/h3&gt;

&lt;p&gt;Humans recognize "wholes," not character sequences. &lt;code&gt;iPhone15&lt;/code&gt; and &lt;code&gt;iPhone 15 Pro Max&lt;/code&gt; feel like the same product family even though strict string equality fails. So:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;is_identifier_match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;identifier_b&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Recognize chunked identity even with mixed scripts and separators.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[A-Za-z0-9\s\-_]+&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;identifier_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;addr_a&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Matching by chunks survives whitespace, separator, and script variation.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.3 Anchoring &amp;amp; confirmation bias defenses
&lt;/h3&gt;

&lt;p&gt;Hard gates exist to deny human-style intuitive shortcuts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Same price, must be the same item" — rejected by sub-identifier gate.&lt;/li&gt;
&lt;li&gt;"Same name, must be the same person" — rejected by region gate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The machine's job is to be coldly skeptical exactly where humans get over-confident.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.4 Reducing human cognitive load (Human-in-the-Loop)
&lt;/h3&gt;

&lt;p&gt;When a human is asked to confirm a flagged row, they don't get an opaque "match score 0.62". They get a one-line annotation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Same entity matched | [Value mismatch] diff ¥2,000,000 (5.4%)
(A: ¥34,900,000 / B: ¥36,900,000) · identifier format inconsistent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The human doesn't waste cycles re-deriving why the row was flagged. Cognitive load drops sharply.&lt;/p&gt;
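&lt;p&gt;Generating that annotation is cheap. A hypothetical formatter in the same spirit (the field names and the &lt;code&gt;format_ok&lt;/code&gt; flag are invented for this sketch):&lt;/p&gt;

```python
# Hypothetical sketch: verdict first, then each strict-check reason
# with its concrete evidence attached. Field names are invented.

def annotate(value_a, value_b, format_ok):
    reasons = []
    diff = abs(value_a - value_b)
    if diff:
        pct = diff / max(value_a, value_b) * 100
        reasons.append(f"[Value mismatch] diff ¥{diff:,} ({pct:.1f}%)")
    if not format_ok:
        reasons.append("identifier format inconsistent")
    if not reasons:
        return "Same entity matched"
    return "Same entity matched | " + " · ".join(reasons)

print(annotate(34_900_000, 36_900_000, format_ok=False))
# Same entity matched | [Value mismatch] diff ¥2,000,000 (5.4%) · identifier format inconsistent
```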

&lt;h3&gt;
  
  
  5.5 Don't automate the ghost
&lt;/h3&gt;

&lt;p&gt;This part borrows from &lt;em&gt;Ghost in the Shell&lt;/em&gt;. Some judgments depend on tacit business knowledge that can't be reduced to rules. Don't build heuristics that pretend to encode them. Surface the row as a &lt;strong&gt;caution signal&lt;/strong&gt; and let a human apply the tacit layer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tightening the logic isn't a path to recreating the ghost.&lt;br&gt;
It's a path to revealing where the ghost is needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Mapping summary
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cognitive concept&lt;/th&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;System 1 (fast)&lt;/td&gt;
&lt;td&gt;Phases 1–5 (fuzzy matching)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System 2 (slow)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;determine_color()&lt;/code&gt; strict checks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Two-stage / dual-pass&lt;/td&gt;
&lt;td&gt;Stage 1 + Stage 2 (Phase 5.5)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gestalt grouping&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;similarity&lt;/code&gt; / &lt;code&gt;is_identifier_match&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anchoring defense&lt;/td&gt;
&lt;td&gt;Sub-identifier gate, identifier gate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cognitive load reduction&lt;/td&gt;
&lt;td&gt;Aggregated &lt;code&gt;[reason] diff X&lt;/code&gt; annotations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Human-in-the-Loop&lt;/td&gt;
&lt;td&gt;Caution signals for tacit-knowledge zones&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  6. Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Recall on 8 weeks of historical data
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Errors flagged by humans (excluding outlier weeks)&lt;/td&gt;
&lt;td&gt;~130&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Errors caught by the tool&lt;/td&gt;
&lt;td&gt;~129&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Recall&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~99.2%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The single missed case was annotated by the human reviewer as "even a human couldn't decide here." Effectively the tool catches every case where a human commits a confident verdict.&lt;/p&gt;

&lt;p&gt;(Caveat: this is recall against 8 weeks of one team's data, not a benchmark claim. Different domains will need their own measurement.)&lt;/p&gt;
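
&lt;p&gt;For concreteness, the recall figure is just the ratio of the two counts above:&lt;/p&gt;

```python
flagged_by_humans = 130  # ground truth (excluding outlier weeks)
caught_by_tool = 129

recall = caught_by_tool / flagged_by_humans
print(f"recall = {recall:.1%}")  # recall = 99.2%
```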

&lt;h3&gt;
  
  
  Time and skill load
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Skilled veteran time&lt;/td&gt;
&lt;td&gt;~3 hrs/week&lt;/td&gt;
&lt;td&gt;~30 min/week (review only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Newcomer time&lt;/td&gt;
&lt;td&gt;half a day to full day&lt;/td&gt;
&lt;td&gt;~30 min/week&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill dependency&lt;/td&gt;
&lt;td&gt;Yes (single point of failure)&lt;/td&gt;
&lt;td&gt;No (anyone can run it)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The time number understates the value. The real shift is &lt;strong&gt;breaking the skill SPOF&lt;/strong&gt;. If the veteran is out sick, leaves, or is buried in another priority, the work continues at the same quality.&lt;/p&gt;

&lt;h3&gt;
  
  
  A note on false positives
&lt;/h3&gt;

&lt;p&gt;Recall is ~99.2%, but the tool is intentionally tuned to favor recall over precision. False positives — pairs flagged for human review that turn out to be fine — are accepted as the trade-off. The ~30 min/week of human review handles them without strain.&lt;/p&gt;

&lt;p&gt;In a no-human-in-the-loop deployment this trade-off would be very different. Here, false positives are cheap (a glance from a human reviewer) and false negatives (missed reconciliation errors) are expensive (data drift propagates into business reports).&lt;/p&gt;
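
&lt;p&gt;The asymmetry can be made explicit as a toy expected-cost comparison. The cost figures below are illustrative assumptions, not measurements:&lt;/p&gt;

```python
# Illustrative costs: a false positive costs a reviewer glance,
# a false negative costs downstream cleanup in business reports.
COST_FP = 0.5    # minutes of reviewer time per false alarm
COST_FN = 120.0  # minutes of cleanup per missed error

def expected_cost(false_positives, false_negatives):
    return false_positives * COST_FP + false_negatives * COST_FN

# A recall-leaning tuning (many flags, almost no misses) wins:
print(expected_cost(40, 1))   # 140.0 minutes
print(expected_cost(5, 10))   # 1202.5 minutes
```

&lt;p&gt;With costs this lopsided, flagging generously is the rational setting; a no-review deployment would flip the arithmetic.&lt;/p&gt;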

&lt;h2&gt;
  
  
  7. The Flowchart
&lt;/h2&gt;

&lt;p&gt;Drawing the judgment flow as diagrams surfaced things the code review didn't. Below are the four phases as separate figures, in execution order.&lt;/p&gt;

&lt;h3&gt;
  
  
  7.1 Phase 1: Hard Gates (sequential disqualifiers)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2dyd67f8r7e2kwvxzcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2dyd67f8r7e2kwvxzcf.png" alt="Phase 1: Hard Gates - region, value, auxiliary identifier, sub-identifier sequential disqualifiers" width="800" height="1143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Region → numeric value → auxiliary identifier → sub-identifier. Each gate is an absolute disqualifier: any "No" drops the pair. The order matters — cheapest disqualifiers run first.&lt;/p&gt;
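
&lt;p&gt;The ordering principle can be sketched as a list of gates run cheapest-first. The gate names follow the figure; the check bodies are placeholders, not the production logic:&lt;/p&gt;

```python
def region_ok(pair):  return pair["region_a"] == pair["region_b"]
def value_ok(pair):   return pair["value_a"] == pair["value_b"]
def aux_id_ok(pair):  return pair["aux_a"] == pair["aux_b"]
def sub_id_ok(pair):  return pair["sub_a"] == pair["sub_b"]

# Ordered cheapest-first: any "No" drops the pair immediately,
# so the more expensive checks rarely need to run.
HARD_GATES = [
    ("region", region_ok),
    ("value", value_ok),
    ("auxiliary identifier", aux_id_ok),
    ("sub-identifier", sub_id_ok),
]

def run_hard_gates(pair):
    for name, gate in HARD_GATES:
        if not gate(pair):
            return ("drop", name)  # keep the reason, not just the verdict
    return ("pass", None)
```

&lt;p&gt;Returning the gate name alongside the verdict is exactly the drop-reason logging discussed later in this section.&lt;/p&gt;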

&lt;h3&gt;
  
  
  7.2 Phase 2: Soft Match
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlft07vl32heaowckcsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlft07vl32heaowckcsg.png" alt="Phase 2: Soft Match - compute_score threshold and lock" width="542" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once a pair clears all hard gates, &lt;code&gt;compute_score&lt;/code&gt; evaluates a soft similarity. Below 0.6 → drop. At or above → lock the pair as the same entity.&lt;/p&gt;
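
&lt;p&gt;A minimal sketch of this step. The 0.6 threshold is from the figure; &lt;code&gt;compute_score&lt;/code&gt; itself is the production fuzzy scorer, stubbed here as a parameter:&lt;/p&gt;

```python
SOFT_THRESHOLD = 0.6

def soft_match(pair, compute_score):
    """Runs only after all hard gates pass."""
    score = compute_score(pair)
    if score >= SOFT_THRESHOLD:
        return ("locked", score)   # treat as the same entity from here on
    return ("dropped", score)
```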

&lt;h3&gt;
  
  
  7.3 Phase 3: Parallel Flag Checks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ozelqo0xcqb643e4405.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ozelqo0xcqb643e4405.png" alt="Phase 3: Parallel flag checks - six independent anomaly tests aggregated into tags" width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For confirmed matches, six independent checks fire in parallel. Each surfaces a "this matched, but here's a discrepancy" signal. Tags are aggregated; there is no early-return contamination between checks.&lt;/p&gt;
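
&lt;p&gt;The no-early-return property can be sketched like this — the check names and fields are invented placeholders for two of the six real checks:&lt;/p&gt;

```python
def check_value_gap(pair):
    return "value gap" if pair.get("gap") else None

def check_id_format(pair):
    return "identifier format inconsistent" if pair.get("fmt") else None

# ...four more independent checks in the real tool
CHECKS = [check_value_gap, check_id_format]

def collect_tags(pair):
    # Every check always runs; no check can short-circuit another.
    results = [check(pair) for check in CHECKS]
    return [tag for tag in results if tag is not None]
```

&lt;p&gt;Aggregating into a flat tag list, rather than returning on the first hit, is what keeps the checks independent.&lt;/p&gt;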

&lt;h3&gt;
  
  
  7.4 Phase 4: Final Verdict and Drop Aggregation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6javxmbdxn53iioak5uw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6javxmbdxn53iioak5uw.png" alt="Phase 4: Final verdict color decision and drop aggregation into Unmatched lane" width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aggregate the tags into a color verdict. Drops from Phase 1 and Phase 2 converge into the "Unmatched" lane, surfaced standalone in the human-review output.&lt;/p&gt;
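
&lt;p&gt;A hypothetical reconstruction of the aggregation — the color names and rules here are invented placeholders, not the real &lt;code&gt;determine_color()&lt;/code&gt; logic:&lt;/p&gt;

```python
def verdict_color(tags):
    # Aggregate Phase 3 tags into a single verdict for the review sheet.
    if not tags:
        return "green"   # clean match, no human attention needed
    if any("mismatch" in tag for tag in tags):
        return "red"     # confirmed discrepancy, must review
    return "yellow"      # matched, with cautions
```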

&lt;h3&gt;
  
  
  Things visible only after rendering as a diagram
&lt;/h3&gt;

&lt;p&gt;These were invisible while reading code, only obvious once drawn:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Phase 1 hard gates are ordered by computational cost.&lt;/strong&gt; Region → numeric → auxiliary → sub-identifier. I placed them by intuition; the diagram showed they were already optimal — cheapest disqualifiers first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 3 parallel flag checks are genuinely independent.&lt;/strong&gt; Six checks fire in parallel with no early-return contamination. The diagram confirmed there was no silent dependency between them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All &lt;code&gt;Drop1&lt;/code&gt;–&lt;code&gt;Drop5&lt;/code&gt; paths converge to the same &lt;code&gt;Unmatched&lt;/code&gt; node.&lt;/strong&gt; I was throwing away the drop reason, so answering "why was this pair rejected?" after the fact was impossible. Fix: log the drop reason in the row annotation.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Drawing the flowchart is roughly the same act as drawing an infrastructure topology before going live. The diagram is the rubber duck.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Wrap-up
&lt;/h2&gt;

&lt;p&gt;Three transferable lessons from this build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive load is the hidden cost&lt;/strong&gt; of "short" repetitive judgment tasks. Headcount-hour math undersells the burnout reality and skill-SPOF risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive science principles fall out of good design retroactively.&lt;/strong&gt; I didn't design with them in mind; the principles became visible only through structured review (with a second AI). If your design retrofits to known principles, that's confirmation. If it doesn't, that's a smell.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLMs do NOT have to touch your data.&lt;/strong&gt; Most entity resolution work doesn't need them at all. Use them for code, design review, and documentation. Keep the business records local and deterministic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The implementation itself is internal-use only and won't be open-sourced. The patterns generalize cleanly to any two-system entity reconciliation: e-commerce, healthcare, HR, accounting, manufacturing, publishing.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. What's Next
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Coming in Part 2&lt;/strong&gt;: how this whole thing got built in the first place — the AI collaboration patterns, the anti-patterns I hit, and the cross-domain disciplines that transferred from operations to software development. (Link to A2 once published.)&lt;/p&gt;

&lt;p&gt;Comments on entity resolution, cognitive load in repetitive tasks, or cross-domain engineering experiences are welcome.&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>architecture</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Field Notes from a Cross-Domain Engineer Working with AI</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Wed, 29 Apr 2026 01:36:50 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/field-notes-from-a-cross-domain-engineer-working-with-ai-13kh</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/field-notes-from-a-cross-domain-engineer-working-with-ai-13kh</guid>
      <description>&lt;p&gt;I run two AI assistants from different vendors against each other on every non-trivial decision, with a human (me) sitting in the middle as the routing authority. What's surprised me most is not that they disagree — it's &lt;em&gt;how&lt;/em&gt; they disagree. They drift toward agreeing with me too quickly, then with each other too quickly, and I've had to design specific friction into the workflow to keep their disagreement productive.&lt;/p&gt;

&lt;p&gt;Writing this down because I think the friction patterns might be useful to other people running similar setups.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this is
&lt;/h2&gt;

&lt;p&gt;I've been working as an operations / systems engineer for about twenty years. Networks first, then servers, then config management, then incident response. The kind of role where the job is mostly &lt;em&gt;making sure nothing breaks&lt;/em&gt;, and where every change request comes with a runbook, a verification step, and a rollback plan.&lt;/p&gt;

&lt;p&gt;In the last year and a half, AI coding assistants pulled me into the software side of the house. Not because I wanted a career change — because the boundary between "writes code" and "runs systems" got thin enough that ignoring it stopped being an option.&lt;/p&gt;

&lt;p&gt;This series is &lt;strong&gt;a running log&lt;/strong&gt; of what I've been finding, written as I go. The disciplines I pulled in from operations turned out to map onto AI collaboration unexpectedly well. Where they didn't, I've had to invent something myself. Observation-heavy and prescription-light. I don't think I've discovered anything new; I've stumbled into the same territory a lot of practitioners and researchers are mapping right now, and I'm writing down my coordinates while they're fresh.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this is &lt;em&gt;not&lt;/em&gt;
&lt;/h2&gt;

&lt;p&gt;A few things I want to flag up front.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is not a survey.&lt;/strong&gt; I haven't done an exhaustive review of the literature, the toolchain ecosystem, or the practitioner community. I'm certain there are people who've been running similar disciplines longer than me and writing them up better. If you find one, please send me the link — I'll learn from it, and I'd rather correct the record than defend a stale draft.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This is dated material.&lt;/strong&gt; Anything you read here is a snapshot from early 2026, written from one specific vantage point: a Japanese operations engineer transitioning into AI-assisted software work, with paid Claude / ChatGPT / Gemini subscriptions and no team-scale deployment. If your context differs significantly, the disciplines may transfer poorly or not at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  The five disciplines this series circles around
&lt;/h2&gt;

&lt;p&gt;A set of operating principles I keep coming back to in my own daily work. Each earns its own article (or several) in the series. The short version, in a "simple to compose" → "complex to compose" order:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. TTL Satisficing
&lt;/h3&gt;

&lt;p&gt;I cap the number of back-and-forth rounds in any AI-assisted decision. Three is a good default. The point is not to find the perfect answer — it's to converge on a &lt;em&gt;good-enough&lt;/em&gt; answer before the next round costs more than the marginal improvement is worth. If three rounds don't converge, I treat that as a signal that the &lt;em&gt;problem&lt;/em&gt; is the problem, not the round count.&lt;/p&gt;

&lt;p&gt;This is essentially &lt;strong&gt;timeout / retry-budget design&lt;/strong&gt; from network operations, applied to a conversation with an AI.&lt;/p&gt;
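
&lt;p&gt;As a rough analogy in code — purely illustrative, since the "rounds" in practice are chat turns, not function calls:&lt;/p&gt;

```python
def converge(propose, accept, max_rounds=3):
    """Retry-budget pattern: stop at good-enough, or escalate."""
    feedback = None
    for round_no in range(1, max_rounds + 1):
        draft = propose(feedback)
        ok, feedback = accept(draft)
        if ok:
            return ("converged", round_no, draft)
    # Budget exhausted: the problem, not the round count, is the problem.
    return ("escalate", max_rounds, None)
```

&lt;p&gt;The escalation branch is the whole point: non-convergence inside the budget is a signal about the problem statement, not a reason to spend a fourth round.&lt;/p&gt;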

&lt;h3&gt;
  
  
  2. The Karpathy Rule (Simplicity First, Surgical Changes)
&lt;/h3&gt;

&lt;p&gt;Andrej Karpathy's framing, adapted to my workflow: I don't let the AI add what wasn't asked for, and I don't let it edit what wasn't in scope. Speculative additions are how technical debt grows at AI speed. Keeping the diff small and the proposal narrow is a discipline I have to actively enforce.&lt;/p&gt;

&lt;p&gt;This maps cleanly onto &lt;strong&gt;YAGNI ("You Aren't Gonna Need It")&lt;/strong&gt; and &lt;strong&gt;minimal-diff code review culture&lt;/strong&gt; — pre-AI principles, applied to a faster-moving collaborator.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Fail-Fast (Tolerance for Contradiction)
&lt;/h3&gt;

&lt;p&gt;When a problem doesn't converge across multiple attempts and multiple AIs, I want the AI to declare it unfeasible and hand me a graceful-degradation path. The AI's job is not to grind itself into the ground trying to solve everything. Its job is to &lt;strong&gt;return judgment to the human&lt;/strong&gt; when judgment is what's actually needed. "I can't solve this, here's how to amputate it cleanly" is a &lt;em&gt;better&lt;/em&gt; answer than "I'm trying again, please wait."&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;circuit-breaker behavior&lt;/strong&gt; from distributed systems, applied to AI reasoning loops.&lt;/p&gt;
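
&lt;p&gt;The circuit-breaker analogy, sketched with illustrative thresholds — "open" here means "stop retrying and hand the human a degradation path":&lt;/p&gt;

```python
class ReasoningBreaker:
    """Trips after repeated non-convergence, returning judgment
    to the human instead of retrying forever."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def record(self, converged):
        if converged:
            self.failures = 0
            return "closed"  # keep going
        self.failures += 1
        if self.failures >= self.max_failures:
            return "open"    # stop; amputate cleanly and escalate
        return "closed"
```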

&lt;h3&gt;
  
  
  4. Adversarial Review (the Devil's Advocate, made standing)
&lt;/h3&gt;

&lt;p&gt;For any non-trivial proposal, I force the AI to argue against itself. Standing instructions — in CLAUDE.md, in custom instructions, in the system prompt — so I don't have to remember to ask. An assistant that only agrees with me is a single point of failure, not a colleague.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;red-team review&lt;/strong&gt; and &lt;strong&gt;RFC-style structured criticism&lt;/strong&gt;, made automatic instead of episodic.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. The Human Router (Multi-AI Adversarial Review)
&lt;/h3&gt;

&lt;p&gt;For decisions with real trade-offs, I run two AIs from different vendors against each other and stay in the loop as the routing authority. Not as supervisor, not as referee — as the &lt;strong&gt;operator&lt;/strong&gt; who decides which output goes where, and which conflicts get escalated. The cross-vendor part matters in my experience: same-vendor "multi-agent" setups still share a worldview. Cross-vendor (e.g., Claude vs. Gemini) actually surfaces disagreement.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;human-in-the-loop control plane&lt;/strong&gt; thinking, where the data plane is "AIs argue" and the control plane is "human decides which arguments matter."&lt;/p&gt;

&lt;p&gt;I want to be clear: none of these are inventions of mine. The literature has Multi-Agent Debate, Self-Refine, Reflexion, Constitutional AI, and a growing body of human-in-the-loop research. I'm describing how I personally compose these ideas into a daily workflow, and writing down the specifics in case the composition is useful to someone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why "Cross-Domain"
&lt;/h2&gt;

&lt;p&gt;I keep using this word and I should explain it.&lt;/p&gt;

&lt;p&gt;The AI engineering conversation I see is heavily centered on the &lt;em&gt;programmer's&lt;/em&gt; lens — code generation, dev tools, IDE integrations, prompt-to-PR workflows. That lens is real and valuable, and I'm not arguing against it.&lt;/p&gt;

&lt;p&gt;But I came in from operations, and operations has its own logic. Change management. Verification gates. Runbooks. Rollback procedures. Skepticism toward dashboards that are too green. The instinct to ask &lt;em&gt;"how would I know if this is silently broken?"&lt;/em&gt; before celebrating a green build.&lt;/p&gt;

&lt;p&gt;When you bring those instincts to AI collaboration, you get a different shape of practice than you'd get if you came in from the dev side. Not better. &lt;em&gt;Different&lt;/em&gt;. And I think the differences are worth writing down, because operations engineers, SREs, data engineers, business systems people, and infrastructure folks are all about to find themselves doing AI-assisted work — and the dev-centric playbook isn't going to fit them perfectly.&lt;/p&gt;

&lt;p&gt;That's what "cross-domain" is pointing at. Not crossing one specific bridge, but recognizing that there are many bridges, and that each engineering discipline brings its own toolkit to the AI collaboration table.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's coming in this series
&lt;/h2&gt;

&lt;p&gt;(Links will appear as articles are published. &lt;strong&gt;Each piece is designed to stand alone&lt;/strong&gt; — you don't need to have read this anchor page to follow any of them. This index is just here for the curious.)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Entity Resolution case study&lt;/strong&gt; — A practical build: deterministic two-system reconciliation with AI assistance, no business data leaving the local environment, ~99.2% recall on historical replay. Where I learned that operations discipline transfers cleanly to software work in unfamiliar domains.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI Collaboration Patterns&lt;/strong&gt; — Five practices and five anti-patterns I picked up running AI-assisted operations work day to day. The kind of thing you only learn by getting burned.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Alchemy Essay&lt;/strong&gt; — A late-night thought experiment mapping AI coding onto the principles of &lt;em&gt;Fullmetal Alchemist&lt;/em&gt;. The most polemical piece in the series; also the most personal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain-Native Governance&lt;/strong&gt; — The architectural argument: why dev-centric AI governance feels insufficient from where I sit, and what a domain-native alternative might look like. The most theoretical piece in the series.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Domain Logic First&lt;/strong&gt; — A position piece: each engineering discipline has its own logic, and there might be value in adapting AI to it rather than the other way around.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Field-Note Tips (ongoing)&lt;/strong&gt; — Short pieces on individual operating habits: the CLAUDE.md preamble template, three-tier tool routing, telling fast evaluation from real evaluation, etc.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  A note on tone
&lt;/h2&gt;

&lt;p&gt;The Japanese versions of these articles tend to land in a softer register than the English ones. That's a translation choice; the underlying observations are the same. If you read both, the English will feel more direct and the Japanese more reflective. Neither is the "real" version — both are. I'm a different writer in each language, and I've stopped trying to flatten that.&lt;/p&gt;

&lt;h2&gt;
  
  
  A note on the time-shift
&lt;/h2&gt;

&lt;p&gt;If you're reading this in 2027 or 2028 and any of these disciplines feel obvious by now, good. That means the field moved. The reason these notes exist is that &lt;em&gt;I&lt;/em&gt; was figuring this out in early 2026 and wanted a record. I'd rather have written them down too early and looked dated than written them down too late and lost the messy details.&lt;/p&gt;

&lt;p&gt;If you're reading this in 2026 and any of these disciplines feel new: I'm right there with you. We're figuring this out together.&lt;/p&gt;




&lt;p&gt;Comments welcome. Particularly from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Engineers from non-dev domains who've started AI-assisted work&lt;/li&gt;
&lt;li&gt;Researchers working on multi-agent debate / adversarial review / human-in-the-loop systems&lt;/li&gt;
&lt;li&gt;Anyone running multi-AI cross-vendor protocols and willing to share what's worked or broken&lt;/li&gt;
&lt;li&gt;Anyone who thinks one of these five disciplines is wrong and wants to argue&lt;/li&gt;
&lt;li&gt;Anyone who has prior art (papers, blog posts, internal write-ups) for any of this — I'd genuinely like to read them&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>architecture</category>
      <category>productivity</category>
    </item>
    <item>
      <title>"AI Coding Is Alchemy: A Late-Night Reflection from Fullmetal Alchemist"</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Mon, 27 Apr 2026 17:47:25 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/ai-coding-is-alchemy-a-late-night-reflection-from-fullmetal-alchemist-2epd</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/ai-coding-is-alchemy-a-late-night-reflection-from-fullmetal-alchemist-2epd</guid>
      <description>&lt;p&gt;Late at night, writing code, I had a sudden realization: the experience of building with AI (LLMs) maps almost perfectly onto the foundational principle of alchemy from &lt;em&gt;Fullmetal Alchemist&lt;/em&gt; — &lt;strong&gt;Comprehension, Deconstruction, Reconstruction&lt;/strong&gt;. The current paradigm shift in AI development can be told as the evolution of these three steps.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Traditional Programming = Drawing Transmutation Circles by Hand
&lt;/h2&gt;

&lt;p&gt;Software engineering, until recently, was traditional alchemy.&lt;/p&gt;

&lt;p&gt;We read requirements and business logic (Comprehension), broke them down into algorithms and function designs (Deconstruction), and then drew the transmutation circle — the actual code, with exact syntax — by hand to bring the system to life (Reconstruction).&lt;/p&gt;

&lt;p&gt;If even one chalk line wavered (a syntax error, a typo), the transmutation failed and the error blew back at us. We had to draw the circle on the ground by hand, every single time.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. AI Coding = Hand-Clap Transmutation Through the Gate of Truth
&lt;/h2&gt;

&lt;p&gt;Then ChatGPT, Claude, o1, and the rest arrived. It feels — at least to me — like we have collectively opened the &lt;strong&gt;Gate of Truth&lt;/strong&gt; for infrastructure and coding.&lt;/p&gt;

&lt;p&gt;Once we do the Comprehension and Deconstruction in our heads (requirements gathering through prompts, architecture design), we can outsource the most tedious step — Reconstruction (writing the code, drawing the transmutation circle) — to the Gate (the AI).&lt;/p&gt;

&lt;p&gt;We clap our hands — well, hit Enter — and thousands of lines of boilerplate or a complex regex assemble in an instant, without ever drawing a circle.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. The Equivalent Exchange Trap (An Operations View)
&lt;/h2&gt;

&lt;p&gt;Hand-clap transmutation looks indistinguishable from magic. But here is the trap an operations engineer notices.&lt;/p&gt;

&lt;p&gt;Outsourcing Reconstruction to the AI is &lt;strong&gt;conditional on the human side getting Comprehension and Deconstruction completely right.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What happens when domain knowledge (Comprehension) is shallow, or the logical structure of the prompt (Deconstruction) is broken, and you still hand the transmutation off to the AI? You ship a Chimera into production — undebuggable spaghetti code, or a quietly exploitable security hole.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Equivalent Exchange — the foundational law of alchemy — does not get repealed by AI.&lt;/strong&gt; Whatever amount of human thought you skip, the system charges you back later, in the form of an incident.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. "All Is One, One Is All" — Living Next to a Collective Intelligence
&lt;/h2&gt;

&lt;p&gt;There is one more truth from the same series that has to be named: &lt;strong&gt;"All is One, One is All."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An LLM is the &lt;strong&gt;All&lt;/strong&gt; — every line of code engineers have ever written, mixed with the cumulative knowledge of humanity, distilled into one model. Each of us, the engineer at the keyboard, is only &lt;strong&gt;one&lt;/strong&gt; — a single point inside that enormous stream.&lt;/p&gt;

&lt;p&gt;But if we hand everything over to the All and merely ship whatever it emits, we collapse into the All. We become a part of it — a downstream API endpoint of the AI's output, indistinguishable from the noise.&lt;/p&gt;

&lt;p&gt;It is exactly because the &lt;strong&gt;One&lt;/strong&gt; — the individual engineer — Comprehends the whole system and Deconstructs it with intent, that the &lt;strong&gt;All&lt;/strong&gt; — the AI's collective intelligence — Reconstructs it as something with actual value.&lt;/p&gt;

&lt;p&gt;Quality, security, the integrity of the larger system: all of these come back, finally, to the thought and judgment of the One.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Don't Become a Homunculus — Keep a Heart of Steel
&lt;/h2&gt;

&lt;p&gt;As AI advances, the value of an engineer who is only a &lt;em&gt;biological pair of hands&lt;/em&gt; — Reconstruction-only labor — is collapsing.&lt;/p&gt;

&lt;p&gt;But the roles of Comprehending the system, Deconstructing its boundaries, and taking responsibility for what the AI emits — these will not be automated.&lt;/p&gt;

&lt;p&gt;The survival strategy is not to outrun the AI. It is to &lt;strong&gt;resist being swept away by its overwhelming speed&lt;/strong&gt;, to deliberately introduce the friction of thought — the small pain of slowing down — and to stay in that resistance.&lt;/p&gt;

&lt;p&gt;That, I think, is the &lt;strong&gt;Heart of Steel&lt;/strong&gt; that keeps us from sliding into Homunculi: dolls who have surrendered the act of thinking.&lt;/p&gt;

&lt;p&gt;We are alchemists who have seen Truth. There is no going back to the world where every circle was drawn by hand. But the one thing we cannot afford to let go of is the act of thinking for ourselves and owning what we ship.&lt;/p&gt;

&lt;p&gt;A quiet promise made to myself in front of a monitor, late at night.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The Japanese title of Fullmetal Alchemist is 鋼の錬金術師 (Hagane no Renkinjutsushi — "Steel Alchemist"). The "Heart of Steel" line above is a small wordplay on that title. It survives translation only partially.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>watercooler</category>
      <category>discuss</category>
      <category>programming</category>
    </item>
    <item>
      <title>"Beating 250,000 Mental Comparisons: A Cross-Domain Engineer's Entity Resolution Case Study"</title>
      <dc:creator>J.S_Falcon</dc:creator>
      <pubDate>Sun, 26 Apr 2026 08:41:34 +0000</pubDate>
      <link>https://dev.to/_d3709cf9e80fc6babbff/beating-250000-mental-comparisons-a-cross-domain-engineers-entity-resolution-case-study-3j1b</link>
      <guid>https://dev.to/_d3709cf9e80fc6babbff/beating-250000-mental-comparisons-a-cross-domain-engineers-entity-resolution-case-study-3j1b</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Operations/Systems engineer recently moved to the software side via AI collaboration.&lt;/li&gt;
&lt;li&gt;Built a domain-specific entity resolution tool in a handful of evening sessions with Claude Code.&lt;/li&gt;
&lt;li&gt;Caught about 99.2% of human-detected reconciliation errors when replayed against 8 weeks of historical data.&lt;/li&gt;
&lt;li&gt;Turned a "skilled-veterans-only" weekly task into something anyone on the team can run.&lt;/li&gt;
&lt;li&gt;Design retrofitted unexpectedly well to dual process theory, Gestalt psychology, and anchoring-bias defense.&lt;/li&gt;
&lt;li&gt;Source business records never reached an LLM. Deterministic pipeline + human review only.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  1. The Hidden Problem: When 500 × 500 Becomes a Cognitive Wall
&lt;/h2&gt;

&lt;p&gt;Many companies maintain the same business entities across multiple systems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A retailer tracks SKUs in an internal master AND on Amazon / Rakuten / Shopify exports.&lt;/li&gt;
&lt;li&gt;A clinic carries patient records in both an EMR and an insurance billing system.&lt;/li&gt;
&lt;li&gt;A manufacturer holds internal inventory but also receives partner inventory feeds.&lt;/li&gt;
&lt;li&gt;An accounting team reconciles general ledger entries against bank statements.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These pairs need periodic reconciliation. In the technical literature this is &lt;strong&gt;Entity Resolution&lt;/strong&gt; or &lt;strong&gt;Data Reconciliation&lt;/strong&gt; — a universal problem that nearly every mid-to-large business hits eventually.&lt;/p&gt;

&lt;p&gt;The case study here uses the &lt;strong&gt;retail SKU vs marketplace listing&lt;/strong&gt; framing. (The actual industry I work in is intentionally abstracted, but the structure transfers cleanly.) Two systems, ~500 rows each, weekly reconciliation. Skilled humans needed about 3 hours per week. Newcomers, half a day to a full day. Hidden detail: the small row count masks the real difficulty.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why is 500 × 500 hard?
&lt;/h3&gt;

&lt;h4&gt;
  
  
  The 250,000 problem
&lt;/h4&gt;

&lt;p&gt;Manually reconciling 500 × 500 pairs forces a person to evaluate up to &lt;strong&gt;250,000 combinations&lt;/strong&gt; in their head. Not 1,000 — 250,000. Plus typo tolerance, format variation (full-width vs half-width, mixed scripts, abbreviations, punctuation), and partial matches. Each pairwise judgment is not O(1).&lt;/p&gt;

&lt;p&gt;Brute-forcing this is the difference between a flat 1,000-node liveness check and a 1,000-node full-mesh ping check (on the order of 500,000 pair probes versus 1,000 single probes). Orders of magnitude more load.&lt;/p&gt;
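
&lt;p&gt;The arithmetic, as a quick sanity check:&lt;/p&gt;

```python
# Candidate-pair load of a naive 500 x 500 reconciliation.
n_a = n_b = 500

pairwise_judgments = n_a * n_b   # every row in A against every row in B
linear_checks = n_a + n_b        # one flat pass over each file

print(pairwise_judgments)                   # 250000
print(pairwise_judgments // linear_checks)  # 250 — 250x the linear workload
```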

&lt;h4&gt;
  
  
  Working memory overflow
&lt;/h4&gt;

&lt;p&gt;Miller's "magical number" puts our short-term memory at 7 ± 2 chunks (Miller, 1956). Hunting matches across 1,000 candidates with format drift continuously overflows working memory and pegs System 2 (slow thinking) for the entire session. The 3-hour exhaustion experienced by veterans isn't a complaint — it's a neurological inevitability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Short to do" doesn't equal "easy to do"&lt;/strong&gt; for cognitive labor.&lt;/p&gt;

&lt;h4&gt;
  
  
  Reproducibility decay
&lt;/h4&gt;

&lt;p&gt;A one-off reconciliation can be brute-forced. But when the task repeats weekly across 10+ weeks, judgment drift becomes unavoidable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Last week I matched 'A Co.' and 'A. Company' as the same entity. This week I treated them as different."&lt;/li&gt;
&lt;li&gt;"Last week I tolerated typo X. This week I rejected it."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This drift is what really breaks data quality long-term. It's the same structural failure mode as "config review standards differ by reviewer" in infrastructure operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  The actual target
&lt;/h3&gt;

&lt;p&gt;So the real problem the tool solved was not "shorten 3 hours per week" but:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;250,000 judgments × 10 weeks of consistent reproducibility — a quality bar humans can't physically sustain — backed by a deterministic machine.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Plus removing the skill dependency. "Only one veteran can do this in 3 hours" is a single point of failure. After the tool: anyone could run it with consistent quality.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Background: Who I Am and What I Was Solving
&lt;/h2&gt;

&lt;p&gt;I'm an Operations/Systems engineer. Configuration, validation, runbook authoring, monitoring, troubleshooting — that side of the house. Software development was not my primary craft, though scripting was always part of the job.&lt;/p&gt;

&lt;p&gt;I'd recently moved into a new business domain (about 2 months in) and the tooling target system was something I'd only been touching for ~1 month. From the user side I'd seen the workflow longer, but not as a developer.&lt;/p&gt;

&lt;p&gt;Translation: design / validation / runbook discipline solid. Python and application development essentially unfamiliar.&lt;/p&gt;

&lt;p&gt;This article is &lt;strong&gt;not a "look what I shipped" piece&lt;/strong&gt;. It's a record of how operations-side disciplines transferred unchanged into AI-assisted software work in an unfamiliar domain.&lt;/p&gt;

&lt;h3&gt;
  
  
  Who this article is for
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Reader&lt;/th&gt;
&lt;th&gt;Useful sections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Operations / SRE engineers exploring AI assistance&lt;/td&gt;
&lt;td&gt;Everything&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Mid-career engineers moving across technical domains&lt;/td&gt;
&lt;td&gt;Background, Architecture, Cognitive Design&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Engineers new to AI-assisted development&lt;/td&gt;
&lt;td&gt;Architecture, Cognitive Design, PII&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Managers thinking about AI for their teams&lt;/td&gt;
&lt;td&gt;Results and the cognitive-load argument&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  3. PII / Compliance Considerations
&lt;/h2&gt;

&lt;p&gt;A question that always comes up in comments on entity-resolution articles: &lt;strong&gt;where does the data go?&lt;/strong&gt; Worth answering up front.&lt;/p&gt;

&lt;p&gt;In this implementation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Source business records never reach any LLM.&lt;/strong&gt; Both input files (internal master + external system export) are read locally by a Python script.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Matching is fully deterministic.&lt;/strong&gt; Pandas, openpyxl, and &lt;code&gt;difflib.SequenceMatcher&lt;/code&gt; for similarity. No embedding API. No remote inference at runtime.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The LLM's role is code-side, not data-side.&lt;/strong&gt; Claude Code helped write the matching logic, the validation scripts, the design review, and the documentation. None of the actual records were ever sent.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;For testing only&lt;/strong&gt;, masked synthetic data was used in prompts. Real names, amounts, and addresses were replaced with synthetic equivalents before any prompt left the local environment.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge cases stay with humans.&lt;/strong&gt; When the deterministic pipeline can't decide, it surfaces a flagged row for human review — not for LLM second opinion.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This separation is intentional. The matching task is well-suited to deterministic logic. LLMs would only add cost, latency, and compliance exposure for no quality gain.&lt;/p&gt;
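
&lt;p&gt;To make "fully deterministic, fully local" concrete: the similarity layer is nothing more than the standard library. A minimal sketch (the function name is mine, not from the production script):&lt;/p&gt;

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    # Pure local computation: no network, no model, same result on every run.
    return SequenceMatcher(None, a, b).ratio()

# A surface-form variant still clears the pipeline's 0.6 threshold.
print(name_similarity("iPhone15", "iPhone 15 Pro Max"))  # prints 0.64
```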

&lt;p&gt;If your team has even a soft "no business data into external AI" policy, this pattern is fully compatible.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Architecture: Two-Stage Matching + Cognitive Gates
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Stack
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Python 3.11&lt;/li&gt;
&lt;li&gt;pandas + openpyxl (Excel I/O, color-coded output)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;difflib.SequenceMatcher&lt;/code&gt; for fuzzy similarity&lt;/li&gt;
&lt;li&gt;Rule-based throughout. No machine learning.&lt;/li&gt;
&lt;li&gt;~1,100 lines, single script.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phases
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Phase 1: Match by exact stakeholder name (or alias group)
Phase 2: Cross-match by name similarity ≥ 0.6 (rescue typos)
Phase 3: Last-name-only + structural match (single-typo tolerance)
Phase 4: Duplicate-registration detection (same stakeholder + similarity ≥ 0.8)
Phase 5: Rescue rows with no stakeholder name (attribute match)
Phase 5.5: Attribute-mismatch pair rescue (identifier similarity ≥ 0.7, stage 2)
Phase 6: Row generation + color decision
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
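
&lt;p&gt;Structurally, each phase only sees what earlier phases failed to lock in. A hedged sketch of the driver loop (the real ~1,100-line script is more involved; the names here are illustrative):&lt;/p&gt;

```python
def run_phases(rows_a, rows_b, phases):
    """phases: list of (name, matcher); a matcher returns the (a, b) pairs
    it considers the same entity. Matched rows are removed so later, looser
    phases can never re-claim them."""
    matched, rest_a, rest_b = [], list(rows_a), list(rows_b)
    for name, matcher in phases:
        for a, b in matcher(rest_a, rest_b):
            if a in rest_a and b in rest_b:  # guard against duplicate claims
                matched.append((name, a, b))
                rest_a.remove(a)
                rest_b.remove(b)
    return matched, rest_a, rest_b

# Phase 1-style exact matcher; later phases would use fuzzy similarity.
exact = lambda ra, rb: [(a, b) for a in ra for b in rb if a == b]
```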



&lt;h3&gt;
  
  
  The score function (key gates)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;compute_score&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;row_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;row_b&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="c1"&gt;# Hard gate: region must match — kills cross-region false positives
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;region_a&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="n"&gt;region_b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Hard gate: numeric attribute must be close enough
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;abs&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;value_a&lt;/span&gt; &lt;span class="o"&gt;-&lt;/span&gt; &lt;span class="n"&gt;value_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; &lt;span class="n"&gt;THRESHOLD&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Identifier gate: row_b's identifier must be embeddable in row_a's identifier
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="nf"&gt;is_identifier_match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;identifier_b&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Sub-identifier gate: anchoring-bias defense
&lt;/span&gt;    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;sub_id&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;addr_a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
    &lt;span class="c1"&gt;# Soft scoring (only after every hard gate passed)
&lt;/span&gt;    &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;max&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;identifier_match_score&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;similarity&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;value_fallback&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;score&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.6&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="mf"&gt;0.0&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Why this shape?
&lt;/h3&gt;

&lt;p&gt;The retail SKU framing helps here. The same product on a marketplace might appear as &lt;code&gt;iPhone15&lt;/code&gt; in your master and &lt;code&gt;iPhone 15 Pro Max&lt;/code&gt; on the marketplace. Same item family, different surface form. Two key insights:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Hard gates first.&lt;/strong&gt; "Different region" or "value difference &amp;gt; N" are absolute disqualifiers. Run them before any expensive similarity computation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Soft scoring last.&lt;/strong&gt; Once hard gates pass, compute similarity — but cap below 0.6 as "uncertain, surface to human."&lt;/li&gt;
&lt;/ol&gt;
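
&lt;p&gt;A self-contained version of the two insights, runnable as-is (field names and thresholds are illustrative, not the production values):&lt;/p&gt;

```python
from difflib import SequenceMatcher

VALUE_THRESHOLD = 1_000  # illustrative, not the production threshold
SCORE_FLOOR = 0.6

def compute_score(row_a: dict, row_b: dict) -> float:
    # Hard gates: cheap absolute disqualifiers run before any similarity work.
    if row_a["region"] != row_b["region"]:
        return 0.0
    if abs(row_a["value"] - row_b["value"]) > VALUE_THRESHOLD:
        return 0.0
    # Soft scoring: only reached once every hard gate has passed.
    score = SequenceMatcher(None, row_a["name"], row_b["name"]).ratio()
    return score if score >= SCORE_FLOOR else 0.0
```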

&lt;h3&gt;
  
  
  Why not ML / Vector DB / embeddings?
&lt;/h3&gt;

&lt;p&gt;Deterministic rule-based was chosen on purpose. Auditability was the requirement. When a flagged row is wrong, the operations team has to be able to trace exactly which gate fired and why. A black-box similarity score of 0.81 with no explanation cannot be reviewed, cannot be unit-tested, and cannot be defended in a compliance audit.&lt;/p&gt;

&lt;p&gt;ML is a fine choice when you have labeled training data, training infrastructure, and a continuous evaluation pipeline. None of these applied here. The operating constraint was: "anyone on the team should be able to read the code and know why it decided what it decided." That constraint forces deterministic logic.&lt;/p&gt;

&lt;h3&gt;
  
  
  Abstracted structure
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Domain-specific term&lt;/th&gt;
&lt;th&gt;Abstract concept&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Item / SKU&lt;/td&gt;
&lt;td&gt;Entity&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Stakeholder (vendor / agent)&lt;/td&gt;
&lt;td&gt;Stakeholder attribute&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Price / Amount&lt;/td&gt;
&lt;td&gt;Primary numeric attribute&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Address / Location&lt;/td&gt;
&lt;td&gt;Identifier (multi-attribute)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Building / SKU name&lt;/td&gt;
&lt;td&gt;Auxiliary identifier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Detail number / barcode&lt;/td&gt;
&lt;td&gt;Sub-identifier&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Format variation (kana/latin/case)&lt;/td&gt;
&lt;td&gt;Data quality issue&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Domain judgment&lt;/td&gt;
&lt;td&gt;Tacit knowledge&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This is a universal "match entities across two systems with format drift" problem. The pattern reappears in e-commerce, healthcare, HR, accounting, manufacturing, publishing — anywhere two systems represent the same business object differently.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Cognitive-Science Design Principles (the Twist)
&lt;/h2&gt;

&lt;p&gt;I didn't design this thinking about cognitive science. I built it, it worked, and only afterwards in a structured Gemini conversation did the underlying principles surface. The retrofit fits unsettlingly well.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.1 Dual process theory (Daniel Kahneman)
&lt;/h3&gt;

&lt;p&gt;The two phases map onto two thinking modes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System 1 (fast) = Phases 1–5.&lt;/strong&gt; Fuzzy "is this roughly the same thing?" — similarity scores, identifier matching, attribute closeness.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;System 2 (slow) = &lt;code&gt;determine_color()&lt;/code&gt;.&lt;/strong&gt; Strict checks for value mismatch, format inconsistency, identifier mixing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Color-coded human review gets the System 1 fuzzy pass plus the System 2 strictness annotation, which is exactly the input shape humans need to make a final call.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.2 Gestalt psychology
&lt;/h3&gt;

&lt;p&gt;Humans recognize "wholes," not character sequences. &lt;code&gt;iPhone15&lt;/code&gt; and &lt;code&gt;iPhone 15 Pro Max&lt;/code&gt; feel like the same product family even though strict string equality fails. So:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;is_identifier_match&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;addr_a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;identifier_b&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Recognize chunked identity even with mixed scripts and separators.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;# Split on separator characters (not on the content itself) so chunks survive
&lt;/span&gt;    &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;re&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;r&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;[\s\-_/]+&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;identifier_b&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;addr_a&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;chunks&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;=&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Matching by chunks survives whitespace, separator, and script variation.&lt;/p&gt;
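
&lt;p&gt;A runnable version of the chunked check (the separator set is illustrative; extend it for your own data):&lt;/p&gt;

```python
import re

def is_identifier_match(addr_a: str, identifier_b: str) -> bool:
    """Split identifier_b on separators and require every meaningful chunk
    to appear somewhere in addr_a. Chunks keep their script, so mixed
    kana/latin/digit identifiers survive the split."""
    chunks = re.split(r"[\s\-_/]+", identifier_b)
    # Single-character chunks are too noisy to count as evidence.
    return all(chunk in addr_a for chunk in chunks if len(chunk) >= 2)
```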

&lt;h3&gt;
  
  
  5.3 Anchoring &amp;amp; confirmation bias defenses
&lt;/h3&gt;

&lt;p&gt;Hard gates exist to deny human-style intuitive shortcuts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Same price, must be the same item" — rejected by sub-identifier gate.&lt;/li&gt;
&lt;li&gt;"Same name, must be the same person" — rejected by region gate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The machine's job is to be coldly skeptical exactly where humans get over-confident.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.4 Reducing human cognitive load (Human-in-the-Loop)
&lt;/h3&gt;

&lt;p&gt;When a human is asked to confirm a flagged row, they don't get an opaque "match score 0.62". They get a one-line annotation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Same entity matched | [Value mismatch] diff ¥2,000,000 (5.4%)
(A: ¥34,900,000 / B: ¥36,900,000) · identifier format inconsistent
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The human doesn't waste cycles re-deriving why the row was flagged. Cognitive load drops sharply.&lt;/p&gt;
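
&lt;p&gt;Assembling that annotation is a few lines of string work. A sketch (signature and wording are mine, modeled on the sample output above):&lt;/p&gt;

```python
def build_annotation(value_a: int, value_b: int, format_ok: bool = True) -> str:
    # One line that states the verdict AND every reason the row was flagged,
    # so the reviewer never has to re-derive them.
    parts = ["Same entity matched"]
    diff = abs(value_a - value_b)
    if diff:
        pct = diff / max(value_a, value_b) * 100
        parts.append(f"[Value mismatch] diff ¥{diff:,} ({pct:.1f}%) "
                     f"(A: ¥{value_a:,} / B: ¥{value_b:,})")
    if not format_ok:
        parts.append("identifier format inconsistent")
    return " | ".join(parts)
```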

&lt;h3&gt;
  
  
  5.5 Don't automate the ghost
&lt;/h3&gt;

&lt;p&gt;This part borrows from &lt;em&gt;Ghost in the Shell&lt;/em&gt;. Some judgments depend on tacit business knowledge that can't be reduced to rules. Don't build heuristics that pretend to encode them. Surface the row as a &lt;strong&gt;caution signal&lt;/strong&gt; and let a human apply the tacit layer.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Tightening the logic isn't a path to recreating the ghost.&lt;br&gt;
It's a path to revealing where the ghost is needed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Mapping summary
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Cognitive concept&lt;/th&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;System 1 (fast)&lt;/td&gt;
&lt;td&gt;Phases 1–5 (fuzzy matching)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;System 2 (slow)&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;determine_color()&lt;/code&gt; strict checks&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Two-stage / dual-pass&lt;/td&gt;
&lt;td&gt;Stage 1 + Stage 2 (Phase 5.5)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gestalt grouping&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;similarity&lt;/code&gt; / &lt;code&gt;is_identifier_match&lt;/code&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Anchoring defense&lt;/td&gt;
&lt;td&gt;Sub-identifier gate, identifier gate&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cognitive load reduction&lt;/td&gt;
&lt;td&gt;Aggregated &lt;code&gt;[reason] diff X&lt;/code&gt; annotations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Human-in-the-Loop&lt;/td&gt;
&lt;td&gt;Caution signals for tacit-knowledge zones&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  6. Results
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Recall on 8 weeks of historical data
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Value&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Errors flagged by humans (excluding outlier weeks)&lt;/td&gt;
&lt;td&gt;~130&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Errors caught by the tool&lt;/td&gt;
&lt;td&gt;~129&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Recall&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~99.2%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The single missed case was one the human reviewer annotated as "even a human couldn't decide here." In effect, the tool caught every case on which a human was willing to commit a confident verdict.&lt;/p&gt;

&lt;p&gt;(Caveat: this is recall against 8 weeks of one team's data, not a benchmark claim. Different domains will need their own measurement.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Time and skill load
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Item&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Skilled veteran throughput&lt;/td&gt;
&lt;td&gt;~3 hrs/week&lt;/td&gt;
&lt;td&gt;~30 min/week (review only)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Newcomer throughput&lt;/td&gt;
&lt;td&gt;half a day to full day&lt;/td&gt;
&lt;td&gt;~30 min/week&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Skill dependency&lt;/td&gt;
&lt;td&gt;Yes (single point of failure)&lt;/td&gt;
&lt;td&gt;No (anyone can run it)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The time number understates the value. The real shift is &lt;strong&gt;breaking the skill SPOF&lt;/strong&gt;. Whether the veteran is out sick, leaves, or is buried in another priority, the work continues at the same quality.&lt;/p&gt;

&lt;h3&gt;
  
  
  A note on false positives
&lt;/h3&gt;

&lt;p&gt;The ~99.2% recall reflects a deliberate tuning choice: the tool favors recall over precision. False positives — pairs flagged for human review that turn out to be fine — are the accepted trade-off. The ~30 min/week of human review handles them without strain.&lt;/p&gt;

&lt;p&gt;In a no-human-in-the-loop deployment this trade-off would be very different. Here, false positives are cheap (a glance from a human reviewer) and false negatives (missed reconciliation errors) are expensive (data drift propagates into business reports).&lt;/p&gt;

&lt;h2&gt;
  
  
  7. The Flowchart
&lt;/h2&gt;

&lt;p&gt;Drawing the judgment flow as diagrams surfaced things the code review didn't. Below are the four stages as separate figures, in execution order. (Note: these diagram stages are numbered independently of the matching Phases 1–6 described earlier.)&lt;/p&gt;

&lt;h3&gt;
  
  
  7.1 Phase 1: Hard Gates (sequential disqualifiers)
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2dyd67f8r7e2kwvxzcf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp2dyd67f8r7e2kwvxzcf.png" alt=" " width="800" height="1143"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Region → numeric value → auxiliary identifier → sub-identifier. Each gate is an absolute disqualifier: any "No" drops the pair. The order matters — cheapest disqualifiers run first.&lt;/p&gt;

&lt;h3&gt;
  
  
  7.2 Phase 2: Soft Match
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlft07vl32heaowckcsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzlft07vl32heaowckcsg.png" alt=" " width="542" height="612"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once a pair clears all hard gates, &lt;code&gt;compute_score&lt;/code&gt; evaluates a soft similarity. Below 0.6 → drop. At or above → lock the pair as the same entity.&lt;/p&gt;

&lt;h3&gt;
  
  
  7.3 Phase 3: Parallel Flag Checks
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ozelqo0xcqb643e4405.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3ozelqo0xcqb643e4405.png" alt=" " width="800" height="377"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For confirmed matches, six independent checks fire in parallel. Each surfaces a "this matched, but here's a discrepancy" signal. Tags are aggregated; there is no early-return contamination between checks.&lt;/p&gt;
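
&lt;p&gt;The no-early-return property is easy to express in code. A sketch with two of the six checks (check names hypothetical):&lt;/p&gt;

```python
def run_flag_checks(pair: dict) -> list[str]:
    # Every check always runs; a firing check can never mask another one.
    checks = [
        ("value mismatch",      lambda p: p["value_a"] != p["value_b"]),
        ("format inconsistent", lambda p: p["fmt_a"] != p["fmt_b"]),
    ]
    return [tag for tag, fires in checks if fires(pair)]
```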

&lt;h3&gt;
  
  
  7.4 Phase 4: Final Verdict and Drop Aggregation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6javxmbdxn53iioak5uw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6javxmbdxn53iioak5uw.png" alt=" " width="800" height="411"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Aggregate the tags into a color verdict. Drops from Phase 1 and Phase 2 converge into the "Unmatched" lane, surfaced standalone in the human-review output.&lt;/p&gt;

&lt;h3&gt;
  
  
  Things visible only after rendering as a diagram
&lt;/h3&gt;

&lt;p&gt;These were invisible while reading code, only obvious once drawn:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Phase 1 hard gates are ordered by computational cost.&lt;/strong&gt; Region → numeric → auxiliary → sub-identifier. I placed them by intuition; the diagram showed they were already optimal — cheapest disqualifiers first.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Phase 3 parallel flag checks are genuinely independent.&lt;/strong&gt; Six checks fire in parallel with no early-return contamination. The diagram confirmed there was no silent dependency between them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;All &lt;code&gt;Drop1&lt;/code&gt;–&lt;code&gt;Drop5&lt;/code&gt; paths converge to the same &lt;code&gt;Unmatched&lt;/code&gt; node.&lt;/strong&gt; I was throwing away the drop reason. Re-running "why was this pair rejected?" was impossible. Fix: log the drop reason in the row annotation.&lt;/li&gt;
&lt;/ol&gt;
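
&lt;p&gt;The fix in point 3 is mechanical: return the reason alongside the score instead of a bare 0.0. A sketch (gates and threshold illustrative):&lt;/p&gt;

```python
def compute_score_with_reason(row_a: dict, row_b: dict) -> tuple[float, str]:
    # Same gate structure as before, but a rejected pair now records
    # which gate fired, so "why was this dropped?" is answerable later.
    if row_a["region"] != row_b["region"]:
        return 0.0, "dropped: region gate"
    if abs(row_a["value"] - row_b["value"]) > 1_000:
        return 0.0, "dropped: value gate"
    return 1.0, ""
```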

&lt;p&gt;Drawing the flowchart is roughly the same act as drawing an infrastructure topology before going live. The diagram is the rubber duck.&lt;/p&gt;

&lt;h2&gt;
  
  
  8. Wrap-up
&lt;/h2&gt;

&lt;p&gt;Three transferable lessons from this build:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive load is the hidden cost&lt;/strong&gt; of "short" repetitive judgment tasks. Headcount-hour math undersells the burnout reality and skill-SPOF risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive science principles fall out of good design retroactively.&lt;/strong&gt; I didn't design with them in mind; the principles became visible only through structured review (with a second AI). If your design retrofits to known principles, that's confirmation. If it doesn't, that's a smell.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLMs do NOT have to touch your data.&lt;/strong&gt; Most entity resolution work doesn't need them at all. Use them for code, design review, and documentation. Keep the business records local and deterministic.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The implementation itself is internal-use only and won't be open-sourced. The patterns generalize cleanly to any two-system entity reconciliation: e-commerce, healthcare, HR, accounting, manufacturing, publishing.&lt;/p&gt;

&lt;h2&gt;
  
  
  9. What's Next
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Coming in Part 2&lt;/strong&gt;: how this whole thing got built in the first place — the AI collaboration patterns, the anti-patterns I hit, and the cross-domain disciplines that transferred from operations to software development. (Link to A2 once published.)&lt;/p&gt;

&lt;p&gt;Comments on entity resolution, cognitive load in repetitive tasks, or cross-domain engineering experiences are welcome.&lt;/p&gt;

</description>
      <category>dataengineering</category>
      <category>architecture</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
