<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Terezinha Tech Operations</title>
    <description>The latest articles on DEV Community by Terezinha Tech Operations (@ttoss).</description>
    <link>https://dev.to/ttoss</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F2432%2Fde0ea201-1a86-4cf9-98d0-7d56744b8aee.webp</url>
      <title>DEV Community: Terezinha Tech Operations</title>
      <link>https://dev.to/ttoss</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ttoss"/>
    <language>en</language>
    <item>
      <title>The Most Important Decision You'll Make as an Engineer This Year</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Sat, 10 Jan 2026 19:41:49 +0000</pubDate>
      <link>https://dev.to/ttoss/the-most-important-decision-youll-make-as-an-engineer-this-year-4fob</link>
      <guid>https://dev.to/ttoss/the-most-important-decision-youll-make-as-an-engineer-this-year-4fob</guid>
      <description>&lt;p&gt;The single most important decision an engineer must make today is a binary choice regarding their role in the software development lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option A:&lt;/strong&gt; Continue reviewing code line-by-line as if a human wrote it. This path guarantees you become the bottleneck, stifling your team's throughput.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option B:&lt;/strong&gt; Evolve your review policy, relinquishing low-level implementation control to AI to unlock high-level architectural velocity.&lt;/p&gt;

&lt;p&gt;If you choose Option A, this article is not for you. You will likely continue to drown in an ever-increasing tide of pull requests until external metrics force a change.&lt;/p&gt;

&lt;p&gt;If you choose Option B, you are ready for a paradigm shift. However, blindly "letting AI code" without a governance system invites chaos. You need robust strategies to maintain system quality without scrutinizing every line of implementation. Here are the five strategies that make Option B a reality.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Velocity Trap: Why Option A is Mathematically Impossible
&lt;/h2&gt;

&lt;p&gt;This decision is driven by a hard mathematical reality.&lt;/p&gt;

&lt;p&gt;An experienced engineer can meaningfully review perhaps 200 lines of complex code per hour. An AI agent can generate 200 lines of code &lt;em&gt;per second&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;If you choose &lt;strong&gt;Option A&lt;/strong&gt;, you are pitting linear human reading speed against machine generation speed that is orders of magnitude faster. As your team adopts more AI tools, the volume of code produced will keep multiplying while your review capacity stays flat. If your review policy remains "human eyes on every line," your backlog will grow without bound and your effective velocity will collapse toward zero.&lt;/p&gt;

&lt;p&gt;You cannot out-read the machine. You must out-think it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy 1: Climb the Abstraction Ladder
&lt;/h2&gt;

&lt;p&gt;The first step is to redefine what you "understand" about your software.&lt;/p&gt;

&lt;p&gt;Historically, "understanding the codebase" meant knowing how the &lt;code&gt;if&lt;/code&gt; statements worked inside a specific function. In the age of AI, this is unsustainable. You must divide your software mental model into a hierarchy: Product &amp;gt; System &amp;gt; Modules &amp;gt; Functions.&lt;/p&gt;

&lt;p&gt;The shift is simple: &lt;strong&gt;Stop reviewing the lowest level&lt;/strong&gt; (functions, in this example).&lt;/p&gt;

&lt;p&gt;If a product has a Stripe integration, you should not care if the code uses a ternary operator or an &lt;code&gt;if-else&lt;/code&gt; block. You should care that there is a &lt;strong&gt;Billing System&lt;/strong&gt; containing a &lt;strong&gt;Stripe Module&lt;/strong&gt; that adheres to a specific contract.&lt;/p&gt;

&lt;p&gt;Your job is to maintain a clear mental model of the higher levels (Product and Systems). This aligns with &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-contractual-specialization" rel="noopener noreferrer"&gt;The Principle of Contractual Specialization&lt;/a&gt;. By keeping the boundaries rigid, you can let the AI handle the implementation details within those boundaries. As noted in &lt;a href="https://ttoss.dev/blog/2025/12/26/coding-is-now-a-commodity" rel="noopener noreferrer"&gt;Coding is Now a Commodity&lt;/a&gt;, the value has shifted from the "bricks" (functions) to the "blueprint" (system architecture).&lt;/p&gt;
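
&lt;p&gt;As a sketch of what "reviewing the contract instead of the function" looks like (the interface and field names below are illustrative, not from a real codebase): the reviewer reads the boundary, and any implementation behind it is the AI's concern.&lt;/p&gt;

```typescript
// Hypothetical contract for a Stripe Module inside a Billing System.
// The reviewer reads this boundary; the code behind it is fair game
// for the AI, ternaries and all.
interface ChargeResult {
  id: string;
  amountCents: number;
  status: 'succeeded' | 'failed';
}

interface StripeModule {
  createCharge(customerId: string, amountCents: number): ChargeResult;
}

// Any implementation, human- or AI-written, must satisfy the contract.
const stripeModule: StripeModule = {
  createCharge: (customerId, amountCents) => ({
    id: 'ch_' + customerId,
    amountCents,
    status: 'succeeded',
  }),
};
```

&lt;p&gt;If a regenerated implementation stops satisfying &lt;code&gt;StripeModule&lt;/code&gt;, the type checker rejects it before any human looks at the diff.&lt;/p&gt;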

&lt;h2&gt;
  
  
  Strategy 2: Review Instructions, Not Code
&lt;/h2&gt;

&lt;p&gt;When you find a flaw in the architecture or logic, your instinct will be to fix the code. &lt;strong&gt;Resist it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If the AI generates code that violates your architecture, it is a &lt;strong&gt;mentorship failure&lt;/strong&gt;. Instead of rewriting the code, rewrite the &lt;strong&gt;instruction&lt;/strong&gt; or the &lt;strong&gt;system prompt&lt;/strong&gt; that generated it.&lt;/p&gt;

&lt;p&gt;This transforms you from a code reviewer into an architect of agency. As discussed in &lt;a href="https://ttoss.dev/blog/2025/12/17/from-scripter-to-architect" rel="noopener noreferrer"&gt;From Scripter to Architect&lt;/a&gt;, you are no longer the author of the flow; you are the architect of the boundaries. If you fix the code manually, you teach nothing and fix only one instance. If you fix the instruction, you train the "employee" that will write the next 100 features.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy 3: Automated Verification as the Safety Net
&lt;/h2&gt;

&lt;p&gt;The fear of not reviewing low-level code is reasonable: "What if there's a bug?"&lt;/p&gt;

&lt;p&gt;This brings us to the third strategy: &lt;strong&gt;Aggressive Automated Testing.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you stop reviewing functions, you lose the ability to spot subtle logical bugs by eye. You must replace that manual vigilance with &lt;a href="https://ttoss.dev/docs/engineering/guidelines/technical-debt#2-automated-verification-for-every-change" rel="noopener noreferrer"&gt;Automated Verification&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Tests are the guarantee that allows you to be "ignorant" of the details. When an engineer (or AI) needs to fix a bug in a low-level module they don't fully understand, the test suite acts as the guardrail. It ensures that fixing the Stripe integration doesn't break the User Auth flow.&lt;/p&gt;

&lt;p&gt;This approach implements &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-intrinsic-verification" rel="noopener noreferrer"&gt;The Principle of Intrinsic Verification&lt;/a&gt;. You are trading the high-friction cost of manual review for the upfront cost of writing robust tests. This allows you to escape the &lt;a href="https://ttoss.dev/blog/2025/12/18/from-reviewer-to-architect" rel="noopener noreferrer"&gt;AI Verification Trap&lt;/a&gt; and focus on system-level constraints rather than syntax.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy 4: Enforce Schema Supremacy
&lt;/h2&gt;

&lt;p&gt;You can't verify everything, but you &lt;em&gt;can&lt;/em&gt; verify the boundaries.&lt;/p&gt;

&lt;p&gt;Before an AI writes a single line of logic, you should strictly define the input and output schemas (Types, Interfaces, Zod schemas). This is &lt;strong&gt;&lt;a href="https://ttoss.dev/blog/2025/12/17/from-scripter-to-architect#corollary-1-schema-supremacy" rel="noopener noreferrer"&gt;Schema Supremacy&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you control the shape of the data entering and leaving a module, you care much less about how the data is transformed inside. You aren't reviewing the transformation logic; you are reviewing the strictness of the contract. If the AI respects the schema, the blast radius of bad code is contained.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategy 5: Observability as Interest Payments
&lt;/h2&gt;

&lt;p&gt;When you trade review depth for velocity, you take on risk. You pay the interest on that risk with &lt;strong&gt;Observability&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;You shift from "Is this code clean?" to "Is the system healthy?"&lt;/p&gt;

&lt;p&gt;Instead of agonizing over potential edge cases in code review, you ensure you have logs, metrics, and alerts that will scream if those edge cases happen in production. This aligns with &lt;a href="https://ttoss.dev/docs/engineering/guidelines/technical-debt#4-observability-as-interest-payments" rel="noopener noreferrer"&gt;Observability as Interest Payments&lt;/a&gt;. If you can't see it fail, you can't afford to let the AI write it without review.&lt;/p&gt;
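
&lt;p&gt;A minimal sketch of this trade (the event name, sink, and function are illustrative, not a real library API): instead of blocking the review on an edge case, make the edge case observable.&lt;/p&gt;

```typescript
type Fields = { [key: string]: unknown };

// Illustrative in-memory sink; in production this would be your log and
// metrics pipeline, wired to an alert.
const events: Fields[] = [];
const emit = (event: string, fields: Fields) => {
  events.push({ event, at: new Date().toISOString(), ...fields });
};

// An edge case you chose not to chase during review: make it scream in
// production instead of hiding in an unread diff.
const chargeCustomer = (customerId: string, amountCents: number) => {
  if (!(amountCents > 0)) {
    emit('billing.invalid_amount', { customerId, amountCents });
    return { ok: false };
  }
  return { ok: true };
};
```

&lt;p&gt;The structured event is the interest payment: if the edge case ever fires, you see it, page on it, and fix the instruction that produced it.&lt;/p&gt;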

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The decision to shift from reviewer to architect is about relevance and leverage.&lt;/p&gt;

&lt;p&gt;You can remain the gatekeeper who catches every missing semicolon, proudly "owning" every line of code while shipping one feature a month. Or you can evolve into the architect who defines the systems, authors the instructions, and enforces the verification loops—empowering an AI workforce to ship at the speed of compute.&lt;/p&gt;

&lt;p&gt;The first path leads to traditional engineering. The second path defines the future of engineering. Make your choice.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>software</category>
      <category>programming</category>
    </item>
    <item>
      <title>Tutorial: How to Serve REST and MCP on the Same Server</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Mon, 29 Dec 2025 09:44:37 +0000</pubDate>
      <link>https://dev.to/ttoss/tutorial-how-to-serve-rest-and-mcp-on-the-same-server-1a10</link>
      <guid>https://dev.to/ttoss/tutorial-how-to-serve-rest-and-mcp-on-the-same-server-1a10</guid>
      <description>&lt;p&gt;This example demonstrates how to create a single HTTP server that exposes both REST and MCP (Model Context Protocol) endpoints, sharing the same business logic.&lt;/p&gt;

&lt;p&gt;Using &lt;a href="https://ttoss.dev/docs/modules/packages/http-server/" rel="noopener noreferrer"&gt;&lt;code&gt;@ttoss/http-server&lt;/code&gt;&lt;/a&gt; and &lt;a href="https://ttoss.dev/docs/modules/packages/http-server-mcp/" rel="noopener noreferrer"&gt;&lt;code&gt;@ttoss/http-server-mcp&lt;/code&gt;&lt;/a&gt;, which are built on top of &lt;a href="https://koajs.com/" rel="noopener noreferrer"&gt;Koa&lt;/a&gt; and the official &lt;a href="https://github.com/modelcontextprotocol/typescript-sdk" rel="noopener noreferrer"&gt;Model Context Protocol TypeScript SDK&lt;/a&gt;, this example showcases how to integrate traditional REST APIs with AI-compatible MCP endpoints in a single application.&lt;/p&gt;

&lt;p&gt;Check the source code on GitHub to adapt to your case:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/ttoss/ttoss/blob/main/examples/http-server-mcp-calculator/src/index.ts" rel="noopener noreferrer"&gt;This example&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ttoss/ttoss/blob/main/packages/http-server/src/index.ts" rel="noopener noreferrer"&gt;@ttoss/http-server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/ttoss/ttoss/blob/main/packages/http-server-mcp/src/index.ts" rel="noopener noreferrer"&gt;@ttoss/http-server-mcp&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Features
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Shared Business Logic&lt;/strong&gt;: A simple &lt;code&gt;sum&lt;/code&gt; function used by both REST and MCP endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REST Endpoint&lt;/strong&gt;: &lt;code&gt;POST /sum&lt;/code&gt; - Traditional HTTP API&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;MCP Endpoint&lt;/strong&gt;: &lt;code&gt;POST /mcp&lt;/code&gt; - AI-compatible MCP protocol&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Single Server&lt;/strong&gt;: Both endpoints run on the same server instance&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Running the Example
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# Clone the ttoss repository&lt;/span&gt;
git clone https://github.com/ttoss/ttoss.git
&lt;span class="nb"&gt;cd &lt;/span&gt;ttoss

&lt;span class="c"&gt;# Install dependencies from monorepo root&lt;/span&gt;
pnpm &lt;span class="nb"&gt;install&lt;/span&gt;

&lt;span class="c"&gt;# Navigate to this example&lt;/span&gt;
&lt;span class="nb"&gt;cd &lt;/span&gt;examples/http-server-mcp-calculator

&lt;span class="c"&gt;# Run the example&lt;/span&gt;
pnpm dev
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The server will start on &lt;code&gt;http://localhost:3000&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Testing the Endpoints
&lt;/h2&gt;

&lt;h3&gt;
  
  
  REST Endpoint
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-X&lt;/span&gt; POST http://localhost:3000/sum &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{"a": 5, "b": 3}'&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Expected response:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"result"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"operation"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"5 + 3 = 8"&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  MCP Endpoint
&lt;/h3&gt;

&lt;p&gt;The MCP endpoint follows the &lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol specification&lt;/a&gt; and can be called by AI assistants like Claude Desktop.&lt;/p&gt;

&lt;p&gt;Configure in Claude Desktop's &lt;code&gt;claude_desktop_config.json&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"mcpServers"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"calculator"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"url"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"http://localhost:3000/mcp"&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or test with the MCP Inspector CLI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# List available tools&lt;/span&gt;
npx @modelcontextprotocol/inspector &lt;span class="nt"&gt;--cli&lt;/span&gt; http://localhost:3000/mcp &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--transport&lt;/span&gt; http &lt;span class="nt"&gt;--method&lt;/span&gt; tools/list

&lt;span class="c"&gt;# Call the sum tool&lt;/span&gt;
npx @modelcontextprotocol/inspector &lt;span class="nt"&gt;--cli&lt;/span&gt; http://localhost:3000/mcp &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--transport&lt;/span&gt; http &lt;span class="nt"&gt;--method&lt;/span&gt; tools/call &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--tool-name&lt;/span&gt; &lt;span class="nb"&gt;sum&lt;/span&gt; &lt;span class="nt"&gt;--tool-arg&lt;/span&gt; &lt;span class="nv"&gt;a&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5 &lt;span class="nt"&gt;--tool-arg&lt;/span&gt; &lt;span class="nv"&gt;b&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Key Concepts
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Shared Business Logic
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Single source of truth&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sum&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="kr"&gt;number&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This function is reused by both REST and MCP endpoints, ensuring consistency.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. REST Endpoint
&lt;/h3&gt;

&lt;p&gt;Traditional HTTP API that directly uses the business logic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;router&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;post&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/sum&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;request&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;body&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;operation&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; + &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; = &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="p"&gt;};&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  3. MCP Endpoint
&lt;/h3&gt;

&lt;p&gt;AI-compatible endpoint that also uses the same business logic:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;mcpServer&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;registerTool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;sum&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Add two numbers together&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;inputSchema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;a&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;number&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;First number&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
      &lt;span class="na"&gt;b&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;number&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;describe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Second number&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;sum&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;content&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[{&lt;/span&gt; &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;text&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="na"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;a&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; + &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;b&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt; = &lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;`&lt;/span&gt; &lt;span class="p"&gt;}],&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  4. Single Server Instance
&lt;/h3&gt;

&lt;p&gt;Both endpoints are mounted on the same Koa app:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;app&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;App&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;restRouter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;routes&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt; &lt;span class="c1"&gt;// REST endpoints&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;use&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;mcpRouter&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;routes&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt; &lt;span class="c1"&gt;// MCP endpoints&lt;/span&gt;
&lt;span class="nx"&gt;app&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;3000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Benefits of This Architecture
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Code Reuse&lt;/strong&gt;: Business logic is written once, used everywhere&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistency&lt;/strong&gt;: Same validation and behavior across protocols&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Maintainability&lt;/strong&gt;: Changes to business logic automatically apply to all endpoints&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: Support both traditional clients (REST) and AI assistants (MCP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource Efficiency&lt;/strong&gt;: Single server process handles all traffic&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Learn More
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://ttoss.dev/docs/modules/packages/http-server/" rel="noopener noreferrer"&gt;@ttoss/http-server&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://ttoss.dev/docs/modules/packages/http-server-mcp/" rel="noopener noreferrer"&gt;@ttoss/http-server-mcp&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://modelcontextprotocol.io" rel="noopener noreferrer"&gt;Model Context Protocol&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>mcp</category>
      <category>typescript</category>
      <category>api</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Coding is Now a Commodity</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Sat, 27 Dec 2025 13:49:00 +0000</pubDate>
      <link>https://dev.to/ttoss/coding-is-now-a-commodity-19j6</link>
      <guid>https://dev.to/ttoss/coding-is-now-a-commodity-19j6</guid>
      <description>&lt;p&gt;Farming didn't disappear when tractors arrived—it evolved. Software development is undergoing the same transformation with AI. The shift is from manual coding to system architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Commoditization" of Coding
&lt;/h2&gt;

&lt;p&gt;A few years ago, "React Developer" was a sought-after title. Today, writing React, JavaScript, GraphQL, or managing basic cloud setups has become a commodity. If the "how-to-code" is now a commodity, what isn't?&lt;/p&gt;

&lt;h2&gt;
  
  
  The Three Phases of Development
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq0xueja5ybqzs3d1fqx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq0xueja5ybqzs3d1fqx.png" alt=" " width="721" height="393"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Low-Level Era&lt;/strong&gt;: Early on, energy was spent on binary code and compilers. We struggled with the machine's language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Framework Era&lt;/strong&gt;: Development evolved so we could see changes in real-time. We spent 100% of our time mastering specific tools like React, MongoDB, and Cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The AI Era&lt;/strong&gt;: You no longer need to master every syntax detail. The AI handles the "labor," freeing you to focus on the bigger picture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The New Skill Set
&lt;/h2&gt;

&lt;p&gt;If AI writes the code, your value now lies in &lt;strong&gt;system design&lt;/strong&gt;—understanding how systems connect, managing global state, data consistency, and scalability. This means knowing the difference between eventual consistency and strong consistency, understanding when to use synchronous versus asynchronous patterns, not just how to write a React component.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Problem decomposition&lt;/strong&gt; becomes critical. The ability to break complex problems into small, precise tasks that an AI can execute accurately is the new bottleneck. Following &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles" rel="noopener noreferrer"&gt;agentic development principles&lt;/a&gt; ensures you can direct AI effectively. Each task must be atomic, testable, and unambiguous.&lt;/p&gt;

&lt;p&gt;You will spend more time &lt;strong&gt;reading code than writing it&lt;/strong&gt;. Code review shifts from checking syntax to auditing for security flaws, performance bottlenecks, and catching AI hallucinations. Understanding what the system does becomes more valuable than knowing how to write it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Infrastructure and observability&lt;/strong&gt; matter more than ever. Knowing where the code runs, how it scales, and—most importantly—how to debug when it breaks separates effective developers from those still thinking in framework terms. Logs, metrics, and traces become your primary interface.&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;collaboration and communication&lt;/strong&gt; with cross-functional teams ensures the system meets user needs and business goals. The ability to translate business requirements into system constraints and explain technical trade-offs to non-technical stakeholders is increasingly valuable.&lt;/p&gt;

&lt;p&gt;The focus has shifted from the "bricks" to the "blueprint."&lt;/p&gt;

</description>
      <category>ai</category>
      <category>softwareengineering</category>
      <category>agenticdevelopment</category>
    </item>
    <item>
      <title>Why You Can't "Manage" Code You Don't Understand</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Mon, 22 Dec 2025 01:39:52 +0000</pubDate>
      <link>https://dev.to/ttoss/why-you-cant-manage-code-you-dont-understand-4idd</link>
      <guid>https://dev.to/ttoss/why-you-cant-manage-code-you-dont-understand-4idd</guid>
      <description>&lt;p&gt;A common question in the age of AI is: &lt;em&gt;"If AI writes the code, do developers just become Product Managers?"&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The answer is &lt;strong&gt;No&lt;/strong&gt;, and the reason lies in &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-contextual-authority" rel="noopener noreferrer"&gt;The Principle of Contextual Authority&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Product Managers primarily own the &lt;em&gt;Problem Space&lt;/em&gt; (user needs, market fit, value). Engineers primarily own the &lt;em&gt;Solution Space&lt;/em&gt; (architecture, reliability, maintainability).&lt;/p&gt;

&lt;p&gt;If a PM doesn't understand the market, they build the wrong product. If an Engineer doesn't understand the system, they build a &lt;em&gt;fragile&lt;/em&gt; product.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "How" Contains the Risk
&lt;/h2&gt;

&lt;p&gt;When a developer delegates to AI without maintaining ownership, they are trying to outsource the Solution Space. The story becomes: "The AI handles the 'how', I just handle the 'what'." But the "how" contains the technical risk you can't outsource.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The "how" determines if the database locks up under load.&lt;/li&gt;
&lt;li&gt;The "how" determines if the security model is valid.&lt;/li&gt;
&lt;li&gt;The "how" determines if the system can be extended next month.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the engineering team delegates the "how" to AI without maintaining deep understanding, the codebase becomes a liability. Worse, the engineers don't become PMs; they become custodians of technical debt they can't reason about.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Contractor Trap
&lt;/h2&gt;

&lt;p&gt;When you delegate a task to an AI because you &lt;em&gt;don't understand the code&lt;/em&gt; (or don't want to deal with its complexity), you are acting as a &lt;strong&gt;Contractor&lt;/strong&gt;. You are using the AI as a shield against complexity. The AI produces a "black box" patch that closes the immediate ticket, but you can't predict its impact on the rest of the system.&lt;/p&gt;

&lt;p&gt;If you do this enough times, you lose your mental model of the software. You become a "Product Manager of Code"—someone who can describe &lt;em&gt;what&lt;/em&gt; they want, but can't reliably explain &lt;em&gt;why&lt;/em&gt; it works or &lt;em&gt;when&lt;/em&gt; it will fail.&lt;/p&gt;

&lt;p&gt;And unlike a real Product Manager who relies on an engineering team to ensure structural integrity, you're relying on a model that (without strong feedback loops) is optimized for plausible output, not system guarantees.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Architect of Agency
&lt;/h2&gt;

&lt;p&gt;The best strategy for scaling is not to become a manager of black boxes, but to become an &lt;strong&gt;Architect of Agency&lt;/strong&gt;. You use AI to execute, but you rigorously audit the output against your mental model. You trade the low-leverage work of syntax generation for the high-leverage work of &lt;strong&gt;System Verification&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This requires &lt;a href="https://ttoss.dev/docs/ai/agentic-design-patterns#ownership-preserving-delegation" rel="noopener noreferrer"&gt;Ownership-Preserving Delegation&lt;/a&gt;. Don't demand "trust me" output. Demand an audit trail: tests that pin behavior, clear invariants, notes about trade-offs, and a narrative diff you can reason about before you accept the change.&lt;/p&gt;
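
&lt;p&gt;A "test that pins behavior" can be as small as this sketch. The function and its discount rule are hypothetical; the point is that the assertions survive any AI rewrite of the internals:&lt;/p&gt;

```typescript
// Hypothetical function under delegation: the AI may regenerate its
// internals, but these assertions pin the externally observable behavior.
function applyDiscount(total: number, isMember: boolean): number {
  if (isMember) {
    return Math.round(total * 0.9 * 100) / 100; // 10% member discount
  }
  return total;
}

// Pinning tests: any regenerated implementation must keep these invariants.
console.assert(applyDiscount(100, true) === 90);
console.assert(applyDiscount(100, false) === 100);
console.assert(applyDiscount(0, true) === 0); // no negative totals invented
```

&lt;p&gt;When the agent hands back a diff, these assertions are the audit trail: they tell you which behavior was preserved without your having to re-derive it from the code.&lt;/p&gt;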

&lt;p&gt;You don't lose ownership by delegating; you lose it when you stop looking.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>product</category>
      <category>softwaredevelopment</category>
    </item>
    <item>
      <title>From Reviewer to Architect: Escaping the AI Verification Trap</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Thu, 18 Dec 2025 18:15:29 +0000</pubDate>
      <link>https://dev.to/ttoss/from-reviewer-to-architect-escaping-the-ai-verification-trap-3eg0</link>
      <guid>https://dev.to/ttoss/from-reviewer-to-architect-escaping-the-ai-verification-trap-3eg0</guid>
      <description>&lt;p&gt;There's a moment every engineering manager experiences after adopting AI coding tools. The initial excitement—"We're shipping features twice as fast!"—slowly curdles into a disturbing realization: "Wait, why are my senior engineers spending hours manually testing for regressions that proper automated tests could catch in seconds?"&lt;/p&gt;

&lt;p&gt;This is the AI Verification Trap, and there's only one way out.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Trap
&lt;/h2&gt;

&lt;p&gt;The trap isn't that AI makes you slower—it's that AI shifts the bottleneck to where your most expensive resources are doing the cheapest work.&lt;/p&gt;

&lt;p&gt;Here's how it unfolds:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;You adopt AI coding agents&lt;/li&gt;
&lt;li&gt;Code generation accelerates 5-10x&lt;/li&gt;
&lt;li&gt;Your review queue grows proportionally&lt;/li&gt;
&lt;li&gt;Engineers spend their days catching type errors, formatting issues, and broken tests&lt;/li&gt;
&lt;li&gt;High-value work (architecture, business logic, innovation) gets squeezed out&lt;/li&gt;
&lt;li&gt;You're shipping faster, but your engineering capacity is misallocated&lt;/li&gt;
&lt;li&gt;Competitors who automated verification are shipping faster AND building better&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This trap is a direct consequence of &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-verification-asymmetry" rel="noopener noreferrer"&gt;The Principle of Verification Asymmetry&lt;/a&gt;: generating AI output is cheap, verifying it is expensive. When you 10x generation without automating verification, you create a misallocation crisis—expensive human attention spent on problems machines could solve.&lt;/p&gt;

&lt;h2&gt;
  
  
  The False Dichotomy
&lt;/h2&gt;

&lt;p&gt;Most teams see only two options:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option A: Rigorous Review&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every AI-generated PR receives full human scrutiny. Engineers catch everything—formatting issues, type errors, test failures, security vulnerabilities, AND business logic problems. Velocity improves over pre-AI baselines, but engineers are exhausted from reviewing PRs that should never have reached them. A senior engineer earning $200/hour spends 30 minutes catching a missing semicolon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option B: Trust the Machine&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reduce review friction. If tests pass, ship it. Velocity spikes dramatically. Six months later, the codebase is an unmaintainable disaster of subtle bugs and architectural violations that no human ever validated.&lt;/p&gt;

&lt;p&gt;Both options waste resources. Option A wastes engineering talent on automatable work. Option B wastes future velocity on technical debt. The trap seems inescapable.&lt;/p&gt;

&lt;p&gt;But there's a third option that most teams miss.&lt;/p&gt;

&lt;h2&gt;
  
  
  Option C: Stop Reviewing, Start Architecting
&lt;/h2&gt;

&lt;p&gt;The insight is this: &lt;strong&gt;not all verification requires human judgment.&lt;/strong&gt; Most of what engineers catch in review—formatting, types, test failures, complexity violations—can be caught by machines at near-zero cost.&lt;/p&gt;

&lt;p&gt;The solution is to &lt;strong&gt;architect verification systems that filter out automatable problems before they reach human eyes.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the shift from "engineer as reviewer" to "engineer as architect." Instead of spending 30 minutes reviewing each PR (catching issues a linter could find), you spend 30 hours building systems that filter 1,000 PRs automatically—so human review focuses only on what humans do best: validating intent, architecture, and business logic.&lt;/p&gt;
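
&lt;p&gt;The filtering idea can be sketched as a chain of cheap machine checks: a change consumes human attention only if every gate passes. The gate names and the "Change" shape below are illustrative, not a real CI API:&lt;/p&gt;

```typescript
// Sketch: each gate is a cheap machine check; a change reaches human
// review only if every gate passes. Gate names are illustrative.
interface Change {
  id: string;
  lintClean: boolean;
  typesClean: boolean;
  testsPass: boolean;
}

function lintGate(c: Change): boolean { return c.lintClean; }
function typeGate(c: Change): boolean { return c.typesClean; }
function testGate(c: Change): boolean { return c.testsPass; }

function passesAutomatedGates(c: Change): boolean {
  // Only changes that clear every automated gate consume human attention.
  return [lintGate, typeGate, testGate].every(function (gate) {
    return gate(c);
  });
}

const clean: Change = {
  id: "pr-1", lintClean: true, typesClean: true, testsPass: true,
};
const broken: Change = {
  id: "pr-2", lintClean: false, typesClean: true, testsPass: true,
};

console.assert(passesAutomatedGates(clean) === true);
console.assert(passesAutomatedGates(broken) === false); // bounced by machines
```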

&lt;h2&gt;
  
  
  The Automated Verification Pipeline
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://ttoss.dev/docs/ai/agentic-design-patterns#automated-verification-pipeline" rel="noopener noreferrer"&gt;Automated Verification Pipeline&lt;/a&gt; pattern provides the architectural blueprint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;┌─────────────────────────────────────────────────────────────────────┐
│                    THE VERIFICATION FUNNEL                          │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│   AI Output (100 PRs)                                               │
│        │                                                            │
│        ▼                                                            │
│   ┌─────────────┐                                                   │
│   │   Linters   │ ──► 20 PRs sent back (formatting issues)         │
│   └─────────────┘                                                   │
│        │ 80 PRs                                                     │
│        ▼                                                            │
│   ┌─────────────┐                                                   │
│   │ Type Check  │ ──► 15 PRs sent back (type errors)               │
│   └─────────────┘                                                   │
│        │ 65 PRs                                                     │
│        ▼                                                            │
│   ┌─────────────┐                                                   │
│   │ Unit Tests  │ ──► 25 PRs sent back (behavioral regression)     │
│   └─────────────┘                                                   │
│        │ 40 PRs                                                     │
│        ▼                                                            │
│   ┌─────────────┐                                                   │
│   │ Complexity  │ ──► 10 PRs sent back (exceeded thresholds)       │
│   │   Gates     │                                                   │
│   └─────────────┘                                                   │
│        │ 30 PRs                                                     │
│        ▼                                                            │
│   ┌─────────────┐                                                   │
│   │  Security   │ ──► 5 PRs sent back (vulnerability detected)     │
│   │  Scanners   │                                                   │
│   └─────────────┘                                                   │
│        │ 25 PRs                                                     │
│        ▼                                                            │
│   ┌─────────────┐                                                   │
│   │   Human     │ ──► 25 PRs reviewed (semantic validation only)   │
│   │   Review    │                                                   │
│   └─────────────┘                                                   │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;From 100 AI-generated PRs, only 25 require human attention. And those 25 have already passed syntax, type, behavioral, complexity, and security checks. The human reviewer's job transforms from "catch all problems" to "validate intent and architecture"—the work that actually requires human judgment.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Economics of Misallocation
&lt;/h2&gt;

&lt;p&gt;The real cost isn't productivity—it's opportunity cost. Let's quantify:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Without Automated Filtering:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100 PRs × 30 min/review = 50 hours/day of review&lt;/li&gt;
&lt;li&gt;Team capacity: 8 engineers × 8 hours = 64 hours&lt;/li&gt;
&lt;li&gt;Review consumes 78% of team capacity&lt;/li&gt;
&lt;li&gt;Of that review time, ~70% is spent catching issues machines could catch&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;35 hours/day of senior engineering talent wasted on automatable work&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Remaining capacity for architecture, innovation, complex problems: 14 hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;With Automated Verification Pipeline:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;100 PRs, 75 auto-filtered before human review&lt;/li&gt;
&lt;li&gt;25 PRs × 15 min/review (pre-filtered, focused review) = 6.25 hours/day&lt;/li&gt;
&lt;li&gt;Review consumes 10% of team capacity&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;0 hours wasted on automatable issues&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Remaining capacity for high-value work: 57.75 hours&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both teams ship the same features. But the second team has 4x more capacity for the work that actually differentiates: architecture, innovation, complex problem-solving, and building the next generation of products.&lt;/p&gt;

&lt;p&gt;The ROI of verification infrastructure isn't about shipping more—it's about reallocating engineering capacity from low-value review to high-value creation.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Architects Build
&lt;/h2&gt;

&lt;p&gt;The engineer-as-architect focuses on these leverage points:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Test Infrastructure
&lt;/h3&gt;

&lt;p&gt;Not just writing tests, but building systems that make testing frictionless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test generators that create coverage from specifications&lt;/li&gt;
&lt;li&gt;Mutation testing to validate test quality&lt;/li&gt;
&lt;li&gt;Property-based testing for edge case discovery&lt;/li&gt;
&lt;li&gt;Visual regression testing for UI components&lt;/li&gt;
&lt;/ul&gt;
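
&lt;p&gt;To make the property-based idea concrete, here is a hand-rolled sketch; real projects would typically reach for a library such as fast-check. The property: sorting preserves length and is idempotent across randomly generated inputs:&lt;/p&gt;

```typescript
// Hand-rolled illustration of property-based testing. Instead of a few
// hand-picked cases, we assert invariants over many generated inputs.
function sortNumbers(xs: number[]): number[] {
  return xs.slice().sort(function (a, b) {
    return a - b;
  });
}

function randomArray(maxLen: number): number[] {
  const len = Math.floor(Math.random() * maxLen);
  const out: number[] = [];
  for (let i = 0; i !== len; i += 1) {
    out.push(Math.floor(Math.random() * 1000) - 500);
  }
  return out;
}

let holds = true;
for (let run = 0; run !== 200; run += 1) {
  const xs = randomArray(20);
  const once = sortNumbers(xs);
  const twice = sortNumbers(once);
  if (once.length !== xs.length) holds = false; // length preserved
  if (JSON.stringify(once) !== JSON.stringify(twice)) holds = false; // idempotent
}
console.assert(holds); // the properties held across all generated cases
```

&lt;p&gt;The leverage is that edge cases you never thought to enumerate (empty arrays, duplicates, negatives) are discovered by the generator, not by a human reviewer.&lt;/p&gt;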

&lt;h3&gt;
  
  
  2. Static Analysis
&lt;/h3&gt;

&lt;p&gt;Configuring and extending linters to catch domain-specific issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Custom ESLint rules for architectural violations&lt;/li&gt;
&lt;li&gt;Type-level constraints that prevent invalid states&lt;/li&gt;
&lt;li&gt;Import boundary enforcement&lt;/li&gt;
&lt;/ul&gt;
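
&lt;p&gt;For example, an import boundary can be enforced with ESLint's built-in "no-restricted-imports" rule; the restricted path below is a placeholder for your own module boundaries:&lt;/p&gt;

```javascript
// eslint.config.js -- a sketch using a core ESLint rule only; the
// "src/internal" path is a placeholder for your own boundaries.
module.exports = [
  {
    rules: {
      // Import boundary enforcement: feature code may not reach into
      // another module's internals, and the machine says so, not a reviewer.
      "no-restricted-imports": [
        "error",
        { patterns: ["src/internal/*"] },
      ],
    },
  },
];
```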

&lt;h3&gt;
  
  
  3. Complexity Gates
&lt;/h3&gt;

&lt;p&gt;Automated guardrails that prevent entropy:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cyclomatic complexity thresholds per function&lt;/li&gt;
&lt;li&gt;File size limits&lt;/li&gt;
&lt;li&gt;Dependency graph analysis&lt;/li&gt;
&lt;li&gt;Breaking change detection&lt;/li&gt;
&lt;/ul&gt;
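
&lt;p&gt;Several of these gates are expressible with core ESLint rules alone; the thresholds below are illustrative starting points, not recommendations:&lt;/p&gt;

```javascript
// eslint.config.js -- entropy guardrails expressible with core ESLint
// rules. Thresholds are illustrative; tune them to your codebase.
module.exports = [
  {
    rules: {
      complexity: ["error", { max: 10 }], // cyclomatic complexity per function
      "max-lines": ["error", { max: 300, skipBlankLines: true }],
      "max-depth": ["error", 4], // nesting depth cap
    },
  },
];
```

&lt;p&gt;The value of gates like these is that an AI-generated PR exceeding the thresholds bounces automatically, before a human ever opens the diff.&lt;/p&gt;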

&lt;h3&gt;
  
  
  4. AI-Assisted Review
&lt;/h3&gt;

&lt;p&gt;Using AI to pre-review AI output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A cheaper, faster model scans for common issues&lt;/li&gt;
&lt;li&gt;Flags potential problems for human attention&lt;/li&gt;
&lt;li&gt;Generates review summaries&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Feedback Loops
&lt;/h3&gt;

&lt;p&gt;Systems that learn from past failures:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Post-incident analysis feeds into new automated checks&lt;/li&gt;
&lt;li&gt;Bug patterns become linter rules&lt;/li&gt;
&lt;li&gt;Security vulnerabilities become scanner signatures&lt;/li&gt;
&lt;/ul&gt;
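
&lt;p&gt;As a sketch of how an incident becomes a permanent check: a bug caused by an omitted radix in "parseInt" (which older engines treated as octal for leading-zero strings) can be encoded as ESLint's built-in "radix" rule rather than left as tribal knowledge:&lt;/p&gt;

```javascript
// eslint.config.js -- encoding a past incident as a permanent automated
// check. Example incident: parseInt("08") misparsed on an old runtime
// because the radix was omitted. The postmortem action item is a rule.
module.exports = [
  {
    rules: {
      radix: "error", // every parseInt call must pass an explicit radix
    },
  },
];
```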

&lt;h2&gt;
  
  
  The Role Transformation
&lt;/h2&gt;

&lt;p&gt;This shift changes what it means to be a senior engineer:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Old Role (Reviewer)&lt;/th&gt;
&lt;th&gt;New Role (Architect)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Reviews PRs manually&lt;/td&gt;
&lt;td&gt;Builds systems that review automatically&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Catches bugs by reading code&lt;/td&gt;
&lt;td&gt;Catches bugs by writing tests&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Validates formatting&lt;/td&gt;
&lt;td&gt;Configures linters&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Checks for security issues&lt;/td&gt;
&lt;td&gt;Deploys security scanners&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ensures consistency&lt;/td&gt;
&lt;td&gt;Enforces consistency via automation&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The senior engineer's value is no longer in their ability to spot bugs—it's in their ability to build systems that spot bugs at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The AI Verification Trap is real, but it's not about productivity—it's about allocation. Teams that fall into the trap aren't slower; they're misallocated. Their best engineers spend hours catching problems that machines could catch in seconds.&lt;/p&gt;

&lt;p&gt;The transition from reviewer to architect isn't just an optimization. It's a fundamental reallocation of engineering capacity. Every hour your senior engineers spend catching type errors is an hour they don't spend designing systems, mentoring juniors, or solving the hard problems that actually require human creativity.&lt;/p&gt;

&lt;p&gt;If your team is drowning in review queues full of formatting issues and broken tests, the answer isn't to review faster. It's to build the systems that filter out automatable problems—so human review focuses on what humans do best.&lt;/p&gt;

&lt;p&gt;The future doesn't belong to the teams with the most patient reviewers. It belongs to the teams that freed their engineers to be architects.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>softwareengineering</category>
      <category>architecture</category>
    </item>
    <item>
      <title>From Scripter to Architect in the Age of AI</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Wed, 17 Dec 2025 14:00:36 +0000</pubDate>
      <link>https://dev.to/ttoss/from-scripter-to-architect-in-the-age-of-ai-2f4p</link>
      <guid>https://dev.to/ttoss/from-scripter-to-architect-in-the-age-of-ai-2f4p</guid>
      <description>&lt;p&gt;For decades, the job of a software engineer was to write the "happy path." We spent our days scripting the exact sequence of events: fetch data, transform it, display it. We were the authors of the flow.&lt;/p&gt;

&lt;p&gt;With the rise of Applied AI, that role is fundamentally changing. When an LLM generates the logic, we are no longer the scripters. We are the architects of the boundaries.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Principle of Constraint-Driven Architecture
&lt;/h2&gt;

&lt;p&gt;This shift is captured in &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-constraint-driven-architecture" rel="noopener noreferrer"&gt;The Principle of Constraint-Driven Architecture&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Because AI models are probabilistic (see &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-probabilistic-ai-output" rel="noopener noreferrer"&gt;The Principle of Probabilistic AI Output&lt;/a&gt;), they don't follow instructions with 100% reliability. They follow probability distributions. If you rely on them to "just do the right thing," you are gambling with your system's stability.&lt;/p&gt;

&lt;p&gt;Your new job is to construct the guardrails that keep that probabilistic engine on the tracks.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Failure Scenario: The Trusting Developer
&lt;/h2&gt;

&lt;p&gt;The "Trusting Developer" writes a prompt and assumes the AI understands the &lt;em&gt;intent&lt;/em&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ❌ The Trusting Developer Approach&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`
    Extract the address from this email: "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;emailBody&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;".
    Please be careful not to include the signature.
    Also, ensure the zip code is valid.
`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="c1"&gt;// Result: "123 Main St, Springfield, IL 62704\nSent from my iPhone"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The developer then spends hours tweaking the prompt to "exclude signatures," playing a game of whack-a-mole with edge cases. This is a failure of architecture, not prompting.&lt;/p&gt;

&lt;h2&gt;
  
  
  Corollary 1: Schema Supremacy
&lt;/h2&gt;

&lt;p&gt;To succeed, you must embrace &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-corollary-of-schema-supremacy" rel="noopener noreferrer"&gt;The Corollary of Schema Supremacy&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This corollary states a simple rule: &lt;strong&gt;If it can be code, it shouldn't be English.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instead of asking the model to "be careful," you define a rigid schema using tools like Zod:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// ✅ The Architect Approach&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;zod&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;AddressSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;street&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;city&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;zipCode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;regex&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sr"&gt;/^&lt;/span&gt;&lt;span class="se"&gt;\d{5}(&lt;/span&gt;&lt;span class="sr"&gt;-&lt;/span&gt;&lt;span class="se"&gt;\d{4})?&lt;/span&gt;&lt;span class="sr"&gt;$/&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="c1"&gt;// Enforced in code, not prompt&lt;/span&gt;
  &lt;span class="na"&gt;deliveryDate&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;date&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;min&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;()),&lt;/span&gt; &lt;span class="c1"&gt;// Enforced in code, not prompt&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="c1"&gt;// The LLM is forced to generate JSON that matches this shape&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateStructured&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;AddressSchema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`Extract the address from: "&lt;/span&gt;&lt;span class="p"&gt;${&lt;/span&gt;&lt;span class="nx"&gt;emailBody&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You don't ask the model to respect the rules; you build a system where breaking the rules is impossible. The model's output is treated as "untrusted user input" that must be sanitized and validated before it ever touches your business logic.&lt;/p&gt;

&lt;h2&gt;
  
  
  Corollary 2: The Probabilistic Funnel
&lt;/h2&gt;

&lt;p&gt;Visualizing this new architecture leads us to &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-corollary-of-the-probabilistic-funnel" rel="noopener noreferrer"&gt;The Corollary of The Probabilistic Funnel&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Your system design is a funnel that progressively removes randomness.&lt;/p&gt;

&lt;h3&gt;
  
  
  Stage 1: The Wide Mouth (Prompting)
&lt;/h3&gt;

&lt;p&gt;At the top, we allow high entropy. We want the model to be creative and understand vague user intent.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// 1. User Intent (High Entropy)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;userRequest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;
  &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;I want to book a flight to Paris next week, maybe Tuesday?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Stage 2: The Neck (Structured Generation)
&lt;/h3&gt;

&lt;p&gt;We force that vague intent into a rigid structure using schemas. This reduces entropy but doesn't eliminate it (the model could still hallucinate a date).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// 2. Structure (Medium Entropy)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;BookingSchema&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;object&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
  &lt;span class="na"&gt;date&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;string&lt;/span&gt;&lt;span class="p"&gt;().&lt;/span&gt;&lt;span class="nf"&gt;date&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="c1"&gt;// YYYY-MM-DD&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;structuredData&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;llm&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generateStructured&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;BookingSchema&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;userRequest&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;span class="c1"&gt;// Result: { destination: "Paris", date: "2023-12-25" }&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Stage 3: The Filter (Deterministic Validation)
&lt;/h3&gt;

&lt;p&gt;Finally, we apply deterministic business logic. This is the "zero entropy" zone. If the data doesn't pass, it never touches the database.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// 3. Validation (Zero Entropy)&lt;/span&gt;
&lt;span class="kd"&gt;function&lt;/span&gt; &lt;span class="nf"&gt;validateBooking&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;booking&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;z&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;infer&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="k"&gt;typeof&lt;/span&gt; &lt;span class="nx"&gt;BookingSchema&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;date&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;booking&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;date&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;today&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Date&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;date&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;today&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;throw&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Cannot book flights in the past.&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;date&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getDay&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;!==&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Tuesday check&lt;/span&gt;
    &lt;span class="c1"&gt;// Handle logic...&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As an architect, your job is to design these filters. You decide where the creativity stops and the reliability begins. You are no longer writing the story; you are building the walls that ensure the story doesn't spill over into chaos.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>career</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How to Stop AI From Ruining Your Architecture</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Tue, 16 Dec 2025 13:18:35 +0000</pubDate>
      <link>https://dev.to/ttoss/how-to-stop-ai-from-ruining-your-architecture-403h</link>
      <guid>https://dev.to/ttoss/how-to-stop-ai-from-ruining-your-architecture-403h</guid>
      <description>&lt;p&gt;We are witnessing a new phenomenon in AI-assisted teams: &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-zero-cost-erosion" rel="noopener noreferrer"&gt;The Principle of Zero-Cost Erosion&lt;/a&gt;&lt;/strong&gt;.&lt;br&gt;
Because AI makes adding complexity (patching) nearly free, while refactoring remains expensive (requiring deep thought), teams are defaulting to infinite patching.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Old vs. New Reality
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pre-AI:&lt;/strong&gt; Cost(Patch) &amp;gt; Cost(Refactor) -&amp;gt; &lt;strong&gt;Trigger to Refactor.&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Post-AI:&lt;/strong&gt; Cost(Patch) ~ 0 -&amp;gt; &lt;strong&gt;Never Refactor.&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This economic imbalance leads to "Zero-Cost Erosion," where systems degrade rapidly because "just one more if-statement" is always the path of least resistance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hidden Costs
&lt;/h2&gt;

&lt;p&gt;Beyond the obvious technical debt, this erosion creates two invisible taxes on your team. First, there's the &lt;strong&gt;Token Tax&lt;/strong&gt;: unrefactored, verbose code consumes more tokens in the context window, making every future interaction with that file more expensive (in API costs) and less intelligent (because the context window is filled with noise). Refactoring is an investment in &lt;em&gt;cheaper, smarter future agents&lt;/em&gt;. Second, there's the &lt;strong&gt;Reviewer's Asymmetry&lt;/strong&gt;: it takes longer to &lt;em&gt;review&lt;/em&gt; complex code than to &lt;em&gt;generate&lt;/em&gt; it with AI. Without a brake, your senior engineers (the reviewers) become the bottleneck, drowning in "LGTM" fatigue.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Consequences
&lt;/h2&gt;

&lt;p&gt;If left unchecked, this leads to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Hardened Technical Debt:&lt;/strong&gt; AI cements bad patterns by mimicking them.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Operational Risk:&lt;/strong&gt; Unrefactored code has higher entropy and failure rates.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Review Fatigue:&lt;/strong&gt; Humans cannot effectively review the high-volume, high-complexity output.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Solution: The Complexity Brake
&lt;/h2&gt;

&lt;p&gt;We need to engineer a solution that forces refactoring. This is the practical application of &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-artificial-friction" rel="noopener noreferrer"&gt;The Principle of Artificial Friction&lt;/a&gt;&lt;/strong&gt;: when technology removes natural friction, we must engineer artificial barriers to prevent collapse.&lt;/p&gt;

&lt;p&gt;We call this specific implementation &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-design-patterns#the-complexity-brake" rel="noopener noreferrer"&gt;The Complexity Brake&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  How to Implement It
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Quantify Complexity:&lt;/strong&gt; Integrate a linter (like SonarQube or ESLint complexity rules) into your CI pipeline.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Set Thresholds:&lt;/strong&gt; Define a maximum complexity score for functions (e.g., 10) and classes.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Veto:&lt;/strong&gt; If an AI agent submits a PR that increases the complexity of a file beyond the threshold, the PR is &lt;strong&gt;automatically rejected&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Refactor Mandate:&lt;/strong&gt; The agent receives the rejection with a specific instruction: &lt;em&gt;"Complexity too high. You must refactor the existing code to reduce complexity below X before adding the new feature."&lt;/em&gt;
&lt;/li&gt;
&lt;/ol&gt;
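
&lt;p&gt;As a rough sketch of how the veto might look in practice: the snippet below uses a deliberately naive branch-counting heuristic as a stand-in for a real analyzer (such as ESLint's &lt;code&gt;complexity&lt;/code&gt; rule), and the names &lt;code&gt;MAX_COMPLEXITY&lt;/code&gt; and &lt;code&gt;reviewChange&lt;/code&gt; are illustrative, not part of any existing tool.&lt;/p&gt;

```javascript
// Sketch of a Complexity Brake gate. The heuristic below (counting
// branch keywords) is a naive stand-in for a real tool such as
// ESLint's `complexity` rule; all names here are illustrative.
const MAX_COMPLEXITY = 10;

// Naive cyclomatic-complexity estimate: 1 + number of branch keywords.
const estimateComplexity = (source) => {
  const branches = source.match(/\b(if|for|while|case|catch)\b/g) || [];
  return 1 + branches.length;
};

// The veto: reject a change that pushes a file past the threshold,
// and return the Refactor Mandate message instead of merging.
const reviewChange = (fileSource) => {
  const score = estimateComplexity(fileSource);
  if (score > MAX_COMPLEXITY) {
    return {
      approved: false,
      instruction:
        `Complexity too high (${score}). Refactor the existing code to ` +
        `reduce complexity below ${MAX_COMPLEXITY} before adding the new feature.`,
    };
  }
  return { approved: true };
};
```

&lt;p&gt;In a real pipeline, the returned instruction would be posted back to the agent as the Refactor Mandate.&lt;/p&gt;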

&lt;p&gt;By automating the "No," we force the AI to maintain the hygiene of the codebase, ensuring that speed doesn't come at the cost of survival.&lt;/p&gt;

&lt;h3&gt;
  
  
  Agentic Workflow: The 3-Step Protocol
&lt;/h3&gt;

&lt;p&gt;To operationalize this with agents, we use a specific workflow that prioritizes hygiene over speed.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Test-First Mandate:&lt;/strong&gt; Before asking an agent to write any implementation code, you must first ask it (or a separate "QA Agent") to write the tests for the current code. This ensures we have a safety net for the upcoming changes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;The Artificial Friction Agent:&lt;/strong&gt; Before the implementation begins, a specialized "Friction Agent" evaluates the target file.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;It calculates the current complexity score.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If the threshold is reached:&lt;/strong&gt; It halts the feature work and triggers a "Refactor Mode." It proposes a refactor to simplify the existing code &lt;em&gt;without&lt;/em&gt; changing the tests (Green-Green Refactor).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If the threshold is safe:&lt;/strong&gt; It allows the process to proceed.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;The Implementer Agent:&lt;/strong&gt; Only after the Friction Agent clears the path (either by approving the current state or completing the refactor) does the "Implementer Agent" write the new feature code.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This sequence ensures that we never build new features on top of rotting foundations.&lt;/p&gt;
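
&lt;p&gt;The 3-step protocol above can be sketched as an async pipeline. The three agent objects and their methods (&lt;code&gt;writeTests&lt;/code&gt;, &lt;code&gt;evaluate&lt;/code&gt;, &lt;code&gt;refactor&lt;/code&gt;, &lt;code&gt;implement&lt;/code&gt;) are hypothetical interfaces for illustration, not an existing framework:&lt;/p&gt;

```javascript
// Sketch of the 3-step protocol as an async pipeline. The agent
// objects are hypothetical stand-ins for real agent invocations.
const runProtocol = async ({ qaAgent, frictionAgent, implementerAgent }, file) => {
  // 1. Test-First Mandate: establish a safety net before any change.
  const tests = await qaAgent.writeTests(file);

  // 2. Artificial Friction: refactor first if complexity is too high.
  const { complexityOk } = await frictionAgent.evaluate(file);
  let target = file;
  if (!complexityOk) {
    // Green-Green Refactor: simplify without touching the tests.
    target = await frictionAgent.refactor(file, tests);
  }

  // 3. Only now does the Implementer write the new feature code.
  return implementerAgent.implement(target, tests);
};
```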

</description>
      <category>ai</category>
      <category>aiops</category>
      <category>llm</category>
      <category>softwareengineering</category>
    </item>
    <item>
      <title>The AI Collaboration Paradox: Why Being Smart Isn't Enough Anymore</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Mon, 08 Dec 2025 13:48:21 +0000</pubDate>
      <link>https://dev.to/ttoss/the-ai-collaboration-paradox-why-being-smart-isnt-enough-anymore-1aha</link>
      <guid>https://dev.to/ttoss/the-ai-collaboration-paradox-why-being-smart-isnt-enough-anymore-1aha</guid>
      <description>&lt;p&gt;Two engineers join your team on the same day. Both have stellar résumés. Both ace the technical interviews. Both score in the 95th percentile on algorithmic problem-solving.&lt;/p&gt;

&lt;p&gt;Six months later, one is shipping 3x more features than the other.&lt;/p&gt;

&lt;p&gt;The difference? It's not intelligence. It's not work ethic. It's not even technical depth.&lt;/p&gt;

&lt;p&gt;It's something we're just beginning to measure: &lt;strong&gt;collaborative ability with AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;And it's exposing an uncomfortable truth: in the age of AI agents, being smart isn't enough anymore.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Metric That Doesn't Transfer
&lt;/h2&gt;

&lt;p&gt;For decades, engineering productivity was roughly correlated with individual capability. The best problem-solvers wrote the best code. The fastest debuggers shipped the most features. Team velocity was basically the sum of individual velocities.&lt;/p&gt;

&lt;p&gt;Then AI coding assistants arrived.&lt;/p&gt;

&lt;p&gt;Suddenly, two engineers with identical technical skills could have &lt;strong&gt;wildly different productivity&lt;/strong&gt;. Same experience. Same tools. Same codebase. One developer's output doubles. The other's barely moves.&lt;/p&gt;

&lt;p&gt;What's happening?&lt;/p&gt;

&lt;p&gt;Recent research on human-AI synergy reveals a paradigm-breaking insight: &lt;strong&gt;individual problem-solving ability and collaborative AI ability are not the same thing&lt;/strong&gt;. They're distinct, measurable competencies. You can be exceptional at one and mediocre at the other.&lt;/p&gt;

&lt;p&gt;This is &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-collaborative-ability-distinction" rel="noopener noreferrer"&gt;The Principle of Collaborative Ability Distinction&lt;/a&gt;&lt;/strong&gt; from our &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles" rel="noopener noreferrer"&gt;Agentic Development Principles&lt;/a&gt;, and it's fundamentally reshaping how we think about developer performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Two Types of Intelligence
&lt;/h2&gt;

&lt;p&gt;Think of a brilliant architect who works alone. They can design complex systems, optimize for scale, and solve intricate technical challenges. Give them a problem and a whiteboard, and they'll crack it.&lt;/p&gt;

&lt;p&gt;Now put that same architect on a team.&lt;/p&gt;

&lt;p&gt;Some brilliant solo performers are &lt;strong&gt;terrible collaborators&lt;/strong&gt;. They struggle to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Explain their reasoning to others&lt;/li&gt;
&lt;li&gt;Incorporate feedback from teammates&lt;/li&gt;
&lt;li&gt;Adapt their communication style to different audiences&lt;/li&gt;
&lt;li&gt;Coordinate work across distributed contributors&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn't stupidity—it's a &lt;strong&gt;different skill&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;AI collaboration exposes the same dynamic. Being a great solo developer doesn't automatically make you a great AI-augmented developer. These are &lt;strong&gt;separate abilities&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Individual Ability&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Collaborative Ability&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Solve problems independently&lt;/td&gt;
&lt;td&gt;Solve problems with AI assistance&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debug by reasoning through code&lt;/td&gt;
&lt;td&gt;Debug by framing questions for AI&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Architect systems alone&lt;/td&gt;
&lt;td&gt;Architect systems through dialogue&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Optimize algorithms manually&lt;/td&gt;
&lt;td&gt;Optimize by iterating with AI suggestions&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Research shows these abilities &lt;strong&gt;don't strongly correlate&lt;/strong&gt;. Some developers with modest solo ability achieve extraordinary results with AI. Some elite solo performers can barely leverage AI at all.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Smart People Struggle with AI
&lt;/h2&gt;

&lt;p&gt;Let's examine why high-capability developers often underperform with AI tools:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The Expertise Curse
&lt;/h3&gt;

&lt;p&gt;Expert developers have deep mental models of how systems work. They're accustomed to &lt;strong&gt;deterministic reasoning&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"If I call this function with these parameters, I get this exact output"&lt;/li&gt;
&lt;li&gt;"This code pattern always produces this behavior"&lt;/li&gt;
&lt;li&gt;"When I see this error, it always means this problem"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-probabilistic-ai-output" rel="noopener noreferrer"&gt;AI agents are &lt;strong&gt;probabilistic&lt;/strong&gt;&lt;/a&gt;. The same prompt can yield different responses. The same context can lead to different interpretations.&lt;/p&gt;

&lt;p&gt;Expert developers expect precision. When they don't get it, they blame the tool: "This AI doesn't understand the problem."&lt;/p&gt;

&lt;p&gt;They fail to recognize that working with AI requires a &lt;strong&gt;different framing&lt;/strong&gt;—one based on disambiguation, iteration, and perspective-taking rather than deterministic instruction.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. The Communication Gap
&lt;/h3&gt;

&lt;p&gt;Elite developers often excel at technical execution but struggle with &lt;strong&gt;explanatory communication&lt;/strong&gt;. They can build it, but explaining it to someone else (or something else) requires different skills:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Providing context&lt;/strong&gt;: "What background does the AI need to understand this problem?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Clarifying ambiguity&lt;/strong&gt;: "What terms in my request could have multiple meanings?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Structuring information&lt;/strong&gt;: "What's the most effective order to present this information?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are &lt;strong&gt;Theory of Mind skills&lt;/strong&gt;—the ability to infer what someone else knows and adapt your communication accordingly. Developers who excel at this (see &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-theory-of-mind-in-human-ai-collaboration" rel="noopener noreferrer"&gt;The Principle of Theory of Mind in Human-AI Collaboration&lt;/a&gt;&lt;/strong&gt;) achieve superior AI collaboration outcomes.&lt;/p&gt;

&lt;p&gt;Developers who don't practice these skills treat AI like a better Google—and get frustrated when it doesn't read their mind.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. The Template Fallacy
&lt;/h3&gt;

&lt;p&gt;Smart developers love &lt;strong&gt;systems&lt;/strong&gt;. When they discover AI, they follow a natural pattern:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Experiment with prompts&lt;/li&gt;
&lt;li&gt;Find ones that work&lt;/li&gt;
&lt;li&gt;Systematize them into a reusable library&lt;/li&gt;
&lt;li&gt;Apply them mechanically&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This works brilliantly for deterministic tools. It &lt;strong&gt;fails catastrophically for AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Why? Because effective AI collaboration requires &lt;strong&gt;dynamic adaptation&lt;/strong&gt; (see &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles#the-principle-of-dynamic-adaptation" rel="noopener noreferrer"&gt;The Principle of Dynamic Adaptation&lt;/a&gt;&lt;/strong&gt;). The same prompt produces different results depending on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The accumulated context in the conversation&lt;/li&gt;
&lt;li&gt;The specific nuances of the current task&lt;/li&gt;
&lt;li&gt;How the AI interpreted your previous messages&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Developers who treat AI collaboration as a script to memorize plateau quickly. Those who treat it as improvisational dialogue compound their effectiveness.&lt;/p&gt;

&lt;h2&gt;
  
  
  What High Collaborative Ability Looks Like
&lt;/h2&gt;

&lt;p&gt;Let's contrast two developers tackling the same problem: implementing a new authentication flow.&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer A: High Solo Ability, Low Collaborative Ability
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Asks AI: "Generate an authentication flow with OAuth2"&lt;/li&gt;
&lt;li&gt;Gets a generic implementation&lt;/li&gt;
&lt;li&gt;Realizes it doesn't match their architecture&lt;/li&gt;
&lt;li&gt;Asks AI: "Make this work with our user model"&lt;/li&gt;
&lt;li&gt;Gets another generic response&lt;/li&gt;
&lt;li&gt;Gives up, writes it themselves&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Time spent:&lt;/strong&gt; 15 minutes with AI (wasted), 3 hours solo coding&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway:&lt;/strong&gt; "AI tools aren't helpful for complex tasks"&lt;/p&gt;

&lt;h3&gt;
  
  
  Developer B: Moderate Solo Ability, High Collaborative Ability
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Approach:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Asks AI: "What are the key decisions I need to make when implementing OAuth2?"&lt;/li&gt;
&lt;li&gt;Reviews the list, identifies gaps in AI's assumptions&lt;/li&gt;
&lt;li&gt;Asks: "Our app uses stateless JWT tokens and separates auth from user management. How would you structure the flow given these constraints?"&lt;/li&gt;
&lt;li&gt;Examines the response, spots a misunderstanding&lt;/li&gt;
&lt;li&gt;Clarifies: "When I said 'stateless,' I mean we don't store session data. Does that change your approach?"&lt;/li&gt;
&lt;li&gt;Iterates until the AI understands the architecture&lt;/li&gt;
&lt;li&gt;Asks for implementation broken into small, testable steps&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Time spent:&lt;/strong&gt; 30 minutes with AI (productive dialogue), 1 hour implementing and adapting&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Takeaway:&lt;/strong&gt; "AI tools help me explore solutions faster and catch edge cases I'd miss"&lt;/p&gt;

&lt;p&gt;Developer B isn't smarter. They're &lt;strong&gt;better at collaboration&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Skills That Matter Now
&lt;/h2&gt;

&lt;p&gt;If collaborative ability is distinct from individual ability, what specific skills should developers cultivate?&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Perspective-Taking (Theory of Mind)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Practice:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Before prompting, ask: "What does the AI need to know to help me?"&lt;/li&gt;
&lt;li&gt;After responses, ask: "What did the AI misunderstand about my request?"&lt;/li&gt;
&lt;li&gt;Regularly ask: "What assumptions is the AI making that I haven't validated?"&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Developers who actively model the AI's "mental state" frame better prompts and catch misalignments faster.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Iterative Refinement
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Practice:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Never accept the first AI response as final&lt;/li&gt;
&lt;li&gt;Treat each interaction as a feedback loop: "What got better? What's still wrong?"&lt;/li&gt;
&lt;li&gt;Develop a habit of three-turn minimum: explore → narrow → refine&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; AI collaboration is dialogue, not dictation. Compound understanding through iteration.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Context Calibration
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Practice:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Track what you've already discussed in the conversation&lt;/li&gt;
&lt;li&gt;Notice when the AI seems to "forget" earlier context (context window overflow)&lt;/li&gt;
&lt;li&gt;Learn when to provide more context vs. when to reset and start fresh&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Context is your scarcest resource in AI interactions. Manage it intentionally.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Adaptive Communication
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Practice:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Vary your communication style based on the task (exploratory vs. prescriptive)&lt;/li&gt;
&lt;li&gt;Adjust prompts based on response quality (too generic → add constraints; off-topic → clarify)&lt;/li&gt;
&lt;li&gt;Recognize when templates help vs. when they hinder&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Static approaches fail in dynamic environments. Flexibility compounds effectiveness.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Organizational Implications
&lt;/h2&gt;

&lt;p&gt;This paradigm shift has profound implications for how we build engineering teams:&lt;/p&gt;

&lt;h3&gt;
  
  
  Hiring: Look Beyond Algorithmic Prowess
&lt;/h3&gt;

&lt;p&gt;Traditional technical interviews measure individual problem-solving. That's necessary but insufficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New assessment vectors:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pair programming with AI&lt;/strong&gt;: How does the candidate leverage AI tools during a coding challenge?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompt refinement exercise&lt;/strong&gt;: Give a candidate a mediocre AI output and ask them to improve it iteratively&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context management&lt;/strong&gt;: Present a complex problem and observe how they decompose it for AI assistance&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Onboarding: Teach Collaboration Explicitly
&lt;/h3&gt;

&lt;p&gt;Don't assume developers will "figure out" AI tools. Collaborative ability is a skill that must be taught.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Onboarding components:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI collaboration workshop&lt;/strong&gt;: Teach perspective-taking, iterative refinement, context management&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shadowing exercises&lt;/strong&gt;: New hires observe senior engineers' AI workflows&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low-stakes practice&lt;/strong&gt;: Assign small tasks specifically for AI collaboration skill-building&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Performance Reviews: Measure Collaboration Competency
&lt;/h3&gt;

&lt;p&gt;If productivity increasingly depends on collaborative ability, we must measure and reward it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New metrics:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;AI leverage ratio&lt;/strong&gt;: How much more productive is the developer with AI vs. without?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Collaboration quality&lt;/strong&gt;: Do they iterate effectively or give up after one failed prompt?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Knowledge transfer&lt;/strong&gt;: Do they document their AI workflows to help others?&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Truth
&lt;/h2&gt;

&lt;p&gt;Here's what the data tells us: &lt;strong&gt;in head-to-head comparisons, developers with high collaborative ability outperform developers with high solo ability when both have access to AI&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Let that sink in.&lt;/p&gt;

&lt;p&gt;A moderately skilled developer who excels at AI collaboration can outship a brilliant engineer who doesn't.&lt;/p&gt;

&lt;p&gt;This doesn't mean individual ability is obsolete—it means it's &lt;strong&gt;no longer sufficient&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The future belongs to developers who cultivate &lt;strong&gt;both&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Deep technical expertise (individual ability)&lt;/li&gt;
&lt;li&gt;Adaptive collaboration skills (collaborative ability)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The developers who recognize this shift and invest deliberately in collaborative ability will have an outsized advantage.&lt;/p&gt;

&lt;p&gt;The developers who dismiss AI as "just a tool" without examining their collaboration approach will find themselves outpaced by peers with half their experience but double their collaborative effectiveness.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion: The New Intelligence
&lt;/h2&gt;

&lt;p&gt;For decades, we've optimized for hiring "the smartest engineers." We've celebrated solo genius. We've rewarded individual contribution.&lt;/p&gt;

&lt;p&gt;AI agents are rewriting those rules.&lt;/p&gt;

&lt;p&gt;Intelligence still matters. But &lt;strong&gt;collaborative intelligence matters more&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;The smartest developer in the room isn't the one who can solve the hardest algorithm. It's the one who can orchestrate the most effective human-AI partnership.&lt;/p&gt;

&lt;p&gt;This is the paradigm shift: from &lt;strong&gt;individual capability&lt;/strong&gt; to &lt;strong&gt;collaborative capability&lt;/strong&gt; as the primary driver of productivity.&lt;/p&gt;

&lt;p&gt;The question isn't whether you're smart enough to compete in the AI era.&lt;/p&gt;

&lt;p&gt;The question is: are you &lt;strong&gt;collaborative enough&lt;/strong&gt;?&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Learn more about developing collaborative AI skills in our &lt;a href="https://ttoss.dev/docs/ai/agentic-development-principles" rel="noopener noreferrer"&gt;Agentic Development Principles&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
      <category>development</category>
    </item>
    <item>
      <title>Mastering Prompts Through Inversion: The Anti-Prompt Guide</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Sun, 07 Dec 2025 13:56:31 +0000</pubDate>
      <link>https://dev.to/ttoss/mastering-prompts-through-inversion-the-anti-prompt-guide-4i1f</link>
      <guid>https://dev.to/ttoss/mastering-prompts-through-inversion-the-anti-prompt-guide-4i1f</guid>
      <description>&lt;p&gt;Want to instantly level up your prompting skills? Stop trying to write "good" prompts. Instead, learn how to write the &lt;strong&gt;worst&lt;/strong&gt; possible prompts—and then do the exact opposite.&lt;/p&gt;

&lt;p&gt;The mental model of &lt;strong&gt;Inversion&lt;/strong&gt; is a powerful tool in engineering and problem-solving. As the mathematician Carl Jacobi said, "Invert, always invert." When applied to AI prompting, this means identifying the specific patterns that guarantee failure (hallucinations, vague answers, off-topic ramblings) and ruthlessly eliminating them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Anti-Prompt" Philosophy
&lt;/h2&gt;

&lt;p&gt;We've compiled a comprehensive guide based on this philosophy. It identifies the common "Anti-Patterns" that degrade model performance. Here is a breakdown of why they fail:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The Lazy Delegator (Vagueness)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Mistake:&lt;/strong&gt; Using broad verbs and ambiguous words like "cool", "better", or "nice".&lt;br&gt;
&lt;strong&gt;Why it fails:&lt;/strong&gt; The model has infinite degrees of freedom and will regress to the mean, giving you the most statistically likely (mediocre) answer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Anti-Prompt:&lt;/em&gt; "Write something about marketing."&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Fix:&lt;/em&gt; Be specific. "Write a LinkedIn post about B2B marketing trends in 2025."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. The Kitchen Sink (Overloading)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Mistake:&lt;/strong&gt; Asking for everything at once—mixing explanation, coding, and creative writing in one block.&lt;br&gt;
&lt;strong&gt;Why it fails:&lt;/strong&gt; It confuses the model's attention mechanism. Instructions buried in the middle often get ignored ("Lost in the Middle" phenomenon).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Anti-Prompt:&lt;/em&gt; "Explain quantum physics, write a poem about cats, and give me 10 business ideas."&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Fix:&lt;/em&gt; Chain your prompts. Break the task into distinct steps.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. The Mind Reader (Missing Context)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Mistake:&lt;/strong&gt; Assuming the AI knows who you are, who the audience is, and what the goal is.&lt;br&gt;
&lt;strong&gt;Why it fails:&lt;/strong&gt; Without a persona, the model defaults to a generic, bland "helpful assistant". Without an audience, it guesses (often wrongly).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Anti-Prompt:&lt;/em&gt; "Explain how a car engine works."&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Fix:&lt;/em&gt; Assign a role. "Act as a senior mechanical engineer explaining to a 5-year-old."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. The Negative Bias (Negative Constraints)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Mistake:&lt;/strong&gt; Relying only on negative constraints ("Don't do X", "Don't be Y").&lt;br&gt;
&lt;strong&gt;Why it fails:&lt;/strong&gt; Models are often bad at negatives and may focus on the very thing you told them to avoid.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Anti-Prompt:&lt;/em&gt; "Don't be boring. Don't use long sentences."&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Fix:&lt;/em&gt; State what you DO want. "Write in a witty, conversational tone with short sentences."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. The Chaos Agent (Structure &amp;amp; Format)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Mistake:&lt;/strong&gt; Letting the model choose the format or giving contradictory instructions.&lt;br&gt;
&lt;strong&gt;Why it fails:&lt;/strong&gt; You get whatever format is statistically most common (usually paragraphs).&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Anti-Prompt:&lt;/em&gt; "Write a story but make it detailed. Put it in a table if you want."&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Fix:&lt;/em&gt; Force the format. "Output the result as a JSON object."&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. The Chatty Cathy (Fluff)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The Mistake:&lt;/strong&gt; Treating the AI like a human colleague with small talk.&lt;br&gt;
&lt;strong&gt;Why it fails:&lt;/strong&gt; It wastes tokens, dilutes the signal, and telegraphs low commitment.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Anti-Prompt:&lt;/em&gt; "Hi there! I was wondering if you could maybe help me..."&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Fix:&lt;/em&gt; Be direct. "You are an expert Python developer. Write a script to..."&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why It Works
&lt;/h2&gt;

&lt;p&gt;Understanding these failure modes gives you a checklist for success. Instead of guessing what might work, you can systematically verify that your prompt &lt;em&gt;doesn't&lt;/em&gt; contain the elements that make it fail.&lt;/p&gt;

&lt;p&gt;For example, instead of asking "How do I make this better?", you realize that "better" is a vague anti-pattern. You invert it to: "Optimize this text for a 5th-grade reading level."&lt;/p&gt;
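
&lt;p&gt;This checklist mindset can even be automated as a prompt "pre-flight" audit. The sketch below is built from the anti-patterns above; the word lists and pattern checks are illustrative assumptions, not an exhaustive linter:&lt;/p&gt;

```javascript
// Illustrative "pre-flight" audit built from the anti-patterns above.
// The word lists are examples, not an exhaustive linter.
const VAGUE_TERMS = ['better', 'cool', 'nice', 'something'];
const FLUFF_OPENERS = [/^hi there/i, /^i was wondering/i, /^could you maybe/i];

const auditPrompt = (prompt) => {
  const issues = [];
  const lower = prompt.toLowerCase();
  if (VAGUE_TERMS.some((term) => lower.includes(term))) {
    issues.push('Lazy Delegator: replace vague terms with measurable goals.');
  }
  if (FLUFF_OPENERS.some((re) => re.test(prompt.trim()))) {
    issues.push('Chatty Cathy: drop the small talk and state the task directly.');
  }
  if (/\bdon'?t\b|\bdo not\b/.test(lower)) {
    issues.push('Negative Bias: restate constraints as positive instructions.');
  }
  return issues;
};
```

&lt;p&gt;An empty result doesn't guarantee a good prompt; it only means the prompt avoids the failure modes you can check mechanically.&lt;/p&gt;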

&lt;h2&gt;
  
  
  Read the Full Guide
&lt;/h2&gt;

&lt;p&gt;Ready to master the art of the Anti-Prompt? Check out our detailed documentation:&lt;/p&gt;

&lt;p&gt;👉 &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/ai/how-to-prompt" rel="noopener noreferrer"&gt;How to Prompt: The Anti-Prompt Guide&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It includes a full breakdown of the 6 major anti-patterns, examples of failure, and the specific corrections you need to apply.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Mastering the Context Window: Why Your AI Agent Forgets (and How to Fix It)</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Sat, 06 Dec 2025 17:52:25 +0000</pubDate>
      <link>https://dev.to/ttoss/mastering-the-context-window-why-your-ai-agent-forgets-and-how-to-fix-it-58lo</link>
      <guid>https://dev.to/ttoss/mastering-the-context-window-why-your-ai-agent-forgets-and-how-to-fix-it-58lo</guid>
      <description>&lt;p&gt;AI agents are transforming how we write code, but they are not magic. They operate within a strict constraint that many developers overlook until it bites them: the &lt;strong&gt;context window&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;If you treat an AI session like an infinite conversation, you will eventually hit a wall where the model starts "forgetting" your initial instructions, hallucinating APIs, or reverting to bad patterns. This isn't a bug; it's a fundamental limitation of the technology. Success in agentic development requires treating &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/product/product-development/agentic-development-principles#the-principle-of-context-scarcity" rel="noopener noreferrer"&gt;context as a scarce, economic resource&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Hard Limit of Memory
&lt;/h2&gt;

&lt;p&gt;Every Large Language Model (LLM) has a fixed context window—a maximum limit on the amount of text (tokens) it can process at once. This includes your system prompt, the current conversation history, the files you've attached, and the model's own response.&lt;/p&gt;

&lt;p&gt;When you exceed this limit, the model doesn't warn you; it simply truncates the input. Usually, the oldest parts of the conversation—often your critical architectural guidelines or the initial problem statement—are silently discarded.&lt;/p&gt;

&lt;p&gt;This reality is codified in &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/product/product-development/agentic-development-principles#the-principle-of-finite-context-window" rel="noopener noreferrer"&gt;The Principle of Finite Context Window&lt;/a&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;AI models have a fixed context window... Teams must manage context as a scarce resource, prioritizing relevant information and resetting sessions when necessary.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;If you are not aware of this limit, an agent that was brilliant five minutes ago may suddenly start generating code that violates the very rules you gave it at the start of the session.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Downward Spiral of "Just One More Fix"
&lt;/h2&gt;

&lt;p&gt;When an agent produces buggy code, the natural human instinct is to correct it immediately within the same chat. "No, that's wrong, try again." "You missed this edge case." "Fix the import."&lt;/p&gt;

&lt;p&gt;However, every interaction consumes tokens. As you pile on error messages, stack traces, and correction prompts, you are filling the context window with "garbage" data. This leads to &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/product/product-development/agentic-development-principles#the-principle-of-compounding-contextual-error" rel="noopener noreferrer"&gt;The Principle of Compounding Contextual Error&lt;/a&gt;&lt;/strong&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If an AI interaction does not resolve the problem quickly, the likelihood of successful resolution drops with each additional interaction... Fast, decisive resolution is critical.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;A long, winding debugging session is often counterproductive. The model is "reading" a history full of its own mistakes, which biases it toward repeating them. Instead of fixing the bug, you are often better off resetting the context and starting fresh with a refined prompt.&lt;/p&gt;

&lt;h2&gt;
  
  
  Strategies for the Finite Window
&lt;/h2&gt;

&lt;p&gt;So, how do you work with a massive codebase when your "smart" assistant has a limited memory? You cannot simply dump a 100GB repository into a 200k token window. You need a strategy.&lt;/p&gt;

&lt;p&gt;Here are the three main architectures for handling codebases larger than the context window:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The "Agentic" Approach (Best for Development)
&lt;/h3&gt;

&lt;p&gt;This is the approach used by advanced coding tools like Aider, Cursor, or custom agent scripts. Instead of dumping the entire codebase into the prompt, you give the agent a "map" of the file structure (which consumes very few tokens).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;br&gt;
The agent looks at the map and decides which specific files it needs to read to solve the task. It then loads &lt;em&gt;only&lt;/em&gt; those files into its context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it works:&lt;/strong&gt;&lt;br&gt;
This mimics how a human developer works. You don't hold the entire codebase in your head; you look up the specific files relevant to the bug you are fixing. It keeps the context clean and focused on the immediate problem.&lt;/p&gt;
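&lt;p&gt;A toy version of the map-then-load loop can be sketched as follows. The file selection itself is stubbed out; in a real agent, the LLM picks paths from the map:&lt;/p&gt;

```python
# Sketch of the "repo map" step of an agentic workflow: list the file tree
# cheaply, then load only the files the agent asks for. The extension filter
# is an illustrative assumption, not a fixed convention.
from pathlib import Path

def build_repo_map(root: str,
                   exts: tuple[str, ...] = (".py", ".ts", ".md")) -> list[str]:
    """Return relative paths of source files: a few tokens each, not full contents."""
    root_path = Path(root)
    return sorted(
        str(p.relative_to(root_path))
        for p in root_path.rglob("*")
        if p.is_file() and p.suffix in exts
    )

def load_selected(root: str, selected: list[str]) -> dict[str, str]:
    """Load only the files the agent decided are relevant to the task."""
    return {rel: (Path(root) / rel).read_text() for rel in selected}
```

&lt;p&gt;The map costs a handful of tokens per file, so even a large repository fits; the expensive step—reading full file contents—happens only for the few files the task actually touches.&lt;/p&gt;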

&lt;h3&gt;
  
  
  2. RAG (Retrieval-Augmented Generation)
&lt;/h3&gt;

&lt;p&gt;This is the standard enterprise solution for querying large knowledge bases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;br&gt;
You break your code into small chunks, turn them into mathematical vectors (embeddings), and store them in a database. When you ask a question, the system searches for the most relevant chunks and sends &lt;em&gt;only&lt;/em&gt; those to the model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it works:&lt;/strong&gt;&lt;br&gt;
It scales to codebases of virtually any size. You could have a 100GB codebase, and the model only sees the 10KB relevant to your question. However, it can miss broader architectural context if the relevant chunks aren't retrieved correctly.&lt;/p&gt;
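&lt;p&gt;A dependency-free sketch of the retrieval step shows the core mechanic of sending only the best-matching chunks. Word-count vectors stand in for a real embedding model here; production RAG would use learned embeddings and a vector database:&lt;/p&gt;

```python
# Toy retrieval: rank chunks by cosine similarity of word-count vectors.
# This is a stand-in for real embeddings, chosen only to show the mechanic.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in for an embedding model: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k chunks most similar to the query; only these go to the model."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]
```

&lt;p&gt;The failure mode mentioned above falls out of this design: if the right chunk doesn't rank in the top k, the model never sees it, no matter how large the codebase index is.&lt;/p&gt;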

&lt;h3&gt;
  
  
  3. Vertical Model Scaling
&lt;/h3&gt;

&lt;p&gt;Sometimes, the best solution is brute force. If you have a complex problem that requires understanding the entire system at once, you can switch to a model with a massive context window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works:&lt;/strong&gt;&lt;br&gt;
This allows you to paste an entire repository directly into the prompt without complex retrieval architectures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it works:&lt;/strong&gt;&lt;br&gt;
It simplifies the workflow. You don't need to worry about which files to include; you just include everything. This is powerful for large-scale refactoring or understanding deep dependencies, though it comes with higher latency and cost.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Context is currency. Every token you feed an agent has a cost—not just in money, but in the model's attention span and reliability.&lt;/p&gt;

&lt;p&gt;By understanding &lt;strong&gt;The Principle of Finite Context Window&lt;/strong&gt; and avoiding &lt;strong&gt;The Principle of Compounding Contextual Error&lt;/strong&gt;, you can stop fighting against the tool and start using it effectively. Whether you choose an agentic approach, RAG, or a massive context model, the key is intention: know what is in your context, and ensure it's only what matters.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>productivity</category>
    </item>
    <item>
      <title>The 80% Rule: Why Your AI Agents Should Only Speak When Confident</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Fri, 05 Dec 2025 14:03:35 +0000</pubDate>
      <link>https://dev.to/ttoss/the-80-rule-why-your-ai-agents-should-only-speak-when-confident-58i0</link>
      <guid>https://dev.to/ttoss/the-80-rule-why-your-ai-agents-should-only-speak-when-confident-58i0</guid>
      <description>&lt;p&gt;We've all been there: You ask your AI coding assistant for a solution to a tricky bug. It responds instantly, with absolute certainty, providing a code snippet that looks perfect. You copy it, run it, and... nothing. Or worse, a new error.&lt;/p&gt;

&lt;p&gt;The AI wasn't lying to you. It was hallucinating. It was "confidently wrong."&lt;/p&gt;

&lt;p&gt;In our &lt;a href="https://ttoss.dev/docs/product/product-development/agentic-development-principles" rel="noopener noreferrer"&gt;Agentic Development Principles&lt;/a&gt;, we call this &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/product/product-development/agentic-development-principles#the-principle-of-confidence-qualified-ai-output" rel="noopener noreferrer"&gt;The Principle of Confidence-Qualified AI Output&lt;/a&gt;&lt;/strong&gt;. But in practice, we just call it &lt;strong&gt;The 80% Rule&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The "Yes Man" Problem
&lt;/h2&gt;

&lt;p&gt;Large Language Models (LLMs) are designed to predict the next most likely token. They are eager to please. If you ask a question, their training compels them to provide an answer—any answer—rather than silence.&lt;/p&gt;

&lt;p&gt;This makes them excellent creative partners but dangerous engineering consultants. When an engineer is unsure, they say, "I need to check the docs." When an LLM is unsure, it often invents a plausible-sounding API method that doesn't exist.&lt;/p&gt;

&lt;p&gt;This creates &lt;strong&gt;noise&lt;/strong&gt;. Every time you have to verify a low-quality suggestion, you pay a cognitive tax—a direct violation of the &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/product/product-development/agentic-development-principles#the-principle-of-cognitive-bandwidth-conservation" rel="noopener noreferrer"&gt;Principle of Cognitive Bandwidth Conservation&lt;/a&gt;&lt;/strong&gt;. You stop trusting the tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Solution: Force Self-Reflection
&lt;/h2&gt;

&lt;p&gt;The good news is that while LLMs are probabilistic engines, they can also produce a useful, if imperfect, self-assessment of their own uncertainty. You can ask an LLM to "grade" its own answer before showing it to you.&lt;/p&gt;

&lt;p&gt;We implemented a simple instruction across our agentic workflows:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;"Only provide a solution if your confidence is HIGH (&amp;gt;80%). If you are unsure, state 'I don't know' or ask for more context."&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How to Implement The 80% Rule
&lt;/h2&gt;

&lt;p&gt;You don't need complex code to use this. It starts with your system prompts or custom instructions.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. The System Prompt
&lt;/h3&gt;

&lt;p&gt;Add this to your custom instructions in VS Code (Copilot), ChatGPT, or Claude:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CRITICAL INSTRUCTION:
You are an expert engineering consultant.
- Before answering, assess your confidence level (0-100%) based on your training data and the provided context.
- If Confidence &amp;lt; 80%: Do NOT guess. State clearly: "I am not confident in this answer because [reason]." Ask clarifying questions or suggest where I should look.
- If Confidence &amp;gt;= 80%: Provide the solution directly.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. The "Vibe Check"
&lt;/h3&gt;

&lt;p&gt;If you are already deep in a conversation and suspect the AI is drifting, use a "Vibe Check" prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"On a scale of 0-100%, how confident are you that this specific import path exists in the current version of the library?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You will be surprised how often the model replies: &lt;em&gt;"Actually, upon reflection, I am only 40% confident. That import might have changed in v5."&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters: Signal vs. Noise
&lt;/h2&gt;

&lt;p&gt;Implementing the 80% Rule changes the dynamic of your development workflow.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Reduced Cognitive Load:&lt;/strong&gt; When the AI speaks, you know it's worth listening to. You don't have to fact-check every single line.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Faster Failure:&lt;/strong&gt; Instead of debugging a hallucinated solution for 20 minutes, you get an immediate "I don't know," prompting you to check the official documentation yourself.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Better Trust:&lt;/strong&gt; An agent that admits ignorance is an agent you can trust.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI agents are powerful, but they lack the human instinct for self-preservation that keeps us from making wild guesses in production. By enforcing &lt;strong&gt;&lt;a href="https://ttoss.dev/docs/product/product-development/agentic-development-principles#the-principle-of-confidence-qualified-ai-output" rel="noopener noreferrer"&gt;The Principle of Confidence-Qualified AI Output&lt;/a&gt;&lt;/strong&gt;, we impose that discipline artificially.&lt;/p&gt;

&lt;p&gt;Make your agents earn the right to interrupt you. If they aren't 80% sure, they should stay 100% silent.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>promptengineering</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Agentic Development Principles - Part 1</title>
      <dc:creator>Pedro Arantes</dc:creator>
      <pubDate>Fri, 05 Dec 2025 01:03:49 +0000</pubDate>
      <link>https://dev.to/ttoss/agentic-development-principles-part-1-2olg</link>
      <guid>https://dev.to/ttoss/agentic-development-principles-part-1-2olg</guid>
      <description>&lt;p&gt;I am documenting the &lt;strong&gt;Agentic Development Principles&lt;/strong&gt; that currently guide my team and me. This post is Part 1, focusing on maximizing effective human-AI collaboration when using LLMs as development assistants, ensuring we build products efficiently and safely.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Agentic Development refers to the practice of using AI agents as collaborative partners in the development process. This approach requires the intentional design of specialized workflows, clear feedback loops, and well-defined decision boundaries.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Principles
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Principle of Immediate AI Feedback Loop
&lt;/h3&gt;

&lt;p&gt;Integrate AI tools directly into the coding environment to deliver instant suggestions and error checking, minimizing context switching and delays. This maximizes value-added time and aligns with &lt;a href="https://ttoss.dev/docs/product/product-development/principles#b3-the-batch-size-feedback-principle-reducing-batch-sizes-accelerates-feedback" rel="noopener noreferrer"&gt;B3: The Batch Size Feedback Principle&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; A team uses an AI code completion tool with a 5-second delay. Developers either wait (breaking flow) or ignore the tool, resulting in inconsistent adoption and wasted potential.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Human-in-the-Loop Veto
&lt;/h3&gt;

&lt;p&gt;Every AI-generated output must pass final review by a domain expert who retains full accountability. AI accelerates delivery but introduces risk; human oversight ensures quality and prevents costly errors, supporting &lt;a href="https://ttoss.dev/docs/product/product-development/principles#e1-the-principle-of-quantified-overall-economics-select-actions-based-on-quantified-overall-economic-impact" rel="noopener noreferrer"&gt;E1: The Principle of Quantified Overall Economics&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; An AI-generated database query is deployed without review, causing performance issues and technical debt.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Small-Experiment Automation
&lt;/h3&gt;

&lt;p&gt;Use AI agents to break down large tasks into small, verifiable experiments (e.g., auto-generated unit tests, code variations), reducing risk and enabling fast feedback. This applies &lt;a href="https://ttoss.dev/docs/product/product-development/principles#v7-the-principle-of-small-experiments-many-small-experiments-produce-less-variation-than-one-big-one" rel="noopener noreferrer"&gt;V7: The Principle of Small Experiments&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; An AI generates a massive, brittle test suite. Maintenance overhead grows, slowing development and negating the benefits of automation.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Contextual Input Quality
&lt;/h3&gt;

&lt;p&gt;Developers must provide high-quality context and select the agent/model with the best cost-benefit profile for each task. Effective prompt engineering and tool selection minimize waste and the Cost of Delay, echoing &lt;a href="https://ttoss.dev/docs/product/product-development/principles#e16-the-principle-of-marginal-economics-always-compare-marginal-cost-and-marginal-value" rel="noopener noreferrer"&gt;E16: The Principle of Marginal Economics&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; Using a slow, expensive model for a trivial task with a vague prompt leads to wasted time and resources.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Continuous Agent Proficiency
&lt;/h3&gt;

&lt;p&gt;Teams must practice frequent, low-stakes interactions with AI tools to build expertise in prompt engineering and tool selection. Skill development reduces long-term rework and accelerates learning, following &lt;a href="https://ttoss.dev/docs/product/product-development/principles#ff11-the-batch-size-principle-of-feedback-small-batches-yield-fast-feedback" rel="noopener noreferrer"&gt;FF11: The Batch Size Principle of Feedback&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; Developers receive initial training but lack time for practice, resulting in slow learning and inefficient workflows.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Delegated Agency Scaling
&lt;/h3&gt;

&lt;p&gt;Scale AI agent autonomy by task complexity: fully delegate repetitive, low-risk tasks; use AI as a consultant for complex or high-risk decisions. This balances velocity and risk, applying &lt;a href="https://ttoss.dev/docs/product/product-development/principles#b4-the-batch-size-risk-principle-reducing-batch-size-reduces-risk" rel="noopener noreferrer"&gt;B4: The Batch Size Risk Principle&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; Delegating complex build optimization to AI leads to short-term gains but introduces critical errors, increasing rework and risk.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Automated Guardrail Prerequisite
&lt;/h3&gt;

&lt;p&gt;Before granting full autonomy to AI agents, ensure robust automated safety nets (e.g., CI/CD, test suites) are in place to continuously validate outputs. Automation must be checked by automation to prevent catastrophic failures, supporting &lt;a href="https://ttoss.dev/docs/product/product-development/principles#b4-the-batch-size-risk-principle-reducing-batch-size-reduces-risk" rel="noopener noreferrer"&gt;B4: The Batch Size Risk Principle&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; An AI refactors components with inadequate automated tests, introducing subtle bugs that escape detection until production.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Layered Autonomy
&lt;/h3&gt;

&lt;p&gt;Establish tiered governance: Company sets strategic safety and cost policies, Team defines tactical goals and quality metrics, Developer retains autonomy over tool selection and workflows. This decentralizes optimization while maintaining compliance and security.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; Mandating a single AI tool for all teams blocks specialized workflows, increasing risk and cycle time for critical tasks.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Principle of Compounding Contextual Error
&lt;/h3&gt;

&lt;p&gt;If an AI interaction does not resolve the problem quickly, the likelihood of successful resolution drops with each additional interaction, as accumulated context and unresolved errors compound. Fast, decisive resolution is critical to prevent error propagation and cognitive overload, aligning with &lt;a href="https://ttoss.dev/docs/product/product-development/principles#b3-the-batch-size-feedback-principle-reducing-batch-sizes-accelerates-feedback" rel="noopener noreferrer"&gt;B3: The Batch Size Feedback Principle&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure Scenario:&lt;/strong&gt; A developer repeatedly prompts an AI agent to fix a bug, but each iteration introduces new minor errors and increases context complexity. After several cycles, the original issue is buried under layers of confusion, making resolution harder and increasing rework.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>softwaredevelopment</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
