<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karun Japhet</title>
    <description>The latest articles on DEV Community by Karun Japhet (@javatarz).</description>
    <link>https://dev.to/javatarz</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F243178%2Fb6350b6f-468d-4124-88f8-9119b98e01db.jpeg</url>
      <title>DEV Community: Karun Japhet</title>
      <link>https://dev.to/javatarz</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/javatarz"/>
    <language>en</language>
    <item>
      <title>Structuring Claude Code for Multi-Repo Workspaces</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Thu, 26 Mar 2026 19:38:34 +0000</pubDate>
      <link>https://dev.to/javatarz/structuring-claude-code-for-multi-repo-workspaces-4147</link>
      <guid>https://dev.to/javatarz/structuring-claude-code-for-multi-repo-workspaces-4147</guid>
      <description>&lt;p&gt;Claude Code understands one repo at a time. Most teams have thirty.&lt;/p&gt;

&lt;p&gt;Microservices, shared libraries, infrastructure-as-code, frontend apps, data pipelines, each in its own git repo. Start Claude Code in one and ask about another, and it has no context. It doesn't know the workspace exists.&lt;/p&gt;

&lt;p&gt;Here's how I've been setting this up to work across repositories.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/posts/2026-03-26-structuring-claude-code-for-multi-repo-workspaces/cover.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fafmfxd109jf30ob3yb08.png" alt="Three translucent layers showing org, team, and repo context stacking in a multi-repo workspace" width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The problem
&lt;/h2&gt;

&lt;p&gt;When you start Claude Code in &lt;code&gt;orders/order-service&lt;/code&gt;, it has no idea that &lt;code&gt;orders/orders-ui&lt;/code&gt; exists next door, or that shared libraries live in &lt;code&gt;shared/&lt;/code&gt;, or that the data team's Spark jobs are in &lt;code&gt;analytics/&lt;/code&gt;. Every session starts with you explaining the workspace layout.&lt;/p&gt;

&lt;p&gt;The same problem shows up when someone new joins the team. They clone one repo, but they don't know what other repos exist, how they relate, or where to look for shared infrastructure.&lt;/p&gt;

&lt;h2&gt;
  
  
  A bootstrap repo as the workspace root
&lt;/h2&gt;

&lt;p&gt;The approach I landed on: a bootstrap repo that sits above all the other repos as the workspace root. It doesn't contain application code. It contains:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;A repo manifest&lt;/strong&gt; listing every repo, where it lives, and what it does&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context files&lt;/strong&gt; that Claude Code picks up from the directory tree&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tasks&lt;/strong&gt; for common cross-repo operations (pull all, search all, check status)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I use &lt;a href="https://github.com/alajmo/mani" rel="noopener noreferrer"&gt;mani&lt;/a&gt; as the repo manager, but the ideas apply regardless of tooling. You could do this with a shell script and a list of repos.&lt;/p&gt;
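&lt;p&gt;A minimal version of that shell-script fallback might look like this. The &lt;code&gt;repos.txt&lt;/code&gt; "path url" format is my invention for the sketch (mani's manifest is richer), and the demo clones from a local bare repo so it runs self-contained:&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Minimal stand-in for a repo manager: clone everything in a manifest.
# The "path url" file format is an assumption for this sketch.
set -euo pipefail

# Demo setup: a scratch workspace and a local bare repo acting as origin.
workspace=$(mktemp -d)
cd "$workspace"
git init -q --bare .origins/order-service.git

cat > repos.txt <<EOF
# path                  url
orders/order-service    $workspace/.origins/order-service.git
EOF

# The sync loop: skip blanks and comments, clone anything not yet present.
while read -r path url; do
  [[ -z "$path" || "$path" == \#* ]] && continue
  if [[ -d "$path/.git" ]]; then
    echo "SKIP $path"
  else
    git clone -q "$url" "$path"
    echo "CLONED $path"
  fi
done < repos.txt
```

&lt;p&gt;Running it twice is safe: already-cloned repos are skipped, which is the one behaviour you actually need from a sync command.&lt;/p&gt;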

&lt;h3&gt;
  
  
  Directory structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workspace/
  mani.yaml                  # imports per-product configs
  CLAUDE.md                  # org-level context
  mani.d/
    orders.yaml              # order management (3-tier)
    shipping.yaml            # shipping &amp;amp; logistics (3-tier)
    analytics.yaml           # data platform (Spark, Airflow, APIs)
    assist.yaml              # agentic AI system (FastAPI, LangGraph, React)
    shared.yaml              # shared libraries and services
    infra.yaml               # infrastructure repos
  orders/
    CLAUDE.md                # team-level context (tracked in bootstrap)
    order-service/           # Spring Boot (gitignored)
    payment-service/         # Spring Boot (gitignored)
    orders-ui/               # React (gitignored)
    reporting-service/       # Spring Boot + PostgreSQL (gitignored)
    pricing-engine/          # Vert.x, not Spring Boot (gitignored)
  shipping/
    CLAUDE.md
    shipment-service/        # Spring Boot + MongoDB
    shipping-ui/             # Angular
    carrier-service/         # Spring Boot, reactive
  analytics/
    CLAUDE.md
    airflow-dags/            # Python, Airflow
    spark-jobs/              # PySpark on EMR
    metrics-service/         # Kotlin, Micronaut
    dashboard-ui/            # React
  assist/
    CLAUDE.md
    agent-service/           # FastAPI + LangGraph
    conversation-service/    # Spring Boot + WebSocket
    chat-ui/                 # React + streaming chat
  shared/
    CLAUDE.md
    react-lib/
    java-commons/
    feature-toggles/
  infra/
    CLAUDE.md
    terraform-modules/
    ci-templates/
    cluster/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Each indented directory under a product (&lt;code&gt;order-service/&lt;/code&gt;, &lt;code&gt;orders-ui/&lt;/code&gt;, &lt;code&gt;spark-jobs/&lt;/code&gt;, etc.) is a separate git repo, cloned by the repo manager and gitignored by the bootstrap repo. The CLAUDE.md files at each level are tracked in the bootstrap repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  Three layers of context
&lt;/h2&gt;

&lt;p&gt;Claude Code walks up the directory tree looking for CLAUDE.md files. If you start it in &lt;code&gt;orders/order-service&lt;/code&gt;, it reads:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;code&gt;orders/order-service/CLAUDE.md&lt;/code&gt; (repo-level, committed in that repo)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;orders/CLAUDE.md&lt;/code&gt; (team-level, committed in bootstrap)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;workspace/CLAUDE.md&lt;/code&gt; (org-level, committed in bootstrap)&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each layer adds context without repeating what the others provide.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 1: Organisation
&lt;/h3&gt;

&lt;p&gt;The org-level CLAUDE.md covers things that apply everywhere:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Warning that this is a multi-repo workspace (check &lt;code&gt;git rev-parse --show-toplevel&lt;/code&gt; before git operations)&lt;/li&gt;
&lt;li&gt;How to discover repos (point to the manifest file)&lt;/li&gt;
&lt;li&gt;Which products exist and what they own&lt;/li&gt;
&lt;li&gt;Common cross-repo operations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Keep this short. Claude reads it on every session regardless of which repo you're in.&lt;/p&gt;
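&lt;p&gt;As a sketch, an org-level CLAUDE.md in this spirit can be very short (names and commands here are illustrative):&lt;/p&gt;

```markdown
# Workspace

This is a multi-repo workspace. Each product directory contains
independently cloned git repos. Run `git rev-parse --show-toplevel`
before any git operation to confirm which repo you are in.

Repos are defined in `mani.yaml` and `mani.d/*.yaml`. Each project has
a `desc` field; check those files instead of assuming a repo exists.

Products: orders, shipping, analytics, assist. Shared code lives in
`shared/`, infrastructure in `infra/`.

Cross-repo operations go through mani (e.g. `mani run update-repos`).
```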

&lt;h3&gt;
  
  
  Layer 2: Team
&lt;/h3&gt;

&lt;p&gt;The team-level CLAUDE.md covers conventions shared across repos in that group. The content varies by product type:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A 3-tier product&lt;/strong&gt; (like orders or shipping) might cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backend stack (Java 21, Spring Boot 3.5, Gradle, MongoDB)&lt;/li&gt;
&lt;li&gt;Frontend stack (React 19, Vite, TypeScript)&lt;/li&gt;
&lt;li&gt;Build and test commands for each&lt;/li&gt;
&lt;li&gt;The one exception (the pricing engine uses Vert.x, not Spring Boot)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;A data platform&lt;/strong&gt; (like analytics) might cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Orchestration (Airflow DAGs, triggered via async-job-service)&lt;/li&gt;
&lt;li&gt;Processing (PySpark on EMR, containerised Python jobs on ECS)&lt;/li&gt;
&lt;li&gt;Multi-region support (pipelines run per-region with region-specific config)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;An agentic system&lt;/strong&gt; (like assist) might cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Agent framework (FastAPI + LangGraph for orchestration)&lt;/li&gt;
&lt;li&gt;Backing services (Spring Boot for persistence, WebSocket for streaming)&lt;/li&gt;
&lt;li&gt;Frontend (React with streaming UI patterns)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I learned not to list repos here. Lists go stale. Instead, tell Claude where to look: "This group's repos are defined in &lt;code&gt;mani.d/orders.yaml&lt;/code&gt;. Each project has a &lt;code&gt;desc&lt;/code&gt; field. Check that file for the current list."&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 3: Repository
&lt;/h3&gt;

&lt;p&gt;This lives in each repo and is maintained by the team that owns it. Build commands, architecture notes, test instructions, things specific to that codebase. This is standard Claude Code usage, nothing new.&lt;/p&gt;

&lt;h2&gt;
  
  
  Project descriptions in the manifest
&lt;/h2&gt;

&lt;p&gt;One-line descriptions in the repo manifest make a big difference for discovery. When Claude reads the manifest, it knows what each repo does without cloning or exploring it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;projects&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;order-service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;desc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Order lifecycle management and fulfilment&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git@gitlab.com:acme/order-service.git&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;orders/order-service&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;orders&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;java&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;pricing-engine&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;desc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Vert.x real-time pricing engine&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git@gitlab.com:acme/pricing-engine.git&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;orders/pricing-engine&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;orders&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;java&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;

  &lt;span class="na"&gt;orders-ui&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;desc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;React UI for order management and reporting&lt;/span&gt;
    &lt;span class="na"&gt;url&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;git@gitlab.com:acme/orders-ui.git&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;orders/orders-ui&lt;/span&gt;
    &lt;span class="na"&gt;tags&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="nv"&gt;orders&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;ui&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;desc&lt;/code&gt; field costs almost nothing to maintain and saves Claude from guessing or asking.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cross-repo tasks
&lt;/h2&gt;

&lt;p&gt;A repo manager like mani lets you define tasks that run across repos:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;update-repos&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;desc&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;pull latest for all repos&lt;/span&gt;
    &lt;span class="na"&gt;target&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
    &lt;span class="na"&gt;cmd&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
      &lt;span class="s"&gt;current=$(git rev-parse --abbrev-ref HEAD)&lt;/span&gt;
      &lt;span class="s"&gt;if [[ -n $(git status -s) ]]; then&lt;/span&gt;
        &lt;span class="s"&gt;git fetch origin $branch&lt;/span&gt;
        &lt;span class="s"&gt;echo "FETCHED (dirty working tree on $current)"&lt;/span&gt;
      &lt;span class="s"&gt;elif [[ "$$current" != "$branch" ]]; then&lt;/span&gt;
        &lt;span class="s"&gt;git fetch origin $branch&lt;/span&gt;
        &lt;span class="s"&gt;echo "FETCHED (on branch $current, not $branch)"&lt;/span&gt;
      &lt;span class="s"&gt;else&lt;/span&gt;
        &lt;span class="s"&gt;git pull --rebase origin $branch&lt;/span&gt;
      &lt;span class="s"&gt;fi&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This one pulls latest on repos that are clean and on the default branch, and fetches (but doesn't touch) repos with work in progress. The data is available locally either way, so the next pull is fast.&lt;/p&gt;

&lt;p&gt;Other useful tasks: search across all repos, check which repos have uncommitted changes, trigger CI pipelines.&lt;/p&gt;
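&lt;p&gt;For example, a "which repos have work in progress" task in the same style (a sketch; adjust to your repo manager):&lt;/p&gt;

```yaml
tasks:
  wip:
    desc: list repos with uncommitted changes
    target: all
    cmd: |
      if [[ -n $(git status -s) ]]; then
        echo "DIRTY on $(git rev-parse --abbrev-ref HEAD)"
        git status -s
      fi
```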

&lt;h2&gt;
  
  
  The gitignore trick for team-level CLAUDE.md files
&lt;/h2&gt;

&lt;p&gt;The bootstrap repo gitignores all sub-repo directories. But the team-level CLAUDE.md files need to be tracked in bootstrap, inside those same directories. The fix:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Use dir/* instead of dir/ so exceptions work
orders/*
!orders/CLAUDE.md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;orders/&lt;/code&gt; ignores the directory entirely: git won't look inside it, so negation rules for its contents have no effect. &lt;code&gt;orders/*&lt;/code&gt; ignores everything inside it but lets you re-include specific files with a &lt;code&gt;!&lt;/code&gt; rule.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills, hooks, and commands
&lt;/h2&gt;

&lt;p&gt;Claude Code supports &lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;skills, hooks, and custom commands&lt;/a&gt; configured in the &lt;code&gt;.claude/&lt;/code&gt; directory of a repo. These have always worked at the repo level. The bootstrap structure gives you two more levels:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Org level&lt;/strong&gt; (in the bootstrap repo's &lt;code&gt;.claude/&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Skills that work across all repos. I have one that queries SonarQube for any repo in the workspace, auto-detecting the project key from the current directory.&lt;/li&gt;
&lt;li&gt;Pre-commit hooks (gitleaks for secret detection, applied to the bootstrap repo itself).&lt;/li&gt;
&lt;li&gt;Shell scripts for operations that span teams, like auditing which repos still need a branch migration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Team level&lt;/strong&gt; (in each team's CLAUDE.md or tracked config):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build conventions that apply to all repos in a team but not the whole org. A team with ten Spring Boot services can document the shared Gradle convention plugins once, in the team CLAUDE.md.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Repo level&lt;/strong&gt; (in each repo, as before):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Repo-specific skills, hooks, and commands. Nothing changes here.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The layering means you write a SonarQube skill once at the org level and it works in any repo. You document &lt;code&gt;./gradlew spotlessApply&lt;/code&gt; once at the team level and every repo in that team inherits the context.&lt;/p&gt;
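&lt;p&gt;As an illustration of the org-level skill idea, here is a sketch of the project-key detection. The &lt;code&gt;team-repo&lt;/code&gt; key convention and the workspace root are assumptions for the example; my actual skill and your SonarQube naming will differ:&lt;/p&gt;

```python
# Sketch of an org-level skill helper: map the current directory to a
# SonarQube project key. The "<team>-<repo>" key convention is an
# assumption for illustration, not a SonarQube default.
from pathlib import Path

def sonar_project_key(cwd: Path, workspace_root: Path) -> str:
    """Derive a project key from a path inside the multi-repo workspace.

    e.g. workspace/orders/order-service/src -> "orders-order-service"
    """
    rel = cwd.resolve().relative_to(workspace_root.resolve())
    if len(rel.parts) < 2:
        raise ValueError(f"{cwd} is not inside a repo in the workspace")
    team, repo = rel.parts[0], rel.parts[1]
    return f"{team}-{repo}"
```

&lt;p&gt;The skill would then call the SonarQube web API with that key (for instance &lt;code&gt;api/measures/component&lt;/code&gt;), so the developer never has to remember project keys at all.&lt;/p&gt;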

&lt;h2&gt;
  
  
  Partial and full checkouts
&lt;/h2&gt;

&lt;p&gt;Not everyone needs the whole workspace. Most developers I work with only clone their team's repos:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;workspace/
  mani.yaml
  CLAUDE.md
  orders/
    CLAUDE.md
    order-service/
    payment-service/
    orders-ui/
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;They still get the org-level and team-level CLAUDE.md files. Claude Code still understands the team's conventions and knows how to discover the rest of the organisation through the manifest.&lt;/p&gt;

&lt;p&gt;A platform engineer or architect who works across teams clones everything. They get the full context at every level.&lt;/p&gt;

&lt;p&gt;The repo manager handles both. You can tag repos by team and clone selectively (&lt;code&gt;mani sync --tags orders&lt;/code&gt;) or clone everything (&lt;code&gt;mani sync&lt;/code&gt;). Either way, the layered context works because CLAUDE.md files at each level are already in place.&lt;/p&gt;

&lt;h2&gt;
  
  
  What this gets you
&lt;/h2&gt;

&lt;p&gt;When someone starts Claude Code in any repo in the workspace, it already knows:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What the repo does and how to build it&lt;/li&gt;
&lt;li&gt;What other repos exist in the same team and how they relate&lt;/li&gt;
&lt;li&gt;How to navigate to shared libraries, infrastructure, and deployment configs&lt;/li&gt;
&lt;li&gt;Common conventions and exceptions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you want to try this, start small. Create a bootstrap repo, add a CLAUDE.md with your workspace layout, and list your repos in a manifest with one-line descriptions. You can add team-level context and cross-repo tasks as the structure proves useful.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Agentic Patterns Developers Should Steal</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Thu, 19 Mar 2026 05:13:51 +0000</pubDate>
      <link>https://dev.to/javatarz/agentic-patterns-developers-should-steal-pb1</link>
      <guid>https://dev.to/javatarz/agentic-patterns-developers-should-steal-pb1</guid>
      <description>&lt;p&gt;Production agentic systems decompose problems and use the right tool for each step. Most developers hand the AI the whole problem.&lt;/p&gt;

&lt;p&gt;That's the gap. Teams building production AI workflows have developed patterns for making AI reliable. Developers using AI coding assistants like Claude Code, Cursor, or Copilot mostly haven't adopted them yet.&lt;/p&gt;

&lt;p&gt;These patterns aren't theoretical. They're practical and don't require special tooling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/posts/2026-03-19-agentic-patterns-developers-should-steal/cover.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhesd60z9bx0m2bp6oj7p.png" alt="A figure crossing a bridge from a chaotic single-screen setup to an organised multi-station workspace" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Patterns
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Pattern&lt;/th&gt;
&lt;th&gt;What most devs currently do&lt;/th&gt;
&lt;th&gt;What devs should be doing&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Deterministic tool delegation&lt;/td&gt;
&lt;td&gt;Ask AI to do everything&lt;/td&gt;
&lt;td&gt;Use tools for solved problems, AI orchestrates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Verification loops&lt;/td&gt;
&lt;td&gt;Accept first output&lt;/td&gt;
&lt;td&gt;Generate → evaluate → revise&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context engineering&lt;/td&gt;
&lt;td&gt;Dump everything in&lt;/td&gt;
&lt;td&gt;Curate what the model sees&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Upfront planning&lt;/td&gt;
&lt;td&gt;One big prompt&lt;/td&gt;
&lt;td&gt;Reviewable plan before execution&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Persistent memory&lt;/td&gt;
&lt;td&gt;Start fresh each session&lt;/td&gt;
&lt;td&gt;Cross-session learning, codified constraints&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Structured guardrails&lt;/td&gt;
&lt;td&gt;Hope for the best&lt;/td&gt;
&lt;td&gt;Execution-layer constraints, hooks, gates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Observability&lt;/td&gt;
&lt;td&gt;Look at the output&lt;/td&gt;
&lt;td&gt;Structured traces, quality measurement&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-agent specialisation&lt;/td&gt;
&lt;td&gt;One agent does everything&lt;/td&gt;
&lt;td&gt;Separate agents for separate concerns&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Human-in-the-loop checkpoints&lt;/td&gt;
&lt;td&gt;Trust everything or nothing&lt;/td&gt;
&lt;td&gt;Consequence-based approval tiers&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Here's what each one looks like. Some link to deeper posts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Deterministic Tool Delegation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Don't let the AI make decisions it doesn't need to make. If a deterministic tool can handle something (refactoring, formatting, linting, data validation), use the tool. The AI's job is orchestration, not execution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Ask the AI to rewrite code for a rename, follow a style guide from memory, or process data it doesn't need to see.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Every unnecessary decision is a degree of freedom. Every degree of freedom is an opportunity to get something wrong, burn tokens, and produce a result you can't reproduce. Deterministic tools give you the same output every time.&lt;/p&gt;

&lt;p&gt;I wrote about this in depth in &lt;a href="https://dev.to/javatarz/the-unix-philosophy-for-agentic-coding-112p"&gt;The Unix Philosophy for Agentic Coding&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification Loops
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Instead of accepting the first output, create a generate-evaluate-revise cycle. The agent produces work, a separate pass critiques it against explicit criteria, and the agent revises.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Prompt, receive, accept or reject. The interaction model is single-shot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; LLMs produce plausible output that can be subtly wrong. Research shows &lt;a href="https://www.anthropic.com/research/building-effective-agents" rel="noopener noreferrer"&gt;10-20 percentage point improvements&lt;/a&gt; on coding benchmarks from reflection alone. Anthropic's own guidance identifies the evaluator-optimizer workflow as one of the core composable patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this looks like in practice:&lt;/strong&gt; After asking your AI assistant to implement a feature, follow up with: "Review what you just wrote. Check for edge cases, error handling, and whether it follows patterns in this codebase. List problems, then fix them." For high-stakes changes, use a separate session as an independent reviewer.&lt;/p&gt;

&lt;p&gt;This pattern is also the foundation of test-driven development with AI: write the test first, let the AI implement, then the test itself becomes the verification loop. I've touched on this in the &lt;a href="https://dev.to/javatarz/intelligent-engineering-in-practice-41kf#3-tdd-implementation"&gt;TDD workflow in intelligent Engineering: In Practice&lt;/a&gt;.&lt;/p&gt;
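&lt;p&gt;The control flow itself is easy to sketch. &lt;code&gt;generate&lt;/code&gt; and &lt;code&gt;critique&lt;/code&gt; below are deterministic stand-ins for calls to your assistant, so the loop is runnable as-is:&lt;/p&gt;

```python
# Generate-evaluate-revise loop. generate() and critique() are stubs
# standing in for AI calls; only the loop structure is the point.

def generate(task: str, feedback: list[str]) -> str:
    # Stand-in for the model: "addresses" whatever the critique flagged.
    out = f"solution for {task}"
    if feedback:
        out += " (handles: " + ", ".join(feedback) + ")"
    return out

def critique(candidate: str, criteria: list[str]) -> list[str]:
    # Stand-in for a review pass: return criteria not yet addressed.
    return [c for c in criteria if c not in candidate]

def solve(task: str, criteria: list[str], max_rounds: int = 3) -> str:
    candidate = generate(task, [])
    for _ in range(max_rounds):
        problems = critique(candidate, criteria)
        if not problems:
            return candidate       # evaluator is satisfied
        candidate = generate(task, problems)
    return candidate               # out of rounds; flag for human review
```

&lt;p&gt;Capping the rounds matters: without it, a disagreement between generator and evaluator loops forever instead of escalating to you.&lt;/p&gt;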

&lt;h3&gt;
  
  
  Context Engineering
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Deliberately architect what information the model sees, when it sees it, and in what form. Treat context as a finite resource, not an infinite scratchpad.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Paste entire files, full error logs, and broad descriptions, trusting the model to extract what's relevant.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Including irrelevant data actively worsens output quality. Models have attention patterns that favour the start and end of context, with the middle getting less focus. More context is not always better context.&lt;/p&gt;

&lt;p&gt;I wrote a full post on this: &lt;a href="https://dev.to/javatarz/context-engineering-for-ai-assisted-development-b8i"&gt;Context Engineering for AI-Assisted Development&lt;/a&gt;. The short version: curate your CLAUDE.md for signal density, use &lt;code&gt;.claudeignore&lt;/code&gt; to exclude noise, provide the two or three most relevant files rather than the entire directory, and start fresh sessions when context degrades.&lt;/p&gt;
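&lt;p&gt;An illustrative &lt;code&gt;.claudeignore&lt;/code&gt; (the patterns are examples; tune them to whatever dominates your repo's noise):&lt;/p&gt;

```plaintext
# Keep generated and bulky content out of Claude's context
build/
dist/
node_modules/
coverage/
*.min.js
*.lock
data/fixtures/large/
```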

&lt;h3&gt;
  
  
  Upfront Planning
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Before any code is written, create an explicit plan that decomposes the work into steps with dependencies and acceptance criteria. Review the plan before execution begins.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Give the AI a single prompt describing what they want and let it figure out the approach. "Add user authentication" becomes one big prompt rather than a sequence of reviewable steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Internal planning by the model is invisible and unreviewable. An explicit plan is where you catch architectural mistakes that are expensive to fix after implementation. It also prevents the "AI rewrote half the codebase and something is broken but I don't know where" problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this looks like in practice:&lt;/strong&gt; For any task that touches more than two files: "Before implementing, create a plan. List the files you'll modify, the changes in each, the order of changes, and how you'll verify each step works." Review the plan before saying "proceed."&lt;/p&gt;

&lt;p&gt;This is central to the &lt;a href="https://dev.to/javatarz/intelligent-engineering-in-practice-41kf#2-design-discussion"&gt;design discussion workflow&lt;/a&gt; I use.&lt;/p&gt;
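&lt;p&gt;A reviewable plan doesn't need to be elaborate. Something like this, with illustrative file names, is enough to catch an architectural mistake before any code exists:&lt;/p&gt;

```markdown
## Plan: add rate limiting to order-service

1. `RateLimitFilter.java` (new): token-bucket filter on /api/orders/*
   - Verify: unit test for limit and refill behaviour
2. `SecurityConfig.java`: register the filter before auth
   - Verify: existing security tests still pass
3. `application.yaml`: make limits configurable per environment
   - Verify: boot with an overridden limit in the integration profile

Out of scope: UI changes, payment-service.
```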

&lt;h3&gt;
  
  
  Persistent Memory
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Retain lessons, decisions, and discovered patterns across sessions. Build institutional knowledge over time rather than starting from zero each conversation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Every session starts fresh. They rediscover the same issues, re-explain the same conventions, and re-learn the same codebase quirks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Without cross-session memory, the AI makes the same mistakes repeatedly and you correct it repeatedly. Codified constraints prevent the same mistakes from recurring.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this looks like in practice:&lt;/strong&gt; Maintain a CLAUDE.md that evolves. When you discover a gotcha ("the payments service returns 200 even on failures, check the response body"), add it immediately. When the AI makes a mistake, codify the prevention rule. Over time, your context docs accumulate the institutional knowledge that makes the AI genuinely useful on your specific project.&lt;/p&gt;

&lt;p&gt;I cover this in detail in the &lt;a href="https://dev.to/javatarz/intelligent-engineering-in-practice-41kf#level-1-foundation"&gt;Foundation&lt;/a&gt; and &lt;a href="https://dev.to/javatarz/intelligent-engineering-in-practice-41kf#level-2-context-documentation"&gt;Context Documentation&lt;/a&gt; layers of the intelligent Engineering stack.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structured Guardrails
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Define explicit boundaries around which decisions the AI can make autonomously and which it should escalate. This includes architectural constraints ("don't introduce a new database without discussing it"), scope boundaries ("only modify files in this module"), and approval gates for high-impact changes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Give the AI full autonomy without defining what's in and out of scope. The agent makes architectural decisions, introduces new patterns, or changes public APIs without checking whether that's what you intended.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; A prompt might be ignored as context fills up. A pre-commit hook won't be. Deterministic enforcement catches what prompt-based instructions miss.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this looks like in practice:&lt;/strong&gt; Define boundaries in your CLAUDE.md ("never modify migration files without asking"). Use pre-commit hooks for formatting, linting, and security checks. Set up Claude Code hooks for auto-formatting and blocking sensitive operations. Let low-risk operations run freely. Pause high-risk ones for review.&lt;/p&gt;
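&lt;p&gt;A sketch of what execution-layer hooks might look like in &lt;code&gt;.claude/settings.json&lt;/code&gt;. The guard script is hypothetical, and you should check the current hooks documentation for the exact schema:&lt;/p&gt;

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command",
            "command": "./.claude/hooks/block-migration-edits.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./gradlew spotlessApply -q" }
        ]
      }
    ]
  }
}
```

&lt;p&gt;The point is the placement: the formatting and the migration-file guard run at the execution layer, so they hold even when the prompt-level instruction has scrolled out of context.&lt;/p&gt;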

&lt;p&gt;I wrote a hands-on tutorial on this: &lt;a href="https://dev.to/javatarz/level-up-code-quality-with-an-ai-assistant-5cdn"&gt;Level Up Code Quality with an AI Assistant&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Observability
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Systematic tracking of what the AI did, what worked, what failed, and using that data to improve future interactions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Look at the output. No structured feedback, no trend tracking, no quality measurement over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; The &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;METR study&lt;/a&gt; found developers estimated they were 24% faster with AI when they were actually 19% slower. Gut feel is unreliable. Without measurement, you don't know if the AI is helping, and you can't systematically improve your workflows.&lt;/p&gt;

&lt;p&gt;This is the least mature pattern in the list. The tooling barely exists for individuals and is fragmented across teams. I explore the current state, the gaps, and what we'd like to see in &lt;a href="https://dev.to/javatarz/observability-for-ai-assisted-development-2m06"&gt;Observability for AI-Assisted Development&lt;/a&gt;.&lt;/p&gt;
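&lt;p&gt;Until better tooling exists, a low-tech starting point is a session log you can aggregate later. This sketch appends one JSON record per AI task; the schema is an assumption, not a standard:&lt;/p&gt;

```python
# Append one JSON record per AI-assisted task to a local log file.
# The fields are illustrative; record whatever you want to trend.
import json
import time
from pathlib import Path

LOG = Path("ai-sessions.jsonl")

def record(task: str, outcome: str, minutes: float, rework: bool) -> dict:
    entry = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "task": task,
        "outcome": outcome,   # accepted | revised | discarded
        "minutes": minutes,   # wall-clock time spent on the task
        "rework": rework,     # did you have to fix the output by hand?
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

&lt;p&gt;Even a month of these records answers questions gut feel can't: which kinds of tasks get discarded, and whether your acceptance rate is actually improving.&lt;/p&gt;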

&lt;h3&gt;
  
  
  Multi-Agent Specialisation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Instead of one generalist agent handling everything, use multiple specialised agents with focused context, specific tool access, and defined roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; One session, one agent: planning, implementation, and review all in the same context window.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Each agent gets a fresh, focused context window rather than one bloated context trying to hold planning, implementation, review, and testing simultaneously. Specialisation also lets you use different models for different tasks (a thinking model for planning, a fast model for implementation).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this looks like in practice:&lt;/strong&gt; Claude Code recently started offering to clear context when you accept a plan, giving the implementation phase a fresh, focused window with only the plan carried forward. Planning and implementation benefit from separate contexts.&lt;/p&gt;

&lt;p&gt;Take it further. Build an agentic team with a backlog: a planning agent that decomposes work into tasks, implementation agents that execute them, QA agents that test, and review agents that validate. Each agent has specific skills and focused context for its role. Claude Code's &lt;a href="https://code.claude.com/docs/en/agent-teams" rel="noopener noreferrer"&gt;Agent Teams&lt;/a&gt; and subagent features support this natively. Anthropic's engineering team &lt;a href="https://www.anthropic.com/engineering/building-c-compiler" rel="noopener noreferrer"&gt;built an entire C compiler&lt;/a&gt; using 16 agent teams, producing 100,000 lines of Rust code. Codex has &lt;a href="https://developers.openai.com/codex/multi-agent/" rel="noopener noreferrer"&gt;similar multi-agent capabilities&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Anthropic's internal benchmarks showed a &lt;a href="https://www.anthropic.com/engineering/multi-agent-research-system" rel="noopener noreferrer"&gt;90% improvement&lt;/a&gt; with multi-agent (Opus lead + Sonnet subagents) over solo Opus on complex tasks. &lt;a href="https://www.augmentcode.com/customers/Tekion-enabled-AI-agents" rel="noopener noreferrer"&gt;Tekion&lt;/a&gt; deployed persona-driven agents across 1,300 engineers and saw 50-85% productivity gains, compared to 30-40% with raw LLM prompting. The trade-off is tokens: multi-agent workflows use 2-3x more tokens, but for significant features, the quality improvement justifies the cost.&lt;/p&gt;

&lt;h3&gt;
  
  
  Human-in-the-Loop Checkpoints
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The pattern:&lt;/strong&gt; Rather than either fully trusting the AI or micromanaging every line, define structured approval gates based on the consequence of the action.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What most developers do instead:&lt;/strong&gt; Operate in one of two modes. Either review everything line-by-line (treating the AI as fancy autocomplete) or accept large chunks with only a cursory glance. A formatting change and a database schema change get the same level of scrutiny.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why it matters:&lt;/strong&gt; Not all changes carry the same risk. A tiered approach gives you speed where it's safe and control where it matters.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What this looks like in practice:&lt;/strong&gt; Define personal approval tiers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Auto-approve:&lt;/strong&gt; Formatting, import organisation, adding type annotations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quick review:&lt;/strong&gt; New functions, test additions, single-file refactors&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Careful review:&lt;/strong&gt; Public API changes, database operations, auth logic&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full review with plan:&lt;/strong&gt; Multi-file refactors, new architectural patterns, build/deploy changes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use small, frequent git commits as checkpoints. If something goes wrong, you can revert to a known-good state without losing everything. Before accepting a change, ask yourself: if this is wrong, what breaks and how hard is it to fix?&lt;/p&gt;
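&lt;p&gt;Tiers like these can even be encoded so tooling (a pre-commit hook, a review script) can flag what level of scrutiny a change deserves. A sketch, with illustrative patterns and tier names rather than any standard:&lt;/p&gt;

```python
# Map a changed file to a review tier. Riskier patterns are checked first.
# Patterns and tier names are illustrative; adjust them to your own risk profile.
import fnmatch

TIERS = [
    ("full review with plan", ["*migrations/*", "Dockerfile", "*.github/workflows/*"]),
    ("careful review", ["*/auth/*", "*/api/*", "*schema*"]),
    ("quick review", ["tests/*", "*_test.py"]),
]

def review_tier(path):
    for tier, patterns in TIERS:
        if any(fnmatch.fnmatch(path, pattern) for pattern in patterns):
            return tier
    return "quick review"  # default to a human glance, never silent auto-approve
```

&lt;p&gt;Because riskier patterns are checked first, a file that matches both a migration path and a test path gets the stricter tier.&lt;/p&gt;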

&lt;h2&gt;
  
  
  Where to Start
&lt;/h2&gt;

&lt;p&gt;You don't need all nine patterns at once. Start with the ones that address your biggest pain points:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Code quality issues?&lt;/strong&gt; Start with structured guardrails and verification loops.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI keeps making the same mistakes?&lt;/strong&gt; Start with persistent memory and context engineering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large diffs that are hard to review?&lt;/strong&gt; Start with upfront planning and human-in-the-loop checkpoints.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Spending too much on tokens?&lt;/strong&gt; Start with deterministic tool delegation and context engineering.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Not sure if AI is helping?&lt;/strong&gt; Observability is still largely unsolved, but start by establishing baselines now so you can measure later.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Stop handing the AI the whole problem. Break it down and use the right tool for each step.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is part of a series on applying patterns from agentic systems to AI-assisted development. See also: &lt;a href="https://dev.to/javatarz/the-unix-philosophy-for-agentic-coding-112p"&gt;The Unix Philosophy for Agentic Coding&lt;/a&gt; and &lt;a href="https://dev.to/javatarz/observability-for-ai-assisted-development-2m06"&gt;Observability for AI-Assisted Development&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Observability for AI-Assisted Development</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 14 Mar 2026 12:57:41 +0000</pubDate>
      <link>https://dev.to/javatarz/observability-for-ai-assisted-development-2m06</link>
      <guid>https://dev.to/javatarz/observability-for-ai-assisted-development-2m06</guid>
      <description>&lt;p&gt;Developers using AI estimate they're 24% faster. A randomised controlled trial measured them at 19% slower.&lt;/p&gt;

&lt;p&gt;That's from METR's &lt;a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/" rel="noopener noreferrer"&gt;2025 study&lt;/a&gt;. These were experienced open-source developers working on their own codebases with tools they chose. Their self-assessment was off by over 40 percentage points.&lt;/p&gt;

&lt;p&gt;If your perception of AI's impact is that unreliable, what are you actually measuring?&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/posts/2026-03-12-observability-for-ai-assisted-development/cover.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fajkvd66bzyxn901ttc53.png" alt="A figure in a boat on foggy water, holding a lantern that barely illuminates the surrounding mist" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  You Need a Baseline First
&lt;/h2&gt;

&lt;p&gt;If you didn't measure before AI, measuring with AI won't work.&lt;/p&gt;

&lt;p&gt;You can't attribute improvements to AI if you don't know what "before" looked like. Cycle time, deployment frequency, change failure rate, MTTR, value delivered per sprint: these need to exist as baselines before you introduce a new variable. Otherwise you're guessing, and as the METR study shows, our guesses aren't great.&lt;/p&gt;

&lt;p&gt;I've seen teams adopt AI coding assistants and then ask "how do we know it's helping?" three months later. The real question is six months earlier: "how do we measure effectiveness?" If you didn't have that defined before AI, you won't have it now.&lt;/p&gt;
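&lt;p&gt;Establishing a baseline doesn't have to be elaborate. As a minimal sketch, you can bucket events from git history by ISO week and save the trend before AI enters the picture; using merge commits as a deploy proxy (the command in the comment) is an assumption, not a recommendation:&lt;/p&gt;

```python
# Bucket git events (merges, deploys) by ISO week to establish a baseline.
# Feed it Unix timestamps from e.g.:  git log --merges --format=%ct
from collections import Counter
from datetime import datetime, timezone

def weekly_counts(unix_timestamps):
    weeks = Counter()
    for ts in unix_timestamps:
        iso = datetime.fromtimestamp(ts, tz=timezone.utc).isocalendar()
        weeks[(iso.year, iso.week)] += 1  # one bucket per (year, ISO week)
    return dict(weeks)
```

&lt;p&gt;Run it monthly, keep the output, and you have a "before" to compare against later.&lt;/p&gt;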

&lt;h2&gt;
  
  
  What Exists Today
&lt;/h2&gt;

&lt;p&gt;The tooling for observability in AI-assisted development is fragmented. Cost visibility is reasonable. Quality visibility is nearly zero.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Code&lt;/strong&gt; is the most transparent. It ships with native &lt;a href="https://code.claude.com/docs/en/monitoring-usage" rel="noopener noreferrer"&gt;OpenTelemetry support&lt;/a&gt;, tracking tokens, cost, tool calls, and session duration. The &lt;code&gt;/cost&lt;/code&gt; command shows real-time spend. &lt;code&gt;/stats&lt;/code&gt; visualises daily usage, session history, and model preferences. &lt;code&gt;/insights&lt;/code&gt; goes further, analysing your sessions to surface project areas, interaction patterns, and friction points. Commits are auto-tagged with a co-author line, giving you a built-in "was this AI-generated?" marker in your git history. Anthropic provides an &lt;a href="https://github.com/anthropics/claude-code-monitoring-guide" rel="noopener noreferrer"&gt;official monitoring guide&lt;/a&gt; with Grafana dashboard configs and a Docker Compose setup, and the community has built &lt;a href="https://grafana.com/grafana/dashboards/24640-claude-code-victoriastack/" rel="noopener noreferrer"&gt;importable dashboards&lt;/a&gt; and &lt;a href="https://grafana.com/grafana/plugins/timurdigital-claudestats-app/" rel="noopener noreferrer"&gt;plugins&lt;/a&gt;. The infrastructure for collecting data exists. What to do with it is the harder question.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;OpenAI Codex CLI&lt;/strong&gt; tags commits with a co-author line and supports &lt;a href="https://developers.openai.com/codex/cli/" rel="noopener noreferrer"&gt;OTel export&lt;/a&gt; for logs and traces. The &lt;a href="https://developers.openai.com/codex/enterprise/governance/" rel="noopener noreferrer"&gt;enterprise dashboard&lt;/a&gt; tracks daily users by product, code review completion rates, review priority and sentiment, and session-level message counts. It's adoption-focused: who's using what and how much. No quality metrics, no incident correlation, no rework tracking. Individual developers get &lt;code&gt;/status&lt;/code&gt; for rate limits but no cost visibility.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aider&lt;/strong&gt; has the &lt;a href="https://aider.chat/docs/git.html" rel="noopener noreferrer"&gt;most configurable commit attribution&lt;/a&gt; of any tool (co-author trailers include the model name). But no OTel, no dashboard, no persistent cost history.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GitHub Copilot&lt;/strong&gt; offers &lt;a href="https://docs.github.com/en/copilot/concepts/copilot-usage-metrics/copilot-metrics" rel="noopener noreferrer"&gt;team-level dashboards&lt;/a&gt;: acceptance rates, DAU/MAU, feature adoption. It's oriented toward "is our license worth it?" rather than "is the output good?" No commit tagging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cursor&lt;/strong&gt; exposes very little. A "Year in Code" summary and an "AI Share of Committed Code" metric. No tracing, no commit tagging, no event-level data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cline&lt;/strong&gt; shows per-request cost in the UI (one of its standout features) and supports &lt;a href="https://docs.cline.bot/more-info/telemetry" rel="noopener noreferrer"&gt;OTel export at the enterprise tier&lt;/a&gt;. No commit tagging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amazon Q Developer&lt;/strong&gt; has the &lt;a href="https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/dashboard.html" rel="noopener noreferrer"&gt;richest built-in analytics dashboard&lt;/a&gt; of any tool: acceptance rates, lines of code by feature type, code review counts, per-language breakdowns. But it's admin-only, subscription-based (no per-token tracking), and publishes to CloudWatch rather than OTel.&lt;/p&gt;

&lt;p&gt;Some of us have built our own layers on top. We use &lt;a href="https://github.com/Maciek-roboblog/Claude-Code-Usage-Monitor" rel="noopener noreferrer"&gt;Claude Code Usage Monitor&lt;/a&gt; to track token usage as a proxy for understanding consumption patterns. It isn't perfect or always accurate, but it gives you a feel for where your usage goes. A few engineers on our teams have personal Grafana dashboards tracking their own AI metrics. But these aren't centralised, aren't standardised, and aren't as useful as they could be.&lt;/p&gt;

&lt;p&gt;The picture across the industry: cost visibility is reasonable if you're willing to set it up. Commit tagging is inconsistent (Claude Code and Codex do it by default, most others don't). Quality visibility is nearly zero everywhere.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Missing
&lt;/h2&gt;

&lt;p&gt;The gaps fall into three levels: what individual developers need, what teams need, and what organisations need.&lt;/p&gt;

&lt;h3&gt;
  
  
  For the Individual Developer
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;No effort distribution.&lt;/strong&gt; You know how much you spent in tokens. You don't know where that effort went. Imagine if your AI assistant could tell you: "This week, 40% of your AI time went to test writing, 30% to refactoring, 20% to feature work, 10% to debugging. Your test-writing sessions had the highest acceptance rate. Your debugging sessions cost the most tokens per useful output." That would let you consciously decide where AI is worth using and where you're better off working without it.&lt;/p&gt;
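&lt;p&gt;To make the idea concrete, here's a sketch of the aggregation such a timesheet would need. The session record format is invented; no tool emits this today:&lt;/p&gt;

```python
# Hypothetical "AI timesheet": given session records tagged with a task type
# and a token spend, show where the effort went as percentages.
from collections import defaultdict

def effort_distribution(sessions):
    tokens_by_task = defaultdict(int)
    for session in sessions:
        tokens_by_task[session["task"]] += session["tokens"]
    total = sum(tokens_by_task.values()) or 1  # avoid division by zero
    return {task: round(100 * t / total) for task, t in tokens_by_task.items()}
```

&lt;p&gt;The hard part isn't this arithmetic; it's classifying sessions by task type without asking the developer to tag them manually.&lt;/p&gt;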

&lt;p&gt;&lt;strong&gt;Limited failure pattern detection.&lt;/strong&gt; Claude Code's &lt;code&gt;/insights&lt;/code&gt; is the closest thing we have: it analyses sessions and surfaces friction points. That's a real start, and most other tools don't offer anything comparable. But it's still a snapshot of recent sessions, not a long-running trend line. If the AI keeps making the same category of mistake (wrong import paths, ignoring your test conventions, using a deprecated API), you want something that surfaces "you've corrected the AI on import paths 12 times this month" and suggests adding it to your CLAUDE.md. Some people maintain a manual &lt;code&gt;lessons-learned.md&lt;/code&gt; where they log AI mistakes. It works, but it's ad hoc.&lt;/p&gt;
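&lt;p&gt;Until tooling catches up, even the manual log can be mined. A sketch, assuming one "category: note" line per correction (the log format is the assumption here):&lt;/p&gt;

```python
# Tally repeated correction categories from a manual lessons-learned log.
# Anything that recurs past the threshold is a candidate for CLAUDE.md.
from collections import Counter

def repeated_corrections(log_lines, threshold=3):
    counts = Counter(
        line.split(":", 1)[0].strip()  # category is everything before the first colon
        for line in log_lines if ":" in line
    )
    return {category: n for category, n in counts.items() if n >= threshold}
```

&lt;p&gt;It's crude, but it turns "I have a vague sense this keeps happening" into a number you can act on.&lt;/p&gt;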

&lt;p&gt;&lt;strong&gt;No context effectiveness feedback.&lt;/strong&gt; CLAUDE.md files are checked in, reviewed in PRs, and engineered for effectiveness over time, much like prompts. The feedback loop exists but it's manual and slow. You notice the AI getting something wrong, update the file, and see if it improves. What's missing is the measurement that closes the loop: did that change actually improve output quality, or did it just feel like it did? The METR perception gap applies here too.&lt;/p&gt;

&lt;h3&gt;
  
  
  For the Team
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;No aggregate failure patterns.&lt;/strong&gt; If three engineers on the same team are all hitting the same AI failure mode, that's not three individual problems. It's a systemic context gap: a missing architectural convention, an undocumented pattern, a guardrail that should exist but doesn't. No tool surfaces this today.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No RCA correlation.&lt;/strong&gt; Claude Code tags commits with a co-author line. That's the "was this AI-generated?" link in the RCA chain. But other tools don't do this consistently. And even with the tag, nobody is aggregating that data: correlating AI-tagged commits with incident rates, rework rates, or review times over time. Traditional RCA follows a clear chain (incident → deployment → commit → PR → review → root cause). AI adds a question to that chain: was the reviewer's miss caused by a large AI-generated diff? Was the AI missing context it should have had? Is this a known AI weakness that should be in the team's guardrails?&lt;/p&gt;
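&lt;p&gt;The correlation itself is simple once the data is parsed. A sketch, where both the commit record shape and the "co-author mentions Claude" heuristic are assumptions:&lt;/p&gt;

```python
# What share of reverted commits were AI-assisted? Input is parsed ahead of
# time, e.g. roughly from:
#   git log --format='%H%x09%(trailers:key=Co-Authored-By,valueonly)'
def ai_share_of_reverts(commits, reverted_shas):
    ai_shas = {
        c["sha"] for c in commits
        if "claude" in c.get("co_author", "").lower()  # crude heuristic
    }
    reverted = set(reverted_shas)
    if not reverted:
        return 0.0
    return len(ai_shas.intersection(reverted)) / len(reverted)
```

&lt;p&gt;A number like this doesn't assign blame; it tells you whether to invest in better review, context, or guardrails.&lt;/p&gt;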

&lt;p&gt;&lt;strong&gt;The velocity flatline problem.&lt;/strong&gt; We've seen this firsthand. Teams get faster with AI. Then velocity flattens. Not because AI stopped helping, but because teams redirected the extra capacity to paying off debt or solving problems they found interesting. That's not necessarily bad, but if you're not tracking what work goes where, you can't tell the difference between "team is investing in sustainability" and "team is coasting."&lt;/p&gt;

&lt;p&gt;The fix we found: track work against cards. Measure total value delivered, not just pace. Make sure the extra capacity from AI shows up as increased value, not just different work. This is a process fix, not a tooling fix. No observability tool surfaces this today.&lt;/p&gt;

&lt;h3&gt;
  
  
  For the Organisation
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;No cross-team maturity view.&lt;/strong&gt; Some teams will be excellent at AI-assisted development. Others will struggle. As a CTO, you need to know which is which, and more importantly, what the effective teams are doing differently. Are they better at context engineering? More disciplined about review? Today, finding this out requires manual investigation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No automated "are we improving?" picture.&lt;/strong&gt; This is the hardest gap. Drawing a full picture of whether an engineering organisation is improving has always required someone to build that view manually. AI hasn't changed that. It's just added another variable.&lt;/p&gt;

&lt;p&gt;The data exists. Commits are tagged. Tickets track value. CI tracks quality. AI tools track cost and usage. But nobody is stitching them into a coherent picture that answers: "Is AI helping us deliver more value, or is it making us feel faster while quality degrades?"&lt;/p&gt;

&lt;h2&gt;
  
  
  What We'd Like to See
&lt;/h2&gt;

&lt;p&gt;Here's what I wish existed:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI timesheets.&lt;/strong&gt; Not for billing. For self-awareness. Show me where my AI time goes, which task types have the best return, and where I'm burning tokens for low value. Let me compare across weeks and see trends.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automated RCA tagging.&lt;/strong&gt; Correlate AI-tagged commits with downstream incidents, reverts, and rework. Not to blame the tool, but to know where to invest in better review, context, or guardrails.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context effectiveness scoring.&lt;/strong&gt; When I change my CLAUDE.md, show me whether output quality improved for the task types I was targeting. Even a rough signal (fewer corrections needed, lower rework rate) would be valuable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Failure pattern aggregation.&lt;/strong&gt; Surface repeated AI mistakes at the team level. If the same failure shows up across engineers, flag it as a context gap, not an individual problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The org-wide picture, stitched together.&lt;/strong&gt; Combine git data, ticket data, CI data, and AI usage data into a view that answers: are we delivering more value? Is quality holding? Where should we invest next?&lt;/p&gt;

&lt;h2&gt;
  
  
  Questions for Solution Builders
&lt;/h2&gt;

&lt;p&gt;If you're building in this space, here are the questions I'd want answered:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Can the "are we improving?" picture be automated?&lt;/strong&gt; The data is there (git, tickets, CI, AI usage). Can you stitch it together without someone manually maintaining a dashboard? Can you infer value delivery trends from data that already exists?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How do you measure context effectiveness without controlled experiments?&lt;/strong&gt; A/B testing CLAUDE.md configurations isn't practical in real workflows. What proxy signals can tell us whether a context change helped?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;What does a useful AI timesheet look like?&lt;/strong&gt; Not session-level token counts, but task-level effort distribution. How do you classify AI sessions by task type without requiring the developer to manually tag them?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How do you surface failure patterns across a team?&lt;/strong&gt; Individual correction patterns are noisy. Aggregate patterns are signal. What's the right level of abstraction?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How do you separate "AI made us faster" from "we redirected capacity"?&lt;/strong&gt; Velocity metrics alone can't tell you this. What combination of signals can?&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;How do you handle the perception gap?&lt;/strong&gt; Developers believe they're faster. Measurement sometimes shows otherwise. How do you present this data in a way that's constructive rather than demoralising?&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;These aren't rhetorical questions. If you're building tools in this space, I'd like to hear your answers.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;This is the second post in a series on applying patterns from agentic systems to everyday AI-assisted development. The first, &lt;a href="https://dev.to/javatarz/the-unix-philosophy-for-agentic-coding-112p"&gt;The Unix Philosophy for Agentic Coding&lt;/a&gt;, covers deterministic tool delegation.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
    <item>
      <title>The Unix Philosophy for Agentic Coding</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 14 Mar 2026 12:57:26 +0000</pubDate>
      <link>https://dev.to/javatarz/the-unix-philosophy-for-agentic-coding-112p</link>
      <guid>https://dev.to/javatarz/the-unix-philosophy-for-agentic-coding-112p</guid>
      <description>&lt;p&gt;Most people use AI coding agents backwards. They hand the agent a problem and ask it to solve the whole thing. The agent reads, reasons, generates, and hopes for the best.&lt;/p&gt;

&lt;p&gt;There's a better way. One that's cheaper, more predictable, and already well understood. It's the &lt;a href="https://en.wikipedia.org/wiki/Unix_philosophy" rel="noopener noreferrer"&gt;Unix philosophy&lt;/a&gt;, applied to how we work with AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/posts/2026-03-05-the-unix-philosophy-for-agentic-coding/cover.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh00aqm9rviog128pgikv.png" alt="A robotic conductor directing an orchestra of developer tools" width="800" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Pattern
&lt;/h2&gt;

&lt;p&gt;The Unix philosophy boils down to: do one thing well, compose small tools, let the shell orchestrate. When you work with an AI coding agent, the agent is the shell.&lt;/p&gt;

&lt;p&gt;Here's how I think about it:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Break the problem down.&lt;/strong&gt; Don't hand the agent a big, vague goal. Decompose it into sub-problems.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If a tool exists, use it.&lt;/strong&gt; Refactoring, formatting, linting, deployment: these are solved problems. Don't ask the AI to reinvent them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;If no tool exists, build one.&lt;/strong&gt; A small, deterministic script is better than an LLM making judgment calls where none are needed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The agent orchestrates.&lt;/strong&gt; It decides what to do, in what order, with which tools. That's where its intelligence adds value.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The principle is simple: &lt;strong&gt;don't let AI make decisions it doesn't need to make.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every unnecessary decision is a degree of freedom. Every degree of freedom is an opportunity for the model to get something wrong, burn tokens, and produce a result you can't reproduce.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Goes Wrong Without This
&lt;/h2&gt;

&lt;p&gt;When you ask an AI agent to do something a deterministic tool already handles, you get:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Inconsistency.&lt;/strong&gt; LLMs aren't deterministic. Run the same prompt twice, get different results. A tool gives you the same output every time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wasted tokens.&lt;/strong&gt; Generating 200 lines of reformatted code costs tokens. Running &lt;a href="https://prettier.io" rel="noopener noreferrer"&gt;Prettier&lt;/a&gt; or &lt;a href="https://docs.astral.sh/ruff/" rel="noopener noreferrer"&gt;Ruff&lt;/a&gt; costs nothing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;More failure modes.&lt;/strong&gt; The model might miss edge cases a dedicated tool handles by design. A refactoring tool knows about downstream dependencies. An LLM might not.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Slower feedback loops.&lt;/strong&gt; Generating code, reviewing it, finding the error, regenerating: that cycle is slower than calling a tool that gets it right the first time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Examples
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Refactoring
&lt;/h3&gt;

&lt;p&gt;I want to rename a method. The method is used across dozens of files.&lt;/p&gt;

&lt;p&gt;The naive approach: ask the agent to read the codebase, find all references, and rewrite them. The agent will try. It might miss some. It might introduce a formatting inconsistency along the way. You'll spend time reviewing a diff that's harder to trust.&lt;/p&gt;

&lt;p&gt;The better approach: the agent calls &lt;a href="https://www.jetbrains.com/help/idea/mcp-server.html" rel="noopener noreferrer"&gt;IntelliJ's refactoring tools via MCP&lt;/a&gt;. One command. Every reference updated. Downstream dependencies handled. No formatting changes. No guesswork.&lt;/p&gt;

&lt;p&gt;Refactoring is a solved problem. I wouldn't ask a teammate to do a manual find-and-replace across a codebase. I wouldn't ask an AI agent to either.&lt;/p&gt;

&lt;h3&gt;
  
  
  Analysing CSV Data
&lt;/h3&gt;

&lt;p&gt;I have a set of CSVs I need to extract insights from.&lt;/p&gt;

&lt;p&gt;The naive approach: hand the files to the agent and ask it to read, validate, extract, and summarise everything. The agent will try. It might misparse a column, silently drop malformed rows, or hallucinate a trend that isn't there. You won't know unless you check every step. Large CSVs make this worse. Hundreds of thousands of rows won't fit in a context window, and even if they did, you're burning tokens on data the model doesn't need to see. The agent doesn't know which rows matter until it's processed all of them.&lt;/p&gt;

&lt;p&gt;The better approach: build a small CLI that pre-processes the data first. Validate schemas, flag missing values, confirm row counts, filter to the relevant subset, compute the aggregations that don't need intelligence. This is deterministic work. Then pass the clean, reduced output to the agent for the part that actually needs judgment: identifying patterns and summarising insights.&lt;/p&gt;

&lt;p&gt;No tool existed for this specific validation, so I asked the agent to build one. That's the pattern. Build the tool, then use the tool. The agent wrote a script I can run repeatedly with predictable results. Now it's free to focus on what it's good at.&lt;/p&gt;
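&lt;p&gt;A minimal sketch of what such a pre-processing step looks like, with illustrative column names; your schema and aggregations will differ:&lt;/p&gt;

```python
# Deterministic CSV pre-processing: validate the header, flag (not silently
# drop) malformed rows, and aggregate, so the agent only sees reduced output.
import csv
import io

EXPECTED = ["date", "region", "amount"]

def preprocess(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))
    if reader.fieldnames != EXPECTED:
        raise ValueError("unexpected header: %r" % (reader.fieldnames,))
    totals, dropped = {}, 0
    for row in reader:
        try:
            amount = float(row["amount"])
        except (TypeError, ValueError):
            dropped += 1  # counted so nothing disappears silently
            continue
        totals[row["region"]] = totals.get(row["region"], 0.0) + amount
    return totals, dropped
```

&lt;p&gt;The agent then gets a handful of totals and a dropped-row count instead of a hundred thousand raw rows.&lt;/p&gt;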

&lt;h3&gt;
  
  
  Code Formatting
&lt;/h3&gt;

&lt;p&gt;I want my code to follow our team's style guide.&lt;/p&gt;

&lt;p&gt;The naive approach: include the style guide in the prompt and ask the agent to follow it. It will mostly comply. It will sometimes get creative (especially as &lt;a href="https://dev.to/javatarz/context-engineering-for-ai-assisted-development-b8i"&gt;context fills up&lt;/a&gt;). You'll find inconsistencies across files that are annoying to track down.&lt;/p&gt;

&lt;p&gt;The better approach: let the agent write code however it wants, then run &lt;a href="https://prettier.io" rel="noopener noreferrer"&gt;Prettier&lt;/a&gt;, &lt;a href="https://github.com/psf/black" rel="noopener noreferrer"&gt;Black&lt;/a&gt;, &lt;a href="https://docs.astral.sh/ruff/" rel="noopener noreferrer"&gt;Ruff&lt;/a&gt;, or &lt;a href="https://eslint.org" rel="noopener noreferrer"&gt;ESLint&lt;/a&gt;. Zero ambiguity. The agent doesn't need to think about formatting at all, which means fewer tokens spent and fewer decisions that could go wrong.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills, Hooks, and Tools
&lt;/h2&gt;

&lt;p&gt;If you use &lt;a href="https://docs.anthropic.com/en/docs/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, you'll know about skills (composable prompt-driven capabilities) and hooks (event-driven automation). These are the wiring. But wiring without workers doesn't accomplish much.&lt;/p&gt;

&lt;p&gt;A good skill is composable. A great skill is composable and delegates to deterministic tools instead of taking on responsibilities it doesn't need. If a skill invokes a CLI tool, an API, or a build system instead of asking the LLM to reason through a solved problem, that skill will be faster, cheaper, and more reliable.&lt;/p&gt;
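&lt;p&gt;As a stand-in example, here's the shape of the deterministic core a skill could delegate to, using stdlib JSON canonicalisation in place of a real formatter like Prettier or Ruff:&lt;/p&gt;

```python
# A deterministic helper a skill can call instead of asking the LLM to
# "format this": same input always yields the same output, at zero tokens.
import json

def canonical_json(text):
    return json.dumps(json.loads(text), indent=2, sort_keys=True)
```

&lt;p&gt;The skill's prompt decides &lt;em&gt;when&lt;/em&gt; to canonicalise; the tool decides &lt;em&gt;how&lt;/em&gt;, every time, identically.&lt;/p&gt;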

&lt;p&gt;The same applies beyond Claude Code. Cursor rules, Windsurf workflows, any AI assistant: the pattern holds. Build your workflows so the AI orchestrates tools, not replaces them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Picture
&lt;/h2&gt;

&lt;p&gt;This isn't just about code formatting and refactoring. The same principle applies to deployment pipelines, database migrations, CI/CD workflows, building CLIs for business operations. Anywhere a deterministic tool can guarantee a correct result, use it. Reserve the LLM for the parts that genuinely need judgment: understanding intent, choosing an approach, reasoning about trade-offs, writing novel logic.&lt;/p&gt;

&lt;p&gt;Not every problem needs this treatment. For exploratory work, prototyping, or genuinely novel problems, letting the agent roam is the right call. But for the repeatable parts of your workflow, reach for a tool.&lt;/p&gt;

&lt;p&gt;The best AI workflows I've built look like Unix pipelines. Small, focused tools. A smart orchestrator composing them. The AI's value isn't in doing everything. It's in knowing what to do and calling the right tool to do it.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Thanks to &lt;a href="https://www.linkedin.com/in/carmenmardiros/" rel="noopener noreferrer"&gt;Carmen Mardiros&lt;/a&gt; whose &lt;a href="https://www.meetup.com/data-engineers-london/events/313209661/" rel="noopener noreferrer"&gt;talk at Data Engineers London&lt;/a&gt; helped crystallise this thinking.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>intelligent Engineering: In Practice</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 03 Jan 2026 18:32:40 +0000</pubDate>
      <link>https://dev.to/javatarz/intelligent-engineering-in-practice-41kf</link>
      <guid>https://dev.to/javatarz/intelligent-engineering-in-practice-41kf</guid>
      <description>&lt;p&gt;Principles are easy. Application is hard.&lt;/p&gt;

&lt;p&gt;I've written about &lt;a href="https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa"&gt;intelligent Engineering principles&lt;/a&gt; and &lt;a href="https://dev.to/javatarz/intelligent-engineering-a-skill-map-for-learning-ai-assisted-development-3kaj"&gt;the skills needed to build with AI&lt;/a&gt;. But I kept getting the same question: "How do I actually set this up on a real project?"&lt;/p&gt;

&lt;p&gt;This post answers that question. I'll walk through the complete setup, using a real repository as a worked example. Not a toy project. Not a weekend experiment. A codebase with architectural decisions, test coverage, documentation, and a clear development workflow.&lt;/p&gt;

&lt;p&gt;Here's what it looks like in action:&lt;/p&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/oK0N7pQ5rIY"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;h2&gt;
  
  
  The intelligent Engineering Stack
&lt;/h2&gt;

&lt;p&gt;Before diving into details, here's the mental model I use. intelligent Engineering isn't one thing. It's layers that enable each other:&lt;/p&gt;

&lt;p&gt;&lt;a href="/assets/images/posts/2026-01-02-intelligent-engineering-in-practice/ie-stack.svg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkarun.me%2Fassets%2Fimages%2Fposts%2F2026-01-02-intelligent-engineering-in-practice%2Fie-stack.svg" alt="The intelligent Engineering Stack: four layers from Foundation at the bottom, through Context, Interaction, to Workflow at the top"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This diagram shows &lt;a href="https://claude.ai/code/" rel="noopener noreferrer"&gt;Claude Code's&lt;/a&gt; primitives. Other AI assistants have different building blocks: Cursor has rules and &lt;code&gt;.cursorrules&lt;/code&gt;, Windsurf has Cascade workflows. The layers matter more than the specific implementation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The screencast showed the workflow. The rest of this post explains what makes it work, layer by layer from top to bottom.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Two Phases of intelligent Engineering
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Shaping AI&lt;/strong&gt; is preparation. You define agentic workflows, set up tooling, provide context, and build a prompt library. Context includes coding guidelines, architecture patterns, and deployment patterns. This is the work before you write code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leading AI&lt;/strong&gt; is execution. You move through the development cycle: ideation, research, feature specs, architecture, stories, code, test, deploy. This is the work of building software, with AI as a collaborator.&lt;/p&gt;

&lt;p&gt;The screencast showed Leading AI in action. But Leading AI only works well when Shaping AI is done right. Most of this post focuses on Shaping AI, the setup that makes everything else possible.&lt;/p&gt;

&lt;p&gt;We've written more about this framework on the &lt;a href="https://sahaj.ai/featured-article/realising-efficiency-and-productivity-through-intelligent-engineering/" rel="noopener noreferrer"&gt;Sahaj website&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Level 4: The Workflow
&lt;/h2&gt;

&lt;p&gt;This workflow mirrors how many teams already work: pick up a card, groom it, discuss the approach, TDD the implementation. Your team might work differently. intelligent Engineering is about shaping AI to match your existing practices, not adopting a prescribed workflow. The stack exists to encode how your team works. You should be &lt;a href="https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa#shape-ai-deliberately"&gt;shaping AI&lt;/a&gt;, not letting it shape you. This example shows one approach; yours will look different.&lt;/p&gt;

&lt;p&gt;Here's what the screencast showed, step by step:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Pick Up Work
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; /pickup

## Available Stories by Priority

### P0 - Critical
- #23 - S01.3: Profile Completion [module:customer]

Which card would you like to pick up?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  2. Design Discussion
&lt;/h3&gt;

&lt;p&gt;Here's what the design discussion looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I've read #23 and the customer domain docs. Here's my understanding:

**What we're building:**
- Profile completion endpoint (PUT /api/v1/customers/{id}/profile)
- Validation: name, DOB (18+), SSN format, address
- SSN encryption before storage

**Proposed approach:**
1. Add ProfileRequest DTO with validation annotations
2. Extend CustomerService with completeProfile method
3. Add encryption service for SSN

**Questions:**
- Should SSN validation reject obvious invalid patterns (000-00-0000)?
- Should we allow profile updates after initial completion?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is Leading AI in action. The AI proposes, the human validates and refines. Agreement on approach before code.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. TDD Implementation
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://martinfowler.com/bliki/TestDrivenDevelopment.html" rel="noopener noreferrer"&gt;Test-Driven Development&lt;/a&gt; (TDD) is a discipline where you write tests before code. The cycle is Red-Green-Refactor: write a failing test (red), write minimum code to pass (green), clean up (refactor). Kent Beck's &lt;a href="https://www.oreilly.com/library/view/test-driven-development/0321146530/" rel="noopener noreferrer"&gt;Test-Driven Development by Example&lt;/a&gt; is the canonical reference.&lt;/p&gt;

&lt;p&gt;With agreement on approach, the TDD skill takes over:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Running in interactive mode...

RED: Writing test for profile completion with valid data
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;a href="https://github.com/javatarz/credit-card-lending/blob/main/.claude/skills/tdd/SKILL.md" rel="noopener noreferrer"&gt;TDD skill&lt;/a&gt; supports multiple review modes that control when you review AI's work. Interactive mode pauses after each red-green cycle; batch and autonomous modes give more flow with less oversight.&lt;/p&gt;

&lt;p&gt;The AI writes a failing test:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight java"&gt;&lt;code&gt;&lt;span class="nd"&gt;@Test&lt;/span&gt;
&lt;span class="kt"&gt;void&lt;/span&gt; &lt;span class="nf"&gt;shouldCompleteProfile_whenAllFieldsValid&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;aProfileRequest&lt;/span&gt;&lt;span class="o"&gt;()&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;withFirstName&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"John"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;withLastName&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"Doe"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;withDateOfBirth&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;LocalDate&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;of&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;1990&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;15&lt;/span&gt;&lt;span class="o"&gt;))&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;withSsn&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"123-45-6789"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;build&lt;/span&gt;&lt;span class="o"&gt;();&lt;/span&gt;

    &lt;span class="kt"&gt;var&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;customerService&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;completeProfile&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;customerId&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;

    &lt;span class="n"&gt;assertThat&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;status&lt;/span&gt;&lt;span class="o"&gt;()).&lt;/span&gt;&lt;span class="na"&gt;isEqualTo&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="no"&gt;PROFILE_COMPLETE&lt;/span&gt;&lt;span class="o"&gt;);&lt;/span&gt;
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;And runs it:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Test written. Running...
❌ FAILED: completeProfile method doesn't exist

RED confirmed. Ready for GREEN?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This continues through each acceptance criterion with small commits after each cycle.&lt;/p&gt;
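
&lt;p&gt;To make the GREEN phase concrete, here's a sketch of the smallest change that would make that failing test pass. The types and names below (&lt;code&gt;ProfileRequest&lt;/code&gt;, &lt;code&gt;CustomerStatus&lt;/code&gt;) are hypothetical illustrations, not code from the actual repository:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import java.time.LocalDate;

public class GreenPhaseSketch {

    enum CustomerStatus { PROFILE_COMPLETE }

    record ProfileRequest(String firstName, String lastName,
                          LocalDate dateOfBirth, String ssn) {}

    record ProfileResult(CustomerStatus status) {}

    static class CustomerService {
        // GREEN phase: only what the single failing test demands.
        // Validation and SSN encryption arrive in later Red-Green cycles.
        ProfileResult completeProfile(long customerId, ProfileRequest request) {
            return new ProfileResult(CustomerStatus.PROFILE_COMPLETE);
        }
    }

    public static void main(String[] args) {
        var service = new CustomerService();
        var result = service.completeProfile(42L, new ProfileRequest(
                "John", "Doe", LocalDate.of(1990, 1, 15), "123-45-6789"));
        if (result.status() != CustomerStatus.PROFILE_COMPLETE) {
            throw new AssertionError("expected PROFILE_COMPLETE");
        }
        System.out.println("GREEN: " + result.status());
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;The point of the discipline is that this deliberately does too little: no validation, no encryption. Each of those behaviours gets its own failing test first.&lt;/p&gt;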

&lt;h3&gt;
  
  
  4. Context Updates
&lt;/h3&gt;

&lt;p&gt;If you use &lt;code&gt;/pickup&lt;/code&gt; to start work, context updates happen automatically when the story completes. The &lt;code&gt;/update-context&lt;/code&gt; command is for when you need to update docs outside the standard workflow:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;gt; /update-context

Recent changes:
- Added ProfileRequest DTO
- Added completeProfile to CustomerService
- Added SSN encryption

Context docs to review:
- docs/context/domain/customer.md (profile rules)
- docs/context/modules/customer-module.md (new endpoint)
- docs/context/current-state.md (mark feature built)

Proceed with updates?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  5. Push to Main
&lt;/h3&gt;

&lt;p&gt;Trunk-based development. Small commits. Direct to main.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git push origin main
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Story auto-closes via the &lt;code&gt;Closes #23&lt;/code&gt; footer in the final commit.&lt;/p&gt;
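
&lt;p&gt;For reference, a final commit in that series might look like this (the message body is hypothetical; only the &lt;code&gt;Closes&lt;/code&gt; footer matters for auto-closing):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Add SSN encryption for profile completion

Encrypt the SSN before persisting the customer profile.

Closes #23
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;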




&lt;p&gt;That's the workflow. Now let's understand what makes it possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Level 3: The Interaction Layer
&lt;/h2&gt;

&lt;p&gt;This is how you interact with the AI during development. The examples use Claude Code primitives, but the concepts transfer to other tools:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Equivalents&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cursor&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://cursor.com/docs/context/rules#rules" rel="noopener noreferrer"&gt;Rules&lt;/a&gt; (&lt;code&gt;.cursorrules&lt;/code&gt;), custom instructions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;GitHub Copilot&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://docs.github.com/copilot/customizing-copilot/adding-custom-instructions-for-github-copilot" rel="noopener noreferrer"&gt;Custom instructions&lt;/a&gt; (&lt;code&gt;.github/copilot-instructions.md&lt;/code&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Windsurf&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://docs.windsurf.com/windsurf/cascade/workflows" rel="noopener noreferrer"&gt;Workflows&lt;/a&gt;, &lt;a href="https://docs.windsurf.com/windsurf/cascade/memories#memories-and-rules" rel="noopener noreferrer"&gt;rules&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;OpenAI Codex&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://developers.openai.com/codex/guides/agents-md/" rel="noopener noreferrer"&gt;AGENTS.md&lt;/a&gt;, &lt;a href="https://developers.openai.com/codex/skills/" rel="noopener noreferrer"&gt;skills&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Claude Code organizes these into distinct primitives: &lt;a href="https://code.claude.com/docs/en/slash-commands" rel="noopener noreferrer"&gt;commands&lt;/a&gt;, &lt;a href="https://code.claude.com/docs/en/skills" rel="noopener noreferrer"&gt;skills&lt;/a&gt;, and &lt;a href="https://code.claude.com/docs/en/hooks" rel="noopener noreferrer"&gt;hooks&lt;/a&gt;. Each serves a different purpose.&lt;/p&gt;

&lt;h3&gt;
  
  
  Design Principles
&lt;/h3&gt;

&lt;p&gt;Whether you use Claude Code, Cursor, or another tool, these principles apply:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Description quality is critical.&lt;/strong&gt; AI tools use descriptions to discover which skill to activate. Vague descriptions mean skills never get triggered. Include what the skill does AND when to use it, with specific trigger terms users would naturally say.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Bad&lt;/span&gt;
description: Helps with testing

&lt;span class="gh"&gt;# Good&lt;/span&gt;
description: Enforces Red-Green-Refactor discipline for code changes.
             Use when implementing features, fixing bugs, or writing code.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Single responsibility.&lt;/strong&gt; Each command or skill does one thing. &lt;code&gt;/pickup&lt;/code&gt; selects work. &lt;code&gt;/start-dev&lt;/code&gt; begins development. Combining them makes both harder to discover and maintain.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Give goals, not steps.&lt;/strong&gt; Let the AI decide specifics. "Sort by priority and present options" beats a rigid sequence of exact commands. The AI can adapt to context you didn't anticipate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Include escape hatches.&lt;/strong&gt; "If blocked, ask the user" prevents infinite loops. AI will try to solve problems; give it permission to ask for help instead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Progressive disclosure.&lt;/strong&gt; Keep the main instruction file concise. Put detailed references in separate files that load on-demand. Context windows are shared: your skill competes with conversation history for space.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Match freedom to fragility.&lt;/strong&gt; Some tasks need exact steps (database migrations). Others benefit from AI judgment (refactoring). Use specific scripts for fragile operations; flexible instructions for judgment calls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Test across models.&lt;/strong&gt; What works with a powerful model may need more guidance for a faster one. If you switch models for cost or speed, verify your skills still work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Commands
&lt;/h3&gt;

&lt;p&gt;Commands are user-invoked. You type &lt;code&gt;/pickup&lt;/code&gt; and something happens.&lt;/p&gt;

&lt;p&gt;Here's the command set I use:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Command&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/pickup&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Select next issue from backlog&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/start-dev&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Begin TDD workflow on assigned issue&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/update-context&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Review and update context docs after work&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/check-drift&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Detect misalignment between docs and code&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;/tour&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Onboard newcomers to the project&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each command is a markdown file in &lt;code&gt;.claude/commands/&lt;/code&gt; with instructions for the AI:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Pick Up Next Card&lt;/span&gt;

You are helping the user pick up the next prioritized story.

&lt;span class="gu"&gt;## Instructions&lt;/span&gt;
&lt;span class="p"&gt;
1.&lt;/span&gt; Fetch open stories using GitHub CLI
&lt;span class="p"&gt;2.&lt;/span&gt; Sort by priority (P0 first, then P1, P2)
&lt;span class="p"&gt;3.&lt;/span&gt; Present options to the user
&lt;span class="p"&gt;4.&lt;/span&gt; When selected, assign the issue
&lt;span class="p"&gt;5.&lt;/span&gt; Show issue details to begin work
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;/tour&lt;/code&gt; command walks through project architecture, module structure, coding conventions, testing approach, and domain glossary. It turns context docs into an interactive onboarding experience.&lt;/p&gt;

&lt;h3&gt;
  
  
  Skills
&lt;/h3&gt;

&lt;p&gt;Skills are model-invoked. The AI activates them automatically based on context. If I ask to "implement the registration endpoint," the TDD skill activates without me saying &lt;code&gt;/tdd&lt;/code&gt;.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Skill&lt;/th&gt;
&lt;th&gt;Triggers On&lt;/th&gt;
&lt;th&gt;Does&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;tdd&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Code implementation requests&lt;/td&gt;
&lt;td&gt;Enforces Red-Green-Refactor&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;review&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;After code changes&lt;/td&gt;
&lt;td&gt;Structured quality assessment&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;wiki&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Wiki read/write requests&lt;/td&gt;
&lt;td&gt;Manages wiki access&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
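
&lt;p&gt;Each skill lives in &lt;code&gt;.claude/skills/&amp;lt;name&amp;gt;/SKILL.md&lt;/code&gt;, and the YAML frontmatter at the top is what the AI uses for discovery. A minimal sketch (field values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
name: tdd
description: Enforces Red-Green-Refactor discipline for code changes.
             Use when implementing features, fixing bugs, or writing code.
---

# TDD Skill

Instructions for the workflow go here...
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;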

&lt;p&gt;&lt;strong&gt;The TDD skill&lt;/strong&gt; is the one I use most:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trigger&lt;/strong&gt;: User asks to implement something, fix a bug, or write code&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Workflow&lt;/strong&gt;:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;RED&lt;/strong&gt;: Write a failing test, run it, confirm it fails&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GREEN&lt;/strong&gt;: Write minimum code to pass, run tests, confirm green&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;REFACTOR&lt;/strong&gt;: Clean up while keeping tests green&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;COMMIT&lt;/strong&gt;: Small commit with issue reference&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Review modes&lt;/strong&gt; control how much human oversight:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Review Point&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Interactive&lt;/td&gt;
&lt;td&gt;Each Red-Green cycle&lt;/td&gt;
&lt;td&gt;Learning, complex logic&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Batch AC&lt;/td&gt;
&lt;td&gt;After each acceptance criterion&lt;/td&gt;
&lt;td&gt;Moderate oversight&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Batch Story&lt;/td&gt;
&lt;td&gt;After all criteria complete&lt;/td&gt;
&lt;td&gt;Maximum flow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Autonomous&lt;/td&gt;
&lt;td&gt;Agent reviews continuously&lt;/td&gt;
&lt;td&gt;Speed with quality gates&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;I typically use interactive mode for unfamiliar code and batch-ac mode for well-understood patterns. I mostly use batch-story and autonomous modes for demos, though they'd suit repetitive work with well-established patterns.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The review skill&lt;/strong&gt; provides structured feedback:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Code Review: normal mode&lt;/span&gt;

&lt;span class="gu"&gt;### Blockers (0 found)&lt;/span&gt;

&lt;span class="gu"&gt;### Warnings (2 found)&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; &lt;span class="gs"&gt;**CustomerService.java:45**&lt;/span&gt; Method exceeds 20 lines
&lt;span class="p"&gt;   -&lt;/span&gt; Consider extracting validation logic

&lt;span class="gu"&gt;### Suggestions (1 found)&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; &lt;span class="gs"&gt;**CustomerServiceTest.java:112**&lt;/span&gt; Test name could be more specific

&lt;span class="gu"&gt;### Summary&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Blockers: 0
&lt;span class="p"&gt;-&lt;/span&gt; Warnings: 2
&lt;span class="p"&gt;-&lt;/span&gt; Suggestions: 1
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Verdict**&lt;/span&gt;: NEEDS ATTENTION
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The autonomous TDD mode uses this skill with configurable thresholds. "Strict" interrupts on any finding. "Relaxed" only stops for blockers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hooks
&lt;/h3&gt;

&lt;p&gt;Hooks are event-driven. They run shell commands or LLM prompts at specific lifecycle events: before a tool runs, after a file is written, when Claude asks for permission.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Event&lt;/th&gt;
&lt;th&gt;Use Case&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PostToolUse&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Auto-format files after writes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;PreToolUse&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Block sensitive operations&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;UserPromptSubmit&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Validate prompts before execution&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Example: auto-format with Prettier after every file write:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight json"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="nl"&gt;"PostToolUse"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"matcher"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"Write|Edit"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="nl"&gt;"hooks"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="p"&gt;[{&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"type"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="w"&gt;
        &lt;/span&gt;&lt;span class="nl"&gt;"command"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;&lt;span class="w"&gt; &lt;/span&gt;&lt;span class="s2"&gt;"npx prettier --write &lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;$FILE_PATH&lt;/span&gt;&lt;span class="se"&gt;\"&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="w"&gt;
      &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
    &lt;/span&gt;&lt;span class="p"&gt;}]&lt;/span&gt;&lt;span class="w"&gt;
  &lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="w"&gt;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
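
&lt;p&gt;A &lt;code&gt;PreToolUse&lt;/code&gt; hook can guard fragile operations: in Claude Code, a hook command that exits with code 2 blocks the tool call. A sketch (the script path is hypothetical):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "hooks": {
    "PreToolUse": [{
      "matcher": "Bash",
      "hooks": [{
        "type": "command",
        "command": "./scripts/block-destructive-commands.sh"
      }]
    }]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;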



&lt;p&gt;The &lt;a href="https://github.com/javatarz/credit-card-lending" rel="noopener noreferrer"&gt;credit-card-lending&lt;/a&gt; project doesn't use hooks yet. They're next on the list.&lt;/p&gt;

&lt;h3&gt;
  
  
  Other Primitives
&lt;/h3&gt;

&lt;p&gt;Claude Code has additional constructs I haven't used in this project:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Primitive&lt;/th&gt;
&lt;th&gt;What It Does&lt;/th&gt;
&lt;th&gt;When to Use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://code.claude.com/docs/en/sub-agents" rel="noopener noreferrer"&gt;Subagents&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Specialized delegates with separate context&lt;/td&gt;
&lt;td&gt;Complex multi-step tasks, context isolation&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://code.claude.com/docs/en/mcp" rel="noopener noreferrer"&gt;MCP&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;External tool integrations&lt;/td&gt;
&lt;td&gt;Database access, APIs, custom tools&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://code.claude.com/docs/en/output-styles" rel="noopener noreferrer"&gt;Output Styles&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Custom system prompts&lt;/td&gt;
&lt;td&gt;Non-engineering tasks (teaching, writing)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;a href="https://code.claude.com/docs/en/plugins" rel="noopener noreferrer"&gt;Plugins&lt;/a&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Bundled primitives for distribution&lt;/td&gt;
&lt;td&gt;Team-wide deployment&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Start with commands, skills, and context docs. Add the others as your needs grow.&lt;/p&gt;

&lt;h2&gt;
  
  
  Level 2: Context Documentation
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://dev.to/javatarz/context-engineering-for-ai-assisted-development-b8i"&gt;Context&lt;/a&gt; is what the AI knows about your project. I've seen teams underinvest here. They write a README and call it done, then wonder why AI assistants keep making the same mistakes.&lt;/p&gt;

&lt;p&gt;What's missing is your engineering culture. The hardest part isn't the tools; it's capturing what your team actually does. Consider code reviews: most of the time goes to style, not substance. "Why isn't this using our logging pattern?" "We don't structure tests that way here." Without codification, AI applies its own defaults. The code might work, but it doesn't feel like &lt;em&gt;your&lt;/em&gt; code.&lt;/p&gt;

&lt;p&gt;When you codify your team's preferences, AI follows YOUR patterns instead of its defaults. Style debates &lt;a href="https://en.wikipedia.org/wiki/Shift-left_testing" rel="noopener noreferrer"&gt;shift left&lt;/a&gt;: instead of the same argument across a dozen pull requests, you debate once over a document. Once the document reflects consensus, it's settled.&lt;/p&gt;

&lt;h3&gt;
  
  
  What to Document
&lt;/h3&gt;

&lt;p&gt;I've settled on this structure:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;overview.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Architecture, tech stack, module boundaries&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;conventions.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Code patterns, naming, git workflow&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;testing.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;TDD approach, test structure, tooling&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;glossary.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Domain terms with precise definitions&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;current-state.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;What's built vs planned&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;domain/*.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Business rules for each domain&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;modules/*.md&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Technical details for each module&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The &lt;a href="https://github.com/javatarz/credit-card-lending" rel="noopener noreferrer"&gt;credit-card-lending&lt;/a&gt; project extends this with &lt;code&gt;integrations.md&lt;/code&gt; (external systems) and &lt;code&gt;metrics.md&lt;/code&gt; (measuring iE effectiveness). Adapt the structure to your domain's needs.&lt;/p&gt;

&lt;p&gt;These docs exist for both AI and human consumption, but discoverability matters. New team members shouldn't have to hunt through &lt;code&gt;docs/context/&lt;/code&gt; to understand what exists. The &lt;a href="https://github.com/javatarz/credit-card-lending" rel="noopener noreferrer"&gt;credit-card-lending&lt;/a&gt; project solves this with a &lt;code&gt;/tour&lt;/code&gt; command: run it and get an AI-guided walkthrough covering architecture, conventions, testing, and domain knowledge. This transforms static documentation into an interactive onboarding flow. Context docs become working tools, not forgotten reference material.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Doc Anatomy
&lt;/h3&gt;

&lt;p&gt;Every context doc starts with "Why Read This?" and prerequisites:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Testing Strategy&lt;/span&gt;

&lt;span class="gu"&gt;## Why Read This?&lt;/span&gt;

TDD principles, test pyramid, and testing tools.
Read when writing tests or understanding the test approach.

&lt;span class="gs"&gt;**Prerequisites:**&lt;/span&gt; conventions.md for code style
&lt;span class="gs"&gt;**Related:**&lt;/span&gt; domain/ for business rules being tested
&lt;span class="p"&gt;
---
&lt;/span&gt;
&lt;span class="gu"&gt;## Philosophy&lt;/span&gt;

We practice Test-Driven Development as our primary approach.
Tests drive design and provide confidence for change.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This helps AI tools (and humans) know whether they need this file and what to read first.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Dense facts beat explanatory prose.&lt;/strong&gt; Compare:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"Our testing philosophy emphasizes the importance of test-driven development. We believe that writing tests first leads to better design..."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;vs.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"TDD: Red-Green-Refactor. Tests before code. One assertion per test. Naming: &lt;code&gt;should{Expected}_when{Condition}&lt;/code&gt;."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The second version is what AI tools need. Save the narrative for human-focused documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Living Documentation
&lt;/h3&gt;

&lt;p&gt;Stale documentation lies confidently. It states things that are no longer true. You write tests to catch broken code. Your documentation needs the same capability.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://github.com/javatarz/credit-card-lending" rel="noopener noreferrer"&gt;credit-card-lending&lt;/a&gt; project handles this two ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Definition of Done includes context updates&lt;/strong&gt;: Every story card lists which context docs to review. The AI won't let you forget. You can bypass it by working without your AI pair or deleting the prompt, but the default path nudges you toward keeping docs current.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Drift detection&lt;/strong&gt;: A &lt;code&gt;/check-drift&lt;/code&gt; command compares docs against code&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The second point catches what the first misses. I've seen projects where features get built but &lt;code&gt;current-state.md&lt;/code&gt; still shows them as planned. Regular drift checks catch this before it causes confusion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Patterns for Teams
&lt;/h3&gt;

&lt;p&gt;The examples above work within a single repository. At team and org level:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Shared context repository&lt;/strong&gt;: A company-wide repo with organization-level conventions, security requirements, architectural patterns. Each project references it but can override.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team-level customization&lt;/strong&gt;: Team-specific &lt;code&gt;CLAUDE.md&lt;/code&gt; additions for their domain, their tools, their workflow quirks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prompt library&lt;/strong&gt;: Reusable prompts for common tasks. "Review this PR for security issues" with the right context attached.&lt;/p&gt;
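
&lt;p&gt;The shared-repo pattern can be as simple as each project's &lt;code&gt;CLAUDE.md&lt;/code&gt; pointing at a checked-out org-context repo. A sketch (paths and overrides are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;## Org Conventions (shared)
Read ../org-context/conventions.md for security requirements
and organization-wide architectural patterns.

## Team Overrides
- We use Testcontainers for integration tests instead of the
  org-default in-memory database
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;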

&lt;h2&gt;
  
  
  Level 1: Foundation
&lt;/h2&gt;

&lt;p&gt;The foundation is what the AI sees when it first encounters your project.&lt;/p&gt;

&lt;h3&gt;
  
  
  CLAUDE.md
&lt;/h3&gt;

&lt;p&gt;This is your project's instruction manual for AI assistants. It goes in the repository root and contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Project context&lt;/strong&gt;: What this is, what it does&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Git workflow&lt;/strong&gt;: Commit conventions, branching strategy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context file references&lt;/strong&gt;: Where to find domain knowledge, conventions, architecture&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool-specific instructions&lt;/strong&gt;: Commands, scripts, common tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Here's an excerpt from the &lt;a href="https://github.com/javatarz/credit-card-lending/blob/main/CLAUDE.md" rel="noopener noreferrer"&gt;credit-card-lending CLAUDE.md&lt;/a&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# CLAUDE.md&lt;/span&gt;

&lt;span class="gu"&gt;## Project Context&lt;/span&gt;
Credit card lending platform built with Java 25 and Spring Boot 4.
Modular monolith architecture with clear module boundaries.

&lt;span class="gu"&gt;## Git Workflow&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Trunk-based development: push to main, no PRs for standard work
&lt;span class="p"&gt;-&lt;/span&gt; Small commits (&amp;lt;200 lines) with descriptive messages
&lt;span class="p"&gt;-&lt;/span&gt; Reference issue numbers in commits

&lt;span class="gu"&gt;## Context Files&lt;/span&gt;
Read these before working on specific areas:
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="sb"&gt;`docs/context/overview.md`&lt;/span&gt; - Architecture and module structure
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="sb"&gt;`docs/context/conventions.md`&lt;/span&gt; - Code standards and patterns
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="sb"&gt;`docs/context/testing.md`&lt;/span&gt; - TDD principles and test strategy
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;CLAUDE.md is dense and factual, not explanatory. It tells the AI what to do, not why. The "why" lives in context docs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Project Structure
&lt;/h3&gt;

&lt;p&gt;Structure matters because AI tools use file paths to understand context. I've found this layout works well:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;project/
├── CLAUDE.md                    # AI instruction manual
├── .claude/
│   ├── commands/                # User-invoked slash commands
│   └── skills/                  # Model-invoked capabilities
├── docs/
│   ├── context/                 # Dense reference documentation
│   │   ├── overview.md
│   │   ├── conventions.md
│   │   ├── testing.md
│   │   └── domain/
│   ├── wiki/                    # Narrative documentation
│   └── adr/                     # Architectural decisions
└── src/                         # Your code
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The separation between &lt;code&gt;context/&lt;/code&gt; (for AI consumption) and &lt;code&gt;wiki/&lt;/code&gt; (for humans) is intentional. Context docs are dense facts. &lt;a href="https://github.com/javatarz/credit-card-lending/wiki" rel="noopener noreferrer"&gt;Wiki pages&lt;/a&gt; explain concepts with diagrams and narrative. &lt;a href="https://adr.github.io" rel="noopener noreferrer"&gt;ADRs&lt;/a&gt; (Architectural Decision Records) capture why significant decisions were made. This context prevents future teams from wondering "why did they do it this way?"&lt;/p&gt;
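&lt;p&gt;If you haven't written ADRs before, they're short. Here's a minimal example following the common template; the numbering and wording are invented for illustration, though the decision echoes this project's modular-monolith choice:&lt;/p&gt;

```markdown
# ADR 0007: Use a modular monolith instead of microservices

## Status
Accepted

## Context
The team is small and the domain boundaries are still shifting.
Separate services would add deployment and operational overhead
before the module seams are stable.

## Decision
Build a single deployable with enforced module boundaries.
Revisit extraction once a module's boundary has been stable
for two quarters.

## Consequences
Simpler operations now; a future extraction cost if a module
needs independent scaling.
```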

&lt;h2&gt;
  
  
  Takeaways
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://github.com/javatarz/credit-card-lending" rel="noopener noreferrer"&gt;credit-card-lending&lt;/a&gt; repository demonstrates everything discussed above. Here's what I learned applying it.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Worked
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Small batches&lt;/strong&gt;: Most commits are under 100 lines. This makes review meaningful and rollbacks clean.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context primacy&lt;/strong&gt;: The AI reads &lt;code&gt;conventions.md&lt;/code&gt; before writing code. It knows our test naming patterns, package structure, and error handling approach without me repeating it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TDD skill with review modes&lt;/strong&gt;: Interactive mode for complex validation logic. Batch-ac mode for straightforward CRUD operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Living documentation&lt;/strong&gt;: Every completed story updates &lt;code&gt;current-state.md&lt;/code&gt;. I know what's built by reading one file.&lt;/p&gt;

&lt;h3&gt;
  
  
  What We Learned
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Context docs need maintenance&lt;/strong&gt;: Early on, I'd update code without updating context docs. The AI would then generate code following outdated patterns. The &lt;code&gt;/check-drift&lt;/code&gt; command catches this now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills are better than scripts&lt;/strong&gt;: I started with bash scripts for workflows. Moving to skills let the AI adapt to context instead of following rigid steps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design discussion matters&lt;/strong&gt;: Agreeing on approach before coding feels slow. In reality, it saves rework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Ready to try this? Here's a path:&lt;/p&gt;

&lt;h3&gt;
  
  
  If You're Starting Fresh
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create &lt;code&gt;CLAUDE.md&lt;/code&gt; with your project context&lt;/li&gt;
&lt;li&gt;Add &lt;code&gt;docs/context/conventions.md&lt;/code&gt; with your coding standards&lt;/li&gt;
&lt;li&gt;Start with one command: &lt;code&gt;/start-dev&lt;/code&gt; for TDD workflow&lt;/li&gt;
&lt;li&gt;Add context docs as you need them&lt;/li&gt;
&lt;/ol&gt;
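&lt;p&gt;Those four steps can be scaffolded in a couple of commands. This is only a sketch of the layout described earlier: the files start empty, and the command file name assumes Claude Code's convention of one markdown file per slash command.&lt;/p&gt;

```shell
# Scaffold the starting-fresh layout from the steps above.
set -e
workdir=$(mktemp -d)        # demo in a throwaway directory
cd "$workdir"
mkdir -p .claude/commands docs/context
touch CLAUDE.md
touch docs/context/conventions.md
touch .claude/commands/start-dev.md   # backs the /start-dev slash command
ls -R .claude docs
```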

&lt;h3&gt;
  
  
  If You Have an Existing Project
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Create &lt;code&gt;CLAUDE.md&lt;/code&gt; capturing how you want the project worked on&lt;/li&gt;
&lt;li&gt;Document your most important conventions&lt;/li&gt;
&lt;li&gt;Add the &lt;code&gt;/update-context&lt;/code&gt; command so documentation stays current&lt;/li&gt;
&lt;li&gt;Gradually expand context as you work&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Try It Yourself
&lt;/h3&gt;

&lt;p&gt;Clone the example repository and explore:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/javatarz/credit-card-lending
&lt;span class="nb"&gt;cd &lt;/span&gt;credit-card-lending
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Run &lt;code&gt;/tour&lt;/code&gt; to get an interactive walkthrough of the project structure, setup, and key concepts. Then try &lt;code&gt;/pickup&lt;/code&gt; to see available work or &lt;code&gt;/start-dev&lt;/code&gt; to see TDD in action.&lt;/p&gt;

&lt;p&gt;The branch &lt;code&gt;blog-ie-setup-jan2025&lt;/code&gt; contains the exact state referenced in this post.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;If you try this approach, I'd like to hear what works and what doesn't. The practices here evolved from experimentation. They'll keep evolving.&lt;/p&gt;

&lt;h2&gt;
  
  
  Credits
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;The intelligent Engineering framework was developed in collaboration with &lt;a href="https://www.linkedin.com/in/anandiyengar/" rel="noopener noreferrer"&gt;Anand Iyengar&lt;/a&gt; and other Sahajeevis. It was originally published on the &lt;a href="https://sahaj.ai/featured-article/realising-efficiency-and-productivity-through-intelligent-engineering/" rel="noopener noreferrer"&gt;Sahaj website&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>intelligent Engineering: A Skill Map for Learning AI-Assisted Development</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Thu, 01 Jan 2026 05:58:05 +0000</pubDate>
      <link>https://dev.to/javatarz/intelligent-engineering-a-skill-map-for-learning-ai-assisted-development-3kaj</link>
      <guid>https://dev.to/javatarz/intelligent-engineering-a-skill-map-for-learning-ai-assisted-development-3kaj</guid>
      <description>&lt;p&gt;Principles are useful, but they don't tell you what to practice.&lt;/p&gt;

&lt;p&gt;In my previous post on &lt;a href="https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa"&gt;intelligent Engineering principles&lt;/a&gt;, I outlined the ideas that guide how I build software with AI. Since then, I've had people ask: "Where do I start? What skills should I build first?"&lt;/p&gt;

&lt;p&gt;This post answers that: a map of the skills that make up intelligent Engineering, organised into a learning path you can follow whether you're an individual contributor looking to level up or a tech leader building your team's AI fluency.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is intelligent Engineering?
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://sahaj.ai/intelligent-engineering/" rel="noopener noreferrer"&gt;intelligent Engineering&lt;/a&gt; is a framework for integrating AI across the entire software development lifecycle, not just code generation.&lt;/p&gt;

&lt;p&gt;Writing code represents only 10-20% of software development effort. The rest is research, analysis, design, testing, deployment, and maintenance. intelligent Engineering applies AI across all of these stages while keeping humans accountable for outcomes.&lt;/p&gt;

&lt;p&gt;I've already written about the &lt;a href="https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa"&gt;five core principles&lt;/a&gt; in detail. This post focuses on the skills that make those principles actionable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Skill Map
&lt;/h2&gt;

&lt;p&gt;&lt;a href="/assets/images/posts/2026-01-01-skill-map/skill-progression.png"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6cboy16yilvjufbsck1e.png" alt="Skill progression map showing four stages: Foundations, AI Interaction, Workflow Integration, and Advanced/Agentic" width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Master the skills at each stage before moving to the next. Skipping ahead creates gaps that AI will expose.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Foundations
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://dora.dev/research/2025/dora-report/" rel="noopener noreferrer"&gt;2025 DORA report&lt;/a&gt; confirmed what many suspected: AI amplifies your existing capability, magnifying both strengths and weaknesses.&lt;/p&gt;

&lt;p&gt;If your fundamentals are weak, AI won't fix them. It will make the cracks more visible, faster.&lt;/p&gt;

&lt;p&gt;This map assumes you already have solid computer science fundamentals: data structures, algorithms, and an understanding of how systems work (processors, memory, networking, databases, etc.). AI doesn't replace the need to know these.&lt;/p&gt;

&lt;h4&gt;
  
  
  Version control fluency
&lt;/h4&gt;

&lt;p&gt;Git workflows, meaningful commits, safe experimentation with branches. AI generates code quickly. If you can't safely integrate and roll back changes, you'll spend more time cleaning up than you save.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If you haven't used branches and pull requests regularly, start a side project that forces you to&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://git-scm.com/book/en/v2" rel="noopener noreferrer"&gt;Pro Git&lt;/a&gt; (free online) - chapters 1-3 cover the essentials&lt;/li&gt;
&lt;li&gt;Learn &lt;a href="https://git-scm.com/docs/git-worktree" rel="noopener noreferrer"&gt;git worktrees&lt;/a&gt; - you'll need them for multi-agent workflows in the Advanced section&lt;/li&gt;
&lt;/ul&gt;
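&lt;p&gt;Worktrees are worth a quick demo, since they underpin the multi-agent workflows in the Advanced section. A minimal sketch in a throwaway repo:&lt;/p&gt;

```shell
# Sketch: git worktrees give each work stream (or agent) its own checkout
# of the same repo, so parallel changes never trample one working directory.
set -e
root=$(mktemp -d)
cd "$root"
git init -q demo
cd demo
git config user.email "you@example.com"   # local identity for the demo repo
git config user.name "You"
git commit -q --allow-empty -m "initial commit"
# Second working tree on its own branch, beside the main checkout
git worktree add ../feature-a -b feature-a
git worktree list
```

&lt;p&gt;Each worktree is a full checkout on its own branch, so two agents (or you and an agent) can edit in parallel without a stash-and-switch dance.&lt;/p&gt;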

&lt;h4&gt;
  
  
  Testing fundamentals
&lt;/h4&gt;

&lt;p&gt;The &lt;a href="https://martinfowler.com/articles/practical-test-pyramid.html" rel="noopener noreferrer"&gt;test pyramid&lt;/a&gt; still applies. Unit, integration, end-to-end. AI can generate tests, but knowing which tests matter, when to push tests up or down the pyramid, and reviewing their quality is your job. Build intuition for what belongs at each layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Practice writing tests before code (TDD) on a small project&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://www.oreilly.com/library/view/test-driven-development/0321146530/" rel="noopener noreferrer"&gt;Test-Driven Development: By Example&lt;/a&gt; by Kent Beck, the foundational TDD book&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://www.pearson.com/en-us/subject-catalog/p/growing-object-oriented-software-guided-by-tests/P200000009298/" rel="noopener noreferrer"&gt;Growing Object-Oriented Software, Guided by Tests&lt;/a&gt; by Steve Freeman and Nat Pryce for TDD in practice&lt;/li&gt;
&lt;li&gt;Apply &lt;a href="https://martinfowler.com/bliki/TestPyramid.html" rel="noopener noreferrer"&gt;Martin Fowler's test pyramid rule&lt;/a&gt;: if a unit test covers it, don't duplicate at higher levels. Push tests down: unit test business logic, integration test service interactions, end-to-end only for critical user paths&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Code review discipline
&lt;/h4&gt;

&lt;p&gt;You'll review more code than ever. AI-generated code often looks plausible but handles edge cases incorrectly. Strengthen your eye for subtle bugs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to watch for in AI-generated code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Security vulnerabilities&lt;/strong&gt;: SQL injection, unsafe data handling, hardcoded secrets. AI often generates patterns that work but aren't secure.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Edge cases&lt;/strong&gt;: Null handling, empty collections, boundary conditions. AI tends to handle the happy path well but miss edge cases.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business logic errors&lt;/strong&gt;: AI can't understand your domain. Verify that the code does what the business actually needs, not just what the prompt described.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Architectural violations&lt;/strong&gt;: Does the code respect your layer boundaries? Does it follow your ADRs? AI doesn't know your architectural constraints unless you tell it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Code smells&lt;/strong&gt;: Duplicated logic, overly complex methods, inconsistent patterns. AI doesn't always match your codebase conventions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hallucinated APIs&lt;/strong&gt;: Functions or methods that look real but don't exist. Always verify imports and dependencies.&lt;/li&gt;
&lt;/ul&gt;
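&lt;p&gt;A contrived example of the edge-case failure mode above: the first version looks plausible and passes a happy-path check, but crashes on an empty collection.&lt;/p&gt;

```python
# Contrived example of a plausible-looking AI snippet that misses an edge case.

def average_naive(values):
    # Happy path only: raises ZeroDivisionError on an empty list.
    return sum(values) / len(values)

def average(values):
    # Hardened version: the empty collection is handled explicitly.
    if not values:
        return None
    return sum(values) / len(values)

print(average([2, 4, 6]))  # prints 4.0
print(average([]))         # prints None
```

&lt;p&gt;The naive version is exactly what a prompt like "write an average function" tends to produce. Whether &lt;code&gt;None&lt;/code&gt;, an exception, or a default is the right behaviour for the empty case is a business-logic question only the reviewer can answer.&lt;/p&gt;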

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review pull requests on open source projects you use&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://google.github.io/eng-practices/review/" rel="noopener noreferrer"&gt;Code Review Guidelines&lt;/a&gt; from Google's engineering practices&lt;/li&gt;
&lt;li&gt;Practice the "trust but verify" mindset: assume AI code needs checking, not approval&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Code quality intuition
&lt;/h4&gt;

&lt;p&gt;Can you tell maintainable, clean code from code that is technically correct but messy? AI generates code fast. If you can't tell good from bad, you'll accept garbage that costs you later.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read &lt;a href="https://www.oreilly.com/library/view/clean-code-a/9780136083238/" rel="noopener noreferrer"&gt;Clean Code&lt;/a&gt; by Robert Martin&lt;/li&gt;
&lt;li&gt;Refactor old code you wrote, or practice on &lt;a href="https://github.com/emilybache/GildedRose-Refactoring-Kata" rel="noopener noreferrer"&gt;clean code katas&lt;/a&gt; - notice what makes code hard to change&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Documentation practices
&lt;/h4&gt;

&lt;p&gt;Documentation becomes AI context. Feed quality documentation into the system and you get quality AI output. Poor docs mean the AI hallucinates or makes wrong assumptions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document a project you're working on as if a new teammate needs to understand it&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://docsfordevelopers.com/" rel="noopener noreferrer"&gt;Docs for Developers&lt;/a&gt; for practical guidance&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Architecture understanding
&lt;/h4&gt;

&lt;p&gt;Data flow, component boundaries, dependency management. AI tools need you to describe constraints clearly. If you don't understand the architecture, you can't provide good context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Draw architecture diagrams for systems you work with&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://www.oreilly.com/library/view/fundamentals-of-software/9781492043447/" rel="noopener noreferrer"&gt;Fundamentals of Software Architecture&lt;/a&gt; by Richards and Ford for trade-offs and patterns&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://dataintensive.net/" rel="noopener noreferrer"&gt;Designing Data-Intensive Applications&lt;/a&gt; by Kleppmann for distributed systems and data architecture&lt;/li&gt;
&lt;li&gt;For microservices specifically, read &lt;a href="https://www.oreilly.com/library/view/building-microservices-2nd/9781492034018/" rel="noopener noreferrer"&gt;Building Microservices&lt;/a&gt; by Sam Newman&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  2. AI Interaction
&lt;/h3&gt;

&lt;p&gt;The skills specific to working with AI systems. You're learning to communicate with a system that's capable but context-limited, confident but sometimes wrong.&lt;/p&gt;

&lt;h4&gt;
  
  
  Prompt engineering basics
&lt;/h4&gt;

&lt;p&gt;Specificity matters. Vague requests get vague results.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Bad prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write a function to parse dates
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Good prompt:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Write a Python function that:
- Parses ISO 8601 date strings (e.g., "2025-12-31T14:30:00Z")
- Handles timezone offsets
- Returns None for invalid input
- Include docstring and type hints
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The difference isn't cleverness - it's precision.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key techniques:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Technique&lt;/th&gt;
&lt;th&gt;What It Is&lt;/th&gt;
&lt;th&gt;When to Use&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Specificity&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Precise requirements over vague requests&lt;/td&gt;
&lt;td&gt;Always - the biggest lever&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Few-shot prompting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Show 1-3 examples of input → output&lt;/td&gt;
&lt;td&gt;Team patterns, consistent formatting&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Chain of thought&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;"Think step-by-step: analyze, identify, explain, then fix"&lt;/td&gt;
&lt;td&gt;Debugging, complex reasoning&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Role prompting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;"Act as a senior security engineer reviewing for vulnerabilities"&lt;/td&gt;
&lt;td&gt;When expertise framing helps&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Meta prompting&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Prompts that generate or refine other prompts&lt;/td&gt;
&lt;td&gt;Org-level standards, team templates&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Explicit constraints&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;"Don't use external libraries. Keep it under 50 lines."&lt;/td&gt;
&lt;td&gt;Avoiding common failure modes&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Few-shot example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Convert these function names from camelCase to snake_case:

Example 1: getUserById -&amp;gt; get_user_by_id
Example 2: validateEmailAddress -&amp;gt; validate_email_address

Now convert: fetchAllActiveUsers
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Chain of thought example:&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Debug this function. Think step-by-step:
1. What is this function supposed to do?
2. Trace through with input X - what happens at each line?
3. Where does the actual behavior differ from expected?
4. What's the fix?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Spend a week being deliberate about prompts. Write down what you asked, what you got, and what you wish you'd asked.&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://platform.claude.com/docs/en/build-with-claude/prompt-engineering/overview" rel="noopener noreferrer"&gt;Anthropic's Prompt Engineering Guide&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Reference &lt;a href="https://www.promptingguide.ai/" rel="noopener noreferrer"&gt;promptingguide.ai&lt;/a&gt; for comprehensive techniques&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Context engineering
&lt;/h4&gt;

&lt;p&gt;A clever prompt won't fix bad context. Context engineering is about curating what information the model sees: project constraints, coding standards, relevant examples, what you've already tried.&lt;/p&gt;

&lt;p&gt;This is 80% of the skill. Prompt engineering is maybe 20%.&lt;/p&gt;

&lt;p&gt;I've written a detailed guide on this: &lt;a href="https://dev.to/javatarz/context-engineering-for-ai-assisted-development-b8i"&gt;Context Engineering for AI-Assisted Development&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Create a project-level context file (e.g., CLAUDE.md) for your current codebase&lt;/li&gt;
&lt;li&gt;Add coding standards, architectural constraints, common patterns&lt;/li&gt;
&lt;li&gt;Notice when AI output improves because of better context&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Understanding model behaviour
&lt;/h4&gt;

&lt;p&gt;You don't need to become an ML engineer, but knowing the basics helps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to understand:&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Concept&lt;/th&gt;
&lt;th&gt;Why It Matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Context windows&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Why your 50-file codebase overwhelms the model. Why it "forgets" earlier instructions. (&lt;a href="https://docs.anthropic.com/en/docs/build-with-claude/context-windows" rel="noopener noreferrer"&gt;Anthropic's context window docs&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Training data &amp;amp; fine-tuning&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Why Claude excels at code review. Why some models are verbose, others concise.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Knowledge cutoff&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Why the model doesn't know about libraries released last month.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hallucinations&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Models confidently generate plausible-looking nonsense. Verify APIs exist. Test edge cases.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cost per token&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Why Opus is expensive for exploration but worth it for complex reasoning. (&lt;a href="https://www.anthropic.com/pricing" rel="noopener noreferrer"&gt;Anthropic pricing&lt;/a&gt;)&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Model strengths (from my experience):&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Model&lt;/th&gt;
&lt;th&gt;Strengths&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude&lt;/td&gt;
&lt;td&gt;Thoughtful about edge cases, good at following complex instructions, strong code review&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;GPT&lt;/td&gt;
&lt;td&gt;Fast, good at general tasks, wide knowledge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Gemini&lt;/td&gt;
&lt;td&gt;Larger context windows, good at multimodal tasks&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;These observations come from my own work. Models evolve quickly - what's true today may change next quarter.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try the same task with different models. Note where each excels.&lt;/li&gt;
&lt;li&gt;Read model release notes when new versions come out&lt;/li&gt;
&lt;li&gt;Track which models work best for your common tasks&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Understanding tool behaviour
&lt;/h4&gt;

&lt;p&gt;Here's something that trips people up: &lt;strong&gt;the same model behaves differently in different tools&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Cursor's Claude is not the same as Claude Code's Claude is not the same as Windsurf's Claude. Why? Each tool wraps the model with its own system prompt.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Model Nuances (Intrinsic)&lt;/th&gt;
&lt;th&gt;Tool Nuances (Extrinsic)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;What it is&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Differences baked into the model itself&lt;/td&gt;
&lt;td&gt;Differences from how the tool wraps the model&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Examples&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Context window, reasoning style, training data, cost&lt;/td&gt;
&lt;td&gt;System prompts, UI, context injection, available commands&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;What to learn&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Model strengths for different tasks&lt;/td&gt;
&lt;td&gt;How your tool injects context, what its system prompt optimizes for&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This means: instructions that work well in Claude Code might not work the same in Cursor, even with the same underlying model. The tool's system prompt and context injection change the behavior.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try the same prompt in multiple tools. Notice the differences.&lt;/li&gt;
&lt;li&gt;Read your tool's documentation on how it manages context&lt;/li&gt;
&lt;li&gt;Understand what your tool's system prompt optimizes for (coding, conversation, etc.)&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  3. Workflow Integration
&lt;/h3&gt;

&lt;p&gt;Making AI a standard part of how you build software, not a novelty you occasionally use.&lt;/p&gt;

&lt;h4&gt;
  
  
  Tool configuration
&lt;/h4&gt;

&lt;p&gt;Configure your AI tools for your team's context. This isn't a one-time setup. Rules files need tuning. Context evolves. Tools update frequently.&lt;/p&gt;

&lt;p&gt;Each tool has its own configuration mechanism:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Claude Code uses &lt;a href="https://code.claude.com/docs/en/memory" rel="noopener noreferrer"&gt;CLAUDE.md files&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Cursor uses &lt;a href="https://cursor.directory" rel="noopener noreferrer"&gt;rules&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;Windsurf uses &lt;a href="https://docs.windsurf.com/windsurf/cascade/memories" rel="noopener noreferrer"&gt;memories&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Instructions that work in one tool won't transfer directly to another because system prompts differ.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document your configuration so teammates can get productive quickly&lt;/li&gt;
&lt;li&gt;Review and update configuration monthly as tools evolve&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Specs-before-implementation
&lt;/h4&gt;

&lt;p&gt;Define what to build before AI generates code. AI generates code that matches a spec well. It struggles to determine what the spec should be.&lt;/p&gt;

&lt;p&gt;Write the spec first - acceptance criteria, edge cases, constraints. Then let AI implement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Practice writing specs for features before touching code&lt;/li&gt;
&lt;li&gt;Include: what it should do, what it shouldn't do, edge cases to handle&lt;/li&gt;
&lt;/ul&gt;
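&lt;p&gt;Here's the shape I mean; the feature is invented for illustration:&lt;/p&gt;

```plaintext
Feature: password reset token

Should:
- Generate a single-use token valid for 30 minutes
- Invalidate all previous tokens for the user on generation

Should not:
- Reveal whether the email address exists
- Log the token value

Edge cases:
- Expired token presented: return the same error as an invalid token
- Two reset requests in quick succession: only the newest token works
```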

&lt;h4&gt;
  
  
  Test-driven mindset with AI
&lt;/h4&gt;

&lt;p&gt;Write tests first. Let AI implement to pass them. This flips the usual flow: instead of "generate code, then test it", you "define the contract, then fill it in."&lt;/p&gt;

&lt;p&gt;The tests become your spec. When AI has an executable target (tests that must pass), it produces better code than when interpreting prose requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try TDD on a small feature: write failing tests, then ask AI to make them pass&lt;/li&gt;
&lt;li&gt;Review the generated code - does it just satisfy the tests or is it actually good?&lt;/li&gt;
&lt;/ul&gt;
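&lt;p&gt;A small self-contained illustration of the flow (the function and its tests are invented for this example): the tests are written first as the executable contract, and the implementation is the kind of code you'd then ask the AI to produce to make them pass.&lt;/p&gt;

```python
# The contract, written first. In the workflow above, these tests would fail
# until you ask the AI to implement slugify to make them pass.

def slugify(title):
    # Candidate implementation the tests drove out: lowercase, keep
    # alphanumerics, collapse everything else into single hyphens.
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Rock and Roll!") == "rock-and-roll"

def test_collapses_repeated_separators():
    assert slugify("a  --  b") == "a-b"

test_lowercases_and_hyphenates()
test_strips_punctuation()
test_collapses_repeated_separators()
print("all tests pass")
```

&lt;p&gt;The review question from the bullet above still applies: this passes the tests, but does it handle Unicode titles or leading punctuation the way your product needs? If not, the fix is another test, not a prose instruction.&lt;/p&gt;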

&lt;h4&gt;
  
  
  Human review gates
&lt;/h4&gt;

&lt;p&gt;AI-generated code requires the same (or stricter) review as human-written code. Build the habit of treating AI output like code from a confident junior developer: often correct, sometimes subtly wrong, occasionally completely off base.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set a personal rule: no AI-generated code merged without reviewing every line&lt;/li&gt;
&lt;li&gt;Track your AI acceptance rate. If you're accepting &amp;gt;80% without modification, you might be over-trusting.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Small batches
&lt;/h4&gt;

&lt;p&gt;Generate less, review more. A 1000-line AI diff is harder to review than a 100-line one. Work in small chunks. Commit often.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Break tasks into steps that produce &amp;lt;200 lines of change&lt;/li&gt;
&lt;li&gt;Commit after each step passes review&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Quality guardrails
&lt;/h4&gt;

&lt;p&gt;Integrate linting, static analysis, and security scanning into your workflow. These catch issues AI introduces. Shift left. Catch problems early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up pre-commit hooks for linting and formatting&lt;/li&gt;
&lt;li&gt;Add security scanning to CI (e.g., &lt;a href="https://snyk.io/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt;, &lt;a href="https://semgrep.dev/" rel="noopener noreferrer"&gt;Semgrep&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
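&lt;p&gt;If you use the &lt;a href="https://pre-commit.com/" rel="noopener noreferrer"&gt;pre-commit&lt;/a&gt; framework, a starting configuration might look like this. The &lt;code&gt;rev&lt;/code&gt; values are placeholders; pin them to whatever releases are current for your project:&lt;/p&gt;

```yaml
# .pre-commit-config.yaml - runs these hooks on every commit.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0          # placeholder; pin to a current release
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-merge-conflict
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0          # placeholder; pin to a current release
    hooks:
      - id: ruff          # lint
      - id: ruff-format   # format
```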

&lt;h4&gt;
  
  
  Living documentation
&lt;/h4&gt;

&lt;p&gt;Documentation updated atomically with code changes. When code changes, docs change in the same commit. This keeps your AI context current.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Include doc updates in your definition of done&lt;/li&gt;
&lt;li&gt;Review PRs for documentation staleness&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  4. Advanced / Agentic
&lt;/h3&gt;

&lt;p&gt;Skills for autonomous AI workflows. These are powerful but risky: the more autonomy you grant, the stronger your guardrails need to be.&lt;/p&gt;

&lt;h4&gt;
  
  
  Agentic workflow design
&lt;/h4&gt;

&lt;p&gt;Tools like Claude Code, Cursor, and Windsurf can run shell commands, edit files, and chain actions. Know what your tool can do and design workflows that leverage it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with supervised agents: review each step before allowing the next&lt;/li&gt;
&lt;li&gt;Read &lt;a href="https://code.claude.com/docs/en/github-actions" rel="noopener noreferrer"&gt;Claude Code's GitHub Actions integration&lt;/a&gt; for CI/CD examples&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Task decomposition
&lt;/h4&gt;

&lt;p&gt;Break complex work into subtasks an agent can handle. Good decomposition is a skill in itself. Too big and the agent loses focus. Too small and you spend all your time orchestrating.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Practice breaking features into agent-sized tasks (~30 min of work each)&lt;/li&gt;
&lt;li&gt;Notice which decompositions lead to better agent output&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Guardrails for agents
&lt;/h4&gt;

&lt;p&gt;More autonomy needs stronger guardrails. Sandboxing, approval gates, rollback procedures. Agents make mistakes. Build systems that catch them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Never give agents write access to production&lt;/li&gt;
&lt;li&gt;Implement approval gates for destructive operations&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Engineering culture codification
&lt;/h4&gt;

&lt;p&gt;Turn your team's standards, patterns, and guidelines into structured artifacts that AI can use. This is how you scale intelligent Engineering beyond individuals.&lt;/p&gt;

&lt;p&gt;When you document coding standards, architectural patterns, and review checklists in a format AI can consume, every team member (and AI tool) operates from the same playbook.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Start with a CLAUDE.md (or equivalent) that captures your team's conventions&lt;/li&gt;
&lt;li&gt;Add architectural decision records (ADRs) that AI can reference&lt;/li&gt;
&lt;/ul&gt;
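&lt;p&gt;A sketch of what such a file might contain. Every convention below is hypothetical; the point is to be specific enough for both humans and AI tools to act on:&lt;/p&gt;

```markdown
# CLAUDE.md: team conventions (hypothetical example)

## Stack
- TypeScript, Node 20, pnpm workspaces

## Conventions
- Prefer pure functions; keep I/O at the edges
- Every public API needs JSDoc and a unit test
- No direct commits to main; PRs require one review

## Architecture
- Decision records live in docs/adr/; read the relevant ADR before
  changing a subsystem
```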

&lt;h4&gt;
  
  
  Multi-agent orchestration
&lt;/h4&gt;

&lt;p&gt;Running parallel agents (e.g., using git worktrees). Coordinating results. This is emerging territory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Try running two agents on independent tasks&lt;/li&gt;
&lt;li&gt;Notice coordination challenges and develop patterns for handling them&lt;/li&gt;
&lt;/ul&gt;
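&lt;p&gt;Git worktrees give each agent its own checkout of the same repository, so two sessions can't clobber each other's working tree. A runnable sketch (repository and branch names are illustrative):&lt;/p&gt;

```shell
# Create a throwaway repo with two feature branches,
# then give each agent session its own worktree.
tmp=$(mktemp -d)
cd "$tmp"
git init -q myrepo
cd myrepo
git -c user.email=dev@example.com -c user.name=dev \
  commit --allow-empty -qm "init"
git branch feature/auth
git branch feature/billing
git worktree add ../myrepo-auth feature/auth        # agent 1 works here
git worktree add ../myrepo-billing feature/billing  # agent 2 works here
git worktree list  # main checkout plus the two worktrees
```

&lt;p&gt;Each agent runs in its own worktree directory; you review and merge the branches as usual.&lt;/p&gt;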

&lt;h4&gt;
  
  
  CI/CD integration
&lt;/h4&gt;

&lt;p&gt;Running AI reviews on pull requests. Automated code analysis. Scheduled agents for maintenance tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to build this:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Set up &lt;a href="https://docs.github.com/en/copilot/how-tos/agents/copilot-code-review/using-copilot-code-review" rel="noopener noreferrer"&gt;Copilot code review&lt;/a&gt; or similar on your repo&lt;/li&gt;
&lt;li&gt;Start with comment-only (no auto-merge) until you trust it&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Learning Paths
&lt;/h2&gt;

&lt;p&gt;Not everyone starts from the same place.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Developers New to AI Tools
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Start here:&lt;/strong&gt; Foundations + AI Interaction basics&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Get comfortable with one AI tool. GitHub Copilot is a good starting point for its low cost and tight editor integration. For open source alternatives, try &lt;a href="https://aider.chat/" rel="noopener noreferrer"&gt;Aider&lt;/a&gt; or &lt;a href="https://github.com/sst/opencode" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Spend 2-4 weeks using it for completion and simple generation.&lt;/li&gt;
&lt;li&gt;Practice prompting: be specific, iterate, learn what works.&lt;/li&gt;
&lt;li&gt;Move to a more capable tool (Claude Code, Cursor, Windsurf) once you're comfortable.&lt;/li&gt;
&lt;li&gt;Build your first context file.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Expected ramp-up:&lt;/strong&gt; 4-8 weeks to feel productive.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Developers Experienced With AI
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Start here:&lt;/strong&gt; Workflow Integration + Advanced&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Audit your current workflow. Where are you using AI effectively? Where are you over-trusting?&lt;/li&gt;
&lt;li&gt;Strengthen context engineering. Create comprehensive project context files.&lt;/li&gt;
&lt;li&gt;Set up guardrails: linting, security scanning, review checklists.&lt;/li&gt;
&lt;li&gt;Experiment with agentic workflows under supervision.&lt;/li&gt;
&lt;li&gt;Integrate AI into CI/CD.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Expected ramp-up:&lt;/strong&gt; 2-4 weeks to significantly improve your workflow.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Tech Leaders Building Team Capability
&lt;/h3&gt;

&lt;p&gt;Whether you're a Tech Lead, Engineering Manager, Principal Engineer, or anyone else responsible for growing your team's capability, this section is for you.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start here:&lt;/strong&gt; The &lt;a href="https://cloud.google.com/resources/content/2025-dora-ai-capabilities-model-report" rel="noopener noreferrer"&gt;2025 DORA AI Capabilities Model&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The report identified seven practices that amplify AI's positive impact:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Clear AI stance&lt;/strong&gt;: Establish expectations for how your team uses AI.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Healthy data ecosystem&lt;/strong&gt;: Quality documentation enables quality AI outputs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Strong version control&lt;/strong&gt;: Rollback capability provides a safety net for experimentation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Small batches&lt;/strong&gt;: Enable quick course corrections.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;User-centric focus&lt;/strong&gt;: Clear goals improve AI output quality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quality internal platforms&lt;/strong&gt;: Standardised tooling scales AI benefits.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI-accessible data&lt;/strong&gt;: Make context available to AI tools.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Actions:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Assess your team against these practices. Where are the gaps?&lt;/li&gt;
&lt;li&gt;Don't change everything at once. Introduce AI one delivery stage at a time.&lt;/li&gt;
&lt;li&gt;Expect a learning curve: 2-4 weeks of reduced productivity before gains appear.&lt;/li&gt;
&lt;li&gt;Invest in guardrails before acceleration.&lt;/li&gt;
&lt;li&gt;Measure impact with DORA metrics: deployment frequency, lead time, change failure rate, time to restore.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Common Pitfalls
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Starting with advanced tools&lt;/strong&gt;: If you skip fundamentals, you'll produce more code, faster, with worse quality. The problems compound.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ignoring context engineering&lt;/strong&gt;: Most teams spend all their energy on prompt engineering. Context engineering matters far more. Good context makes mediocre prompts work; perfect prompts can't fix missing context. And context scales: set it up once, benefit every interaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Over-trusting AI&lt;/strong&gt;: "The AI suggested it" is not an acceptable answer in a post-mortem. &lt;a href="https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa#ai-augments-humans-stay-accountable"&gt;You're accountable for what ships&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Under-trusting AI&lt;/strong&gt;: Some developers refuse to adopt AI tools, treating them as a passing fad. The productivity gap is real. Healthy skepticism is fine, but refusing to engage is risky. For tech leaders: &lt;a href="https://dora.dev/ai/research-insights/adopt-gen-ai/" rel="noopener noreferrer"&gt;DORA's research on AI adoption&lt;/a&gt; shows that addressing anxieties directly and providing dedicated exploration time significantly improves adoption.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No guardrails&lt;/strong&gt;: AI makes it easy to move fast. Without automated quality checks, you'll ship bugs faster too. &lt;a href="https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa#smarter-ai-needs-smarter-guardrails"&gt;Smarter AI needs smarter guardrails&lt;/a&gt;. If you don't have linting, security scanning, and CI checks, add them before increasing your AI usage. For legacy codebases without tests, start with &lt;a href="https://understandlegacycode.com/blog/best-way-to-start-testing-untested-code/" rel="noopener noreferrer"&gt;characterization tests&lt;/a&gt; to capture current behaviour before refactoring. Michael Feathers' &lt;a href="https://www.oreilly.com/library/view/working-effectively-with/0131177052/" rel="noopener noreferrer"&gt;Working Effectively with Legacy Code&lt;/a&gt; is the definitive guide here. AI can accelerate this process, but verify every generated test passes against the real system without any changes to production code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confusing model and tool behaviour&lt;/strong&gt;: When AI output is wrong, is it the model's limitation or the tool's system prompt? Knowing the difference helps you fix it. To diagnose: try the same prompt in a different tool or the raw API. If the problem persists across tools, it's likely a model limitation. If it only happens in one tool, check how that tool injects context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trying to measure productivity improvement without baselines&lt;/strong&gt;: You can't prove AI made your team faster if you weren't measuring before. Worse, once estimates become targets for measuring AI impact, &lt;a href="https://www.linkedin.com/feed/update/urn:li:activity:7405299770233135105/" rel="noopener noreferrer"&gt;developers adjust their estimates&lt;/a&gt; (consciously or not). Skip the productivity theatre. Instead, measure what matters: features shipped, customer value delivered, time from idea to production, team satisfaction.&lt;/p&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;This skill map is a snapshot. The tools evolve weekly. New capabilities emerge monthly.&lt;/p&gt;

&lt;p&gt;If you're on this journey, I'd like to hear what's working for you. What skills have I missed? What resources have you found valuable?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coming up:&lt;/strong&gt; Putting these skills into practice. I'll walk through setting up intelligent Engineering on a real project, covering tool configuration, context files, and workflow patterns that work.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>career</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Context Engineering for AI-Assisted Development</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Thu, 01 Jan 2026 05:57:49 +0000</pubDate>
      <link>https://dev.to/javatarz/context-engineering-for-ai-assisted-development-b8i</link>
      <guid>https://dev.to/javatarz/context-engineering-for-ai-assisted-development-b8i</guid>
      <description>&lt;p&gt;Same model, different tools, different results.&lt;/p&gt;

&lt;p&gt;If you've used Claude Sonnet in &lt;a href="https://claude.ai/code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, &lt;a href="https://cursor.com" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;, &lt;a href="https://github.com/features/copilot" rel="noopener noreferrer"&gt;Copilot&lt;/a&gt;, and &lt;a href="https://windsurf.com" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt;, you've noticed this. The model is identical, but the behavior varies. This isn't magic. It's context engineering.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/posts/2025-12-31-context-engineering/cover.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkarun.me%2Fassets%2Fimages%2Fposts%2F2025-12-31-context-engineering%2Fcover.jpg" alt="Two people collaborating at a whiteboard with diagrams and notes" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa"&gt;intelligent Engineering: Principles for Building With AI&lt;/a&gt;, I mentioned that "context is everything" and that "context engineering matters more than prompt engineering." But I didn't explain what that means or how to do it. This post fills that gap.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Whiteboard
&lt;/h2&gt;

&lt;p&gt;Imagine you're in a day-long strategy meeting. There's one whiteboard in the room. That's all the shared space you have.&lt;/p&gt;

&lt;p&gt;Your teammate is brilliant. They can see everything on the board and reason about it. But here's the thing: they have no memory outside this whiteboard. What's written is all they know. Erase something, and it's gone.&lt;/p&gt;

&lt;p&gt;Before the meeting started, someone wrote ground rules at the top: "Focus on Q1 priorities. Be specific. No tangents." This section doesn't get erased. It frames everything that follows. (That's the system prompt.)&lt;/p&gt;

&lt;p&gt;The meeting begins. You add notes, diagrams, decisions. The board fills up. You need to add something new, but there's no space. What do you erase? The detailed debate from 9am, or the decision it produced? You keep the decision, erase the discussion. (That's compaction.)&lt;/p&gt;

&lt;p&gt;Three hours in, you notice something odd. Your teammate keeps referencing the top and bottom of the board, but seems to miss what's in the middle. Important context from 10:30am is right there, but they're not looking at it. The middle of the board gets less attention. (That's the lost-in-the-middle effect.)&lt;/p&gt;

&lt;p&gt;Someone raises a topic that needs last quarter's data. Do you copy the entire Q4 report onto the board? No. You flip open your notebook, find the one relevant chart, add it to the board, discuss it, then erase it when you move on. (That's just-in-time retrieval.) The notebook stays on the table. You reference it when needed, but it doesn't consume board space.&lt;/p&gt;

&lt;p&gt;By afternoon, old notes are causing problems. A 9am assumption turned out to be wrong, but it's still on the board. Your teammate keeps building on it. The board is poisoned with outdated information. You need to actively clean it up. (That's context poisoning.)&lt;/p&gt;

&lt;p&gt;There's too much on the board now. Some notes are written in shorthand. Others are cramped into corners with tiny handwriting. Your teammate can technically see it all, but finding anything takes effort. Attention is diluted. (That's context distraction.)&lt;/p&gt;

&lt;p&gt;For a complex sub-problem, you send two people to side rooms with fresh whiteboards. They work independently, then return with one-page summaries. You add the summaries to your board and integrate the findings. You never needed their full whiteboards. (That's sub-agents.)&lt;/p&gt;

&lt;p&gt;The whiteboard is your teammate's entire context window. What's on it is all they can work with. Your job is to curate what goes on the board so they can focus on what matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Means Technically
&lt;/h2&gt;

&lt;p&gt;The whiteboard story maps directly to how AI models process information.&lt;/p&gt;

&lt;h3&gt;
  
  
  System Prompts vs User Prompts
&lt;/h3&gt;

&lt;p&gt;The ground rules at the top of the board are the &lt;strong&gt;system prompt&lt;/strong&gt;. You didn't write them. They were there when you walked in, set by whoever built the tool. They define how the model behaves, what it prioritizes, what it can do.&lt;/p&gt;

&lt;p&gt;What you add during the meeting is the &lt;strong&gt;user prompt&lt;/strong&gt;. Your requests, your context, your questions. It works within the frame the system prompt establishes.&lt;/p&gt;

&lt;p&gt;The model sees both. But system prompts carry more weight because they come first and set expectations.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Context Window
&lt;/h3&gt;

&lt;p&gt;The whiteboard's physical dimensions are the &lt;strong&gt;context window&lt;/strong&gt;. There's a fixed amount of space. Everything competes for it: system instructions, conversation history, files you've pulled in, tool definitions, and the model's own output. When it fills up, something has to go.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lost in the Middle
&lt;/h3&gt;

&lt;p&gt;Remember how your teammate focused on the top and bottom of the board but missed the middle? That's a real phenomenon. Research shows a U-shaped attention curve: information at the start and end of context gets more attention than information in the middle.&lt;/p&gt;

&lt;p&gt;&lt;a href="/assets/images/posts/2025-12-31-context-engineering/attention-curve.svg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkarun.me%2Fassets%2Fimages%2Fposts%2F2025-12-31-context-engineering%2Fattention-curve.svg" alt="U-shaped attention curve showing high attention at start and end of context, with 'Lost in the Middle' highlighting the attention dip" width="500" height="280"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cramming everything into context can hurt performance&lt;/li&gt;
&lt;li&gt;Position matters: put important information first or last&lt;/li&gt;
&lt;li&gt;As context grows, accuracy often decreases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In &lt;a href="https://dev.to/javatarz/patterns-for-ai-assisted-software-development-4ga2"&gt;Patterns for AI-assisted Software Development&lt;/a&gt;, I described LLMs as "teammates with anterograde amnesia." They can hold information, but only within the context window. Understanding how to manage that window is key.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Attention Budget
&lt;/h3&gt;

&lt;p&gt;Even with everything visible on the board, your teammate can only actively focus on so much while reasoning. Each item costs attention. Add more, and something else gets less focus. Think of it as a budget: every token you add depletes some of the model's capacity to focus on what matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Different Tools Set Up the Room
&lt;/h2&gt;

&lt;p&gt;Here's why the same model behaves differently across tools: different rooms have different ground rules at the top of the board.&lt;/p&gt;

&lt;p&gt;Take Claude Sonnet 4.5. Same teammate. But put them in different rooms:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Room (Tool)&lt;/th&gt;
&lt;th&gt;Top of the board says&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Claude Code&lt;/td&gt;
&lt;td&gt;"Work autonomously. Read files, run terminal commands, complete multi-step tasks."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cursor&lt;/td&gt;
&lt;td&gt;"Stay in the editor. Complete code inline, understand the open file, suggest edits."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Copilot&lt;/td&gt;
&lt;td&gt;"Autocomplete as they type. Quick suggestions, stay out of the way."&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Windsurf&lt;/td&gt;
&lt;td&gt;"Maintain flow. Remember preferences across sessions, keep continuity."&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Your teammate reads the top of the board and behaves accordingly. That's why the same model feels different in each tool. The system prompt shapes everything.&lt;/p&gt;

&lt;p&gt;This also explains why prompts don't transfer directly between tools. A prompt that works well in Claude Code might fail in Cursor because the framing is different.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Goes Wrong
&lt;/h2&gt;

&lt;p&gt;When context fails, it fails in predictable ways. Recognizing these patterns helps you diagnose problems.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Poisoning
&lt;/h3&gt;

&lt;p&gt;Early errors compound. Your teammate builds on incorrect assumptions, reinforcing mistakes with each exchange. By the time you notice, the board is thoroughly polluted with wrong information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Rewind to a checkpoint from before the bad turn. &lt;a href="https://code.claude.com/docs/en/checkpointing" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, &lt;a href="https://cursor.com/docs/agent/chat/checkpoints" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;, and &lt;a href="https://docs.windsurf.com/windsurf/cascade/cascade#named-checkpoints-and-reverts" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt; all support this. If the pollution runs deeper, compact to summarize past the bad section. Clearing the context entirely is the nuclear option, for when it's unsalvageable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Distraction
&lt;/h3&gt;

&lt;p&gt;Too much information competes for attention. The model can technically process it all, but signal gets lost in noise.&lt;/p&gt;

&lt;p&gt;On the whiteboard: shorthand, tiny writing, notes crammed into corners. Your teammate can see it all, but finding anything takes effort.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Keep context lean. Compact proactively. Don't dump everything onto the board.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Confusion
&lt;/h3&gt;

&lt;p&gt;Mixed content types muddle the model's understanding. Code snippets, prose explanations, JSON configs, and error logs all blur together. The model can't distinguish what's an instruction versus an example versus context.&lt;/p&gt;

&lt;p&gt;On the whiteboard: sticky notes, diagrams, tables, arrows, different colored markers. Your teammate can't parse what type of information to use for what purpose.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Use focused tools. Don't overload the board with too many formats or capabilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Context Clash
&lt;/h3&gt;

&lt;p&gt;Contradictory instructions coexist. "Prioritize speed" in one corner. "Prioritize quality" in another. Your teammate sees both, doesn't know which to follow, and produces something incoherent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fix:&lt;/strong&gt; Keep instructions centralized and current. Review your context files periodically for contradictions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Managing Context Well
&lt;/h2&gt;

&lt;p&gt;Four techniques make a difference, plus the tool-specific mechanics for applying them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Just-in-Time Retrieval
&lt;/h3&gt;

&lt;p&gt;Don't paste your whole codebase onto the board. Reference specific files and let the tool search.&lt;/p&gt;

&lt;p&gt;Bad: "Here's my entire src/ directory. Now fix the bug."&lt;br&gt;
Good: "There's a bug in the date parser. Check src/utils/dates.ts."&lt;/p&gt;

&lt;p&gt;The notebook stays on the table. You flip it open when needed, find the relevant page, add it to the discussion, then move on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compaction
&lt;/h3&gt;

&lt;p&gt;Context fills up during long sessions. Compaction summarizes conversation history, preserving key decisions while discarding noise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to compact:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After completing a major task (before starting the next one)&lt;/li&gt;
&lt;li&gt;During long sessions when you notice drift&lt;/li&gt;
&lt;li&gt;Before context hits limits (proactively, not reactively)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can provide custom instructions when compacting: "focus on architectural decisions" or "preserve the error messages we encountered." This guides what gets kept versus summarized away.&lt;/p&gt;
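&lt;p&gt;In Claude Code, for example, the instructions go straight after the command (the wording below is illustrative):&lt;/p&gt;

```text
/compact preserve the architectural decisions and the exact error messages; summarize everything else
```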

&lt;p&gt;My preference hierarchy:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Small tasks with &lt;code&gt;/clear&lt;/code&gt;&lt;/strong&gt; - fresh context beats compressed context&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Early compaction with custom instructions&lt;/strong&gt; - you control what matters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Early compaction with default prompt&lt;/strong&gt; - still gives thinking room&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Late compaction&lt;/strong&gt; - avoid this&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Late compaction (waiting until 95% capacity) is the worst option. The model has no thinking room, and the automatic summarization is opaque. You lose nuance without knowing what disappeared. Early compaction, ideally with custom instructions, gives you control and leaves space for the model to reason. Steve Kinney's &lt;a href="https://stevekinney.com/courses/ai-development/claude-code-compaction" rel="noopener noreferrer"&gt;guide to Claude Code compaction&lt;/a&gt; covers the mechanics well.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structured Note-Taking
&lt;/h3&gt;

&lt;p&gt;For complex, multi-hour work, maintain notes outside the conversation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A NOTES.md file tracking progress&lt;/li&gt;
&lt;li&gt;Decision logs capturing why you chose specific approaches&lt;/li&gt;
&lt;li&gt;TODO lists that persist across compactions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The model can reference these files when needed, but they're not consuming context constantly. The notebook on the table, not copied onto the board.&lt;/p&gt;
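&lt;p&gt;A sketch of what such a notes file might look like mid-task (the contents are illustrative):&lt;/p&gt;

```markdown
# NOTES.md: payments refactor (illustrative)

## Done
- Extracted a PaymentGateway interface; legacy Stripe calls now sit behind it

## Decisions
- Idempotency keys stay in the database, not Redis: they must survive restarts

## TODO
- [ ] Migrate the webhook handlers
- [ ] Remove the legacy client once traffic reaches zero
```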

&lt;h3&gt;
  
  
  Sub-Agents
&lt;/h3&gt;

&lt;p&gt;For large tasks, send people to side rooms with fresh whiteboards:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Main agent coordinates the overall task&lt;/li&gt;
&lt;li&gt;Sub-agents handle specific, focused work with clean context&lt;/li&gt;
&lt;li&gt;Sub-agents return condensed summaries&lt;/li&gt;
&lt;li&gt;Main agent integrates results without carrying full sub-task context&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="/assets/images/posts/2025-12-31-context-engineering/sub-agents.svg"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkarun.me%2Fassets%2Fimages%2Fposts%2F2025-12-31-context-engineering%2Fsub-agents.svg" alt="Sub-agent workflow: main agent delegates tasks to sub-agents with fresh context, receives summaries back, and integrates results" width="500" height="340"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This mirrors how teams work: delegate, get summaries, integrate. Claude Code supports this pattern for &lt;a href="https://www.geeky-gadgets.com/how-to-use-git-worktrees-with-claude-code-for-seamless-multitasking/" rel="noopener noreferrer"&gt;parallel issue work&lt;/a&gt; using git worktrees.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tool-Specific Tips
&lt;/h3&gt;

&lt;p&gt;Each tool has different mechanisms for managing what goes on the board.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Claude Code:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;CLAUDE.md files load automatically at session start. Keep them focused and current.&lt;/li&gt;
&lt;li&gt;Hierarchical loading: user-level, project-level, directory-level. More specific overrides more general.&lt;/li&gt;
&lt;li&gt;Trust the tool's search. Don't paste file contents manually unless retrieval fails.&lt;/li&gt;
&lt;li&gt;Use &lt;code&gt;/compact&lt;/code&gt; between logical units of work.&lt;/li&gt;
&lt;/ul&gt;
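&lt;p&gt;On disk, the hierarchy looks like this (repository paths are illustrative; the user-level location follows Claude Code's documented lookup):&lt;/p&gt;

```text
~/.claude/CLAUDE.md                # user level: personal preferences
your-repo/CLAUDE.md                # project level: team conventions
your-repo/services/api/CLAUDE.md   # directory level: most specific, wins on conflict
```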

&lt;p&gt;&lt;strong&gt;Cursor:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rules files inject instructions with different scopes: global, project, file-type specific.&lt;/li&gt;
&lt;li&gt;Use @-mentions deliberately. More files isn't better; relevant files are better.&lt;/li&gt;
&lt;li&gt;Keep rule files short. They add to every interaction.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Copilot:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lighter touch. Works best for autocomplete and quick suggestions.&lt;/li&gt;
&lt;li&gt;Less configurable context, so prompt quality matters more.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Windsurf:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Memories persist across sessions automatically.&lt;/li&gt;
&lt;li&gt;Good for maintaining preferences and patterns over time.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Aider, Cline, and similar terminal-based tools&lt;/strong&gt; follow the same principles. Different mechanisms, same underlying constraints. For a deeper comparison, see &lt;a href="https://dev.to/javatarz/how-to-choose-your-coding-assistants-90k"&gt;How to choose your coding assistants&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Core Principle
&lt;/h2&gt;

&lt;p&gt;Anthropic's engineering team puts it well in their &lt;a href="https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents" rel="noopener noreferrer"&gt;guide to context engineering&lt;/a&gt;:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Find the smallest set of high-signal tokens that maximize the likelihood of your desired outcome.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;More context isn't better. Relevant context is better. Your job is to curate what goes on the board so your teammate can focus on what matters.&lt;/p&gt;

&lt;p&gt;Context drives quality. But "quality context" doesn't mean volume. It means signal: information the model needs to reason correctly. Everything else dilutes attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;Context engineering is a skill that develops with practice. Start by noticing when your tools perform well and when they drift. Ask why. Usually, the answer is in the context.&lt;/p&gt;

&lt;p&gt;Take a few minutes to examine how your tool handles context. Where do instructions go? How do files get included? What happens during long sessions?&lt;/p&gt;

&lt;p&gt;Understanding this is the difference between fighting your tools and working with them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Coming up:&lt;/strong&gt; Context engineering is one piece of the puzzle. In &lt;a href="https://dev.to/javatarz/intelligent-engineering-a-skill-map-for-learning-ai-assisted-development-3kaj"&gt;intelligent Engineering: A Skill Map for Learning AI-Assisted Development&lt;/a&gt;, I map out the full landscape of skills worth building.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>intelligent Engineering: Principles for Building With AI</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 27 Dec 2025 17:46:56 +0000</pubDate>
      <link>https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa</link>
      <guid>https://dev.to/javatarz/intelligent-engineering-principles-for-building-with-ai-34aa</guid>
      <description>&lt;p&gt;Software engineering is changing. Again.&lt;/p&gt;

&lt;p&gt;I've spent the last two years applying AI across prototyping, internal tools, production systems, and team workflows. I've watched it generate elegant solutions in seconds and confidently produce complete nonsense. I've seen it save hours on boilerplate and cost hours debugging hallucinated APIs.&lt;/p&gt;

&lt;p&gt;One thing has become clear: AI doesn't make engineering easier. It shifts where the hard parts are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/posts/2025-11-06-intelligent-engineering-building-skills-and-shaping-principles/cover.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fkarun.me%2Fassets%2Fimages%2Fposts%2F2025-11-06-intelligent-engineering-building-skills-and-shaping-principles%2Fcover.jpg" alt="AI and human collaboration in software engineering" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The teams I've seen succeed with AI aren't the ones using it everywhere. They're the ones using it deliberately, knowing when to trust it, when to verify, and when to ignore it entirely.&lt;/p&gt;

&lt;p&gt;Here's a working set of principles I've found useful. They aren't finished and will evolve with the tools. But they help keep me grounded in what actually matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  intelligent Engineering Principles
&lt;/h2&gt;

&lt;p&gt;These principles fall into two buckets: what is new, and what remains timeless but more important than ever.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI-Native Principles
&lt;/h3&gt;

&lt;p&gt;These principles exist because of AI. They address challenges that didn't matter before.&lt;/p&gt;

&lt;h4&gt;
  
  
  AI augments, humans stay accountable.
&lt;/h4&gt;

&lt;p&gt;AI can help you move faster and see options you'd miss on your own. But it can't own the outcome. Engineering judgment stays with you. When something breaks in production, "the AI suggested it" isn't an acceptable answer.&lt;/p&gt;

&lt;h4&gt;
  
  
  Context is everything.
&lt;/h4&gt;

&lt;p&gt;AI output reflects what you put in. Vague requests get vague results. Bring useful context: project constraints, coding standards, relevant examples, what you've already tried.&lt;/p&gt;

&lt;p&gt;As systems grow, context management becomes a discipline of its own. How do new teammates get AI tools primed with the right information? How do you keep that context current? When context exceeds what fits in a prompt, you'll need solutions like modular documentation.&lt;/p&gt;

&lt;h4&gt;
  
  
  Smarter AI needs smarter guardrails.
&lt;/h4&gt;

&lt;p&gt;Faster generation demands sharper review. AI-produced code still needs validation: Is it correct? Secure? Does it solve the right problem?&lt;/p&gt;

&lt;h4&gt;
  
  
  Shape AI deliberately.
&lt;/h4&gt;

&lt;p&gt;I've seen teams adopt whatever AI tools are trending without asking whether they fit. Six months later, half the codebase assumed Copilot's import ordering, onboarding docs referenced prompts that no longer worked, and no one remembered why. Decide upfront: where does AI help us? Where does it not? What happens when we switch tools?&lt;/p&gt;

&lt;h4&gt;
  
  
  Learning never stops.
&lt;/h4&gt;

&lt;p&gt;At the start of 2025, AI practices evolved weekly. By year's end, monthly. That's still faster than most teams are used to. What didn't work three months ago might work now. The only way to know is to keep experimenting.&lt;/p&gt;

&lt;p&gt;I've settled on 90% getting work done, 10% experimenting. Try new ways to solve the same problem. Revisit old problems to see if there's a simpler solution now. Check if techniques you learned last quarter still make sense.&lt;/p&gt;

&lt;h3&gt;
  
  
  Timeless Foundations
&lt;/h3&gt;

&lt;p&gt;These aren't new, but AI makes them more important.&lt;/p&gt;

&lt;h4&gt;
  
  
  Learn fast, adapt continuously.
&lt;/h4&gt;

&lt;p&gt;Start small, validate often, and shorten feedback loops. If an AI-assisted workflow isn't helping, change it. Don't let sunk cost keep you on a bad path.&lt;/p&gt;

&lt;h4&gt;
  
  
  Fast doesn't mean good.
&lt;/h4&gt;

&lt;p&gt;AI makes it easy to generate code fast. That doesn't mean the code is worth keeping. Unmaintainable, insecure, or rigid solutions cost more than they save. Build the right thing, not just the quick thing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What This Looks Like in Practice
&lt;/h2&gt;

&lt;p&gt;Here's what this means day-to-day:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;I use AI to draft implementations, then spend more time reviewing than I saved generating. The review is where the real work happens.&lt;/li&gt;
&lt;li&gt;When AI suggests an approach, I ask "why?" If I can't explain the choice to a teammate, I don't use it.&lt;/li&gt;
&lt;li&gt;I've learned to be specific. "Write a function to parse dates" gets garbage. "Parse ISO 8601 dates, handle timezone offsets, return None for invalid input" gets something useful.&lt;/li&gt;
&lt;li&gt;I treat AI output like code from a confident junior developer: often correct, sometimes subtly wrong, occasionally completely off base.&lt;/li&gt;
&lt;/ul&gt;
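&lt;p&gt;As a sketch of what that specificity buys you, here is roughly the function the more precise prompt describes (the name and exact behaviour are illustrative, not output from any particular assistant):&lt;/p&gt;

```python
from datetime import datetime
from typing import Optional

def parse_iso8601(value: str) -> Optional[datetime]:
    """Parse an ISO 8601 timestamp, handling timezone offsets.

    Returns None for invalid input instead of raising.
    """
    try:
        # fromisoformat handles offsets like +05:30; normalise a trailing
        # 'Z' (UTC), which older Python versions do not accept directly.
        return datetime.fromisoformat(value.replace("Z", "+00:00"))
    except ValueError:
        return None
```

&lt;p&gt;The vague prompt leaves every one of these decisions (offset handling, invalid input, return type) to chance.&lt;/p&gt;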

&lt;p&gt;The craft hasn't changed. I still need to understand the problem, reason about edge cases, and take responsibility for what ships.&lt;/p&gt;

&lt;h2&gt;
  
  
  Skills Worth Building
&lt;/h2&gt;

&lt;p&gt;Principles guide decisions. Skills make them possible.&lt;/p&gt;

&lt;p&gt;Here's what I've found worth investing in:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context engineering matters more than prompt engineering.&lt;/strong&gt; A clever prompt won't fix bad context. I spend more time curating what information the model sees than crafting how I ask for things. Project documentation, coding standards, relevant examples. These matter more than prompt tricks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understanding tokens and context windows helps.&lt;/strong&gt; You don't need to become an ML engineer. But it helps to know why your 50-file codebase overwhelms the model, or why it "forgets" earlier instructions.&lt;/p&gt;
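&lt;p&gt;A rough rule of thumb makes the "why" concrete: English text and code average somewhere around four characters per token. This is an approximation, not any model's real tokenizer, but it is enough for back-of-envelope sizing:&lt;/p&gt;

```python
def rough_token_estimate(text: str) -> int:
    # Heuristic: ~4 characters per token for English text and code.
    # Real tokenizers (BPE-based) vary; use this only for rough sizing.
    return max(1, len(text) // 4)

# 50 files averaging 8 KB each is ~100k tokens
# before you have asked a single question.
print(rough_token_estimate("x" * (50 * 8_000)))  # 100000
```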

&lt;p&gt;&lt;strong&gt;Agentic workflow primitives matter more than AI theory.&lt;/strong&gt; You won't build RAG systems from scratch. You'll use tools with these built in. What matters is configuring them: hooks that customize behavior, skills that extend capabilities, context management that keeps information relevant. I spend more time learning how my tools' hooks work or how to structure context files than reading ML papers.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;For a comprehensive guide to building these skills, see &lt;a href="https://dev.to/javatarz/intelligent-engineering-a-skill-map-for-learning-ai-assisted-development-3kaj"&gt;A Skill Map for Learning AI-Assisted Development&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Matters
&lt;/h2&gt;

&lt;p&gt;I've seen what happens when teams adopt AI without thinking it through. Prototypes that demo well but collapse under real load. Codebases where no one understands why decisions were made because "the AI suggested it." Bugs that take days to track down because the generated code looked plausible but handled edge cases incorrectly.&lt;/p&gt;

&lt;p&gt;The failure mode isn't dramatic. It's slow erosion: teams that gradually stop reasoning deeply because the model provides answers quickly.&lt;/p&gt;

&lt;p&gt;The alternative isn't avoiding AI. It's using it with intention. The engineers I've seen do this well have gotten faster &lt;em&gt;and&lt;/em&gt; more thoughtful. They use AI to handle the routine and focus on the hard problems.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;p&gt;These principles aren't final. I expect to revise them as tools improve and as I learn what actually works versus what sounds good in theory.&lt;/p&gt;

&lt;p&gt;If you're experimenting with AI in your engineering work, I'd be curious to hear what's working for you. What would you add? What would you challenge?&lt;/p&gt;

&lt;h2&gt;
  
  
  Credits
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;This blog would not have been possible without the review and feedback from&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/greg-reiser-6910462/" rel="noopener noreferrer"&gt;&lt;em&gt;Greg Reiser&lt;/em&gt;&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/gsong/" rel="noopener noreferrer"&gt;&lt;em&gt;George Song&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/karthika-vijayan/" rel="noopener noreferrer"&gt;&lt;em&gt;Karthika Vijayan&lt;/em&gt;&lt;/a&gt; &lt;em&gt;for reviewing multiple versions of this post and providing patient feedback 😀.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This content has been written on the shoulders of giants (at and outside&lt;/em&gt; &lt;a href="https://sahaj.ai" rel="noopener noreferrer"&gt;&lt;em&gt;Sahaj&lt;/em&gt;&lt;/a&gt;&lt;em&gt;).&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>career</category>
    </item>
    <item>
      <title>Level Up Code Quality with an AI Assistant</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 27 Dec 2025 17:46:40 +0000</pubDate>
      <link>https://dev.to/javatarz/level-up-code-quality-with-an-ai-assistant-5cdn</link>
      <guid>https://dev.to/javatarz/level-up-code-quality-with-an-ai-assistant-5cdn</guid>
      <description>&lt;p&gt;Using AI coding assistants to introduce, automate, and evolve quality checks in your project.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/code-quality-with-ai-cover-art.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fx2goflob85nsv9o387f2.png" alt="Chosing Coding Assistants Cover Art: Choose your tool" width="650" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I have talked about teams needing a &lt;a href="https://dev.to/javatarz/what-makes-developer-experience-world-class-4l3i"&gt;world-class developer experience&lt;/a&gt; as a prerequisite for a well-functioning team. When teams lack such a setup, the most common explanation is a lack of time or of buy-in from stakeholders to build these things. With &lt;a href="https://dev.to/javatarz/how-to-choose-your-coding-assistants-90k"&gt;AI coding assistants readily available to most developers today&lt;/a&gt;, the engineering effort and the cost investment for the business are much lower, reducing the barrier to entry.&lt;/p&gt;

&lt;h1&gt;
  
  
  Current State
&lt;/h1&gt;

&lt;p&gt;This post showcases an actual codebase that has not been actively maintained for over 5 years but runs a product that is actively used. It is business critical but did not have the necessary safety nets in place. Let us go through the journey, prompts included, of improving the code quality of this repository, one prompt at a time.&lt;/p&gt;

&lt;p&gt;The project is a Django backend application that exposes APIs. We start off with a quick overview of the code and notice that there are tests and some documentation, but no consistent way to run and test the application.&lt;/p&gt;

&lt;h1&gt;
  
  
  The Journey
&lt;/h1&gt;

&lt;p&gt;I am assuming you are running these commands using Claude Code (with Claude Sonnet 4 in most cases), but the approach applies equally to any coding assistant. Results will vary based on your choice of model, prompts and codebase.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setting up Basic Documentation and Some Automation
&lt;/h2&gt;

&lt;p&gt;If you are using a tool like Claude Code, run &lt;code&gt;/init&lt;/code&gt; in your repository and you will get a significant part of this documentation.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Can you analyse the code and write up documentation in README.md that
 clearly summarises how to setup, run, test and lint the application.
Please make sure the file is concise and does not repeat itself. 
Write it like technical documentation. Short and sweet.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The next step is to set up some automation (like just files) to make the project easier to use. This will take a couple of attempts to get right, but here is a prompt you can start off with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Please write up a just file. I would like the following commands
`just setup` - set up all the dependencies of the project
`just run` - start up the applications including any dependencies
`just test` - run all tests
If you require clarifications, please ask questions. 
Think hard about what other requirements I need to fulfill. 
Be critical and question everything. 
Do not make code changes till you are clear on what needs to be done.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will give you a base structure that you can quickly modify to get up and running. If your &lt;code&gt;README.md&lt;/code&gt; has a preferred way to run the application (locally vs Docker), the just file will use it automatically. If not, you will have to provide clarification.&lt;/p&gt;
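&lt;p&gt;For reference, the kind of just file this tends to produce looks roughly like the sketch below. The recipe bodies are illustrative guesses for a Django project, not actual generated output; yours will depend on what the &lt;code&gt;README.md&lt;/code&gt; says:&lt;/p&gt;

```makefile
# justfile (sketch; recipes are illustrative for a Django project)
setup:
    pip install -r requirements.txt -r requirements-dev.txt

run:
    python manage.py runserver

test:
    python manage.py test
```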

&lt;h2&gt;
  
  
  Setting up pre-commit for Early Feedback
&lt;/h2&gt;

&lt;p&gt;Let’s start small and build on it.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Please setup pre-commit with a single task to run all tests on every push.
Update the just script to ensure pre-commit hooks are installed locally
 during the setup process.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We probably didn’t need to be this explicit, but I find that managing context and keeping tasks small means I move a lot quicker.&lt;/p&gt;
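&lt;p&gt;The resulting config looks something like the sketch below (the hook wiring is illustrative; your assistant may structure it differently). Hooks staged as &lt;code&gt;pre-push&lt;/code&gt; run once per push rather than on every commit:&lt;/p&gt;

```yaml
# .pre-commit-config.yaml (sketch)
repos:
  - repo: local
    hooks:
      - id: tests
        name: run all tests
        entry: just test
        language: system
        pass_filenames: false
        stages: [pre-push]
```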

&lt;h2&gt;
  
  
  Curating Code Quality Tools
&lt;/h2&gt;

&lt;p&gt;Let’s begin by finding good tools to use, creating a plan for the change and then executing it. Start off by moving Claude Code to &lt;code&gt;Plan mode&lt;/code&gt; (shift+tab twice)&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;What's a good tool to check the complexity of the python code this
 repository has and lint on it to provide the team feedback as a 
 pre-commit hook?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;It came back with a set of tools I liked, but it assumed the commit would immediately go green. In an existing large codebase with tech debt, that will not happen. Let’s break this down further.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The list of tools you're suggesting sound good. 
The codebase currently will have a very large number of violations. 
I want the ability to incrementally improve things with every commit. 
How do we achieve this?
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Creating a Plan
&lt;/h2&gt;

&lt;p&gt;After you iterate on the previous prompt with the agent, you will get a plan you’re happy with. The AI assistant will ask for permission to move forward and execute the plan, but before it does, it is worth creating a save state. Think of it as a video game save: if something goes wrong, you come back and restore from this point. It also lets you clear context, since everything is dumped to markdown files on disk.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Can you create a plan that is executable in steps?
Write that plan to `docs/code-quality-improvements`.
Try to use multiple background agents if it helps speed up this process.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Give it a few minutes to analyse the code. In my case, the following files were created. &lt;code&gt;README.md&lt;/code&gt; says that “Tasks within the same phase can be executed in parallel by multiple Claude Code assistants, as long as prerequisites are met”. You are ready to hit &lt;code&gt;/clear&lt;/code&gt; and clear out the context window.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovxjoy0gaqqgiu4ox25b.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovxjoy0gaqqgiu4ox25b.jpg" alt="Plan as tasks" width="607" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Phase 1 sets up the basic tools, phase 2 configures them, phase 3 focuses on integration and automation and phase 4 adds monitoring and focuses on improving the code quality.&lt;/p&gt;

&lt;p&gt;Before executing the plan, I commit the plan (&lt;code&gt;docs/code-quality-improvement&lt;/code&gt;). This allows me to track any changes that have been made. When executing the plan, I do not check in the changes made to the plan. This allows me to drop the plan at the end of the process. As a team, we have discussed potentially keeping the plan around as an artifact. To do so, you would have to ask Claude Code to use relative paths (it uses absolute paths when asking for files to be updated in the plan).&lt;/p&gt;

&lt;h2&gt;
  
  
  Executing the Plan
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;I would like to improve code quality and I have come up with a plan to do 
so under `docs/code-quality-improvement`.
Can you analyse the plan and start executing it? The `README.md` has a 
quick start section which talks about how to execute different phases of the 
plan. As you execute the plan, mark tasks as done to track state.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You will notice that Claude Code will add dependencies to &lt;code&gt;requirements-dev.txt&lt;/code&gt; and try to run things without installing them. It may also add dependencies that do not exist. Stop the execution (by pressing &lt;code&gt;Esc&lt;/code&gt;) and use the following prompt to course correct&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;For every pip dependency you add to `requirements-dev.txt`, please run 
`pip install`. 
Before adding a dependency to the dependency file, please check if it is 
available on `pip`.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once phase 1 and phase 2 of the plan are complete, the following files are created and ready to be committed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xbydjyi1v45ix5bkgjv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6xbydjyi1v45ix5bkgjv.jpg" alt="Linting tools setup" width="253" height="142"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When the quality gates are added in phase 3, run the command once to test that everything works and create another commit. After this, I had to prompt it once more to integrate the lint steps into a simplified developer experience.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Please add `just lint` as a command to run all quality checks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Test the brand new lint command and then run a commit. Ask Claude Code to proceed to phase 4.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzo9ktrp5b8z4dk901cv.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmzo9ktrp5b8z4dk901cv.jpg" alt="Claude Code’s self doubt" width="538" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You might see Claude Code doubt a plan that it created itself. It raises a fair question because the system is &lt;em&gt;functional&lt;/em&gt;, but if we want the more advanced checks, we should ask it to push on with the Phase 4 implementation.&lt;/p&gt;

&lt;p&gt;After phase 4, we have a codebase that checks code quality every time a developer pushes code. Our repository has pre-commit hooks for linting and runs all quality checks once before pushing. The quality checks fail if the code being added has unformatted files, imports in the wrong order, &lt;code&gt;flake8&lt;/code&gt; lint issues or functions with high complexity. They run only against the files being touched (because we told it we have debt to pay down and not all checks will pass by default).&lt;/p&gt;

&lt;p&gt;You still have debt; let’s go over fixing it in the next step.&lt;/p&gt;

&lt;h2&gt;
  
  
  Fixing Existing Debt
&lt;/h2&gt;

&lt;p&gt;Tools like &lt;code&gt;isort&lt;/code&gt; can highlight issues and fix them. Start by running such commands to fix the code. On most codebases, this will touch almost all of the files. The catch is that issues that cannot be fixed automatically (like wildcard imports) need to be fixed by hand. This is where you choose between fixing issues manually and having an agent do it. If you use Claude Code to fix a large number of issues, you’re probably going to pay upwards of $10 for the session on any decent-sized codebase. I recommend moving to GitHub Copilot’s agent to help push down costs here.&lt;/p&gt;

&lt;p&gt;Ask your coding assistant of choice to run the lint command and fix the issues. Most of them will stop after 1–2 attempts because the list is large. You can tell it to “keep doing this task till there are no linting errors left. DO NOT stop till the lint command passes”. If your context file (&lt;code&gt;CLAUDE.md&lt;/code&gt;) does not talk about how to lint, be explicit and tell your coding assistant what the command to be run is.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Left?
&lt;/h2&gt;

&lt;p&gt;If you look at the &lt;code&gt;gradual-tightening&lt;/code&gt; task, it created a command to analyse the code and gradually become more strict. This command can be run either manually or automatically on a pipeline. One of the parameters it changes is &lt;code&gt;max-complexity&lt;/code&gt;, which is set to 20 by default and will be reduced over time. Similarly, the complexity check tasks start with a lower bar and should be tightened periodically to raise the quality guidelines on this repository.&lt;/p&gt;
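&lt;p&gt;The ratcheting logic itself is small: measure the worst complexity currently in the code, then move the threshold just above it so nothing new can regress. Here is a minimal sketch of that decision (the function is hypothetical; the real task wires logic like this to the &lt;code&gt;max-complexity&lt;/code&gt; setting):&lt;/p&gt;

```python
def next_complexity_threshold(current_threshold: int,
                              worst_observed: int,
                              floor: int = 10) -> int:
    """Tighten max-complexity one ratchet per run, never below `floor`.

    If the worst function in the codebase is already under the current
    threshold, drop the threshold to sit just above it; otherwise hold.
    """
    if worst_observed < current_threshold:
        return max(floor, worst_observed + 1)
    return current_threshold

print(next_complexity_threshold(20, 14))  # 15
```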

&lt;p&gt;While our AI coding pair has helped design and improve the code quality to a large extent, the last mile has to be walked by all of our teammates. We now have a strong feedback mechanism for bad code that will fail the pipeline and stop code from being committed or pushed. The last bit requires team culture to be built. On one of my teams, we had a soft check in every retro to see if every member had made the codebase a little bit better in a sprint. A sprint is 10 days and “a little bit” can include refactoring a tiny 2–3 line function and making it better. The bar is really low but the social pressure of wanting to make things better motivated all of us to drive positive change.&lt;/p&gt;

&lt;p&gt;Having a high quality codebase with a good developer experience is not a pipe dream and making it a reality is easier than ever with AI coding assistants like Claude Code or Copilot. What have you been able to improve recently? 😃&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>testing</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>How to choose your coding assistants</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 27 Dec 2025 17:46:25 +0000</pubDate>
      <link>https://dev.to/javatarz/how-to-choose-your-coding-assistants-90k</link>
      <guid>https://dev.to/javatarz/how-to-choose-your-coding-assistants-90k</guid>
      <description>&lt;p&gt;Why it’s harder for a professional developer to use a tool despite the wide variety of choices&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/choose-coding-assistants-cover-art.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffkbvwj9dtveisd5ljptc.jpg" alt="Chosing Coding Assistants Cover Art: Choose your tool" width="650" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Coding assistants like &lt;a href="https://cursor.com/" rel="noopener noreferrer"&gt;Cursor&lt;/a&gt;, &lt;a href="https://windsurf.com/" rel="noopener noreferrer"&gt;Windsurf&lt;/a&gt;, &lt;a href="https://docs.anthropic.com/en/docs/claude-code/overview" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt;, &lt;a href="https://github.com/google-gemini/gemini-cli" rel="noopener noreferrer"&gt;Gemini CLI&lt;/a&gt;, &lt;a href="https://openai.com/index/openai-codex/" rel="noopener noreferrer"&gt;Codex&lt;/a&gt;, &lt;a href="https://aider.chat/" rel="noopener noreferrer"&gt;Aider&lt;/a&gt;, &lt;a href="https://github.com/sst/opencode" rel="noopener noreferrer"&gt;OpenCode&lt;/a&gt;, &lt;a href="https://www.jetbrains.com/ai/" rel="noopener noreferrer"&gt;JetBrains AI&lt;/a&gt; etc. have been making the news for the last few months. Yet, the choice of tools is a lot harder and limited for some of us than it seems.&lt;/p&gt;

&lt;p&gt;TL;DR: OpenCode &amp;gt; Claude Code &amp;gt; Aider &amp;gt; Copilot &amp;gt; *&lt;/p&gt;

&lt;h1&gt;
  
  
  Understanding the tools
&lt;/h1&gt;

&lt;p&gt;Not all tools are created equal. Tools evolve fairly rapidly so the examples listed here might be invalid fairly soon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovo5u8z73wytppv6qa8o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fovo5u8z73wytppv6qa8o.png" alt="Coding assistants scale" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can plot the different types of coding assistants on a graph showing the amount of human involvement required (&lt;code&gt;lesser involvement = more automation&lt;/code&gt;). The first GitHub Copilot release I used allowed tab completions. It would either complete single lines or entire blocks of code. You could describe your intent by creating a function with a good name or by writing a comment. GitHub Copilot later added inline prompting and chat sessions.&lt;/p&gt;

&lt;p&gt;Coding agents are the current state-of-the-art toolset for most developers on a day-to-day basis. They allow you to have conversations with them, and you should treat them as teammates, albeit ones with anterograde amnesia.&lt;/p&gt;

&lt;p&gt;Some problems can be parallelised, and background agents triggered locally are incredibly powerful. Claude Code &lt;a href="https://www.anthropic.com/engineering/claude-code-best-practices" rel="noopener noreferrer"&gt;supports subagents&lt;/a&gt;, which are frequently used for analysis and for &lt;a href="https://www.geeky-gadgets.com/how-to-use-git-worktrees-with-claude-code-for-seamless-multitasking/" rel="noopener noreferrer"&gt;solving multiple issues in parallel&lt;/a&gt; using &lt;code&gt;git worktree&lt;/code&gt;s. Similarly, some people hook up agents to remote instances for things like code reviews using &lt;a href="https://docs.anthropic.com/en/docs/claude-code/github-actions" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; or &lt;a href="https://docs.github.com/en/copilot/how-tos/agents/copilot-code-review/using-copilot-code-review" rel="noopener noreferrer"&gt;Copilot&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The extreme version of this is pure &lt;a href="https://x.com/karpathy/status/1886192184808149383" rel="noopener noreferrer"&gt;vibe coding&lt;/a&gt;. There is enough content out there about why this is a bad idea and the number of issues on real systems because of this.&lt;/p&gt;

&lt;h1&gt;
  
  
  Challenges with using these tools
&lt;/h1&gt;

&lt;p&gt;When picking a tool, I have started looking at the following aspects.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choice of models
&lt;/h2&gt;

&lt;p&gt;LLMs change quite quickly. Claude Sonnet 3.7 started off being the favourite model for most developers I know. When Claude Sonnet 4 came out at the same cost as 3.7, it became the new favourite model. Claude Opus 4 is great for larger codebases but expensive.&lt;/p&gt;

&lt;p&gt;As I write this (mid-July 2025), the word on the street is that Grok 4 is currently the best model on the block. Choose something that has good coding insights and a large context window. Claude Sonnet has some of the smaller context windows but is tuned quite well for software development.&lt;/p&gt;

&lt;p&gt;Cursor supports most of the best models and provides diversity. Tools like Claude Code and Gemini CLI are built and maintained primarily for use with a single model.&lt;/p&gt;

&lt;h2&gt;
  
  
  Ease of use
&lt;/h2&gt;

&lt;p&gt;This one is fairly subjective and depends on the developer’s preference. Tools like Cursor are VS Code forks and thus provide tight integration with the editor. Others like Claude Code, Codex and Gemini CLI run on the terminal. Claude Code provides decent integration with the IDEs from the JetBrains family and thus good support for pairing with your AI assistant.&lt;/p&gt;

&lt;p&gt;Speed factors into ease of use too. While JetBrains AI is the best-integrated tool among all of these (if you prefer using their IDEs), it is also one of the slowest. Slower tools mean slower feedback cycles, and slower feedback cycles are &lt;a href="https://dev.to/javatarz/what-makes-developer-experience-world-class-4l3i"&gt;some of the worst things for dev experience&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cost per change
&lt;/h2&gt;

&lt;p&gt;Cost plays a huge part in someone’s choice of tools, and LLMs are fairly expensive to run. Most tools charge per use, some by tokens, some by API calls. Since we’re in the relatively early days of these tools and they are competing to capture the market, some still offer fixed-price “unlimited” plans.&lt;/p&gt;

&lt;p&gt;Cursor used to be $20/month with &lt;em&gt;unlimited&lt;/em&gt; usage till June 2025. While all “unlimited” usage is rate limited, if the usage limits are generous or the rate limits are not severe, users can manage to have a decent developer experience. More recently, Cursor updated their prices to make the $20/month Pro plan for “light users”. Daily users are recommended to use their $60/month Pro+ plan and power users are recommended to use their $200/month Ultra plan. Users on reddit have complained about &lt;a href="https://www.reddit.com/r/cursor/comments/1lywpdj/ive_got_ultra_last_night_already_got_warned_about/" rel="noopener noreferrer"&gt;how the Ultra plan is insufficient&lt;/a&gt;, though Cursor’s documentation says that &lt;a href="https://docs.cursor.com/account/pricing#expected-usage-within-limits" rel="noopener noreferrer"&gt;it should be sufficient&lt;/a&gt;. This seems to primarily be because of heavy Claude Opus 4 usage, one of the most expensive models.&lt;/p&gt;

&lt;p&gt;Another fixed-price tool is Claude Code for individuals with its Pro and Max plans. The $100/month Max plan seems to be the sweet spot for most heavy users and is probably the best value for money, at least until you look at the licensing.&lt;/p&gt;

&lt;p&gt;Google’s Gemini CLI, at launch, announced an extraordinarily generous free tier (estimated to allow up to $620/day of usage), but at the cost of training on your projects. More on this in the next section. The free tier might not stay this generous forever, so if the “training on your data” bit isn’t a concern, enjoy Google’s generosity.&lt;/p&gt;

&lt;h2&gt;
  
  
  IP ownership indemnity and licensing
&lt;/h2&gt;

&lt;p&gt;Licensing is a complicated topic and I go off the advice of people much more qualified than me in this space. The current understanding is that you want&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; company licensing (avoid individual licenses)&lt;/li&gt;
&lt;li&gt; a tool that does not train on your data&lt;/li&gt;
&lt;li&gt; a tool that provides you indemnity against IP claims&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;You should avoid individual licenses since the protections usually apply to you, not the organisation you work for. If you work with a services company and create IP for your clients, you want to avoid the risk of the protections not covering your clients.&lt;/p&gt;

&lt;p&gt;Avoid tools that train on your data if you’re building something commercial. If you’re building FOSS, you can ignore this concern. Google Gemini CLI’s free tier is a great example of the trade-off: Google gets to use your data to make the system better in exchange for you getting a good coding assistant free of cost.&lt;/p&gt;

&lt;p&gt;Anthropic, the creator of Claude Code, &lt;a href="https://www.anthropic.com/legal/commercial-terms" rel="noopener noreferrer"&gt;indemnifies its commercial users&lt;/a&gt; against lawsuits. Most other tools tend to do this too. Interestingly, &lt;a href="https://cursor.com/terms-of-service" rel="noopener noreferrer"&gt;Cursor does not&lt;/a&gt;, at least as of this writing. Their &lt;a href="https://www.cursor.com/terms/msa" rel="noopener noreferrer"&gt;MSA&lt;/a&gt; does provide this protection; however, only for customers signing up for more than 250 seats. This may change in the future, and talking to their support is the best way to clarify.&lt;/p&gt;

&lt;h1&gt;
  
  
  What do I use and recommend at this point?
&lt;/h1&gt;

&lt;p&gt;For team members who are new to coding assistants, start with Copilot, where the fixed cost is easy to appreciate. Learn, experiment. Strengthen your core skills in this new world: &lt;a href="https://www.promptingguide.ai/techniques" rel="noopener noreferrer"&gt;Prompt Engineering&lt;/a&gt; and &lt;a href="https://www.llamaindex.ai/blog/context-engineering-what-it-is-and-techniques-to-consider" rel="noopener noreferrer"&gt;Context Engineering&lt;/a&gt; (&lt;em&gt;more on these skills in another blog&lt;/em&gt;).&lt;/p&gt;

&lt;p&gt;Once you have mastered these skills, consider moving to an API-based tool that lets you switch between models. Personally, I’m a fan of the Claude Sonnet and Opus models over OpenAI’s (and, to some extent, Gemini’s). If you can manage costs well, move to Claude Code (or an open source tool like OpenCode or Aider). I would put OpenCode above Claude Code due to its flexibility.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>productivity</category>
      <category>programming</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Patterns for AI assisted software development</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 27 Dec 2025 17:46:09 +0000</pubDate>
      <link>https://dev.to/javatarz/patterns-for-ai-assisted-software-development-4ga2</link>
      <guid>https://dev.to/javatarz/patterns-for-ai-assisted-software-development-4ga2</guid>
      <description>&lt;p&gt;Moving beyond tools: habits, prompts, and patterns for working well with AI&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/patterns-aifse-cover-art.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq7f6stywbriurgjypgo.jpg" alt="Patterns AIfSE Cover Art: Team collaboration" width="650" height="339"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the last post — &lt;a href="https://dev.to/javatarz/ai-for-software-engineering-not-only-code-generation-4d1n"&gt;&lt;strong&gt;AI for Software Engineering, not (only) Code Generation&lt;/strong&gt;&lt;/a&gt; — we explored how AI is transforming software engineering beyond just writing code. Now, let’s look at what that means for teams and individuals in practice.&lt;/p&gt;

&lt;p&gt;There are a few patterns worth remembering, both for people running teams and for people on teams that are going to build software with assistance from AI tools.&lt;/p&gt;

&lt;h1&gt;
  
  
  For people building teams
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Focus on value
&lt;/h2&gt;

&lt;p&gt;With the AI ecosystem shifting weekly, C-level and VP-level stakeholders who prioritise modular documentation, model pairing, scoped context, and tooling agility will drive the highest ROI while keeping teams nimble and ready for whatever comes next. Make it work, make it right and &lt;strong&gt;then&lt;/strong&gt; make it fast/cheap.&lt;/p&gt;

&lt;h2&gt;
  
  
  Journey per software delivery stage, one stage at a time per team
&lt;/h2&gt;

&lt;p&gt;This journey is going to be transformational for teams. Like most transformations, you do not want to change too much too quickly.&lt;/p&gt;

&lt;p&gt;When bringing change to a single team, introduce it one software delivery stage at a time to easily verify effectiveness. In a large organisation, you could try different tools for the same stage on different teams to A/B test effectiveness while taking into account the nuances of the individual teams themselves. We don’t recommend this approach if you would like to converge towards a single tool throughout the organisation because changing tool choices after the team gets used to it causes more friction.&lt;/p&gt;

&lt;p&gt;When you have multiple teams willing to take this journey, you can have each of them pick tools in different stages to help reduce the time that your organisation takes to make a decision on a toolset. A couple of teams can try AI tools for requirements analysis while others can try agentic coding tools for development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Expect a learning curve
&lt;/h2&gt;

&lt;p&gt;Especially if you’re an experienced developer, you will feel slower when you start off on this journey. This is no different than working with a new teammate and feeling that your overall productivity is lower. You trade off your own speed against the value you will get when your teammate is onboarded and can deliver by themselves.&lt;/p&gt;

&lt;p&gt;From our experience, you are looking at a 2–4 week drop in perceived productivity before the gains will start showing up. As a result, the costs will go up (slower delivery and cost of tools) before they come back down (faster delivery and more time to focus on quality).&lt;/p&gt;

&lt;h2&gt;
  
  
  Quality guardrails are a prerequisite
&lt;/h2&gt;

&lt;p&gt;Do not bolt on quality and security guardrails after the fact. Start with them. Ensure a &lt;a href="https://martinfowler.com/articles/practical-test-pyramid.html" rel="noopener noreferrer"&gt;robust test pyramid&lt;/a&gt; and implement shift-left strategies for both testing and &lt;a href="https://snyk.io/articles/shift-left-security/" rel="noopener noreferrer"&gt;security&lt;/a&gt;, enabling quick and early feedback. These guardrails will be invaluable when your team is moving at breakneck speeds through newer features.&lt;/p&gt;

&lt;p&gt;If you don’t have these guardrails first, you can use AI to help generate them and review these plans. Like the &lt;a href="https://en.wikipedia.org/wiki/Maker-checker" rel="noopener noreferrer"&gt;Maker-Checker&lt;/a&gt; process, if an AI coding assistant has helped you plan and create these guardrails, they should be thoroughly reviewed by someone who has the expertise in these fields to catch the small bugs that can have disastrous consequences later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Autonomous agents are far away
&lt;/h2&gt;

&lt;p&gt;Humans are required in the loop for software development. 10+ years after the first demos of driverless cars, we’re still waiting for a general-purpose implementation. While we have made massive progress, it takes time. Likewise, while agents have made massive progress in the last 2 years, humans are still needed to make sure things work well and that the systems are maintainable. The skill to build maintainable systems is more important now than ever.&lt;/p&gt;

&lt;h2&gt;
  
  
  Watch out for ‘AI Slop’
&lt;/h2&gt;

&lt;p&gt;Without the right guardrails and structures in place, teams will produce more code, faster, while sacrificing quality and security. Teams that are given access to AI tools without first being helped to build the relevant skills often report longer pull requests arriving faster than ever, making the people reviewing the code a bottleneck. Eventually, reviewers end up accepting pull requests due to pressure or fatigue, leading to important issues being missed.&lt;/p&gt;

&lt;p&gt;Individuals should focus on small chunks of work and teams should look at key metrics to measure the effectiveness of their tool usage &lt;em&gt;(we talk about both of these later in the post)&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Changes to individual responsibilities and team composition over time
&lt;/h2&gt;

&lt;p&gt;If teams in your organisation currently contain distinct individuals playing different roles like business analyst, architect, developer, quality analyst, infrastructure engineer and production support engineer, you will see these roles rely less on administrative tasks, freeing each person to focus on strategic thinking and the core responsibilities of their role. Different organisations will see different roles merge. Some will see a merger of the business analyst and product manager roles. Some will see product and project managers merge. Some will see project managers’ responsibilities split between technical leads and product owners.&lt;/p&gt;

&lt;p&gt;In doing so, individuals will emerge who demonstrate the ability to wear multiple hats: talk to the business, design the system, develop, validate, deploy and monitor it. These individuals will understand the challenges of the business and work end to end to address them. We have been calling such individuals &lt;a href="https://www.youtube.com/watch?v=FTdpjlq8IcY" rel="noopener noreferrer"&gt;Solution Consultants at Sahaj&lt;/a&gt; and believe that most teams will need such individuals in the near future once they leverage AI in their delivery.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beware of reduced intuition for decision making
&lt;/h2&gt;

&lt;p&gt;As teams move towards using automated notetakers to capture more detailed conversations, we should be on the lookout for a few anti-patterns.&lt;/p&gt;

&lt;p&gt;While conversation summaries help with a quick read, they are often misleading or inaccurate. Read the full transcript to improve your confidence in what was actually said. Transcripts are not a replacement for having real conversations, an anti-pattern we have seen come up on recent teams.&lt;/p&gt;

&lt;p&gt;Transcripts are also not a replacement for remembering context yourself. Context helps build intuition for decisions and one of our worries is that intuition will reduce over a period of time.&lt;/p&gt;

&lt;h1&gt;
  
  
  For people on teams
&lt;/h1&gt;

&lt;h2&gt;
  
  
  The ‘new teammate’ mindset
&lt;/h2&gt;

&lt;p&gt;Treat the AI system as a new teammate or a collaborative partner, not a tool. You can use a tool, be unhappy about the way it works and stop using it. When a new teammate joins your team, the fundamental thought process is different: you try to onboard them and give them better context. Writing good instructions or prompts is key to success.&lt;/p&gt;

&lt;p&gt;LLMs are like teammates with &lt;a href="https://my.clevelandclinic.org/health/diseases/23221-anterograde-amnesia" rel="noopener noreferrer"&gt;anterograde amnesia&lt;/a&gt;. They can have some memories, but these are limited by the size of their &lt;a href="https://towardsdatascience.com/de-coded-understanding-context-windows-for-transformer-models-cd1baca6427e/" rel="noopener noreferrer"&gt;context windows&lt;/a&gt;. Understanding how to manage context windows is key to working with our new teammates effectively. Keep only what is necessary in the context window and clear it when it isn’t required. Common context should be added to a file (see the rules section below) and included only when necessary.&lt;/p&gt;
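&lt;p&gt;A crude way to build intuition for context budgets is to estimate token counts before sending content. The sketch below uses the common rule of thumb of roughly four characters per token; the window size is an assumption for illustration, and real tools ship tokenizers that give exact counts:&lt;/p&gt;

```python
# Rough sketch of keeping an eye on context size before sending a prompt.
# The 4-characters-per-token ratio is a rule of thumb, not exact, and the
# window size below is an assumed figure for illustration only.

CONTEXT_WINDOW_TOKENS = 200_000  # assumed context window size

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def fits_in_context(chunks: list[str], budget: int = CONTEXT_WINDOW_TOKENS) -> bool:
    """Check whether the combined chunks stay under the context budget."""
    return sum(estimate_tokens(c) for c in chunks) <= budget

rules = "Team coding standards: prefer small functions, no hard-coded values."
story = "As a user, I want to create a profile so that I can log in."
print(fits_in_context([rules, story]))  # True for small inputs like these
```

The point is not precision but the habit: know roughly what you are putting in the window, and drop anything the current task does not need.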

&lt;p&gt;If your prompts to a coding assistant are vague, the tool will keep going around in circles and not make any progress on the task or do the wrong thing.&lt;/p&gt;

&lt;p&gt;For example, when you ask the agent: &lt;code&gt;I have noticed that http://localhost:4000/create-profile has alignment issues and contains text that is spreading outside the buttons. Can you please fix this?&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;If the agent has access to the &lt;a href="https://mcpcursor.com/server/puppeteer" rel="noopener noreferrer"&gt;puppeteer MCP&lt;/a&gt;, it will open up the UI, take a screenshot, process it and fix the issue. If your application has a login page, it will see that the Create Profile view is not being loaded and decide to “fix” this issue by removing authentication 😞. Adding “&lt;code&gt;Please wait for me to login if required&lt;/code&gt;” to the prompt helps avoid this issue.&lt;/p&gt;

&lt;p&gt;If your prompts have not told the system that you need a simple solution, or one that does not hard-code values, it will not follow those constraints. Add your general coding standards to a document and include that in the base context. If you have rules around test quality, split those into a smaller document explaining what good tests look like for the team.&lt;/p&gt;

&lt;h2&gt;
  
  
  Small chunks of work
&lt;/h2&gt;

&lt;p&gt;Break your work down. Reviewing a 1,000-line pull request has always been hard. With AI you can generate large code diffs quickly, so you, the developer, are the bottleneck. You are still responsible for quality and security.&lt;/p&gt;

&lt;p&gt;Work on smaller chunks. Review regularly. Do small commits. &lt;a href="https://softwareengineering.stackexchange.com/a/74765/95571" rel="noopener noreferrer"&gt;Age old practices still apply&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Configure the tool based on your team’s rules
&lt;/h2&gt;

&lt;p&gt;Each tool requires configuration, and configurations take time to test. It might take a few tries over multiple days to get them right. Each tool has a different way to be configured and there is no standardisation. In the agentic code pairing tool space, every tool has its own configuration mechanism: Cursor has &lt;a href="https://cursor.directory" rel="noopener noreferrer"&gt;Cursor Rules&lt;/a&gt;, Claude has &lt;a href="https://docs.anthropic.com/en/docs/claude-code/memory" rel="noopener noreferrer"&gt;memory&lt;/a&gt;, Windsurf has &lt;a href="https://docs.windsurf.com/windsurf/cascade/memories" rel="noopener noreferrer"&gt;Memories &amp;amp; Rules&lt;/a&gt; and IntelliJ’s Junie has &lt;a href="https://www.jetbrains.com/guide/ai/article/junie/intellij-idea/" rel="noopener noreferrer"&gt;guidelines&lt;/a&gt;. Each of these looks like a markdown file but has a slightly different format. If you’re experimenting with multiple tools (or different teammates prefer different tools), you will have to keep these rules in sync by hand.&lt;/p&gt;

&lt;p&gt;What’s worse is that the same instructions do not have the same effectiveness across different tools because their system prompts are different. Testing regularly and tweaking is key. Tools also update rapidly; Claude Code releases &lt;a href="https://www.npmjs.com/package/@anthropic-ai/claude-code?activeTab=versions" rel="noopener noreferrer"&gt;every couple of days&lt;/a&gt; (at the time of writing). Rules may need to be updated based on changes to the tool of your choice.&lt;/p&gt;
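&lt;p&gt;One workable stopgap for keeping rules in sync is a single canonical rules file that a small script copies into each tool’s location. The target paths below are assumptions based on common conventions; check each tool’s documentation before relying on them:&lt;/p&gt;

```python
# Minimal sketch: keep one canonical rules file and copy it into each
# tool-specific location. The target paths are assumptions based on each
# tool's conventions and may change; verify against the tool's docs.
import shutil
from pathlib import Path

CANONICAL = Path("team-rules.md")
TARGETS = [
    Path("CLAUDE.md"),             # Claude Code memory file (assumed)
    Path(".cursorrules"),          # Cursor rules file (assumed)
    Path(".windsurfrules"),        # Windsurf rules file (assumed)
    Path(".junie/guidelines.md"),  # Junie guidelines (assumed)
]

def sync_rules() -> list[Path]:
    """Copy the canonical rules file over each tool-specific file."""
    written = []
    for target in TARGETS:
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(CANONICAL, target)
        written.append(target)
    return written
```

Run it as a pre-commit hook or a checked-in script so the canonical file stays the single source of truth; it does not solve the deeper problem that the same words land differently in each tool.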

&lt;h2&gt;
  
  
  Shift in time spent on different responsibilities
&lt;/h2&gt;

&lt;p&gt;Teams will increasingly spend more time upfront planning what needs to be built, and what the right thing to build is, than actually building it. This does not mean teams are walking away from agile; rather, they are truly embracing it. The proportion of time spent on analysis and planning will go up, but the overall time taken to deliver a version will go down. Each individual activity (analysis, development etc.) will be done in thin slices, helping build the system up incrementally.&lt;/p&gt;

&lt;h2&gt;
  
  
  Over-reliance on AI instead of thinking and remembering yourself
&lt;/h2&gt;

&lt;p&gt;Since AI works fast, it’s easy to be lulled into a false sense of security and grow over-reliant on the tools. Over time, some individuals may spend less time thinking critically and making decisions.&lt;/p&gt;

&lt;p&gt;For example, if a good note-taking app takes notes and summarises them correctly 95% of the time, it is easy to forget that the 5% of mistakes, especially if they happen in critical parts of the conversation, can be quite expensive to fix. Summaries are good but they are not a replacement for reading the transcript which itself cannot beat actually having a conversation with people.&lt;/p&gt;

&lt;p&gt;We need to use these systems to help us be better at our roles. Critical thinking is not optional, now more than ever. We need to put guardrails in place to spot and correct intellectual laziness. If an issue you missed during review is found later, check whether you thought about it critically enough. Do the same for teammates and provide feedback if they are slipping.&lt;/p&gt;

&lt;h1&gt;
  
  
  How do you know AI is helping software delivery?
&lt;/h1&gt;

&lt;p&gt;Use both qualitative and quantitative measures. Early stages focus on “leading” indicators: developer sentiment, tool usage, and workflow metrics. Conduct developer surveys and track AI usage statistics (active users, acceptance rates) as &lt;a href="https://resources.github.com/learn/pathways/copilot/essentials/measuring-the-impact-of-github-copilot/" rel="noopener noreferrer"&gt;GitHub recommends&lt;/a&gt;. Complement these with engineering metrics: cycle time (time from commit to deploy), pull-request size and review duration, deployment frequency, and change‑failure rates. &lt;a href="https://waydev.co/ai-coding-tools-are-impacting-productivity/#:~:text=,whether%20AI%20increases%20this%20measure" rel="noopener noreferrer"&gt;These DORA‑style metrics help ensure speedups don’t sacrifice quality&lt;/a&gt;. Align these KPIs to business outcomes (e.g. shorter time-to-market, fewer critical bugs). Set “clear, measurable goals” for AI use and monitor both productivity and code quality over time.&lt;/p&gt;
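&lt;p&gt;As a sketch of what tracking these engineering metrics can look like, the snippet below computes a median cycle time and a change-failure rate from hypothetical delivery records; the field names and data are made up for the example:&lt;/p&gt;

```python
# Illustrative sketch: two DORA-style metrics computed from hypothetical
# delivery records. The records and field layout are invented for the
# example; real pipelines would pull these from git and deploy tooling.
from datetime import datetime
from statistics import median

deployments = [
    # (commit time, deploy time, caused an incident?)
    (datetime(2025, 1, 6, 9, 0),  datetime(2025, 1, 6, 15, 0), False),
    (datetime(2025, 1, 7, 10, 0), datetime(2025, 1, 8, 10, 0), True),
    (datetime(2025, 1, 9, 11, 0), datetime(2025, 1, 9, 14, 0), False),
]

def median_cycle_time_hours(records) -> float:
    """Median time from commit to deploy, in hours."""
    return median((deploy - commit).total_seconds() / 3600
                  for commit, deploy, _ in records)

def change_failure_rate(records) -> float:
    """Fraction of deployments that caused an incident."""
    failures = sum(1 for *_, failed in records if failed)
    return failures / len(records)

print(median_cycle_time_hours(deployments))  # 6.0
print(change_failure_rate(deployments))      # ~0.33
```

Watching both numbers together is the point: a cycle time that drops while the failure rate climbs is exactly the speed-for-quality trade this section warns against.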

&lt;p&gt;Up next, we’ll dive into strategies for &lt;a href="https://dev.to/javatarz/how-to-choose-your-coding-assistants-90k"&gt;managing tech debt and elevating developer experience&lt;/a&gt; in a world where AI is part of the team. We’ll explore why it’s now easier than ever to stay ahead of the curve — and share the exact prompts and techniques that make it possible.&lt;/p&gt;

&lt;h1&gt;
  
  
  Credits
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;This blog would not have been possible without the constant support and guidance from&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/greg-reiser-6910462/" rel="noopener noreferrer"&gt;&lt;em&gt;Greg Reiser&lt;/em&gt;&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/priyaaank/" rel="noopener noreferrer"&gt;&lt;em&gt;Priyank Gupta&lt;/em&gt;&lt;/a&gt;&lt;em&gt;,&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/veda-kanala/" rel="noopener noreferrer"&gt;&lt;em&gt;Veda Kanala&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/akshaykarle/" rel="noopener noreferrer"&gt;&lt;em&gt;Akshay Karle&lt;/em&gt;&lt;/a&gt;&lt;em&gt;. I would also like&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/gsong/" rel="noopener noreferrer"&gt;&lt;em&gt;George Song&lt;/em&gt;&lt;/a&gt; &lt;em&gt;and&lt;/em&gt; &lt;a href="https://www.linkedin.com/in/carmenmardiros/" rel="noopener noreferrer"&gt;&lt;em&gt;Carmen Mardiros&lt;/em&gt;&lt;/a&gt; &lt;em&gt;for reviewing multiple versions of this post and providing patient feedback 😀.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This content has been written on the shoulders of giants (at and outside&lt;/em&gt; &lt;a href="https://sahaj.ai" rel="noopener noreferrer"&gt;&lt;em&gt;Sahaj&lt;/em&gt;&lt;/a&gt;&lt;em&gt;) that I have done my best to quote throughout.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>AI for Software Engineering, not (only) Code Generation</title>
      <dc:creator>Karun Japhet</dc:creator>
      <pubDate>Sat, 27 Dec 2025 17:45:54 +0000</pubDate>
      <link>https://dev.to/javatarz/ai-for-software-engineering-not-only-code-generation-4d1n</link>
      <guid>https://dev.to/javatarz/ai-for-software-engineering-not-only-code-generation-4d1n</guid>
      <description>&lt;p&gt;Rethinking the role of AI across the entire software lifecycle&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/aifse-cover-art.jpg" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fta1nk2ru58470p5zo66t.jpg" alt="AIfSE Cover Art: Team collaboration" width="650" height="366"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Everyone has been talking about using coding assistants to aid with software delivery. There is more to delivering good software than writing code.&lt;/p&gt;

&lt;p&gt;Every software development project requires a few different activities from analysis (what), to planning and design (how), to development (build), to testing (validate), to deployment (implement). Each of these activities depends on different skills and techniques that can benefit from the effective use of modern AI technologies.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/aifse-1-software-delivery-stages.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wr5lsx7gy54yquawkv2.png" alt="Software Delivery Stages" width="800" height="268"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All software development methodologies, from waterfall to the different agile techniques, fundamentally follow the same cycle. We feel this cycle is not changing yet but there are improvements waiting to be unlocked for organisations.&lt;/p&gt;

&lt;p&gt;This post aims to demonstrate how teams of the future can gear themselves to build better products faster.&lt;/p&gt;

&lt;h1&gt;
  
  
  Use of AI tools across software delivery
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;The tools mentioned in this section are examples to help the reader understand the idea and not recommendations on what to use.&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  During Analysis
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Improved analysis
&lt;/h3&gt;

&lt;p&gt;Many teams have integrated AI into their analysis process. Starting with &lt;a href="https://medium.com/inspiredbrilliance/an-agile-kickstart-with-generative-ai-for-business-analysis-484f641ccf6e" rel="noopener noreferrer"&gt;single agent flows&lt;/a&gt; that support definition of features, epics and stories, to multi-agent flows that help address different parts of a problem space in parallel. My colleague Carmen Mardiros showcases &lt;a href="https://github.com/cmardiros/claude-code-power-pack" rel="noopener noreferrer"&gt;how to revise a plan using Claude Code&lt;/a&gt;, where individual agents perform specific tasks to help the analyst optimise a plan before execution. Effectively using AI in support of critical analysis and planning can provide benefits beyond basic requirements definition. &lt;a href="https://www.anthropic.com/engineering/built-multi-agent-research-system" rel="noopener noreferrer"&gt;Multi-agent systems outperform single agent systems but spend significantly more tokens&lt;/a&gt; (and thus money) to do so.&lt;/p&gt;

&lt;p&gt;Taskmaster is an AI-powered tool that, together with an interactive coding assistant such as Claude Code, can serve as a virtual technical project manager by helping define requirements, offering feedback on edge cases, writing stories, and setting up and managing the product backlog.&lt;/p&gt;

&lt;p&gt;Since you can also ask Claude Code to analyse the codebase to identify technical debt, you can use the same tools to manage both the technical and feature backlogs of the product. This is particularly important when working with mature (legacy) systems as teams and product owners often struggle with balancing technical debt reduction (payback) and new feature development. Although these tools do not replace the expertise required to effectively manage a backlog and prioritise work, they can significantly reduce the administrative burden of doing so.&lt;/p&gt;

&lt;p&gt;If all requirements are documented as PRDs, it becomes easier to measure drift, as well as to spot cards that have been created but whose parts have already been implemented. You can run this analysis as a weekly or monthly job to clean up your backlog of tasks that are no longer needed.&lt;/p&gt;

&lt;p&gt;Not all administrative tasks have been eliminated. When you transition from PRDs to epics on your backlog, there is a time period when both remain active and during this time, the two need to be consciously kept in sync. Over a period of time, the importance of the PRD wanes and it can be killed off. The same is true for other transitions like the one between stories and code.&lt;/p&gt;

&lt;h4&gt;
  
  
  Changes in roles for Business Analysts and Project Managers
&lt;/h4&gt;

&lt;p&gt;The role of business analysts has included note taking, summarising, analysing and helping shape the right product for the business. This role is shifting to be more strategic in nature, focusing on finding good opportunities for your products as the transcription and administration parts of the role fall away. Similarly, Project Managers will spend less time on administrative tasks and more time making sure the right features are being built.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;This is true for all roles we’re going to be speaking about in this post to some extent, calling this out explicitly since this is the first.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved iterative UI/UX design
&lt;/h3&gt;

&lt;p&gt;Tools such as Canva and Figma have helped minimise the time taken to go through a complete feedback cycle with users. AI tools have now started linking up with these tools to help spot implementation drift during development. These tools also have the ability to spot requirements gaps and help us foresee problems. &lt;em&gt;More on this during the feedback cycles section.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Clair Mary Sebastian also talks about &lt;a href="https://medium.com/inspiredbrilliance/an-agile-kickstart-with-generative-ai-for-business-analysis-484f641ccf6e" rel="noopener noreferrer"&gt;using generative AI for requirements analysis and wireframing&lt;/a&gt; using OpenAI’s APIs alongside &lt;a href="https://www.figma.com/community/plugin/1228969298040149016/wireframe-designer" rel="noopener noreferrer"&gt;Figma’s wireframe designer&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  AI note taking apps for requirement analysis
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://appsource.microsoft.com/en-us/product/web-apps/2101440ontarioinc.copilot4devops_official" rel="noopener noreferrer"&gt;Copilot4Devops&lt;/a&gt; that will take text summaries and help generate user stories or feature specs. This can be a particularly powerful technique to aide in quicker iterations with generating stories and feature specs.&lt;/p&gt;

&lt;p&gt;Note taking apps like &lt;a href="http://fireflies.ai" rel="noopener noreferrer"&gt;fireflies.ai&lt;/a&gt; produce fairly accurate notes across multiple languages, with speaker detection in conversations, and help improve user experience and recall for conversations.&lt;/p&gt;

&lt;p&gt;While conversation summaries help with a quick read, they are often misleading or inaccurate. A best practice (or should we say “must-have practice”) is for participants to review the notes shortly after the meeting and correct any errors before the notes are accepted. In addition to preventing the dissemination of inaccurate information, this practice improves information retention amongst participants and contributes to an improved shared understanding. This is in contrast to the anti-pattern of relying on unreviewed transcripts and meeting notes, which discourages critical thinking and delays the shared understanding that is critical to successful delivery.&lt;/p&gt;

&lt;p&gt;Transcripts are not a replacement for actually having real conversations, an anti-pattern we have seen come up on recent teams. Transcripts are also not a replacement for remembering context yourself. Context helps build intuition for decisions and one of our worries is that intuition will reduce over a period of time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Improved communication and context
&lt;/h3&gt;

&lt;p&gt;Currently, users from the business (or product owners as a proxy) work with business analysts from delivery teams to collaboratively shape the product. This communication usually requires experienced product owners who understand technology well enough at a distance to know what questions to ask and how to shape the conversation to build quick consensus on the product’s vision. It also requires experienced business analysts who know how to extract details of how the system should work, anticipate challenges in building the product and pre-empt them with questions. Teams that do a good job of analysing the system require individuals at the top of their game. If either of these individuals does not have the prerequisite knowledge, communication is sub-optimal.&lt;/p&gt;

&lt;p&gt;We see that this status-quo is ripe for disruption. Doing so requires us to build a system (or product) that absorbs domain context before it can be used.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/aifse-2-ai-collaboration-for-analysis.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwuln7vu852tkozz7hn34.png" alt="AI collaboration for analysis" width="717" height="785"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Since most teams are distributed, a conversational AI can help users prepare for their synchronous or asynchronous communication with the team given that the AI has the persona of a developer who is an expert at the specific tech that is used to work on the product. Similarly, delivery team members can use a conversational AI system to help understand the business context better and anticipate pushback and prep for it. Being able to understand the devil’s advocate stance in their head and prepare for it is something most people struggle with. Important conversations still happen through direct communication, however, both the users and the business analysts can help pair on preparing for the actual conversation with real people on the other side.&lt;/p&gt;

&lt;p&gt;Over a period of time, the conversational AI system can help improve the quality of preparation conversations for both actors providing quicker feedback.&lt;/p&gt;

&lt;h2&gt;
  
  
  During System Design
&lt;/h2&gt;

&lt;p&gt;AI makes it possible to define and compare different solution designs for a given problem space more quickly and thoroughly. This ability to rapidly evaluate the impact of different architectural decisions can multiply the value of experienced architects and may even enable more advanced practices such as emergent architecture, as AI can help teams safely adjust the solution design when requirements change or new requirements emerge.&lt;/p&gt;

&lt;p&gt;A system is designed to meet a set of constraints and reach a target state, and both evolve over time. Good teams track these constraints from the beginning and through the evolution of the product as &lt;a href="https://github.com/joelparkerhenderson/architecture-decision-record" rel="noopener noreferrer"&gt;ADR&lt;/a&gt;s and &lt;a href="https://evolutionaryarchitecture.com/ffkatas/index.html" rel="noopener noreferrer"&gt;fitness functions&lt;/a&gt;. Some teams find it hard to track the delta between the current and target state (current debt). AI tools make this debt easier to identify, track and address; teams can use targeted prompts in different areas to surface these challenges and evolve the system in the right direction.&lt;/p&gt;
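&lt;p&gt;To make this concrete, here is a minimal sketch of what a fitness function can look like in Python: a check that a layering constraint still holds. The layer names and the &lt;code&gt;FORBIDDEN&lt;/code&gt; map are hypothetical; adapt them to your own architecture.&lt;/p&gt;

```python
import ast
from pathlib import Path

# Hypothetical layering rule: the "domain" layer must not import
# from the "api" or "infrastructure" layers. Adjust to your project.
FORBIDDEN = {"domain": {"api", "infrastructure"}}

def layer_violations(root: Path) -> list[str]:
    """Return a description of every import that breaks a layering rule."""
    violations = []
    for layer, banned in FORBIDDEN.items():
        layer_dir = root / layer
        if not layer_dir.is_dir():
            continue
        for py_file in layer_dir.rglob("*.py"):
            tree = ast.parse(py_file.read_text())
            for node in ast.walk(tree):
                # Collect the module names each import statement pulls in.
                if isinstance(node, ast.Import):
                    names = [alias.name for alias in node.names]
                elif isinstance(node, ast.ImportFrom) and node.module:
                    names = [node.module]
                else:
                    continue
                for name in names:
                    if name.split(".")[0] in banned:
                        violations.append(f"{py_file}: imports {name}")
    return violations
```

&lt;p&gt;Run as part of CI and fail the build when the list is non-empty; over time, the set of checks becomes an executable record of the constraints your ADRs describe.&lt;/p&gt;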

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/aifse-4-emergent-design-with-ai.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqj4iuhtgqetfd0e5s2hh.png" alt="Software Delivery Stages" width="800" height="66"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Tools like &lt;a href="http://eraser.io" rel="noopener noreferrer"&gt;eraser.io&lt;/a&gt; allow teams to generate architectural documents from text. Combined with the ability to generate documentation from the code itself, this lets systems keep architectural documents always up to date.&lt;/p&gt;

&lt;h2&gt;
  
  
  During Development and Validation
&lt;/h2&gt;

&lt;p&gt;In today’s fast-evolving AI landscape, engineers must embrace a dual-mode workflow (planner and executor) to get the most out of coding assistants. As a planner, you leverage a high-reasoning model (for example, Claude Sonnet 4 over 3.7 or GPT-4o) to deconstruct monolithic docs into modular guides (e.g. splitting a bulky claude.md into coding-practices.md and development-workflow.md), map out architectural changes, and draft a detailed implementation roadmap. Once the blueprint is locked in, switch to a specialized coding model (like Sonnet, GitHub Copilot with tailored instructions, or Claude Code) for hands-on development, refactoring, and validation. By matching each task to the model best suited for it and scoping prompts to only the relevant files or services, you streamline token usage, accelerate processing, and cut context-window bloat.&lt;/p&gt;

&lt;p&gt;Executing at scale also demands a culture of experimentation and flexibility. Expect a learning curve as teams test different assistants (Copilot, Cursor, Claude Code, etc.) and prompt strategies for different tasks, such as migrating an entire codebase versus tweaking a single method signature. Build in continuous feedback loops around prompt-to-PR cycle times, code quality metrics, and token costs to identify what works best in each scenario. Agentic integrations via &lt;a href="https://modelcontextprotocol.io/introduction" rel="noopener noreferrer"&gt;Model Context Protocols&lt;/a&gt; and tools like Puppeteer, Slack bots, and GitHub Actions can then automate routine tasks, from branch creation to dependency updates and test orchestration, right within your existing toolchain.&lt;/p&gt;
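&lt;p&gt;These feedback loops need only simple instrumentation to start. As a sketch (the data shape here is hypothetical; in practice you would pull timestamps from your chat logs and your Git host’s API), a median prompt-to-PR cycle time can be computed like this:&lt;/p&gt;

```python
from datetime import datetime, timedelta

def median_cycle_time(pairs):
    """Median duration between a prompt being issued and its PR merging.

    `pairs` is a list of (prompted_at, merged_at) datetime tuples; where
    this data comes from depends on your toolchain.
    """
    durations = sorted(merged - prompted for prompted, merged in pairs)
    mid = len(durations) // 2
    if len(durations) % 2:
        return durations[mid]
    # Even count: average the two middle durations.
    return (durations[mid - 1] + durations[mid]) / 2

# Example with made-up timestamps:
samples = [
    (datetime(2025, 1, 1, 9, 0), datetime(2025, 1, 1, 10, 0)),  # 1h
    (datetime(2025, 1, 2, 9, 0), datetime(2025, 1, 2, 12, 0)),  # 3h
    (datetime(2025, 1, 3, 9, 0), datetime(2025, 1, 3, 11, 0)),  # 2h
]
print(median_cycle_time(samples))  # 2:00:00
```

&lt;p&gt;Tracking the same number per assistant and per task type makes it possible to compare prompt strategies over time rather than argue from anecdote.&lt;/p&gt;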

&lt;h2&gt;
  
  
  During Deployment and Operationalisation
&lt;/h2&gt;

&lt;p&gt;Over the past decade, practices in the DevOps space have changed significantly with the focus on automation (CI/CD), observability and improved monitoring tools. As this data became more centralised in platforms like AppDynamics, DataDog and NewRelic, these systems have been able to spot errors, alert users intelligently and surface anomalies.&lt;/p&gt;

&lt;p&gt;Platforms like Harness now support &lt;a href="https://developer.harness.io/docs/platform/harness-aida/ai-devops/#error-analyzer-demo" rel="noopener noreferrer"&gt;automated error analysis&lt;/a&gt; to help understand the root cause of issues and help provide steps to fix them.&lt;/p&gt;

&lt;h2&gt;
  
  
  During Feedback Cycles
&lt;/h2&gt;

&lt;p&gt;Traditionally, individuals caught drift in software development. Tools are now being built to catch different types of drift automatically. Tools such as &lt;a href="https://www.cubyts.com/" rel="noopener noreferrer"&gt;Cubyts&lt;/a&gt; catch both requirement drift (between requirement specs and stories) and implementation drift (between requirement specs, application mock-ups and implementation). This is possible because these tools connect to platforms like JIRA, Figma and GitHub to analyse their contents and surface potential issues using the capabilities LLMs provide.&lt;/p&gt;

&lt;h1&gt;
  
  
  How do you enable this transformation?
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Preparation
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Identify a candidate project&lt;/li&gt;
&lt;li&gt; Ensure the candidate project has good safety nets&lt;/li&gt;
&lt;li&gt; Ensure the candidate project has a stable product team with good shared context&lt;/li&gt;
&lt;li&gt; Identify the stage of software development that is most painful and will benefit most from introducing AI tools&lt;/li&gt;
&lt;li&gt; Identify seed individuals with prior experience in the space, the right opinions and the ability to mentor team members&lt;/li&gt;
&lt;li&gt; Identify the tool to introduce&lt;/li&gt;
&lt;li&gt; Set up success criteria for this transformation&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The journey
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt; Set up time to up-skill team members (on the skills from the “For people on teams” section). &lt;a href="https://martinfowler.com/articles/on-pair-programming.html" rel="noopener noreferrer"&gt;Pair&lt;/a&gt; team members with seed individuals for maximum effectiveness.&lt;/li&gt;
&lt;li&gt; Set up weekly retrospective meetings to catch trends and course correct as necessary. Timely feedback is critical.&lt;/li&gt;
&lt;li&gt; Set up a checkpoint to see if the team members require less support from seed individuals weekly. Until a threshold of independence is reached, keep repeating steps 1–3.&lt;/li&gt;
&lt;li&gt; Seed individuals depart from the team and only join retrospectives for support.&lt;/li&gt;
&lt;li&gt; Set up a checkpoint to check if seed individuals are required in the retros and to confirm that the team is meeting the success criteria.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;em&gt;The 4-week periods are indicative examples of what teams may need. Tweak the time period as needed.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://karun.me/assets/images/uploads/aifse-3-ai-assisted-delivery-upskilling.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqxue1vllvvgbff62yltp.png" alt="Software Delivery Stages" width="800" height="245"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI’s role in software engineering goes far beyond code generation — it’s reshaping how we design systems, make decisions, and collaborate. To truly unlock its potential, we need to rethink not just our tools, but how our teams operate. In the next post, we’ll explore &lt;a href="https://dev.to/javatarz/patterns-for-ai-assisted-software-development-4ga2"&gt;&lt;strong&gt;patterns for AI-assisted software delivery&lt;/strong&gt;&lt;/a&gt; — focusing on how to build more effective teams, and how individuals can work differently to make the most of AI in their day-to-day practice.&lt;/p&gt;

&lt;h1&gt;
  
  
  Credits
&lt;/h1&gt;

&lt;p&gt;This blog would not have been possible without the constant support and guidance from &lt;a href="https://www.linkedin.com/in/greg-reiser-6910462/" rel="noopener noreferrer"&gt;Greg Reiser&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/priyaaank/" rel="noopener noreferrer"&gt;Priyank Gupta&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/veda-kanala/" rel="noopener noreferrer"&gt;Veda Kanala&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/akshaykarle/" rel="noopener noreferrer"&gt;Akshay Karle&lt;/a&gt;. I would also like to thank &lt;a href="https://www.linkedin.com/in/swapnil-sankla-30525225/" rel="noopener noreferrer"&gt;Swapnil Sankla&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/gsong/" rel="noopener noreferrer"&gt;George Song&lt;/a&gt;, &lt;a href="https://www.linkedin.com/in/rhushikesh-apte-685a5948/" rel="noopener noreferrer"&gt;Rhushikesh Apte&lt;/a&gt; and &lt;a href="https://www.linkedin.com/in/carmenmardiros/" rel="noopener noreferrer"&gt;Carmen Mardiros&lt;/a&gt; for reviewing multiple versions of this document and providing patient feedback 😀.&lt;/p&gt;

&lt;p&gt;This content has been written on the shoulders of giants (at and outside &lt;a href="https://sahaj.ai" rel="noopener noreferrer"&gt;Sahaj&lt;/a&gt;) whom I have done my best to credit throughout.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
