<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Hossein Najmi</title>
    <description>The latest articles on DEV Community by Hossein Najmi (@hoss_nj).</description>
    <link>https://dev.to/hoss_nj</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3789958%2F4aa5e84d-c07b-4862-9e6a-9bdaeac5ffb9.png</url>
      <title>DEV Community: Hossein Najmi</title>
      <link>https://dev.to/hoss_nj</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/hoss_nj"/>
    <language>en</language>
    <item>
      <title>Scrum Is Dead. Your Backlog Is a Graveyard.</title>
      <dc:creator>Hossein Najmi</dc:creator>
      <pubDate>Tue, 24 Feb 2026 16:58:55 +0000</pubDate>
      <link>https://dev.to/hoss_nj/scrum-is-deadyour-backlog-is-a-graveyard-5gom</link>
      <guid>https://dev.to/hoss_nj/scrum-is-deadyour-backlog-is-a-graveyard-5gom</guid>
      <description>&lt;h1&gt;
  
  
  Scrum Is Dead. Your Repo Replaced It.
&lt;/h1&gt;

&lt;p&gt;I never believed in Scrum.&lt;/p&gt;

&lt;p&gt;I want to be upfront about this because I think it matters for what follows. This isn't a post-hoc rationalization where I discovered AI coding tools and then decided Scrum was bad. I have felt, for years, that something was fundamentally wrong with the way our industry manages software development — and I just couldn't prove it.&lt;/p&gt;

&lt;p&gt;I remember sitting in sprint planning sessions watching a room full of talented engineers spend two hours debating whether a task was a 5-point or an 8-point story. I remember thinking: we could have built the thing in the time we just spent talking about building the thing. I remember retrospectives where we wrote the same sticky notes every two weeks ("communication could be better", "too much context-switching") and nothing changed. I remember velocity charts that told managers a comforting fiction about predictability while developers quietly knew the numbers were gamed to look smooth.&lt;/p&gt;

&lt;p&gt;But I kept my mouth shut, mostly. When you question Scrum in a software organization, people look at you like you just questioned gravity. Scrum has certifications. It has an industry of consultants. It has books and conferences and a manifesto that everyone treats like scripture. Questioning it marks you as someone who "doesn't get it" or "isn't a team player." So I smiled through the standups and planned the sprints and felt like an outcast who could see something nobody else wanted to see.&lt;/p&gt;

&lt;p&gt;Then AI coding assistants arrived. And suddenly I had proof.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. The Moment Everything Changed
&lt;/h2&gt;

&lt;p&gt;The shift didn't happen in one dramatic moment. It crept up on me over 2024 and 2025 as the tools got better. First it was GitHub Copilot doing decent autocomplete. Then Cursor turned the entire IDE into a conversation. Then Claude Code emerged as what Karpathy correctly identified as "the first convincing demonstration of what an LLM Agent looks like — something that in a loopy way strings together tool use and reasoning for extended problem solving."&lt;/p&gt;

&lt;p&gt;I build production SaaS products. Not toy projects — complex systems with domain-specific business logic, multi-tenant architecture, licensing systems, third-party integrations, the works.&lt;/p&gt;

&lt;p&gt;At some point in mid-2025, I realized that my actual workflow had completely diverged from our "official" Scrum process. On paper, we had sprints and tickets and a backlog. In practice, I was doing something entirely different:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;I'd open Claude Code or Cursor and describe the feature I wanted to build&lt;/li&gt;
&lt;li&gt;Together with the AI, I'd write a detailed markdown spec — edge cases, data model, API contracts, UX flow&lt;/li&gt;
&lt;li&gt;I'd save that as a &lt;code&gt;.md&lt;/code&gt; file in the repo&lt;/li&gt;
&lt;li&gt;Then I'd execute it — again with AI assistance&lt;/li&gt;
&lt;li&gt;AI would write tests. AI would write docs. I'd update the plan file.&lt;/li&gt;
&lt;li&gt;Ship.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The entire cycle — from idea to shipped, tested, documented feature — was happening so fast that the Scrum process around it became the bottleneck. By the time I would have created a Jira ticket, explained it in sprint planning, estimated it, assigned it, and waited for the sprint to start, the feature was already done. Not half-done. Done. Tested. Documented. Merged.&lt;/p&gt;

&lt;p&gt;That's when I stopped pretending.&lt;/p&gt;




&lt;h2&gt;
  
  
  2. What My Actual Workflow Looks Like (With Real Tools)
&lt;/h2&gt;

&lt;p&gt;I want to be concrete here because I've read too many thought pieces that stay at the abstract "AI will change everything" level. Here's specifically what I do and which tools I use.&lt;/p&gt;

&lt;h3&gt;
  
  
  Planning: Claude Code + Cursor
&lt;/h3&gt;

&lt;p&gt;Every feature starts as a conversation. I open Claude Code in my terminal (or Cursor's Composer) and describe what I need. Not in ticket format — in the way you'd explain something to a very smart colleague. The AI asks clarifying questions, proposes data models, flags edge cases I missed, and suggests approaches.&lt;/p&gt;

&lt;p&gt;The output is a markdown plan file. It looks something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Feature: Floating License Pool Management&lt;/span&gt;

&lt;span class="gu"&gt;## Context&lt;/span&gt;
The platform currently supports named-user licenses. We need to add
floating license support where a pool of N licenses is shared across
an organization, with concurrent usage limits enforced in real-time.

&lt;span class="gu"&gt;## Data Model&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; LicensePool: { orgId, productId, totalSeats, activeCheckouts[] }
&lt;span class="p"&gt;-&lt;/span&gt; LicenseCheckout: { userId, poolId, checkedOutAt, expiresAt, heartbeatAt }

&lt;span class="gu"&gt;## API Contract&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; POST /api/license-pools/{id}/checkout → 200 | 409 (pool exhausted)
&lt;span class="p"&gt;-&lt;/span&gt; POST /api/license-pools/{id}/checkin
&lt;span class="p"&gt;-&lt;/span&gt; GET  /api/license-pools/{id}/status → { total, available, checkouts[] }

&lt;span class="gu"&gt;## Edge Cases&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; User closes app without checking in → heartbeat timeout (5 min)
&lt;span class="p"&gt;-&lt;/span&gt; Two users checkout simultaneously when 1 seat left → optimistic locking
&lt;span class="p"&gt;-&lt;/span&gt; Org admin force-revokes a checkout mid-session

&lt;span class="gu"&gt;## Implementation Notes&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Use Redis for real-time seat counting (atomic INCR/DECR)
&lt;span class="p"&gt;-&lt;/span&gt; Postgres as source of truth with eventual consistency
&lt;span class="p"&gt;-&lt;/span&gt; WebSocket push to notify clients when seats become available

&lt;span class="gu"&gt;## Status: IN PROGRESS (started 2026-02-20)&lt;/span&gt;
&lt;span class="gu"&gt;## Tests: Unit + integration written, E2E pending&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That single file is richer, more detailed, and more useful than any Jira ticket I've ever written. And it took 20 minutes to create collaboratively with AI, not 2 hours of sprint planning.&lt;/p&gt;
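&lt;p&gt;The edge cases in that plan aren't decoration; each one maps directly to code. The heartbeat timeout, for instance, comes down to a small reaper job. Here's a minimal Python sketch (field names mirror the plan's &lt;code&gt;LicenseCheckout&lt;/code&gt; shape, but the function and constant names are mine, not the real module):&lt;/p&gt;

```python
import time

HEARTBEAT_TIMEOUT = 300  # seconds: the 5-minute window from the plan

def reap_stale_checkouts(checkouts, now=None):
    """Release seats whose client stopped heartbeating.

    Covers the "user closes app without checking in" edge case:
    any checkout whose last heartbeat is older than the timeout
    goes back to the pool.
    """
    now = time.time() if now is None else now
    live, released = [], []
    for c in checkouts:
        if HEARTBEAT_TIMEOUT > (now - c["heartbeatAt"]):
            live.append(c)
        else:
            released.append(c)  # seat returns to the pool
    return live, released

checkouts = [
    {"userId": "u1", "heartbeatAt": 950.0},  # heartbeat 50s ago
    {"userId": "u2", "heartbeatAt": 600.0},  # heartbeat 400s ago
]
live, released = reap_stale_checkouts(checkouts, now=1000.0)
assert [c["userId"] for c in live] == ["u1"]
assert [c["userId"] for c in released] == ["u2"]
```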

&lt;h3&gt;
  
  
  Execution: Claude Code + Cursor (with Copilot for speed)
&lt;/h3&gt;

&lt;p&gt;For implementation, I use a combination:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Claude Code&lt;/strong&gt; for complex, multi-file changes. It understands the whole repo and can make coordinated changes across services, models, controllers, and tests in one go. The terminal-native approach lets me work inside my existing environment with all my git config, env variables, and tooling. This is what Karpathy meant when he said Anthropic "got the order of precedence correct" — the AI runs where your code runs, with your context, not in a cloud sandbox.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cursor&lt;/strong&gt; for the IDE-level flow when I want to see changes visually, use its Composer for iterative refinement, and leverage its codebase indexing. The multi-model flexibility is huge — Claude for complex reasoning, faster models for quick edits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;GitHub Copilot&lt;/strong&gt; still runs in the background for inline completions. It's excellent at the small stuff — finishing a function signature, writing a repetitive pattern, suggesting the obvious next line. Think of it as cruise control while Claude Code and Cursor are the autopilot.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A typical execution session looks like this in Claude Code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;claude

&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Look at plans/features/floating-license.md and implement it.
  Start with the data model and migrations, &lt;span class="k"&gt;then &lt;/span&gt;the API endpoints,
  &lt;span class="k"&gt;then &lt;/span&gt;the Redis integration &lt;span class="k"&gt;for &lt;/span&gt;real-time seat counting.
  Follow the patterns &lt;span class="k"&gt;in &lt;/span&gt;our existing license code.
  Write tests &lt;span class="k"&gt;for &lt;/span&gt;each layer as you go.

&lt;span class="c"&gt;# [Claude Code reads the plan file, examines existing codebase patterns,&lt;/span&gt;
&lt;span class="c"&gt;#  creates migration files, models, services, controllers, tests...&lt;/span&gt;
&lt;span class="c"&gt;#  commits each logical chunk with meaningful messages]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Karpathy recently evolved his terminology from "vibe coding" to &lt;strong&gt;"agentic engineering"&lt;/strong&gt; — because for professional use, it's not about vibes at all. It's about orchestrating AI agents with oversight and scrutiny to produce production-quality code. That's exactly what this is. I review every change. I guide the architecture. But the AI does the typing, the pattern-matching, the boilerplate, and increasingly, the edge-case thinking.&lt;/p&gt;

&lt;h3&gt;
  
  
  Testing: AI-Generated with Human Review
&lt;/h3&gt;

&lt;p&gt;After implementation, I ask Claude Code to write comprehensive tests:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Write tests &lt;span class="k"&gt;for &lt;/span&gt;the floating license module. Include:
  - Unit tests &lt;span class="k"&gt;for &lt;/span&gt;the LicensePool service methods
  - Integration tests &lt;span class="k"&gt;for &lt;/span&gt;the API endpoints
  - Specific tests &lt;span class="k"&gt;for &lt;/span&gt;the edge cases &lt;span class="k"&gt;in &lt;/span&gt;the plan file
  - Test the Redis seat counting under concurrent access
  Run them and fix any failures.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The AI writes tests that cover the implementation it just built — plus the edge cases from the plan file. Important nuance: I also ask it to write adversarial tests. "Try to break the concurrent checkout logic. What happens if Redis goes down mid-checkout? What if the heartbeat job crashes?" This catches things that happy-path tests miss.&lt;/p&gt;

&lt;p&gt;I won't pretend AI-generated tests are perfect. They tend to test what the AI built, which means they share the same mental model and can have blind spots. I do periodic manual exploratory testing and occasionally bring in real users for QA. But the baseline test coverage AI produces in minutes used to take days.&lt;/p&gt;
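&lt;p&gt;To make "adversarial" concrete, here is the shape of the concurrent-checkout test I ask for. This is a self-contained Python sketch against a hypothetical in-memory stand-in (the real service counts seats in Redis; &lt;code&gt;InMemoryLicensePool&lt;/code&gt; and its lock are illustrative, playing the role of Redis's atomicity):&lt;/p&gt;

```python
import threading

class InMemoryLicensePool:
    """Stand-in for the pool service so the test runs anywhere.
    The lock plays the role Redis atomic ops play in production."""
    def __init__(self, total_seats):
        self.total_seats = total_seats
        self.checkouts = []
        self._lock = threading.Lock()

    def checkout(self, user_id):
        # Atomic check-and-claim, mirroring Redis DECR semantics.
        with self._lock:
            if self.total_seats > len(self.checkouts):
                self.checkouts.append(user_id)
                return True
            return False  # pool exhausted: the API would return 409

def hammer_seats(pool, n_users):
    """Adversarial test: n_users race for the remaining seats."""
    results = []
    results_lock = threading.Lock()

    def attempt(uid):
        ok = pool.checkout(uid)
        with results_lock:
            results.append(ok)

    threads = [threading.Thread(target=attempt, args=(i,)) for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

pool = InMemoryLicensePool(total_seats=3)
results = hammer_seats(pool, n_users=50)
granted = sum(1 for r in results if r)
assert granted == 3, "pool was oversold under concurrent checkout"
assert len(pool.checkouts) == 3
```

&lt;p&gt;The property being asserted is the one that matters: no matter how many clients race, the number of granted seats never exceeds the pool size.&lt;/p&gt;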

&lt;h3&gt;
  
  
  Documentation: A Byproduct, Not an Afterthought
&lt;/h3&gt;

&lt;p&gt;This is the part that still amazes me. Documentation used to be the thing nobody wanted to do and everyone complained was outdated. Now it's generated automatically from the plan files and the code itself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt; Update the API documentation &lt;span class="k"&gt;for &lt;/span&gt;the license management endpoints.
  Generate it from the actual route definitions and the plan file.
  Include request/response examples from the &lt;span class="nb"&gt;test &lt;/span&gt;fixtures.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The docs stay current because they're generated from the source of truth — the code and the plan files. They can't drift out of date the way hand-written docs do, because when the code changes, the docs get regenerated.&lt;/p&gt;




&lt;h2&gt;
  
  
  3. The Repo Is the Project Management Tool
&lt;/h2&gt;

&lt;p&gt;Here's the paradigm shift that I think most teams haven't internalized yet. Once you adopt this workflow, your Git repository is no longer just where code lives. It &lt;strong&gt;IS&lt;/strong&gt; your project management system. Your knowledge base. Your documentation hub. Your decision log. Your onboarding guide. Everything.&lt;/p&gt;

&lt;p&gt;Here's what my repo structure looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;my-product/
├── src/                          # the product
├── tests/                        # AI-generated + manual regression
├── docs/                         # AI-generated, always current
│   ├── api/                      # endpoint documentation
│   ├── architecture/             # system design docs
│   └── guides/                   # user-facing help content
├── plans/
│   ├── features/                 # MD spec per feature (past + present)
│   │   ├── floating-license.md
│   │   ├── invoice-pdf-gen.md
│   │   └── ...
│   ├── bugs/                     # reported → investigated → resolved
│   │   ├── BUG-session-timeout-race.md
│   │   └── ...
│   └── decisions/                # architectural decision records
│       ├── ADR-001-redis-for-licensing.md
│       ├── ADR-002-auth0-over-custom-auth.md
│       └── ...
├── ROADMAP.md                    # living priority list
├── CHANGELOG.md                  # what shipped and when
├── CLAUDE.md                     # AI agent instructions for this repo
└── README.md                     # onboarding for next human or AI
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Every plan, every reported bug, every architectural decision, every feature spec — it all lives in the repo. It's versioned. It's searchable. It's diffable. It's branchable. And now — critically — it's queryable by AI agents in natural language.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;CLAUDE.md&lt;/code&gt; file (or &lt;code&gt;AGENTS.md&lt;/code&gt; in some setups) is worth highlighting. It's a file that gives AI coding agents context about your project — coding conventions, architecture patterns, how to run tests, what to avoid. Every AI agent that touches your repo reads this first. It's the onboarding document for your AI team members.&lt;/p&gt;
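&lt;p&gt;For a sense of what goes in it, here's a trimmed, hypothetical example; the sections and commands are illustrative, not a copy of my real file:&lt;/p&gt;

```markdown
# CLAUDE.md

## Project
Multi-tenant SaaS. Backend in src/, specs in plans/, docs in docs/.

## Conventions
- Follow the existing service/controller/repository pattern in src/
- Every feature starts from a plan file in plans/features/
- Update the plan file's Status line when you finish a phase

## Testing
- Run the full suite before committing: npm test
- New endpoints need unit and integration tests

## Avoid
- Do not touch the licensing tables without reading ADR-001 first
- No new dependencies without a new ADR in plans/decisions/
```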

&lt;p&gt;Think about what this means for the classic "bus factor" problem. If I get hit by a bus tomorrow, the next person (or AI agent) who picks up this project can:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clone the repo&lt;/li&gt;
&lt;li&gt;Read &lt;code&gt;README.md&lt;/code&gt; for orientation&lt;/li&gt;
&lt;li&gt;Read &lt;code&gt;ROADMAP.md&lt;/code&gt; to understand priorities&lt;/li&gt;
&lt;li&gt;Browse &lt;code&gt;plans/features/&lt;/code&gt; to see what's been built and what's queued&lt;/li&gt;
&lt;li&gt;Browse &lt;code&gt;plans/decisions/&lt;/code&gt; to understand &lt;em&gt;why&lt;/em&gt; things were built a certain way&lt;/li&gt;
&lt;li&gt;Browse &lt;code&gt;plans/bugs/&lt;/code&gt; to see known issues and their resolution history&lt;/li&gt;
&lt;li&gt;Read &lt;code&gt;CHANGELOG.md&lt;/code&gt; to understand what shipped recently&lt;/li&gt;
&lt;li&gt;Start working&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is more context than any Jira board, Confluence wiki, or sprint retro has ever provided. And it's all in one place, version-controlled, and maintained as a natural byproduct of the development workflow — not as a separate documentation tax.&lt;/p&gt;




&lt;h2&gt;
  
  
  4. Why Tickets Were Always a Bad Abstraction
&lt;/h2&gt;

&lt;p&gt;Now that I've described what replaced them, let me explain why I think tickets were always the wrong abstraction — even before AI.&lt;/p&gt;

&lt;p&gt;A Jira ticket is fundamentally a &lt;em&gt;communication artifact&lt;/em&gt;. It exists to transfer understanding from the person who knows what to build to the person who will build it. Every field is designed to solve a coordination problem:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Title/Description&lt;/strong&gt;: so someone else can understand the work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Story Points&lt;/strong&gt;: so managers can forecast timelines&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Assignee&lt;/strong&gt;: so we know who's doing it&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Status columns&lt;/strong&gt;: so we know where it is in the pipeline&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sprint&lt;/strong&gt;: so we batch work into time-boxed cycles&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Acceptance Criteria&lt;/strong&gt;: so QA knows when it's done&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Comments&lt;/strong&gt;: so the team can discuss asynchronously&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every one of these fields is a lossy compression of the real plan. The description never captures every edge case. The story points are a rough guess dressed up as science. The status column has at best 5 states for what's actually a continuous process. The acceptance criteria are a shadow of the real test suite.&lt;/p&gt;

&lt;p&gt;The plan file in the repo is the uncompressed version. It has everything — the context, the data model, the API contract, the edge cases, the decisions, the status, and it evolves with the work. The ticket was always a proxy. We just didn't have a better option.&lt;/p&gt;

&lt;p&gt;Now we do. And the proxy can go away.&lt;/p&gt;




&lt;h2&gt;
  
  
  5. The Documentation Structure That Actually Works
&lt;/h2&gt;

&lt;p&gt;I've iterated on the repo structure above through trial and error. Here are the patterns that work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feature Plan Files
&lt;/h3&gt;

&lt;p&gt;These should follow a consistent template:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Feature: [Name]&lt;/span&gt;

&lt;span class="gu"&gt;## Context&lt;/span&gt;
Why are we building this? What problem does it solve?

&lt;span class="gu"&gt;## Data Model&lt;/span&gt;
Tables, fields, relationships. Actual schema, not hand-wavy descriptions.

&lt;span class="gu"&gt;## API Contract (if applicable)&lt;/span&gt;
Endpoints, methods, request/response shapes.

&lt;span class="gu"&gt;## UI/UX (if applicable)&lt;/span&gt;
Key screens, user flows, interaction patterns.

&lt;span class="gu"&gt;## Edge Cases&lt;/span&gt;
The things that will bite you if you don't think about them now.

&lt;span class="gu"&gt;## Implementation Notes&lt;/span&gt;
Technical approach, libraries, patterns to follow.

&lt;span class="gu"&gt;## Dependencies&lt;/span&gt;
What needs to exist before this can be built?

&lt;span class="gu"&gt;## Status: [NOT STARTED | IN PROGRESS | COMPLETE | SHIPPED]&lt;/span&gt;
&lt;span class="gu"&gt;## Shipped: [date, or blank]&lt;/span&gt;
&lt;span class="gu"&gt;## Tests: [coverage status]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Bug Reports
&lt;/h3&gt;

&lt;p&gt;These should include reproduction steps, root cause analysis, and the fix — not just "it's broken":&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# BUG: Session timeout race condition&lt;/span&gt;

&lt;span class="gu"&gt;## Reported: 2026-02-15&lt;/span&gt;
&lt;span class="gu"&gt;## Severity: High&lt;/span&gt;
&lt;span class="gu"&gt;## Status: RESOLVED&lt;/span&gt;

&lt;span class="gu"&gt;## Symptoms&lt;/span&gt;
Users occasionally get logged out mid-action when their session
token refreshes at the same moment as an API call.

&lt;span class="gu"&gt;## Reproduction&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; Set token expiry to 30 seconds (for testing)
&lt;span class="p"&gt;2.&lt;/span&gt; Start a long-running action (e.g., save a large record)
&lt;span class="p"&gt;3.&lt;/span&gt; Wait until the token refresh fires mid-request
&lt;span class="p"&gt;4.&lt;/span&gt; Observe 401 response on the original request

&lt;span class="gu"&gt;## Root Cause&lt;/span&gt;
The refresh token endpoint was not atomic — there was a window
where the old token was invalidated but the new one hadn't been
stored yet. Any request during that window would fail.

&lt;span class="gu"&gt;## Fix&lt;/span&gt;
Implemented token overlap window: old token remains valid for 30s
after new token is issued. See commit abc1234.

&lt;span class="gu"&gt;## Regression Test&lt;/span&gt;
tests/integration/auth/session-refresh-race.test.ts
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That bug report is a permanent piece of institutional knowledge. Six months from now, when someone encounters a similar issue, they (or an AI agent) can search the repo and find not just &lt;em&gt;that&lt;/em&gt; it happened, but &lt;em&gt;why&lt;/em&gt; it happened and &lt;em&gt;how&lt;/em&gt; it was fixed. Try doing that with a closed Jira ticket.&lt;/p&gt;
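&lt;p&gt;The overlap-window fix itself is small enough to sketch. Here's the idea in Python (names like &lt;code&gt;TokenStore&lt;/code&gt; and &lt;code&gt;OVERLAP_SECONDS&lt;/code&gt; are illustrative, not the actual code from the fix in commit abc1234):&lt;/p&gt;

```python
import time

OVERLAP_SECONDS = 30  # the grace window from the bug report's fix

class TokenStore:
    """Keep the previous token valid briefly after rotation."""
    def __init__(self):
        self.current = None
        self.previous = None
        self.rotated_at = 0.0

    def rotate(self, new_token, now=None):
        now = time.time() if now is None else now
        # Instead of invalidating the old token outright, keep it.
        self.previous = self.current
        self.current = new_token
        self.rotated_at = now

    def is_valid(self, token, now=None):
        now = time.time() if now is None else now
        if token == self.current:
            return True
        # The old token stays valid during the overlap window, so a
        # request that raced the refresh no longer gets a spurious 401.
        if token == self.previous and OVERLAP_SECONDS > (now - self.rotated_at):
            return True
        return False

store = TokenStore()
store.rotate("tok-A", now=100.0)
store.rotate("tok-B", now=200.0)
assert store.is_valid("tok-B", now=210.0)      # current token
assert store.is_valid("tok-A", now=210.0)      # inside 30s overlap
assert not store.is_valid("tok-A", now=240.0)  # overlap expired
```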

&lt;h3&gt;
  
  
  Architectural Decision Records (ADRs)
&lt;/h3&gt;

&lt;p&gt;The most underrated practice in all of software engineering. They answer the question everyone asks but nobody documents: &lt;em&gt;why did we build it this way?&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# ADR-001: Redis for Real-Time License Seat Counting&lt;/span&gt;

&lt;span class="gu"&gt;## Date: 2026-02-18&lt;/span&gt;
&lt;span class="gu"&gt;## Status: Accepted&lt;/span&gt;

&lt;span class="gu"&gt;## Context&lt;/span&gt;
Floating licenses need real-time seat counting with sub-second
response times. Two users checking out the last seat simultaneously
must be handled atomically.

&lt;span class="gu"&gt;## Options Considered&lt;/span&gt;
&lt;span class="p"&gt;1.&lt;/span&gt; Postgres with SELECT FOR UPDATE → too slow under contention
&lt;span class="p"&gt;2.&lt;/span&gt; Redis INCR/DECR with Postgres as source of truth → fast + reliable
&lt;span class="p"&gt;3.&lt;/span&gt; In-memory counter in the app server → doesn't work multi-instance

&lt;span class="gu"&gt;## Decision&lt;/span&gt;
Option 2. Redis for real-time counting, Postgres for durability.
Reconciliation job runs every 5 minutes to catch any drift.

&lt;span class="gu"&gt;## Consequences&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Adds Redis as an infrastructure dependency
&lt;span class="p"&gt;-&lt;/span&gt; Need monitoring for Redis/Postgres drift
&lt;span class="p"&gt;-&lt;/span&gt; Checkout latency drops from ~200ms to ~5ms
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Without this, six months from now someone will look at the Redis dependency and ask "why don't we just use Postgres for this?" And nobody will remember the answer. With this, the answer is permanent, searchable, and versioned.&lt;/p&gt;
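&lt;p&gt;The Redis side of option 2 is worth sketching, because the atomic decrement is the whole point. Here's the checkout pattern in Python; &lt;code&gt;FakeRedis&lt;/code&gt; is a minimal stand-in so the sketch is runnable without a server, and the key naming is my own illustration, not the production schema:&lt;/p&gt;

```python
import threading

class FakeRedis:
    """Tiny stand-in for a Redis client: incr/decr/set only,
    each atomic, which is the guarantee real Redis provides."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def set(self, key, value):
        with self._lock:
            self._data[key] = int(value)

    def decr(self, key):
        with self._lock:
            self._data[key] = self._data.get(key, 0) - 1
            return self._data[key]

    def incr(self, key):
        with self._lock:
            self._data[key] = self._data.get(key, 0) + 1
            return self._data[key]

def checkout_seat(r, pool_id):
    """ADR-001 pattern: DECR first, undo if we went below zero.
    The decrement is atomic, so two clients racing for the last
    seat can never both observe a non-negative remainder."""
    remaining = r.decr(f"pool:{pool_id}:available")
    if remaining >= 0:
        return True  # seat claimed; Postgres write follows async
    r.incr(f"pool:{pool_id}:available")  # roll back the overshoot
    return False     # pool exhausted: 409 to the client

r = FakeRedis()
r.set("pool:42:available", 2)
assert checkout_seat(r, 42) is True
assert checkout_seat(r, 42) is True
assert checkout_seat(r, 42) is False  # third checkout refused
```

&lt;p&gt;The decrement-then-undo shape is what makes "two users checkout simultaneously when 1 seat left" safe without pessimistic locking.&lt;/p&gt;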




&lt;h2&gt;
  
  
  6. The Bold Predictions
&lt;/h2&gt;

&lt;p&gt;I'll borrow from Karpathy's honesty here and say these are personally held beliefs, not certainties. But I believe them strongly enough to bet my own workflow on them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Jira, Linear, and traditional PM tools will either pivot or become legacy.&lt;/strong&gt; The entire value proposition of ticket-based project management — create, estimate, assign, track, review — assumes human throughput as the bottleneck. When AI is the executor and the repo is the tracker, the ticket abstraction has no audience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Scrum Master role will disappear from most small-to-mid-size teams within three years.&lt;/strong&gt; Not because the people are bad — many are excellent facilitators. But because the coordination problems the role exists to solve are evaporating. The best Scrum Masters will evolve into what I'd call "AI workflow architects" — people who design and optimize the human-AI development loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Story points will be remembered the way we remember punch cards&lt;/strong&gt; — a quaint artifact of a constraint that no longer exists. When AI can execute a feature in hours, historical velocity is meaningless for forecasting. Teams will shift to appetite-based planning: how much time is this &lt;em&gt;worth&lt;/em&gt;, not how long will it &lt;em&gt;take&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Solo developers and tiny teams will outship departments.&lt;/strong&gt; This is already happening. The leverage that AI provides to one skilled person with clear product vision and domain expertise is staggering. A product leader who can work directly with AI coding agents will produce more than a 10-person team running Scrum with 30% of their time spent on ceremonies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The repository will become the universal system of record.&lt;/strong&gt; Code, plans, bugs, decisions, documentation, project status — all in one place, all versioned, all AI-queryable. The era of scattered tools (one for tickets, one for docs, one for code, one for chat, one for CI) is ending.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. What Agile Got Right (And What Dies)
&lt;/h2&gt;

&lt;p&gt;I want to be precise here because I'm not arguing against Agile's principles. The Agile Manifesto's four values are timeless:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Individuals and interactions over processes and tools&lt;/li&gt;
&lt;li&gt;Working software over comprehensive documentation&lt;/li&gt;
&lt;li&gt;Customer collaboration over contract negotiation&lt;/li&gt;
&lt;li&gt;Responding to change over following a plan&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;My AI-native workflow embodies these more purely than Scrum ever did. I'm shipping working software continuously. I'm responding to change in real time. I'm not hiding behind sprint boundaries or process overhead.&lt;/p&gt;

&lt;p&gt;What's dying is not Agile the philosophy. What's dying is &lt;strong&gt;Scrum the framework&lt;/strong&gt; — the ceremonies, the roles, the certifications, the two-week time boxes, and the entire industry of consultants, tools, and training that grew up around them.&lt;/p&gt;

&lt;p&gt;The irony is poetic: the methodology that told us to value "individuals and interactions over processes and tools" became the most process-heavy, tool-dependent methodology in the history of software engineering.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. The Wake-Up Call
&lt;/h2&gt;

&lt;p&gt;If you're a developer, here's the thing I wish someone had told me earlier: you don't need permission to stop doing something that doesn't work. The feeling I had in those sprint planning meetings — that something was wrong, that this overhead wasn't necessary, that the ceremonies had become cargo cult — that feeling was correct. I just didn't have an alternative to point to.&lt;/p&gt;

&lt;p&gt;Now I do. And so do you.&lt;/p&gt;

&lt;p&gt;Start small. Pick your next feature. Instead of creating a Jira ticket, write a markdown plan file. Put it in your repo. Use Claude Code or Cursor to help you spec it out, build it, test it, and document it. See how it feels. See how fast you ship.&lt;/p&gt;

&lt;p&gt;Then ask yourself honestly: did the sprint ceremony add anything that the plan file didn't?&lt;/p&gt;

&lt;p&gt;Karpathy wrote that "vibe coding will terraform software and alter job descriptions." I think he's right, but I'd go further: it's not just coding that's being terraformed. It's the entire development process — planning, estimating, tracking, documenting, and communicating. All of it is being compressed into a tighter, faster loop where the repo is the single source of truth and AI is both the builder and the librarian.&lt;/p&gt;

&lt;p&gt;The developers who figure this out first won't just be faster. They'll be operating in a fundamentally different paradigm. And catching up to a paradigm shift is much harder than catching up to a competitor.&lt;/p&gt;

&lt;p&gt;The future of software development isn't a better framework. It's no framework at all — just a well-organized repo, a clear vision, and an AI that never gets tired.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;TLDR.&lt;/strong&gt; I never believed in Scrum, even before AI proved me right. My current workflow — AI-planned, AI-executed, AI-tested, AI-documented, all tracked in repo markdown files — ships faster than any sprint ever did. The repo is the project management tool. Tickets are dead. Ceremonies are dead. The Agile &lt;em&gt;principles&lt;/em&gt; survive, but the Scrum &lt;em&gt;framework&lt;/em&gt; doesn't. If you're still arguing about story points in a Monday morning meeting, you're playing a game that already ended.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Hossein Najmi — Head of Product at ScaffPlan&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
