<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: snehasankaran</title>
    <description>The latest articles on DEV Community by snehasankaran (@snehasankaran).</description>
    <link>https://dev.to/snehasankaran</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3849963%2Fd08a9fad-16f9-4115-a3bd-810ffe1c3f37.png</url>
      <title>DEV Community: snehasankaran</title>
      <link>https://dev.to/snehasankaran</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/snehasankaran"/>
    <language>en</language>
    <item>
      <title>Agile Sprint Orchestrator-Agents for Impact</title>
      <dc:creator>snehasankaran</dc:creator>
      <pubDate>Sun, 29 Mar 2026 19:34:24 +0000</pubDate>
      <link>https://dev.to/snehasankaran/agile-sprint-orchestrator-agents-for-impact-272</link>
      <guid>https://dev.to/snehasankaran/agile-sprint-orchestrator-agents-for-impact-272</guid>
      <description>&lt;h1&gt;
  
  
  We Built 5 AI Agents That Run Your Entire Sprint -- Here's What Happened
&lt;/h1&gt;

&lt;p&gt;Every two weeks, the same cycle repeats. Tickets enter the sprint half-baked. The team over-commits. PRs merge without anyone checking acceptance criteria. The retro produces the same three action items as last time. Nobody follows up. Next sprint, repeat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;We asked: what if AI didn't just assist Agile teams -- what if it actually ran the ceremonies?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not a chatbot. Not a summary tool. A system of specialized agents that refine your backlog, plan your sprint, evaluate every deliverable, run the retro, and &lt;em&gt;remember what happened&lt;/em&gt; so next sprint is better than this one.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; We built a multi-agent AI orchestrator that automates all 7 Agile sprint phases -- from backlog refinement to intelligence reporting. Each agent has its own dashboard. A central orchestrator coordinates them, maintains cross-sprint memory, and generates strategic insights for POs and Scrum Masters.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  The Pain
&lt;/h2&gt;

&lt;p&gt;If you've worked in Agile, you know these:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Backlog chaos.&lt;/strong&gt; Tickets enter sprints without acceptance criteria, estimates, or dependency mapping. Teams discover missing requirements mid-sprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Planning by gut feel.&lt;/strong&gt; Nobody connects actual capacity to historical velocity. The result: 30-40% spillover rates, sprint after sprint.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Invisible quality.&lt;/strong&gt; Work product quality is a mystery until the review demo. "Done" doesn't mean "done right."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Amnesia.&lt;/strong&gt; Every sprint starts from zero. The retro insights from three sprints ago? Gone. The same patterns repeat for months.&lt;/p&gt;

&lt;p&gt;The data to fix all of this exists -- in JIRA, GitHub, Teams transcripts, and team memory. But nothing connects it.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Idea
&lt;/h2&gt;

&lt;p&gt;What if every Agile ceremony had a dedicated AI agent, and a central brain coordinated all of them?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;flowchart LR
    B["Backlog Agent\n:3000"] --&amp;gt; P["Planning Agent\n:3020"]
    P --&amp;gt; D["Dev + Standup Agent\n:4040"]
    D --&amp;gt; R["Review + Retro Agent\n:5050"]
    R --&amp;gt; O["Orchestrator\n:6060"]
    O --&amp;gt;|"cross-sprint\nmemory"| B
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Five services. Seven phases. One pipeline. Real-time streaming. Persistent memory across sprints.&lt;/p&gt;




&lt;h2&gt;
  
  
  What Each Agent Actually Does
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Backlog Agent -- &lt;em&gt;For the Product Owner&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Takes raw requirements (pasted, CSV, or fetched from JIRA) and transforms them into sprint-ready tickets. Validates against JSON schema, estimates story points from historical similarity, detects dependencies, and flags risks. Every decision is audit-logged.&lt;/p&gt;
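&lt;p&gt;As a rough sketch of what a "sprint-ready" check can look like (the field names here are invented for illustration, not the project's actual schema):&lt;/p&gt;

```javascript
// Hypothetical sketch of a sprint-readiness check. The field names
// (summary, acceptanceCriteria, storyPoints) are illustrative only.
const REQUIRED_FIELDS = ["summary", "acceptanceCriteria", "storyPoints"];

function validateTicket(ticket) {
  const missing = REQUIRED_FIELDS.filter(
    (field) => ticket[field] === undefined
  );
  // Surface what is missing instead of letting the ticket enter the sprint.
  return { ready: missing.length === 0, missing };
}

console.log(validateTicket({ summary: "Add login" }).missing);
// logs the two absent fields: acceptanceCriteria, storyPoints
```

The point is that a ticket never silently enters the sprint half-baked: the gaps are listed and pushed back to the PO dashboard.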

&lt;h3&gt;
  
  
  Sprint Planning Agent -- &lt;em&gt;For the Scrum Master&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Loads the refined backlog, reads historical velocity, and builds an optimal plan within team capacity. If past sprints show overcommitment, it automatically reduces suggested capacity. Uses Azure OpenAI + RAG to match tickets to team members.&lt;/p&gt;
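&lt;p&gt;The velocity-aware adjustment can be sketched in a few lines. This is an illustrative calculation, not the agent's actual algorithm:&lt;/p&gt;

```javascript
// Illustrative velocity-aware capacity suggestion (not the agent's actual
// algorithm). history holds one { committed, completed } entry per past sprint.
function suggestCapacity(history) {
  const ratios = history.map((s) => s.completed / s.committed);
  const avgRatio = ratios.reduce((a, b) => a + b, 0) / ratios.length;
  const lastCommitted = history[history.length - 1].committed;
  // Cap at 1 so one lucky sprint never inflates the suggestion.
  return Math.round(lastCommitted * Math.min(avgRatio, 1));
}

const history = [
  { committed: 40, completed: 30 },
  { committed: 40, completed: 32 },
  { committed: 40, completed: 34 },
];
console.log(suggestCapacity(history)); // 32 -- a 20% cut from gut feel
```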

&lt;h3&gt;
  
  
  Iterative Dev + Standup Agent -- &lt;em&gt;For the Team&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Processes daily standup transcripts (pasted or fetched from Microsoft Teams) and extracts per-developer updates, blockers, and at-risk tickets. Evaluates work products against acceptance criteria using a rule engine -- no LLM hallucination on pass/fail decisions.&lt;/p&gt;
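&lt;p&gt;A deterministic pass/fail engine can be as simple as a list of named predicates. The criteria below are invented for illustration:&lt;/p&gt;

```javascript
// Hedged sketch of a deterministic acceptance check -- plain predicates,
// no LLM anywhere in the pass/fail path. Criterion names are invented.
const rules = [
  { name: "has tests", check: (wp) => wp.testCount > 0 },
  { name: "tests pass", check: (wp) => wp.testFailures === 0 },
  { name: "PR linked", check: (wp) => Boolean(wp.prUrl) },
];

function evaluate(workProduct) {
  const failed = rules.filter((r) => !r.check(workProduct)).map((r) => r.name);
  return { pass: failed.length === 0, failed };
}

console.log(evaluate({ testCount: 12, testFailures: 0, prUrl: "https://example" }));
// pass: true, failed: []
```

Because every rule is a pure function, the same work product always gets the same verdict, and a failed check names exactly which criterion was missed.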

&lt;h3&gt;
  
  
  Review + Retro Agent -- &lt;em&gt;For PO and Scrum Master&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Evaluates every sprint deliverable through a 3-layer pipeline:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Rule Engine&lt;/strong&gt; -- deterministic acceptance criteria check&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Foundry Local&lt;/strong&gt; -- on-device AI extraction (phi model, zero data leaves the machine)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure OpenAI&lt;/strong&gt; -- LLM-powered decision with confidence scoring&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Then runs a data-driven retrospective: pattern detection, action items with owners, velocity charts, CSV export.&lt;/p&gt;
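&lt;p&gt;Conceptually, the cascade looks like this. The layer functions are stand-ins; in the real system they would call the rule engine, Foundry Local, and Azure OpenAI:&lt;/p&gt;

```javascript
// Conceptual sketch of the 3-layer cascade. The layer functions below are
// stand-ins for the real rule engine, Foundry Local, and Azure OpenAI calls.
async function reviewDeliverable(item, layers) {
  // Layer 1: deterministic rules veto first -- no model overrides a hard fail.
  const ruleResult = layers.ruleEngine(item);
  if (!ruleResult.pass) {
    return { verdict: "fail", source: "RuleEngine", confidence: 100 };
  }
  // Layer 2: on-device extraction (nothing leaves the machine).
  const facts = await layers.foundryLocal(item);
  // Layer 3: the cloud LLM decides, with a confidence score attached.
  const decision = await layers.azureLLM(item, facts);
  return { ...decision, source: "AzureLLM" };
}

// Stubs so the sketch runs end to end.
const stubLayers = {
  ruleEngine: (item) => ({ pass: item.criteriaMet }),
  foundryLocal: async (item) => ({ summary: item.title }),
  azureLLM: async () => ({ verdict: "pass", confidence: 87 }),
};

reviewDeliverable({ title: "Login flow", criteriaMet: false }, stubLayers)
  .then((r) => console.log(r.source)); // logs: RuleEngine
```

The ordering is the design choice: cheap deterministic checks short-circuit before any model is invoked, so the LLM only sees deliverables that already cleared the hard criteria.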

&lt;h3&gt;
  
  
  The Orchestrator -- &lt;em&gt;The Brain&lt;/em&gt;
&lt;/h3&gt;

&lt;p&gt;Coordinates all agents through a 7-phase pipeline (Backlog, Planning, Development, Review, Retro, Velocity, Intelligence). Maintains &lt;strong&gt;cross-sprint memory&lt;/strong&gt; -- velocity, completion rates, retro actions, recurring patterns. Each sprint learns from the last.&lt;/p&gt;
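&lt;p&gt;A minimal sketch of that coordination, assuming each phase is an async handler that reads and writes shared memory (the handler bodies here are placeholders):&lt;/p&gt;

```javascript
// Minimal sketch of sequential phase orchestration over shared memory.
// Phase names are from the article; the handlers below are placeholders.
const PHASES = [
  "Backlog", "Planning", "Development", "Review",
  "Retro", "Velocity", "Intelligence",
];

async function runFullCycle(handlers, memory) {
  for (const phase of PHASES) {
    // Each phase sees everything earlier phases wrote, then appends its own
    // result -- the same mechanism that enables cross-sprint learning.
    memory[phase] = await handlers[phase](memory);
  }
  return memory;
}

// Placeholder handlers that just record how much context they received.
const handlers = Object.fromEntries(
  PHASES.map((p) => [
    p,
    async (mem) => `${p} done (saw ${Object.keys(mem).length} prior phases)`,
  ])
);

runFullCycle(handlers, {}).then((m) => console.log(m.Intelligence));
// logs: Intelligence done (saw 6 prior phases)
```

Persist `memory` at the end of the cycle and feed it back in as the starting state of the next one, and you have cross-sprint memory in miniature.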




&lt;h2&gt;
  
  
  The Differentiator: Sprints That Learn
&lt;/h2&gt;

&lt;p&gt;Most tools treat each sprint as isolated. We don't.&lt;/p&gt;

&lt;p&gt;After every cycle, the orchestrator persists what happened -- which tickets spilled over, which retro items were never addressed, how estimates compared to actuals. The next sprint's planning phase automatically consumes this context.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;After 3 cycles, the system suggested reducing capacity by 15% because historical data showed consistent overcommitment. No human asked for this. The memory surfaced it automatically.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It also detects &lt;strong&gt;cross-phase correlations&lt;/strong&gt; that humans miss:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Overcommitment in planning correlating with spillover in review&lt;/li&gt;
&lt;li&gt;Recurring retro patterns that were never addressed&lt;/li&gt;
&lt;li&gt;Estimation drift on certain ticket types&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An &lt;strong&gt;AI Manager&lt;/strong&gt; layer evaluates team performance trends across sprints -- velocity, quality, predictability, and action follow-through -- giving POs and Scrum Masters a strategic view.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Bug That Almost Killed the Demo
&lt;/h2&gt;

&lt;p&gt;Here's a war story. During testing, every sprint showed &lt;strong&gt;0% completion&lt;/strong&gt;. Every ticket failed. We spent hours debugging.&lt;/p&gt;

&lt;p&gt;The root cause? One character.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// BEFORE: 0 is falsy in JavaScript, so 0 || 100 = 100 (failure!)&lt;/span&gt;
&lt;span class="nc"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;review&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;testFailureRatePercent&lt;/span&gt; &lt;span class="o"&gt;||&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// AFTER: ?? only falls back on null/undefined, not 0&lt;/span&gt;
&lt;span class="nc"&gt;Number&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;review&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;metrics&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nx"&gt;testFailureRatePercent&lt;/span&gt; &lt;span class="o"&gt;??&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;===&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A &lt;code&gt;testFailureRatePercent&lt;/code&gt; of &lt;code&gt;0&lt;/code&gt; (perfect score) was being treated as &lt;code&gt;100&lt;/code&gt; (total failure) because &lt;code&gt;0&lt;/code&gt; is falsy in JavaScript. Changing &lt;code&gt;||&lt;/code&gt; to &lt;code&gt;??&lt;/code&gt; -- one character -- fixed the entire system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Lesson:&lt;/strong&gt; Use nullish coalescing (&lt;code&gt;??&lt;/code&gt;) for numeric defaults, where &lt;code&gt;0&lt;/code&gt; is a legitimate value. &lt;code&gt;||&lt;/code&gt; will betray you.&lt;/p&gt;
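&lt;p&gt;You can reproduce the failure mode in isolation:&lt;/p&gt;

```javascript
// Runnable reproduction of the one-character bug.
const metrics = { testFailureRatePercent: 0 }; // a perfect test run

const withOr = metrics.testFailureRatePercent || 100; // 0 is falsy: becomes 100
const withNullish = metrics.testFailureRatePercent ?? 100; // 0 survives

console.log(withOr); // 100 -- perfect run misread as total failure
console.log(withNullish); // 0 -- correct
```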




&lt;h2&gt;
  
  
  Responsible AI -- Not an Afterthought
&lt;/h2&gt;

&lt;p&gt;Every AI decision in the system includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Transparency&lt;/strong&gt; -- Each output lists its data sources (RuleEngine, FoundryLocal, AzureLLM, RAG) and a confidence score (0-100). Nothing is a black box.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Safety&lt;/strong&gt; -- All inputs are sanitized. Rate limiting at 60 req/min. RBAC with public/supervisor/admin roles. LLM outputs are validated before display.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Accountability&lt;/strong&gt; -- Per-agent audit logs with timestamps. A Responsible AI Dashboard aggregates events, flags, and data sources across all agents.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Offline Mode&lt;/strong&gt; -- One toggle switches the entire system to Foundry Local + Ollama. Zero data leaves the machine. Critical for enterprises with sensitive sprint data.&lt;/li&gt;
&lt;/ul&gt;
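&lt;p&gt;For intuition, the 60 req/min limit can be modeled as a fixed-window counter. The real services presumably use rate-limiting middleware; this sketch only shows the idea:&lt;/p&gt;

```javascript
// Fixed-window sketch of a 60 req/min limit. Illustrative only -- the real
// services presumably use off-the-shelf middleware.
function makeRateLimiter(maxPerMinute) {
  let windowStart = Date.now();
  let count = 0;
  return function allow(now = Date.now()) {
    // Start a fresh window once a full minute has elapsed.
    if (now - windowStart >= 60_000) {
      windowStart = now;
      count = 0;
    }
    count += 1;
    if (count > maxPerMinute) return false;
    return true;
  };
}

const allow = makeRateLimiter(60);
const t0 = Date.now();
let granted = 0;
let remaining = 100;
while (remaining > 0) {
  remaining -= 1;
  if (allow(t0)) granted += 1; // all 100 calls land in one window
}
console.log(granted); // 60
```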




&lt;h2&gt;
  
  
  Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Runtime:&lt;/strong&gt; Node.js 20+, Express.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM (Cloud):&lt;/strong&gt; Azure OpenAI GPT-4o via LangChain.js&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;LLM (Local):&lt;/strong&gt; Microsoft Foundry Local (phi model)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Embeddings:&lt;/strong&gt; Ollama nomic-embed-text for RAG&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protocol:&lt;/strong&gt; MCP (Model Context Protocol) -- 11 tools exposed to VS Code / Copilot / Claude Desktop&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Frontend:&lt;/strong&gt; React 18 (CDN) + Chart.js, unified dark theme across 5 dashboards&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Integrations:&lt;/strong&gt; JIRA Cloud, GitHub REST, Microsoft Graph&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deploy:&lt;/strong&gt; Azure Developer CLI (azd) + Container Apps + Bicep IaC&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Try It Yourself
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/snehasankaran/agile-sprint-orchestrator.git
&lt;span class="nb"&gt;cd &lt;/span&gt;agile-sprint-orchestrator
npm &lt;span class="nb"&gt;install
cp&lt;/span&gt; .env.example .env   &lt;span class="c"&gt;# Add your API keys&lt;/span&gt;

&lt;span class="c"&gt;# Start all 5 services&lt;/span&gt;
node backlog_agent_final.js       &lt;span class="c"&gt;# :3000&lt;/span&gt;
node sprint_planning_agent.js     &lt;span class="c"&gt;# :3020&lt;/span&gt;
node iterative_standup_agent.js   &lt;span class="c"&gt;# :4040&lt;/span&gt;
node review_agent.js              &lt;span class="c"&gt;# :5050&lt;/span&gt;
node orchestrator.js              &lt;span class="c"&gt;# :6060&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;http://localhost:6060&lt;/code&gt;. Click &lt;strong&gt;Run Full Cycle&lt;/strong&gt;. Watch all 7 phases execute in real time.&lt;/p&gt;




&lt;h2&gt;
  
  
  Built With AI
&lt;/h2&gt;

&lt;p&gt;This project was built with &lt;strong&gt;Cursor (Claude)&lt;/strong&gt; throughout -- architecture, code generation, debugging, and this blog post. Our approach:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Started with high-level intent ("build a sprint review agent") and iterated with focused follow-ups&lt;/li&gt;
&lt;li&gt;Used AI for debugging: the &lt;code&gt;??&lt;/code&gt; bug, proxy issues, and cross-sprint memory parsing were all diagnosed through iterative prompting&lt;/li&gt;
&lt;li&gt;Broke complex tasks into small, specific prompts -- this worked better than one large "build everything" prompt&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What's Next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;WebSocket for bidirectional communication (replacing one-way SSE)&lt;/li&gt;
&lt;li&gt;Proper event bus (Redis/NATS) for production scale&lt;/li&gt;
&lt;li&gt;End-to-end integration tests across all 5 agents&lt;/li&gt;
&lt;li&gt;Fine-tuned local models for domain-specific evaluation&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Links
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;GitHub:&lt;/strong&gt; &lt;a href="https://github.com/snehasankaran/agile-sprint-orchestrator" rel="noopener noreferrer"&gt;snehasankaran/agile-sprint-orchestrator&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Video Demo:&lt;/strong&gt; &lt;em&gt;(add YouTube/Vimeo link)&lt;/em&gt;
&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;&lt;em&gt;Built for the JavaScript AI Build-a-thon Hack 2026 -- Agents for Impact&lt;/em&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>showdev</category>
    </item>
  </channel>
</rss>
