<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Alexander Velikiy</title>
    <description>The latest articles on DEV Community by Alexander Velikiy (@great_cto).</description>
    <link>https://dev.to/great_cto</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2992733%2F562b24e1-823c-4331-b706-2e6cbdf9cb64.jpg</url>
      <title>DEV Community: Alexander Velikiy</title>
      <link>https://dev.to/great_cto</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/great_cto"/>
    <language>en</language>
    <item>
      <title>AI dropped my per-feature ship time from 3 days to 3 hours. Here's the actual stack.</title>
      <dc:creator>Alexander Velikiy</dc:creator>
      <pubDate>Sat, 16 May 2026 12:51:54 +0000</pubDate>
      <link>https://dev.to/great_cto/ai-dropped-my-per-feature-ship-time-from-3-days-to-3-hours-heres-the-actual-stack-2d91</link>
      <guid>https://dev.to/great_cto/ai-dropped-my-per-feature-ship-time-from-3-days-to-3-hours-heres-the-actual-stack-2d91</guid>
      <description>&lt;p&gt;I keep getting the same DM:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"Cool, but does AI actually speed up shipping or is this just hype?"&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;So here's the table from one MVP build that ended last quarter. Numbers measured, not vibed.&lt;/p&gt;

&lt;h2&gt;
  
  
  Per-feature time, with and without agents
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Activity&lt;/th&gt;
&lt;th&gt;Traditional senior team&lt;/th&gt;
&lt;th&gt;With agentic SDLC&lt;/th&gt;
&lt;th&gt;Speedup&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Plan a feature (ARCH doc + tasks)&lt;/td&gt;
&lt;td&gt;2–4h human discussion&lt;/td&gt;
&lt;td&gt;15 min (architect agent + &lt;code&gt;gate:plan&lt;/code&gt;)&lt;/td&gt;
&lt;td&gt;~10×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code a small feature&lt;/td&gt;
&lt;td&gt;1–3 days senior dev&lt;/td&gt;
&lt;td&gt;1–2h human review of agent output&lt;/td&gt;
&lt;td&gt;~10–15×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Code review&lt;/td&gt;
&lt;td&gt;2–4h, async over 1–2 days&lt;/td&gt;
&lt;td&gt;30 min (5 reviewers in parallel)&lt;/td&gt;
&lt;td&gt;~10×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;QA / test suite&lt;/td&gt;
&lt;td&gt;1 day&lt;/td&gt;
&lt;td&gt;15 min (qa-engineer agent + spot check)&lt;/td&gt;
&lt;td&gt;~25×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deploy (canary + monitoring)&lt;/td&gt;
&lt;td&gt;~4h&lt;/td&gt;
&lt;td&gt;~10 min (auto-canary)&lt;/td&gt;
&lt;td&gt;~25×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;End-to-end per feature&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~3–5 days&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~3–5 hours&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~10×&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Shipping one feature drops from &lt;em&gt;"we'll have it next week"&lt;/em&gt; to &lt;em&gt;"we'll have it after lunch."&lt;/em&gt; For a real working developer, that's the metric that matters more than any "55% cost reduction" headline.&lt;/p&gt;

&lt;h2&gt;
  
  
  The full MVP picture
&lt;/h2&gt;

&lt;p&gt;OK, but a single-feature speedup doesn't necessarily mean the MVP ships faster. Sometimes you just spend the savings on more reviews. So here's the end-to-end:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Work area&lt;/th&gt;
&lt;th&gt;Traditional (1 PM + 4 eng, ~3 months)&lt;/th&gt;
&lt;th&gt;With agents + voice-pack (1 PM + 2 eng + agents, ~6–8 weeks)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Architecture + ADRs&lt;/td&gt;
&lt;td&gt;~$20K&lt;/td&gt;
&lt;td&gt;~$10K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Backend (Twilio, OpenAI, call routing)&lt;/td&gt;
&lt;td&gt;~$80K&lt;/td&gt;
&lt;td&gt;~$30K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Frontend (operator dashboard)&lt;/td&gt;
&lt;td&gt;~$40K&lt;/td&gt;
&lt;td&gt;~$15K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Database + migrations&lt;/td&gt;
&lt;td&gt;~$15K&lt;/td&gt;
&lt;td&gt;~$5K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Test suite + QA&lt;/td&gt;
&lt;td&gt;~$25K&lt;/td&gt;
&lt;td&gt;~$10K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Security review + pen test&lt;/td&gt;
&lt;td&gt;~$20K&lt;/td&gt;
&lt;td&gt;~$15K (external pen test still required)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Compliance (voice-pack)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$42K&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$22K&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Deployment + CI/CD&lt;/td&gt;
&lt;td&gt;~$15K&lt;/td&gt;
&lt;td&gt;~$8K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Documentation&lt;/td&gt;
&lt;td&gt;~$10K&lt;/td&gt;
&lt;td&gt;~$3K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;PM + buffer&lt;/td&gt;
&lt;td&gt;~$20K&lt;/td&gt;
&lt;td&gt;~$10K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$287K&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$128K&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;LLM compute&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;$0&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~$500–$1,500&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Wall-clock&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;~3 months&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;~6–8 weeks&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Headcount&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1 PM + 4 engineers&lt;/td&gt;
&lt;td&gt;1 PM + 2 engineers + agents&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;Cost saving: ~55%. Time saving: ~40–50%. Headcount: 4 → 2 (not 0).&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Two honest details for working devs:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;LLM cost across the whole MVP is $500–$1,500.&lt;/strong&gt; That's not a few cents – it's four-figure money burned across architecture drafting, code generation, parallel reviewers, deployment automation, and the memory feedback loop. Don't compare a single agent prompt to the full build.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;You still need engineers.&lt;/strong&gt; "2 engineers + agents" means real humans operating the pipeline, reviewing agent output, fixing the bugs agents create, integrating Twilio (or whatever), and shipping the code. The startup that ships an MVP with zero humans in 2026 doesn't exist.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  What are the agents actually doing?
&lt;/h2&gt;

&lt;p&gt;This is the part where most posts wave hands. The reality: thirty-four specialist agents, eight stages, two human gates per feature. Architecture diagram here: &lt;a href="https://greatcto.systems/architecture" rel="noopener noreferrer"&gt;greatcto.systems/architecture&lt;/a&gt; – every box in the SVG links to that agent's source on GitHub.&lt;/p&gt;

&lt;p&gt;Daily-driver agents you'll see fire most:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;architect&lt;/strong&gt; – drafts ARCH.md + ADR + cost estimate before &lt;code&gt;gate:plan&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;pm&lt;/strong&gt; – decomposes into beads tasks with explicit dependencies, parallel-friendly&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;senior-dev&lt;/strong&gt; (×N) – claims a task, TDD, isolated worktree, ships diff&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;qa-engineer&lt;/strong&gt; – type-check + lint + tests + coverage&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;security-officer&lt;/strong&gt; – OWASP, CVE scan, secret detection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;code-reviewer&lt;/strong&gt; – 12-angle review on the final diff&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;devops&lt;/strong&gt; – canary + health checks + auto-rollback&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;l3-support&lt;/strong&gt; – production triage + postmortem&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;continuous-learner&lt;/strong&gt; – extracts lessons → &lt;code&gt;.great_cto/lessons.md&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Plus 26 archetype-specific reviewers that fire only when their domain triggers – voice-AI, healthcare, fintech, robotics, etc. The point isn't 34 always-on agents. The point is that 5–7 fire on any given PR, and which ones fire depends on what your repo looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  The compliance packs (10 of them)
&lt;/h2&gt;

&lt;p&gt;If you ship into a regulated industry, agentic SDLC alone isn't enough – you also need reviewer agents that know your domain and which gates to wire. Hence: packs.&lt;/p&gt;

&lt;p&gt;A pack triggers on industry signals in your repo (e.g. &lt;code&gt;twilio&lt;/code&gt; in &lt;code&gt;package.json&lt;/code&gt; → voice-pack). It attaches a specialist reviewer agent, generates a threat model, and wires named human gates. One line each:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;voice-pack&lt;/strong&gt; – &lt;code&gt;twilio&lt;/code&gt;, &lt;code&gt;livekit&lt;/code&gt;, &lt;code&gt;deepgram&lt;/code&gt;, &lt;code&gt;elevenlabs&lt;/code&gt; → TCPA + state recording consent + STIR/SHAKEN + PCI redaction&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;clinical-pack&lt;/strong&gt; – &lt;code&gt;clinical&lt;/code&gt;, &lt;code&gt;PHI&lt;/code&gt;, &lt;code&gt;SaMD&lt;/code&gt;, &lt;code&gt;CDS&lt;/code&gt; → FDA SaMD classification + HIPAA + 21 CFR Part 11&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;hr-ai-pack&lt;/strong&gt; – &lt;code&gt;recruit&lt;/code&gt;, &lt;code&gt;candidate&lt;/code&gt;, &lt;code&gt;ATS&lt;/code&gt; → NYC LL 144 AEDT bias audit + EEOC + EU AI Act Annex III&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;api-platform-pack&lt;/strong&gt; – &lt;code&gt;REST&lt;/code&gt;, &lt;code&gt;GraphQL&lt;/code&gt;, &lt;code&gt;webhook&lt;/code&gt;, &lt;code&gt;OpenAPI&lt;/code&gt; → OAuth 2.1 + RFC 8594 Sunset + HMAC webhook signing + idempotency&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;lending-pack&lt;/strong&gt; – &lt;code&gt;loan&lt;/code&gt;, &lt;code&gt;BNPL&lt;/code&gt;, &lt;code&gt;credit&lt;/code&gt;, &lt;code&gt;FCRA&lt;/code&gt;, &lt;code&gt;ECOA&lt;/code&gt; → ECOA Reg B adverse-action + BISG fair-lending + NMLS state matrix&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;clinical-trials-pack&lt;/strong&gt; – &lt;code&gt;CTMS&lt;/code&gt;, &lt;code&gt;EDC&lt;/code&gt;, &lt;code&gt;eConsent&lt;/code&gt;, &lt;code&gt;FHIR&lt;/code&gt;, &lt;code&gt;HL7&lt;/code&gt; → ICH-GCP + Part 11 audit trail + CDISC + IRB-ready&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;robotics-pack&lt;/strong&gt; – &lt;code&gt;cobot&lt;/code&gt;, &lt;code&gt;ROS 2&lt;/code&gt;, &lt;code&gt;surgical robot&lt;/code&gt; → ISO 10218 + IEC 61508 + HARA + SROS2&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;em-fintech-pack&lt;/strong&gt; – &lt;code&gt;RBI&lt;/code&gt;, &lt;code&gt;CBN&lt;/code&gt;, &lt;code&gt;BSP&lt;/code&gt;, &lt;code&gt;UPI&lt;/code&gt;, &lt;code&gt;PIX&lt;/code&gt;, &lt;code&gt;M-Pesa&lt;/code&gt; → India DPDP + cross-border + license strategy&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;climate-pack&lt;/strong&gt; – &lt;code&gt;Verra&lt;/code&gt;, &lt;code&gt;Gold Standard&lt;/code&gt;, &lt;code&gt;Scope 1/2/3&lt;/code&gt;, &lt;code&gt;CDP&lt;/code&gt;, &lt;code&gt;CSRD&lt;/code&gt; → MRV methodology + biosecurity&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;drug-discovery-pack&lt;/strong&gt; – &lt;code&gt;binding affinity&lt;/code&gt;, &lt;code&gt;ADMET&lt;/code&gt;, &lt;code&gt;AlphaFold&lt;/code&gt;, &lt;code&gt;LIMS&lt;/code&gt;, &lt;code&gt;GLP&lt;/code&gt; → applicability domain + IQ/OQ/PQ + ALCOA+&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Each pack adds 1–4 reviewer agents, named human gates, eval fixtures, and a required-artefact list. Full breakdown with company catalogues at &lt;a href="https://greatcto.systems/packs" rel="noopener noreferrer"&gt;greatcto.systems/packs&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  How detection works (the part HN readers will ask)
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nl"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;voice-pack&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;signals&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;deps&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;twilio&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;@livekit/agents&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;deepgram-sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="nx"&gt;keywords&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;voice agent&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;IVR&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;phone tree&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="nx"&gt;files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;    &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;twilio.config.*&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;livekit.yaml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="nx"&gt;attaches&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nl"&gt;archetypes&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ai-system&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;agent-product&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="nx"&gt;reviewer&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;   &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;voice-ai-reviewer&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nx"&gt;gates&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;      &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gate:voice-compliance&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Exact-match keyword scanning, not fuzzy substring. &lt;code&gt;'twilio'&lt;/code&gt; matches &lt;code&gt;'twilio'&lt;/code&gt; in &lt;code&gt;dependencies&lt;/code&gt;, not &lt;code&gt;'twilio-helpers'&lt;/code&gt; in a README. That keeps false-positive pack attachment under 1%.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Confession on that 1%: v0.1 did fuzzy substring matching and voice-pack triggered on a static-site-generator repo whose README said "we explicitly do not use Twilio." Spent an hour wondering why a blog generator was getting a TCPA threat model. Also, I shipped voice-pack without &lt;code&gt;'phone'&lt;/code&gt; in the keyword list for two weeks. Two startups installed it, shipped voice features, the pack sat there politely without firing once. The boilerplate every new pack now starts from has a rule: &lt;em&gt;include the most obvious keyword first, not last.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
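&lt;p&gt;For illustration, the exact-match rule can be sketched like this – a hypothetical helper, not the actual &lt;code&gt;packs.ts&lt;/code&gt; source:&lt;/p&gt;

```typescript
// Hypothetical sketch of exact-match dependency detection (not the real
// packs.ts implementation): a dep signal fires only on an exact key match
// in package.json dependencies, never on a substring elsewhere.
interface DepSignals {
  deps: string[];
}

function depSignalFires(
  signals: DepSignals,
  pkg: { dependencies?: { [name: string]: string } },
): boolean {
  const installed = new Set(Object.keys(pkg.dependencies ?? {}));
  // 'twilio' matches the key 'twilio' exactly; 'twilio-helpers' does not fire it.
  return signals.deps.some((dep) => installed.has(dep));
}
```

&lt;p&gt;Under this sketch, a repo depending only on &lt;code&gt;twilio-helpers&lt;/code&gt; does not trigger voice-pack; a repo depending on &lt;code&gt;twilio&lt;/code&gt; does.&lt;/p&gt;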

&lt;p&gt;Packs stack additively. &lt;code&gt;twilio&lt;/code&gt; + &lt;code&gt;stripe&lt;/code&gt; + &lt;code&gt;livekit&lt;/code&gt; → &lt;code&gt;voice-pack&lt;/code&gt; + &lt;code&gt;commerce-pack&lt;/code&gt;. If two packs name the same gate, the kernel dedupes by name. Reviewers run in parallel on the same PR; verdicts aggregate to one APPROVED / BLOCKED chip at &lt;code&gt;gate:ship&lt;/code&gt;.&lt;/p&gt;
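&lt;p&gt;That stacking behaviour can be sketched as follows – hypothetical shapes, not the kernel's real code: gates merge and dedupe by name, and a single BLOCKED verdict blocks the chip:&lt;/p&gt;

```typescript
// Hypothetical sketch of additive pack stacking and verdict aggregation
// (illustrative only, not the kernel's actual implementation).
type Verdict = 'APPROVED' | 'BLOCKED';

interface Pack {
  name: string;
  gates: string[];
}

// Merge gates from all attached packs, deduping by gate name.
function mergeGates(packs: Pack[]): string[] {
  const seen = new Set();
  const merged: string[] = [];
  for (const pack of packs) {
    for (const gate of pack.gates) {
      if (!seen.has(gate)) {
        seen.add(gate);
        merged.push(gate);
      }
    }
  }
  return merged;
}

// Parallel reviewer verdicts collapse to one chip: any BLOCKED blocks.
function aggregate(verdicts: Verdict[]): Verdict {
  return verdicts.includes('BLOCKED') ? 'BLOCKED' : 'APPROVED';
}
```

&lt;p&gt;So voice-pack plus commerce-pack each naming &lt;code&gt;gate:ship&lt;/code&gt; yields one &lt;code&gt;gate:ship&lt;/code&gt;, not two.&lt;/p&gt;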

&lt;p&gt;Source: &lt;a href="https://github.com/avelikiy/great_cto/tree/main/skills/great_cto/packs" rel="noopener noreferrer"&gt;&lt;code&gt;skills/great_cto/packs/&lt;/code&gt;&lt;/a&gt;, &lt;a href="https://github.com/avelikiy/great_cto/blob/main/packages/cli/src/packs.ts" rel="noopener noreferrer"&gt;&lt;code&gt;packages/cli/src/packs.ts&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Install + try
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx great-cto init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Runs locally. MIT-licensed. You pay for your own LLM API usage. Works inside Claude Code, Cursor, OpenAI Codex CLI, Aider, and Continue via AGENTS.md + MCP.&lt;/p&gt;

&lt;p&gt;After init:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;/start &lt;span class="s2"&gt;"add a voice agent for restaurant order-taking"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Architect agent drafts ARCH doc. PM decomposes into beads tasks. &lt;code&gt;gate:plan&lt;/code&gt; waits for your approval. Then senior-dev agents claim tasks in parallel; 5 reviewer agents fan out on the resulting diff; &lt;code&gt;gate:ship&lt;/code&gt; waits for your approval again. Two clicks per feature. The rest runs unattended.&lt;/p&gt;
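&lt;p&gt;The stage order above can be sketched as a plain function – the agent and gate names come from this post, but the signatures and stubs are hypothetical, not the real CLI's API:&lt;/p&gt;

```typescript
// Illustrative sketch of the two-human-gate flow. Only the stage order and
// gate names mirror the post; the stub API here is made up for illustration.
type Approve = (gate: string) => boolean;

interface Stages {
  architect: (prompt: string) => string;              // drafts ARCH doc
  pm: (arch: string) => string[];                     // beads task decomposition
  seniorDevs: (tasks: string[]) => string;            // parallel impl, returns diff
  reviewers: (diff: string) => string[];              // 'APPROVED' / 'BLOCKED' verdicts
  deploy: (diff: string) => string;                   // auto-canary
}

function shipFeature(prompt: string, stages: Stages, approve: Approve): string {
  const arch = stages.architect(prompt);
  const tasks = stages.pm(arch);
  if (!approve('gate:plan')) return 'ABANDONED';      // human click #1
  const diff = stages.seniorDevs(tasks);
  if (stages.reviewers(diff).includes('BLOCKED')) return 'BLOCKED';
  if (!approve('gate:ship')) return 'ABANDONED';      // human click #2
  return stages.deploy(diff);                         // unattended from here
}
```

&lt;p&gt;The two &lt;code&gt;approve&lt;/code&gt; calls are the two clicks; everything else is agent work.&lt;/p&gt;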

&lt;h2&gt;
  
  
  What does NOT speed up
&lt;/h2&gt;

&lt;p&gt;The honest disclaimer, because it matters more than the speedup headline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;External audit cycles&lt;/strong&gt; still take their natural time (LL 144 auditor ~2–4 weeks, FDA pre-sub 60–90 days)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;IRB approval&lt;/strong&gt; still takes 2–3 months&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regulator meetings&lt;/strong&gt; still need to be scheduled&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Wet-lab validation&lt;/strong&gt; is still real biology&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;HARA signoff&lt;/strong&gt; is a single calendar moment a human owns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Anything requiring another organization to commit time runs at human speed. The LLM accelerates &lt;em&gt;your&lt;/em&gt; codebase and &lt;em&gt;your&lt;/em&gt; compliance discovery. It doesn't accelerate someone else's calendar.&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Per-feature time drops ~10× (3–5 days → 3–5 hours). MVP wall-clock drops ~40–50% (3 months → 6–8 weeks). Cost drops ~55%.&lt;/li&gt;
&lt;li&gt;LLM cost across the WHOLE MVP is $500–$1,500. Not free, not trivially cheap.&lt;/li&gt;
&lt;li&gt;Headcount drops 4 → 2 engineers + agents. Not 0. You still need humans.&lt;/li&gt;
&lt;li&gt;10 compliance packs cover voice-AI, clinical, HR-AI, API platforms, lending, clinical trials, robotics, EM fintech, climate-MRV, drug discovery.&lt;/li&gt;
&lt;li&gt;Architecture diagram: &lt;a href="https://greatcto.systems/architecture" rel="noopener noreferrer"&gt;greatcto.systems/architecture&lt;/a&gt;. One real run walked stage-by-stage: &lt;a href="https://greatcto.systems/proof" rel="noopener noreferrer"&gt;greatcto.systems/proof&lt;/a&gt;. MTTR benchmark methodology: &lt;a href="https://github.com/avelikiy/great_cto/blob/main/docs/benchmarks/MTTR.md" rel="noopener noreferrer"&gt;docs/benchmarks/MTTR.md&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Try: &lt;code&gt;npx great-cto init&lt;/code&gt;. ⭐ if useful: &lt;a href="https://github.com/avelikiy/great_cto" rel="noopener noreferrer"&gt;github.com/avelikiy/great_cto&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Full deep-dive with per-pack details + the realistic MVP economics breakdown + the runway math is on Hashnode: &lt;a href="https://avelikiy.hashnode.dev/ten-compliance-packs-for-ten-regulated-industries" rel="noopener noreferrer"&gt;Ten compliance packs for ten regulated industries&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>claudecode</category>
      <category>sdlc</category>
    </item>
  </channel>
</rss>
