<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Marat Kee</title>
    <description>The latest articles on DEV Community by Marat Kee (@marat_kiniabulatov_8432bd).</description>
    <link>https://dev.to/marat_kiniabulatov_8432bd</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3843608%2Ff51513ae-ff97-4bc1-8bda-e1d6257d0834.jpg</url>
      <title>DEV Community: Marat Kee</title>
      <link>https://dev.to/marat_kiniabulatov_8432bd</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/marat_kiniabulatov_8432bd"/>
    <language>en</language>
    <item>
      <title>How AI is Shrinking the SDLC: in greenfield, brownfield, and regulated industries</title>
      <dc:creator>Marat Kee</dc:creator>
      <pubDate>Wed, 01 Apr 2026 11:07:50 +0000</pubDate>
      <link>https://dev.to/marat_kiniabulatov_8432bd/how-ai-is-shrinking-the-sdlc-in-greenfield-brownfield-regulated-industries-542m</link>
      <guid>https://dev.to/marat_kiniabulatov_8432bd/how-ai-is-shrinking-the-sdlc-in-greenfield-brownfield-regulated-industries-542m</guid>
      <description>&lt;h1&gt;
  
  
  How AI is Shrinking the SDLC
&lt;/h1&gt;

&lt;p&gt;&lt;em&gt;I work with experimental AI-first teams, exploring how agentic engineering impacts Lead Time. Here's what I'm seeing.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Contrary to what some claim, I don't think agents kill the SDLC. I think it compresses into something more lightweight.&lt;/p&gt;

&lt;p&gt;One person with AI can generate what used to require a team. The bottleneck shifts from writing code to validating it.&lt;/p&gt;

&lt;p&gt;But this isn't uniform across all contexts. Greenfield, brownfield, and regulated environments each compress differently.&lt;/p&gt;




&lt;h2&gt;
  
  
  Scenario 1: Greenfield / MVP / Internal Tools
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; New project, no users, low error cost, speed is critical.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What changes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tiered Code Review:&lt;/strong&gt; security-critical code (auth, crypto) — 100% human review; everything else — automated checks + spot-check&lt;/li&gt;
&lt;li&gt;Observability as primary safety net (canary releases, auto-rollback)&lt;/li&gt;
&lt;li&gt;Iterations are now significantly faster, to the point where the customer gets updates during the demo&lt;/li&gt;
&lt;/ul&gt;
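
&lt;p&gt;One way to make this tiering mechanical is to route each PR by the paths it touches. A minimal sketch; the path patterns are illustrative assumptions on my part, not a standard:&lt;/p&gt;

```python
# Route a PR to a review tier based on the files it changes.
# SECURITY_PATTERNS below are hypothetical examples, not a vetted list.
from fnmatch import fnmatch

SECURITY_PATTERNS = ["*auth*", "*crypto*", "*payment*", "*secrets*"]

def review_tier(changed_files):
    """Return 'human' if any file matches a security-critical pattern,
    otherwise 'automated' (checks plus occasional spot-check)."""
    for path in changed_files:
        if any(fnmatch(path.lower(), pat) for pat in SECURITY_PATTERNS):
            return "human"
    return "automated"
```

&lt;p&gt;Everything the router labels "automated" still passes through CI gates; only the "human" tier blocks on a reviewer.&lt;/p&gt;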

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fov5nikmoskt9su805eks.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fov5nikmoskt9su805eks.png" alt="How SDLC looks for Greenfield Project" width="800" height="210"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Important: even in greenfield, AI-generated code contains &lt;a href="https://www.coderabbit.ai/blog/state-of-ai-vs-human-code-generation-report" rel="noopener noreferrer"&gt;1.7x more issues&lt;/a&gt; than human-written code. Skipping review entirely is risky.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to ensure we didn't make it worse:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Lead Time ≤ 1 day&lt;/li&gt;
&lt;li&gt;Deployment Frequency &amp;gt; 1/day&lt;/li&gt;
&lt;li&gt;Change Failure Rate within DORA "Good" threshold (0-15%)&lt;/li&gt;
&lt;/ul&gt;
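
&lt;p&gt;All three gates can be computed straight from a deploy log. A rough sketch, assuming each record holds a commit time, a deploy time, and a failure flag (the schema and numbers are invented for illustration):&lt;/p&gt;

```python
from datetime import datetime, timedelta

# Illustrative deploy log: (commit_time, deploy_time, failed).
# Schema and values are assumptions for this sketch, not real data.
deploys = [
    (datetime(2026, 4, 1, 9),  datetime(2026, 4, 1, 15), False),
    (datetime(2026, 4, 1, 10), datetime(2026, 4, 2, 9),  False),
    (datetime(2026, 4, 2, 8),  datetime(2026, 4, 2, 12), False),
]

# Lead Time: average commit-to-deploy interval.
lead = sum((d - c for c, d, _ in deploys), timedelta()) / len(deploys)

# Deployment Frequency: deploys per elapsed day (at least a 1-day window).
span_days = max((max(d for _, d, _ in deploys)
                 - min(d for _, d, _ in deploys)).days, 1)
freq = len(deploys) / span_days

# Change Failure Rate: share of deploys that failed.
cfr = sum(f for _, _, f in deploys) / len(deploys)

print(lead <= timedelta(days=1), freq > 1, cfr <= 0.15)  # True True True
```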




&lt;h2&gt;
  
  
  Scenario 2: Brownfield / Existing Product
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; Real users, established reputation, existing technical debt. Errors cost money.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What changes:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We need a &lt;strong&gt;tiered approach by risk&lt;/strong&gt;:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Code Type&lt;/th&gt;
&lt;th&gt;Human Review&lt;/th&gt;
&lt;th&gt;Examples&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Security-critical&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;100%&lt;/strong&gt; senior review&lt;/td&gt;
&lt;td&gt;Auth, payments, PII, crypto&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Business logic&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;30-40%&lt;/strong&gt; peer review&lt;/td&gt;
&lt;td&gt;Features, API, data flows&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Utility&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Spot-check&lt;/strong&gt; + automated&lt;/td&gt;
&lt;td&gt;Tests, docs, configs&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
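
&lt;p&gt;The table can be encoded as a sampling policy: each code type maps to the probability that a human reviews the change. A sketch, assuming changes arrive pre-labeled with a code type (the labeling step itself is out of scope here):&lt;/p&gt;

```python
import random

# Probability that a human reviews, mirroring the tier table above.
POLICY = {
    "security-critical": 1.0,   # 100% senior review
    "business-logic":    0.35,  # 30-40% peer review, midpoint
    "utility":           0.05,  # spot-check; automated gates do the rest
}

def needs_human_review(code_type, rng=random.random):
    """Decide whether this change gets a human reviewer.
    rng is injectable so the policy is testable deterministically."""
    return rng() < POLICY[code_type]
```

&lt;p&gt;The exact sampling rates are a team-level knob; the point is that the policy is explicit and auditable rather than ad hoc.&lt;/p&gt;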

&lt;p&gt;Some interesting stats: per the &lt;a href="https://linearb.io/dev-interrupted/podcast/linearb-2026-benchmarks-ai-pr-merge-rate" rel="noopener noreferrer"&gt;LinearB 2026 Benchmarks&lt;/a&gt;, AI-authored PRs merge at 32.7% vs 84.5% for human-written code, meaning most require rework. In my experience, we accept only 18% of fully AI-written PRs.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6tpogs7g69h2bb46433.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu6tpogs7g69h2bb46433.png" alt="AI-native SDLC for Brownfield Project" width="800" height="179"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to ensure we didn't make it worse:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code Review Time doesn't grow (despite more PRs)&lt;/li&gt;
&lt;li&gt;Defect Rate stable or declining&lt;/li&gt;
&lt;li&gt;SLA/SLO maintained&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Scenario 3: Regulated Industries (Fintech, Healthcare, Insurance)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Context:&lt;/strong&gt; Compliance requires human accountability. Audit trail is mandatory.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What changes:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;100% audit trail&lt;/strong&gt; for all AI-generated code: &lt;a href="https://blog.pcisecuritystandards.org/ai-principles-securing-the-use-of-ai-in-payment-environments" rel="noopener noreferrer"&gt;who requested, what was generated, who approved&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;AI accelerates stages, but &lt;strong&gt;humans make decisions&lt;/strong&gt; — this is a regulatory requirement (FDA, PCI-DSS, HIPAA)&lt;/li&gt;
&lt;li&gt;Stages can merge, but the artefacts are still required for audit&lt;/li&gt;
&lt;/ul&gt;
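
&lt;p&gt;The audit-trail requirement maps naturally onto an append-only record per generation event. A minimal sketch of one entry; the field names are my assumption, not a PCI-DSS or FDA schema:&lt;/p&gt;

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_entry(requester, prompt, generated_code, approver):
    """Record who requested, what was generated, and who approved.
    The generated code is stored by content hash, so the entry can be
    verified against the artefact without duplicating it."""
    return {
        "requested_by": requester,
        "prompt": prompt,
        "code_sha256": hashlib.sha256(generated_code.encode()).hexdigest(),
        "approved_by": approver,
        "at": datetime.now(timezone.utc).isoformat(),
    }

entry = audit_entry("dev@example.com", "add retry to client",
                    "def retry(): ...", "lead@example.com")
print(json.dumps(entry, indent=2))
```

&lt;p&gt;Written to append-only storage, entries like this give an auditor the full chain from request to approval.&lt;/p&gt;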

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvuhz0q95etdh8k6lo4mi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvuhz0q95etdh8k6lo4mi.png" alt="SDLC with AI for regulated projects" width="800" height="138"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to ensure we didn't make it worse:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Change Failure Rate doesn't increase&lt;/li&gt;
&lt;li&gt;Compliance review time gradually decreases&lt;/li&gt;
&lt;li&gt;Audit trail complete and verifiable&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;There's no universal "new SDLC" with only agents and zero humans. Reality is a spectrum depending on project context.&lt;/p&gt;

&lt;p&gt;Beyond context, the team and its culture dramatically influence whether AI makes things better or worse. It accelerates good practices and bad practices equally. Get your quality gates in order and embrace the result.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>programming</category>
      <category>productivity</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
