<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Felix Ortiz</title>
    <description>The latest articles on DEV Community by Felix Ortiz (@felixortizdev).</description>
    <link>https://dev.to/felixortizdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3832196%2Fb1a701b2-fa80-42ea-9dba-13532405c46d.png</url>
      <title>DEV Community: Felix Ortiz</title>
      <link>https://dev.to/felixortizdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/felixortizdev"/>
    <language>en</language>
    <item>
      <title>Security Is a Delivery Accelerator, Not a Gate</title>
      <dc:creator>Felix Ortiz</dc:creator>
      <pubDate>Tue, 07 Apr 2026 19:12:16 +0000</pubDate>
      <link>https://dev.to/felixortizdev/security-is-a-delivery-accelerator-not-a-gate-eel</link>
      <guid>https://dev.to/felixortizdev/security-is-a-delivery-accelerator-not-a-gate-eel</guid>
      <description>&lt;p&gt;The &lt;a href="https://dora.dev/research/2025/dora-report/" rel="noopener noreferrer"&gt;2025 DORA report&lt;/a&gt; found that most developers now use AI tools and individual productivity is up, yet organizational delivery metrics remain flat. AI acts as an amplifier: it magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones. The tools aren't the bottleneck. The underlying practices are.&lt;/p&gt;

&lt;p&gt;Security is one of those practices.&lt;/p&gt;

&lt;p&gt;DORA's core capability model includes &lt;a href="https://dora.dev/capabilities/pervasive-security/" rel="noopener noreferrer"&gt;pervasive security&lt;/a&gt;: integrating security into daily development work rather than treating it as a final gate. Their research shows high-performing teams spend significantly less time remediating security issues. That's time returned to shipping features. The report is blunt: AI productivity gains are "swallowed by bottlenecks in testing, security reviews, and complex deployment processes." Automate the security, and you remove a bottleneck.&lt;/p&gt;

&lt;h2&gt;What this looks like in practice&lt;/h2&gt;

&lt;p&gt;This past week I needed to deploy and integrate a new API across two cloud providers. Nobody asked for fully automated IaC, a CI/CD pipeline, or security hardening. That's just how the work got delivered.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Everything is defined in code.&lt;/strong&gt; Infrastructure lives in Terraform: IAM policies, VPC rules, security groups, TLS configuration. New environments inherit the security posture automatically. No checklist, no drift, no ClickOps replication between staging and production.&lt;/p&gt;
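&lt;p&gt;As a rough sketch of what that enforcement can look like in CI (the workflow name, action SHAs, and commands are hypothetical placeholders, not my actual setup):&lt;/p&gt;

```yaml
# Hypothetical PR check: every infrastructure change is validated before merge.
# Replace &lt;pinned-sha&gt; with a full 40-character commit SHA.
name: terraform-checks
on: pull_request
permissions:
  contents: read            # read-only token; no cloud credentials in this job
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@&lt;pinned-sha&gt;
      - uses: hashicorp/setup-terraform@&lt;pinned-sha&gt;
      - run: terraform init -backend=false
      - run: terraform fmt -check    # formatting drift fails the build
      - run: terraform validate      # malformed IAM/VPC/TLS config fails the build
```

&lt;p&gt;The reviewer sees the exact security delta in the diff, and nothing merges without the checks passing.&lt;/p&gt;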

&lt;p&gt;The application layer follows the same principle. Authentication is passwordless by default: IAM-based database auth and JWT service auth for cross-cloud API calls, no shared secrets anywhere in the stack. Network egress is locked to private ranges, with outbound traffic routed through known IPs so both sides can audit and whitelist every connection. TLS everywhere. CI/CD workflows are version-controlled and hardened the same way.&lt;/p&gt;

&lt;p&gt;The security review happens in the code review. On a small team, the engineer designing the infrastructure might also be the one making the security decisions. On a larger team, that's your InfoSec team in the room during design, shaping the posture before a line of code is written. Either way, it's shifting left. The security decisions are human. The enforcement is automated.&lt;/p&gt;

&lt;h2&gt;Why this matters for compliance&lt;/h2&gt;

&lt;p&gt;Every infrastructure change is versioned, peer-reviewed, and auditable. In regulated industries like healthcare, that traceability supports your HIPAA compliance posture and gives SOC 2 auditors exactly what they ask for: evidence that controls are in place and changes are tracked. The pipeline generates that evidence end-to-end: who proposed the change, who implemented it, who approved it, and who triggered the deploy. No separate compliance workflow. No after-the-fact documentation. The delivery pipeline is the audit trail.&lt;/p&gt;

&lt;h2&gt;Why this matters for resilience&lt;/h2&gt;

&lt;p&gt;There's no such thing as perfect security. Something will eventually get through, and when it does, it becomes unplanned work that competes with everything else on the roadmap. The faster you can respond, the less it costs. A recent &lt;a href="https://dev.to/felixortizdev/two-supply-chain-attacks-in-two-weeks-why-defense-in-depth-saved-me-2nd7"&gt;supply chain incident&lt;/a&gt; proved the point. Automation is what made the response fast: the entire pipeline was hardened across all projects in hours, including agent skills that now enforce security practices on every future build.&lt;/p&gt;

&lt;p&gt;Your response time is part of your security posture. In DORA terms, your MTTR is a security metric, not just an operations one. &lt;a href="https://dora.dev/capabilities/pervasive-security/" rel="noopener noreferrer"&gt;Pervasive security&lt;/a&gt; and &lt;a href="https://dora.dev/capabilities/continuous-delivery/" rel="noopener noreferrer"&gt;continuous delivery&lt;/a&gt; aren't separate capabilities. They reinforce each other.&lt;/p&gt;

&lt;p&gt;Humans decide the security posture. Automation enforces it. That's the accelerator.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>cicd</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Two Supply Chain Attacks in Two Weeks - Why Defense-in-Depth Saved Me</title>
      <dc:creator>Felix Ortiz</dc:creator>
      <pubDate>Sat, 04 Apr 2026 18:15:15 +0000</pubDate>
      <link>https://dev.to/felixortizdev/two-supply-chain-attacks-in-two-weeks-why-defense-in-depth-saved-me-2nd7</link>
      <guid>https://dev.to/felixortizdev/two-supply-chain-attacks-in-two-weeks-why-defense-in-depth-saved-me-2nd7</guid>
      <description>&lt;p&gt;Two supply chain attacks hit my CI/CD pipeline in under two weeks. Neither caused damage. Here's why, and what I hardened afterward.&lt;/p&gt;

&lt;h2&gt;The trend no one can ignore&lt;/h2&gt;

&lt;p&gt;In late March 2026, the &lt;code&gt;aquasecurity/trivy-action&lt;/code&gt; GitHub Action was compromised via tag poisoning. A mutable version tag was silently redirected to a malicious commit.&lt;/p&gt;

&lt;p&gt;Less than two weeks later, a threat actor compromised an axios npm maintainer's account and published two backdoored versions (&lt;code&gt;1.14.1&lt;/code&gt; and &lt;code&gt;0.30.4&lt;/code&gt;) containing a hidden postinstall script that phoned home to a command-and-control server. Microsoft published a &lt;a href="https://www.microsoft.com/en-us/security/blog/2026/04/01/mitigating-the-axios-npm-supply-chain-compromise/" rel="noopener noreferrer"&gt;detailed technical analysis&lt;/a&gt; of the axios attack.&lt;/p&gt;

&lt;p&gt;Two different attack vectors. Two different ecosystems. Same target: CI/CD pipelines.&lt;/p&gt;

&lt;p&gt;This isn't a coincidence. Attackers are actively targeting build infrastructure because that's where the secrets live, where the deployments happen, and where a single compromised dependency can cascade into production. If your CI/CD pipeline isn't hardened against this class of attack, it's not a question of if, but when.&lt;/p&gt;

&lt;h2&gt;What happened&lt;/h2&gt;

&lt;p&gt;One of my scheduled CI workflows ran an unlocked global npm install (&lt;code&gt;npm install -g&lt;/code&gt; without a pinned version) during the three-hour window the compromised axios versions were live on the registry. The runner pulled the malicious package and made contact with the attacker's C&amp;amp;C server for approximately six seconds.&lt;/p&gt;

&lt;p&gt;The catch: axios wasn't in my &lt;code&gt;package.json&lt;/code&gt;. It was a transitive dependency, pulled in by a tool I installed globally in the CI runner. My initial analysis checked every &lt;code&gt;package.json&lt;/code&gt; and its transitive dependency tree across all projects. That came back clean. It took a deeper investigation of the CI workflows themselves to find the exposure: a global install that bypasses lockfiles entirely and never appears in any project manifest. That's what makes supply chain attacks so effective. The obvious places to look aren't where the problem lives.&lt;/p&gt;

&lt;p&gt;The irony? It was my DAST (Dynamic Application Security Testing) scan. One of my security workflows got compromised.&lt;/p&gt;

&lt;h2&gt;Why it didn't matter&lt;/h2&gt;

&lt;p&gt;This is where defense-in-depth earned its keep.&lt;/p&gt;

&lt;p&gt;The workflow followed least-privilege principles. The only credential present was a short-lived, read-scoped &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; with no access to cloud credentials, production secrets, deployment keys, or admin tokens. Per &lt;a href="https://docs.github.com/en/actions/concepts/security/github_token" rel="noopener noreferrer"&gt;GitHub's documentation&lt;/a&gt;, "The &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; expires when a job finishes or after a maximum of 24 hours." The job completed about seven minutes after the incident window, and the token expired with it.&lt;/p&gt;
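&lt;p&gt;In workflow terms, that least-privilege posture is a single explicit block (a hedged sketch; the scanner name and version are placeholders):&lt;/p&gt;

```yaml
# Job-level least privilege: the GITHUB_TOKEN minted for this run can only
# read repository contents. No id-token, no packages, no deployment scopes.
permissions:
  contents: read
jobs:
  dast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@&lt;pinned-sha&gt;
      - run: npm install -g &lt;scanner&gt;@&lt;trusted-version&gt;  # pinned install
```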

&lt;p&gt;I also considered whether the token's &lt;code&gt;contents: read&lt;/code&gt; scope could have enabled a pivot to my cloud environment. My deployment workflows authenticate to cloud providers via &lt;a href="https://docs.github.com/en/actions/security-for-github-actions/security-hardening-your-deployments/about-security-hardening-with-openid-connect" rel="noopener noreferrer"&gt;OIDC-based workload identity federation&lt;/a&gt;, not the &lt;code&gt;GITHUB_TOKEN&lt;/code&gt;. Generating a cloud access token requires &lt;code&gt;id-token: write&lt;/code&gt; permission, which the compromised workflow did not have. Even if the attacker read every workflow file in the repo and reverse-engineered my cloud auth setup, the &lt;code&gt;GITHUB_TOKEN&lt;/code&gt; is the wrong credential entirely. It cannot be exchanged for a cloud access token.&lt;/p&gt;
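&lt;p&gt;For contrast, here is roughly what an OIDC-authenticated deploy job looks like (AWS shown as one example; the account and role names are placeholders). Note the &lt;code&gt;id-token: write&lt;/code&gt; permission, which the compromised workflow never had:&lt;/p&gt;

```yaml
permissions:
  id-token: write   # required to mint the short-lived OIDC token
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@&lt;pinned-sha&gt;
      - uses: aws-actions/configure-aws-credentials@&lt;pinned-sha&gt;
        with:
          role-to-assume: arn:aws:iam::&lt;account-id&gt;:role/&lt;deploy-role&gt;
          aws-region: us-east-1
      # Cloud access is a federated, job-scoped token, not a stored secret.
```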

&lt;p&gt;The blast radius was limited by design: zero-trust posture, short-lived credentials, and the assumption that any component could be compromised at any time.&lt;/p&gt;

&lt;h2&gt;What I hardened&lt;/h2&gt;

&lt;p&gt;I treated this as an opportunity to harden the entire pipeline, not just patch the immediate vector.&lt;/p&gt;

&lt;h3&gt;Pinned the unlocked dependency&lt;/h3&gt;

&lt;p&gt;The root cause was a global npm install without a version pin. Unlike &lt;code&gt;npm ci&lt;/code&gt; (which uses a lockfile), &lt;code&gt;npm install -g &amp;lt;package&amp;gt;&lt;/code&gt; fetches whatever the registry serves at runtime.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before&lt;/span&gt;
&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm install -g &amp;lt;package-name&amp;gt;&lt;/span&gt;

&lt;span class="c1"&gt;# After&lt;/span&gt;
&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;npm install -g &amp;lt;package-name&amp;gt;@&amp;lt;trusted-version-number&amp;gt;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;SHA-pinned all GitHub Actions&lt;/h3&gt;

&lt;p&gt;Every action reference in my workflows used mutable version tags like &lt;code&gt;@v4&lt;/code&gt;. These tags can be silently redirected to malicious commits, which is exactly what happened in the trivy-action attack. I replaced every tag with an immutable 40-character commit SHA:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before - mutable tag&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@v4&lt;/span&gt;

&lt;span class="c1"&gt;# After - immutable SHA&lt;/span&gt;
&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;actions/checkout@de0fac2e...&lt;/span&gt; &lt;span class="c1"&gt;# v6&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;I also enabled the repository setting that requires all GitHub Actions to be referenced by SHA. This turns a convention into a gate: PRs that use mutable tags fail the policy check and can't merge.&lt;/p&gt;

&lt;h3&gt;Digest-pinned all container images&lt;/h3&gt;

&lt;p&gt;Container image tags (&lt;code&gt;:latest&lt;/code&gt;, &lt;code&gt;:stable&lt;/code&gt;, &lt;code&gt;:17&lt;/code&gt;) are just as mutable as Git tags. I pinned every image reference to its immutable SHA256 digest:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before&lt;/span&gt;
&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres:17&lt;/span&gt;

&lt;span class="c1"&gt;# After&lt;/span&gt;
&lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgres@sha256:b994732f...&lt;/span&gt; &lt;span class="c1"&gt;# 17&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Fixed script injection patterns&lt;/h3&gt;

&lt;p&gt;My review surfaced places where GitHub Actions expressions (&lt;code&gt;${{ }}&lt;/code&gt;) were interpolated directly inside shell scripts. This is a &lt;a href="https://docs.github.com/en/actions/security-for-github-actions/security-guides/security-hardening-for-github-actions#understanding-the-risk-of-script-injections" rel="noopener noreferrer"&gt;known script injection vector&lt;/a&gt;. I moved all such values into &lt;code&gt;env:&lt;/code&gt; blocks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Before - injection risk&lt;/span&gt;
&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;if [ -n "${{ steps.some-step.outputs.VALUE }}" ]; then&lt;/span&gt;

&lt;span class="c1"&gt;# After - safe&lt;/span&gt;
&lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;STEP_VALUE&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;${{ steps.some-step.outputs.VALUE }}&lt;/span&gt;
&lt;span class="na"&gt;run&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;|&lt;/span&gt;
  &lt;span class="s"&gt;if [ -n "$STEP_VALUE" ]; then&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;Added automated update tooling&lt;/h3&gt;

&lt;p&gt;SHA-pinning and digest-pinning are only effective if the pins stay current. I added Dependabot to automatically propose PRs when new versions are available. This turns a one-time hardening effort into an ongoing practice.&lt;/p&gt;
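&lt;p&gt;One way to configure that (a minimal &lt;code&gt;.github/dependabot.yml&lt;/code&gt;; adjust the ecosystems and cadence to your stack):&lt;/p&gt;

```yaml
# Proposes PRs when pinned action SHAs, image digests, or npm versions go stale.
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "docker"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
```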

&lt;h2&gt;Supply chain security checklist&lt;/h2&gt;

&lt;p&gt;If you take one thing from this post, go look at your CI/CD workflows right now.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] &lt;strong&gt;Pin global installs.&lt;/strong&gt; Any &lt;code&gt;npm install -g&lt;/code&gt; without a version pin is an open door. Lock it to a specific version.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;SHA-pin your action references.&lt;/strong&gt; If you see &lt;code&gt;@v4&lt;/code&gt; or &lt;code&gt;@v3&lt;/code&gt;, those are mutable tags. Replace them with immutable commit SHAs. Dependabot can keep them updated.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Digest-pin your container images.&lt;/strong&gt; Same problem, same fix. Pin to &lt;code&gt;@sha256:&lt;/code&gt; digests.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Fix script injection patterns.&lt;/strong&gt; Search for &lt;code&gt;${{ }}&lt;/code&gt; inside &lt;code&gt;run:&lt;/code&gt; blocks. Every one is a potential injection. Move them to &lt;code&gt;env:&lt;/code&gt; blocks.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Kill unused secrets.&lt;/strong&gt; List your repo secrets. If any aren't referenced in a workflow, delete them.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Enforce least-privilege permissions.&lt;/strong&gt; Does every workflow need the permissions it has? Use &lt;code&gt;permissions:&lt;/code&gt; blocks explicitly rather than relying on defaults.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Replace long-lived credentials with OIDC federation.&lt;/strong&gt; If your CI/CD workflows authenticate to cloud providers using static secrets, switch to OIDC-based workload identity federation. Short-lived tokens scoped to a single job run are harder to steal and impossible to reuse.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Add behavioral analysis for dependencies.&lt;/strong&gt; CVE databases don't catch zero-day supply chain attacks like the axios compromise. Tools that analyze package behavior at install time close that gap.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Verify lockfile integrity.&lt;/strong&gt; Tampered lockfiles can redirect dependencies to rogue registries without changing &lt;code&gt;package.json&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Check package provenance.&lt;/strong&gt; &lt;code&gt;npm audit signatures&lt;/code&gt; flags packages lacking OIDC attestations, which is a signal that the publish pipeline isn't verified.&lt;/li&gt;
&lt;li&gt;[ ] &lt;strong&gt;Generate SBOMs.&lt;/strong&gt; You need a bill of materials for compliance and incident response. When the next compromise drops, you want to answer "are we affected?" in minutes, not hours.&lt;/li&gt;
&lt;/ul&gt;
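&lt;p&gt;Several of the items above can run as an ordinary CI job. A hedged sketch (the job name is mine; &lt;code&gt;npm audit signatures&lt;/code&gt; and &lt;code&gt;npm sbom&lt;/code&gt; require a recent npm):&lt;/p&gt;

```yaml
jobs:
  supply-chain-checks:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - uses: actions/checkout@&lt;pinned-sha&gt;
      - run: npm ci                  # installs strictly from the lockfile
      - run: npm audit signatures    # verifies registry signatures and provenance
      - run: npm sbom --sbom-format cyclonedx &gt; sbom.json   # bill of materials
```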

</description>
      <category>security</category>
      <category>cicd</category>
      <category>devops</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>The Mistakes Didn't Change. The Speed Did.</title>
      <dc:creator>Felix Ortiz</dc:creator>
      <pubDate>Sat, 28 Mar 2026 21:42:45 +0000</pubDate>
      <link>https://dev.to/felixortizdev/the-mistakes-didnt-change-the-speed-did-13i8</link>
      <guid>https://dev.to/felixortizdev/the-mistakes-didnt-change-the-speed-did-13i8</guid>
      <description>&lt;p&gt;Everyone is measuring how fast agents write code. Few are measuring what that code introduces.&lt;/p&gt;

&lt;p&gt;This year, independent researchers tested the major AI coding agents building applications from scratch. &lt;a href="https://www.helpnetsecurity.com/2026/03/13/claude-code-openai-codex-google-gemini-ai-coding-agent-security/" rel="noopener noreferrer"&gt;Most pull requests contained at least one vulnerability&lt;/a&gt;. Inside Fortune 50 companies, AI-generated code &lt;a href="https://apiiro.com/blog/4x-velocity-10x-vulnerabilities-ai-coding-assistants-are-shipping-more-risks/" rel="noopener noreferrer"&gt;introduced 10,000+ new security findings per month&lt;/a&gt;. Logic and syntax bugs went down. Privilege escalation paths jumped over 300%. Yikes!&lt;/p&gt;

&lt;p&gt;The code improved while the vulnerabilities got worse. Agents just produce the same old mistakes faster. One customer seeing another customer's data. Login flows that leave a back door wide open. Endpoints exposed to the entire internet.&lt;/p&gt;

&lt;h2&gt;The mistakes are harder to see&lt;/h2&gt;

&lt;p&gt;The code looks clean. It follows the right patterns, uses the right frameworks, passes initial agent-driven code review. It just quietly skips the check that asks "should this user be allowed to do this?" or "has this request been authenticated?"&lt;/p&gt;

&lt;p&gt;These are judgment mistakes. The security tooling most teams rely on was built to catch known-bad patterns, not missing logic. &lt;a href="https://www.dryrun.security/blog/top-ai-sast-tools-2026" rel="noopener noreferrer"&gt;Over 80% of vulnerabilities in AI-generated code go undetected by traditional static analysis&lt;/a&gt;. Pattern matching catches code that is obviously wrong. It cannot catch logic that is missing.&lt;/p&gt;

&lt;h2&gt;The window to catch them is closing&lt;/h2&gt;

&lt;p&gt;A human writes one insecure endpoint per sprint. An agent writes twenty in an afternoon. That alone changes the math on how much your security infrastructure needs to handle.&lt;/p&gt;

&lt;p&gt;It goes further. The agentic loop is getting tighter. Agents writing code, agents reviewing code, agents merging code. Each iteration shrinks the window between generation and production, and the human verification layer gets thinner every time.&lt;/p&gt;

&lt;p&gt;When that window was wide, pattern-matching tools and human reviewers could compensate for each other's blind spots. As it narrows, both have less time to work with, and the mistakes that slip through are the ones neither was designed to catch.&lt;/p&gt;

&lt;h2&gt;The tooling is evolving, and so is the attack surface&lt;/h2&gt;

&lt;p&gt;The next generation of security tooling is starting to reason about code rather than just match patterns. Security review as a continuous practice embedded in the development loop, not a gate at the end of it. That direction is right.&lt;/p&gt;

&lt;p&gt;More tooling also means more surface area. Earlier this year, &lt;a href="https://www.heyuan110.com/posts/ai/2026-03-10-mcp-security-2026/" rel="noopener noreferrer"&gt;a wave of CVEs hit MCP infrastructure&lt;/a&gt;, many of them the same class of vulnerability these tools exist to catch. If you are going to trust your security pipeline, you have to secure the pipeline itself. OWASP and GitHub are already publishing &lt;a href="https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/" rel="noopener noreferrer"&gt;frameworks&lt;/a&gt; and &lt;a href="https://github.blog/ai-and-ml/generative-ai/under-the-hood-security-architecture-of-github-agentic-workflows/" rel="noopener noreferrer"&gt;reference architectures&lt;/a&gt; for this.&lt;/p&gt;

&lt;h2&gt;What I'm doing about it&lt;/h2&gt;

&lt;p&gt;On my own platform, I have the pattern-matching layer in place. Static analysis on every PR, dynamic scanning nightly. That catches what it was designed to catch. The floor is built.&lt;/p&gt;
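&lt;p&gt;The cadence itself is simple to wire up (an illustrative sketch; the scanner invocations are placeholders for whatever SAST and DAST tools you run):&lt;/p&gt;

```yaml
name: security-scans
on:
  pull_request:            # static analysis on every PR
  schedule:
    - cron: "0 2 * * *"    # dynamic scan nightly
permissions:
  contents: read
jobs:
  sast:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@&lt;pinned-sha&gt;
      - run: &lt;sast-tool&gt; scan .
  dast:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    steps:
      - run: &lt;dast-tool&gt; --target &lt;staging-url&gt;
```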

&lt;p&gt;Now I need what goes above it: security agents that reason about logic-level gaps, tooling integrated at generation time via MCP instead of just at review time, and a hardened pipeline that gets the same isolation and least-privilege treatment as production.&lt;/p&gt;

&lt;p&gt;The mistakes agents make are not new. The speed at which they make them is.&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>devops</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Fix the Process, Not the Gates</title>
      <dc:creator>Felix Ortiz</dc:creator>
      <pubDate>Wed, 18 Mar 2026 19:10:27 +0000</pubDate>
      <link>https://dev.to/felixortizdev/fix-the-process-not-the-gates-2h6b</link>
      <guid>https://dev.to/felixortizdev/fix-the-process-not-the-gates-2h6b</guid>
      <description>&lt;p&gt;Recently, Amazon &lt;a href="https://www.cnbc.com/2026/03/10/amazon-plans-deep-dive-internal-meeting-address-ai-related-outages.html" rel="noopener noreferrer"&gt;convened a mandatory engineering meeting&lt;/a&gt; after a string of outages tied to AI-assisted code changes. One outage took down the shopping experience for six hours. Another cost AWS hours of downtime after engineers let an AI coding tool make changes without adequate review.&lt;/p&gt;

&lt;p&gt;Amazon already had an approval process in place. It was &lt;a href="https://byteiota.com/amazon-ai-code-review-policy-senior-approval-now-mandatory/" rel="noopener noreferrer"&gt;either bypassed or not enforced&lt;/a&gt;. The fix was not to add a committee or freeze AI tool usage. It was targeted: developers now explicitly mark code as "AI-assisted" in commits, and that code gets a dedicated senior engineer review before merging, scoped to the highest-risk systems like checkout, payments, and inventory.&lt;/p&gt;

&lt;p&gt;Process that exists but is not followed is not process. And the fix was not more bureaucracy. It was visibility into what AI produced and accountability for what ships. That lines up with what a decade of &lt;a href="https://dora.dev" rel="noopener noreferrer"&gt;DORA research&lt;/a&gt; tells us: heavyweight approval processes are one of the strongest predictors of poor delivery performance. The answer to AI-related risk is not more gates. It is the right gates, in the right places, enforced consistently.&lt;/p&gt;

&lt;p&gt;Addy Osmani put it simply in his &lt;a href="https://addyosmani.com/blog/ai-coding-workflow/" rel="noopener noreferrer"&gt;AI coding workflow post&lt;/a&gt;: "I remain the accountable engineer." Review the code, understand it, never merge blindly. Trust but verify. The human stays in the loop not to slow things down, but because someone has to be accountable for what ships.&lt;/p&gt;

&lt;p&gt;The question is where human involvement creates the most value. I think it comes down to three things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Capture intent before code.&lt;/strong&gt; If the spec does not define what "done" looks like in concrete, verifiable terms, no amount of review downstream will catch a feature that works but misses the point. This matters more when AI writes the code, because AI will build exactly what you ask for, including the wrong thing if you asked poorly. Structured acceptance criteria and clear scope before any code gets written is the highest-leverage investment you can make.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate the guardrails, not the gates.&lt;/strong&gt; AI makes writing tests dramatically faster. TDD, integration testing, performance testing, security scanning. These are practices that teams claimed they could not invest in because of feature delivery pressure. That barrier is gone. The practices that define high-performing teams should become table stakes, not aspirational.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep as few human gates as humanly possible.&lt;/strong&gt; The merge approval is where human judgment earns its keep. Not a committee. Not a change advisory board. An experienced engineer who understands the change, confirms it matches the intent, and approves it. If the upstream process and automated guardrails are doing their jobs, that should be enough.&lt;/p&gt;

&lt;p&gt;The Accelerate principles still hold: small changes, fast feedback, automated pipelines, deploy frequently. What changes when AI writes the code is that the process around intent and spec quality has to be more disciplined than it was before. The pipeline stays fast. The inputs get tighter.&lt;/p&gt;

&lt;p&gt;What does your process look like for ensuring intent before AI-generated code enters the pipeline?&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>security</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
