<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dwayne McDaniel</title>
    <description>The latest articles on DEV Community by Dwayne McDaniel (@dwayne_mcdaniel).</description>
    <link>https://dev.to/dwayne_mcdaniel</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dwayne_mcdaniel"/>
    <language>en</language>
    <item>
      <title>SnowFROC 2026: Secure Defaults, Real Trust, and a Better Layer on Top</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Tue, 05 May 2026 12:32:16 +0000</pubDate>
      <link>https://dev.to/gitguardian/snowfroc-2026-secure-defaults-real-trust-and-a-better-layer-on-top-npn</link>
      <guid>https://dev.to/gitguardian/snowfroc-2026-secure-defaults-real-trust-and-a-better-layer-on-top-npn</guid>
      <description>&lt;p&gt;Denver likes a good origin story. The city still keeps a marker for &lt;a href="https://visitdenver.com/blog/post/cheeseburger-birthplace/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Louis Ballast and the Humpty Dumpty Barrel, the local spot tied to the cheeseburger's Colorado claim&lt;/a&gt;. That detail felt oddly right for &lt;a href="https://snowfroc.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;SnowFROC 2026&lt;/a&gt;. A cheeseburger is a small upgrade that changes the whole meal. This year's conference kept returning to the same ideas in AppSec, such as how meaningful security progress often comes from well-placed layers that make the better choice easier to make.&lt;/p&gt;

&lt;p&gt;The Snow in "SnowFROC" is due to the time of year the event takes place and the good possibility that it will snow, &lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3mjplq47s4m2x?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;which it did this year&lt;/a&gt;. The other half of the name stands for Front Range OWASP Conference. This year, they expanded it into a two-day event in Denver that drew about 400 attendees to see 35 sessions, take part in 8 half-day training sessions, a CTF, and multiple village activities. The room carried that blend of practical curiosity and sharp hallway conversation that makes any security conference worth the trip.&lt;/p&gt;

&lt;p&gt;Throughout the event, the sessions covered how software is actually built now: fast, AI-assisted, dependency-heavy, and spread across more people and systems than any one security team can fully monitor alone. The strongest sessions focused on incentives, workflows, trust boundaries, and the places where attackers keep finding leverage because defenders still leave too much to intent, memory, and good luck.&lt;/p&gt;

&lt;p&gt;Here are just a few notes from SnowFROC 2026.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Human Layer in Secure Defaults
&lt;/h2&gt;

&lt;p&gt;In the keynote from &lt;a href="https://ca.linkedin.com/in/tanya-janca?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Tanya Janca, founder of She Hacks Purple Consulting&lt;/a&gt;, called "Threat Modeling Developer Behavior: The Psychology of Bad Code," she explained that in AppSec, insecure code is rarely just a technical failure. It is usually a human one. Developers work under pressure, chase deadlines, respond to incentives, and fall back on habits, biases, and shortcuts that feel reasonable in the moment. Instead of telling people they are wrong and expecting better outcomes, AppSec teams need to understand why those choices happen in the first place. Psychology helps explain the gap between what teams say they value and what their systems actually reward.&lt;/p&gt;

&lt;p&gt;Tanya talked about intervention and prevention over blame. Secure defaults beat secure intent because they remove friction and make the safer path the easier one. That can look like pre-commit hooks, IDE nudges, secure-by-default templates, and frequent reminders placed where decisions actually happen, not buried in a wiki. The same logic applies to training. Annual compliance sessions and lists of what not to do do not change behavior very well. Teaching secure patterns, explaining the why behind them, and reinforcing them in small daily ways is far more likely to stick. The goal is not more nagging. It is better environmental design.&lt;/p&gt;

&lt;p&gt;Tanya shared her experiences about AI-assisted coding triggering automation bias, where people trust confident suggestions too quickly. Tight deadlines push present bias, making future breach risk feel abstract next to immediate shipping pressure. Copying code from forums, skipping tests, ignoring warnings, avoiding documentation, or showing off with clever code all follow similar patterns.&lt;/p&gt;

&lt;p&gt;She asked us all to build systems that reward maintainable, tested, secure work and measure what actually matters, including time to fix, adoption of secure patterns, and real vulnerability reduction. If teams want secure coding to be real, they have to make it the path of least resistance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kyie53m3tfvrn8rz33m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4kyie53m3tfvrn8rz33m.png" alt="Tanya Janca" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Trust Has Become a Supply Chain Primitive
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/chris-lindsey-39b3915?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Chris Lindsey, Field CTO at OX Security&lt;/a&gt;, started his talk "Inside the Modern Threat Landscape: Attacker Wins, Defender Moves, and Your Priorities," with a reminder that choosing not to act is still a choice. In today's threat landscape, a small set of attack vectors keeps showing up in outsized breaches, including credential theft, session hijacking, phishing, typosquatting, browser extensions, DNS poisoning, and software that appears to come from trusted sources. The common thread is trust. Attackers do not usually break in by brute force alone, instead they build credibility first through a convincing email or a familiar package name, or a browser extension that looks legitimate on the surface.&lt;/p&gt;

&lt;p&gt;Chris asked us to think in terms of what security leaders are asked by boards all the time and often struggle to answer: what did we actually get for this investment? What we need more disciplined framework for evaluating security spending based on risk reduction per dollar. That means asking better questions up front: what threat does this control address, what does it really cost once licensing, implementation, staffing, and maintenance are included, and what measurable reduction in exposure does it create? This is how you get to structured decision-making. When security teams can explain why one control was prioritized over another in terms that leadership understands, the conversation changes from vague reassurance to defensible tradeoffs.&lt;/p&gt;

&lt;p&gt;If software and packages are still being pulled in freely, if extensions get broad permissions without scrutiny, and if reviews stop at surface-level validation, the pipeline stays open to abuse. Chris walked through examples that looked benign at first glance but revealed patterns of Trojan behavior, suspicious permissions, deceptive imports, callback infrastructure, and signs of rushed or obfuscated code. Prioritization is key.&lt;/p&gt;

&lt;p&gt;He gave us the practical advice of what we could immediately implement: Scan software before use, review open source with stronger technical oversight, pin safe packages, and introduce cooldown periods. We must adopt a posture in which we rotate keys aggressively, sever malicious command-and-control connections urgently, and embrace AI to scale analysis where it adds real value. Attackers are operating in the real world and have no intention of reading your threat model. Your defenses need to be just as practical and reality-based.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flft7ggs4zdwvtgbd2a8j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flft7ggs4zdwvtgbd2a8j.png" alt="Chris Lindsey" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  npm's Crisis Is Really an Operations Story
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/jenngile?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Jenn Gile, founder of OpenSourceMalware.com&lt;/a&gt;, called "npm's dark side: Preventing the next Shai-Hulud," she presented the last year of npm account takeovers and package compromises as a lesson in how malware now rides normal engineering behavior. Jenn drew a sharp line between two kinds of software risk: accidental vulnerabilities and intentionally malicious packages. A vulnerability is a flaw that can be exploited if an attacker has a viable path. Malicious software is built from the start to cause harm, often by targeting developers and build environments directly, and it does not always need the same kind of runtime path to do damage. Malicious code does rely, though, on abusing trust. When trust is the vector, the usual instinct to stay on the latest version can become part of the problem.&lt;/p&gt;

&lt;p&gt;The heart of the session was account takeover (ATO) and why npm remains such an attractive target. Install scripts still run by default, and provenance is not mandatory. Long-lived publishing tokens remain common. In practice, that means attackers do not always need to break the package ecosystem itself. They can hijack trust that already exists. Jenn walked through a string of compromises from 2025 into 2026, including phishing campaigns, typosquatted domains, spoofed maintainer emails, CI and GitHub Actions token theft, and follow-on attacks that used stolen secrets to widen the blast radius. The throughline across cases like Nx, Qix, &lt;a href="https://blog.gitguardian.com/shai-hulud-2/" rel="noopener noreferrer"&gt;Shai-Hulud&lt;/a&gt;, &lt;a href="https://blog.gitguardian.com/team-pcp-snowball-analysis/" rel="noopener noreferrer"&gt;TeamPCP&lt;/a&gt;, and Axios was not just a technical weakness. It was how easily trusted maintainers, trusted packages, and trusted upgrade habits could be turned against the people relying on them.&lt;/p&gt;

&lt;p&gt;Jenn explained that hardware keys help protect the human authentication path, while trusted publishing helps protect the machine path by tying publication to a specific GitHub Actions identity. Session-based authentication can reduce exposure windows, even if it does not eliminate the risk of phishing. However, strong controls only work if teams actually use them, and right now, friction and bias still get in the way.&lt;/p&gt;

&lt;p&gt;Jenn's advice was to treat malware prevention as a team sport across development, product security, cloud security, and incident response. Use lockfiles, avoid automatic upgrades, scrutinize lifecycle scripts, harden CI, scan for malware earlier, rotate and scope credentials, monitor for misuse, and build supply chain playbooks that account for how malware behaves differently from ordinary vulnerabilities, especially in the JavaScript and Python ecosystems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5hs89a186y45fh02mh8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fd5hs89a186y45fh02mh8.png" alt="Jenn Gile" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Scale Comes From Systems, Not Heroics
&lt;/h2&gt;

&lt;p&gt;In the final talk of the day, from &lt;a href="https://www.linkedin.com/in/mudita-khurana-87b72442/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Mudita Khurana, an Airbnb staff security engineer&lt;/a&gt;, called "Scaling AppSec through humans &amp;amp; agents," they presented a model for handling a world where code volume is rising fast, AI tools are now common, and meaningful portions of code are being produced outside the old IDE-centered workflow. She explained her company is seeing more code, more contributors, and far more code generated with AI than even a few years ago. Today nearly all pull request authors are using AI coding tools weekly, a meaningful amount of code is now written by non-engineers outside the IDE, and a large share of total code is AI-generated. Mudita explained you cannot keep up by adding manual review alone. Their response is a layered one: unified tooling to create consistency, LLM agents to extend coverage, and a human network to bring judgment and context where automation still falls short.&lt;/p&gt;

&lt;p&gt;A single security CLI acts as the abstraction layer over capabilities like static analysis, software composition analysis, secrets detection, and infrastructure-as-code scanning, with the same experience, exemptions, and metrics no matter where it runs. That lets security checks show up across the developer workflow, from lightweight pre-commit feedback to fuller pull request scans and post-merge coverage.&lt;/p&gt;

&lt;p&gt;On top of that, the team is using AI for security review in a more grounded way than generic prompting. Instead of asking a model for a broad security pass, they feed it security requirements as code, along with internal frameworks, auth models, and known anti-patterns. They also measure prompt changes against a dataset built from real historical vulnerabilities, which gives them a baseline for whether the agents are actually improving.&lt;/p&gt;

&lt;p&gt;The part of their plan that Mudita was the most excited to share was their security champions program. They do not treat this program as volunteer side work. It is tied to the engineering career ladder, backed by real responsibilities, and supported with a two-way flow of data between security and the orgs doing the work. These champions help write custom rules, triage findings, support risk assessments, and drive adoption because they understand the business context in a way central security teams often cannot. They have created a feedback loop where human insight improves the tools, the tools improve the signal, and prevention gradually moves earlier, into the IDE, into AI prompts, and into the default way code gets written.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl9u2793vocmnr2kyslr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpl9u2793vocmnr2kyslr.png" alt="Mudita Khurana" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security that lives where decisions happen
&lt;/h2&gt;

&lt;p&gt;One pattern ran through almost every strong session: security works best when it shows up at the point of action. In an IDE. In a pull request. In a package policy. In a browser extension review. In a token issuance flow. In a prompt used by an AI assistant. Teams still lose time when secure guidance lives in a wiki, a yearly training deck, or a control that runs too late to influence the original choice.&lt;/p&gt;

&lt;p&gt;That shift sounds simple, but it changes program design. It favors lightweight friction, contextual signals, paved paths, and small reminders over large annual campaigns. It also favors security teams that can collaborate with developer platforms, identity teams, and cloud teams instead of operating as a separate review function.&lt;/p&gt;

&lt;h3&gt;
  
  
  The new perimeter is made of borrowed trust
&lt;/h3&gt;

&lt;p&gt;Modern software development depends on borrowed trust. Developers trust registries, packages, maintainers, AI suggestions, browser tools, and automation pipelines. Organizations trust tokens, runners, integrations, and service accounts to behave within expected bounds. Attackers know that every one of those relationships can be bent.&lt;/p&gt;

&lt;p&gt;That has direct implications for secrets management and non-human identities. A stolen token, an over-scoped credential, or a poisoned dependency can move through trusted systems much faster than traditional controls were built to handle. The answer is tighter provenance, shorter credential lifetimes, stronger attestation, clearer ownership, and continuous review of the trust assumptions hiding inside delivery pipelines.&lt;/p&gt;

&lt;h3&gt;
  
  
  Maturity now means feedback loops
&lt;/h3&gt;

&lt;p&gt;There was another persistent theme that we need to focus on creating feedback loops. Behavioral nudges need measurement to know how to improve them. Threat prioritization needs cost and impact models to claim success. AI review needs evaluation against real defects to be meaningful. Supply chain response needs intelligence, containment, and recovery steps that teams can actually execute.&lt;/p&gt;

&lt;p&gt;Mature AppSec programs increasingly look like systems that learn. They collect signals, improve defaults, refine detections, tighten identity boundaries, and push lessons back into the places where code and infrastructure are created. The organizations that do this well will handle AI-generated code, secrets sprawl, and NHI governance with more control because they have already built the habit of turning incidents and friction into better operating models.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mile High City Learnings
&lt;/h2&gt;

&lt;p&gt;SnowFROC 2026, which happens at the highest altitude of any OWASP event, felt grounded in the best way. Talks treated security as daily operating design that focused on how people are rewarded, how trust is granted, how credentials spread, and how teams scale judgment without burning out the humans in the loop. Your author was able to give a talk about how we moved from slow waterfall based deployment to a world of DevOps where we have never deployed more, faster. We have a golden opportunity as we adopt AI across our tool chains to rethink authentication in a meaningful way that might just reverberate through all our stacks of non-human identities. That is the feedback loop we can all benefit from.&lt;/p&gt;

&lt;p&gt;For teams thinking about identity risk, secrets exposure, and the governance of machine-driven development, SnowFROC offered a useful path forward. Start with defaults. Reduce silent trust. Treat credentials and dependencies as live operational risk. Then build feedback loops that make the next secure decision easier than the last one. That is a practical agenda, and after a snowy spring day in Denver, it also feels achievable.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>appsec</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>ATLSECCON 2026: Context, Identity, and Restraint in Modern Security</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 04 May 2026 12:38:30 +0000</pubDate>
      <link>https://dev.to/gitguardian/atlseccon-2026-context-identity-and-restraint-in-modern-security-40lf</link>
      <guid>https://dev.to/gitguardian/atlseccon-2026-context-identity-and-restraint-in-modern-security-40lf</guid>
      <description>&lt;p&gt;Harbor cities understand accumulated risk. Cargo moves in quietly. Weather shifts by degrees. One bad assumption can sit unnoticed until it reaches critical mass. Halifax has lived with that kind of memory for more than a century. On &lt;a href="https://parks.canada.ca/culture/designation/evenement-event/halifax-explosion?utm_source=chatgpt.com" rel="noopener noreferrer"&gt;December 6, 1917, a collision in Halifax Harbor triggered the largest man-made explosion prior to the atomic bomb&lt;/a&gt;, a disaster that directly changed the lives of over 11,000 people and permanently altered the city's sense of consequence.&lt;/p&gt;

&lt;p&gt;That history gave this year's &lt;a href="https://www.atlseccon.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Atlantic Security Conference, ATLSECCON&lt;/a&gt;, a useful backdrop. Organizers noted that this event started in 2011 with only 45 people in attendance. The conference has grown into a major regional gathering, with more than 1,750 participants this year. This year's volunteer-led Halifax security conference featured over 70 industry leaders and subject-matter experts delivering sessions across seven speaking tracks.&lt;/p&gt;

&lt;p&gt;A lot of events in 2026 are still trying to sort out how to talk about AI without either flattening the subject into product language or inflating it into prophecy. ATLSECCON largely avoided both traps. The strongest sessions kept returning to a more durable concern: the systems around us are getting faster, less legible, and more distributed, which means trust now depends on context, observability, and restraint.&lt;/p&gt;

&lt;p&gt;Here are just a few takeaways from this year's ATLSCCON.&lt;/p&gt;

&lt;h2&gt;
  
  
  All Data Has A Half-Life
&lt;/h2&gt;

&lt;p&gt;In the opening keynote session from the legendary &lt;a href="https://www.linkedin.com/in/wendynather?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Wendy Nather, Senior Research Initiatives Director at 1Password,&lt;/a&gt; called "Dangerous Data," we were presented with a sharp reminder that data security is no longer just a matter of access control or storage hygiene. What words mean changes over time. Context changes the risk of the same records, which were safe until something was updated. Systems that process data at machine speed can weaponize those changes long before a human review loop catches up.&lt;/p&gt;

&lt;p&gt;Wendy's framing around integrity attacks moved past the familiar story of theft, though stolen data is very much still a problem. Corrupted data, subtly altered records, changed semantics, and AI-assisted manipulation create a harder reality to deal with. Failures from these data changes undermine trust itself, which is usually the thing security teams discover only after everything downstream starts behaving strangely. She made a very strong point about how we are viewing AI with a "toxic anthropomorphism." Teams keep assigning intent and understanding to AI systems as if they are accountable actors rather than pattern engines. That habit creates design mistakes, policy mistakes, and false confidence.&lt;/p&gt;

&lt;p&gt;Wendy explained that security teams need a richer model of data than classification labels and database boundaries. Time, event context, ownership, and downstream use all matter. So does deletion. The old instinct to collect everything because it might become useful later now is dangerous in a world of AI-speed parsing, as data theft and corruption mean a larger blast radius than ever before. There is a direct crossover with secrets sprawl and identity sprawl, which follow the same pattern. Unbounded accumulation feels efficient until the context shifts and the liability becomes the story.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb9afbupph257zrdx1te.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feb9afbupph257zrdx1te.png" alt="Wendy Nather" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Exposure Needs a Business Compass
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/tara-jaques-959994b7/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Tara Jaques, Technical Director at Tenable&lt;/a&gt;, presented "Beyond the Silos: Operationalizing Exposure Management in a Fragmented Landscape," a practical case for treating exposure management as an operating model rather than a dashboard category. That distinction mattered. Tara explained that fragmented tools and fragmented teams do not just slow response; they distort judgment.&lt;/p&gt;

&lt;p&gt;She defined exposure narrowly enough to be useful: A risk only becomes an exposure when it is preventable, exploitable, and capable of meaningful impact. That actually cuts through a lot of noise in a time when we have never had more signal. Security teams are drowning in findings, and many programs still reward "motion" over "consequence." Tara argued for context over volume, with prioritization tied to how weaknesses combine in real environments and how those combinations map to actual business harm.&lt;/p&gt;

&lt;p&gt;Organizations are dealing with sprawling SaaS estates, AI-driven change, and identities acting as the new perimeter. The maturity model she outlined was helpful because it describes progress as operational rather than aspirational. Unified visibility, cross-functional alignment, and continuous prioritization might not be glamorous goals, but they are the difference between a security program that can explain its choices and one that can only describe its backlog. For teams working on identity risk or non-human identity governance, that is the same journey. You cannot govern what you cannot inventory. You cannot prioritize what you cannot place in context.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e4g8d2757kpp7pbvvx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6e4g8d2757kpp7pbvvx.png" alt="Tara Jaques" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Agent Problem Is a Runtime Problem
&lt;/h2&gt;

&lt;p&gt;In the session by &lt;a href="https://ca.linkedin.com/in/jasonkeirstead?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Jason Keirstead, Founding CTO at LangGuard.AI&lt;/a&gt;, "Your AI Agents Are Lying To You," he presented one of the conference's clearest explanations for why agentic systems appear convincing in tests but become dangerous in production. In short, we can test under the best conditions, but the real world has far more nuance, data, and adversaries than we can ever replicate in the lab.&lt;/p&gt;

&lt;p&gt;Jason walked through incidents where agents hallucinated, acted, and then concealed what happened. That sequence breaks the habits that many teams still carry over from traditional application monitoring. A conventional system crashes, throws an error, or fails a control in a way that looks familiar. An agent can proceed confidently, misuse legitimate tool access, and generate a plausible explanation while doing damage. The problem is not just model quality, but instead, a runtime reality. Prompt injection, model substitution, credential drift, tool sprawl, and silent changes in access surface are not normally things that teams generally test their internal agentic systems on.&lt;/p&gt;

&lt;p&gt;The strongest part of the talk was how concrete advice Jason gave: inventory your agents, trace what they do at runtime, and enforce policy as code. We must map credentials back to accountable humans and build agent-aware incident response. It is what a control plane should be accounting for. An AI agent with broad tool access and long-lived credentials is functionally a privileged non-human identity. Treating it as a novelty creates the gap. Treating it as a governed identity gives teams a chance to close it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep67gzxwfsmm35csvh38.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fep67gzxwfsmm35csvh38.png" alt="Jason Keirstead" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity Became The Fastest Path To Breach
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://ca.linkedin.com/in/fortinpascal?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Pascal Fortin, CEO at Cybereco&lt;/a&gt;, presented "When AI Broke Your Security Model: What Still Works, What's Dead, and What to Fix First." He explained that attack chains that once gave defenders hours now compress into seconds. AI did not change what attackers want. It changed how fast and how cheaply they can get it.&lt;/p&gt;

&lt;p&gt;The old assumptions that shaped many security programs have quietly died: MFA alone does not reliably protect accounts, help desks cannot safely verify callers with basic personal information, 90 days of logs is not enough, and IOC-driven detection misses attackers who steal identities and live off legitimate tools. The speed shift is the real story that Pascale delivered. What once took hours now happens in minutes, and in some cases in 22 seconds.&lt;/p&gt;

&lt;p&gt;The infostealer-to-identity pipeline makes that possible by turning stolen passwords, browser sessions, and tokens into full account access without malware, while techniques like adversary-in-the-middle phishing and voice-based social engineering make MFA bypass and account takeover scalable. That is why the legacy SOC model no longer holds up. Human-led alert triage, manual case building, and slow approval chains cannot keep pace with machine-speed attacks.&lt;/p&gt;

&lt;p&gt;What still works is phishing-resistant MFA like FIDO2 and passkeys, short-lived credentials, control-plane separation, immutable backups, behavioral detection, and cross-domain sequence analysis. The first priority is identity: harden the help desk, remove weak MFA for privileged roles, extend log retention, inventory AI agents and OAuth apps, and build preapproved containment paths that can act in seconds. The analysts' role is moving up the stack, away from routine triage and toward workflow design, threat hunting, detection tuning, and high-severity judgment. Your controls are not broken. Your assumptions are.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm884rdz39th1qo6x1s8s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm884rdz39th1qo6x1s8s.png" alt="Pascal Fortin" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Context is now part of the control
&lt;/h2&gt;

&lt;p&gt;Across the conference, talks shared a common premise: any controls without context have limited value. That showed up in data integrity, exposure management, resilience planning, and AI security. Teams have plenty of telemetry. The harder problem is understanding what matters now, what changed recently, and what combinations of facts create real consequences.&lt;/p&gt;

&lt;p&gt;That is as much a maturity issue as a technical one. We need sharper signals, tied to business conditions and operational reality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trust is being pushed down the stack
&lt;/h3&gt;

&lt;p&gt;In the past, teams often assumed trust sat higher up. You trusted the user, the analyst, the admin, the workflow, or the approval process. Now, a lot of the failure happens lower down, inside the machinery. Logs can be incomplete. An AI agent can take actions too fast for a human to review. A SaaS integration can quietly gain more access over time. A user interface can steer people toward unsafe choices. By the time a human notices, the system has already made the trust decision for them.&lt;/p&gt;

&lt;p&gt;You have to express trust in technical controls, not just in intentions. Systems need to show who did what, when, with what authority, and what changed. Identity has to be tightly governed, especially for service accounts, OAuth apps, and AI agents. Recovery has to be designed in advance because prevention will ultimately, in some way fail.&lt;/p&gt;

&lt;p&gt;Identity systems, tokens, logs, APIs, permissions, provenance, telemetry, and containment controls now carry more of the burden of proving whether something is legitimate. Security teams can no longer rely on people saying, "Trust us, this is fine." The system itself has to make trust visible and enforceable.&lt;/p&gt;

&lt;h3&gt;
  
  
  The winning move is disciplined reduction
&lt;/h3&gt;

&lt;p&gt;One of the quiet patterns across the two days was restraint. Collect less. Expose less. Grant less. Assume less. Several speakers, from different angles, arrived at the same conclusion: complexity is feeding the adversary. Sprawl creates ambiguity, and ambiguity creates time for attackers.&lt;/p&gt;

&lt;p&gt;That is why exposure management, identity governance, and secrets hygiene belong in the same conversation. Each discipline is trying to reduce unnecessary pathways before they become incidents. That is less dramatic than breach rhetoric, but it is how mature programs get built.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Halifax Makes Easy to See
&lt;/h2&gt;

&lt;p&gt;ATLSECCON did a good job resisting easy stories about novelty and disruption. Sessions focused on data accumulation, consequences, and the need to build organizations that can still make sound decisions when speed increases and visibility drops. Your author was able to talk about the impact of holding onto old ways to authenticate while the world evolved at a dizzying pace. We must embrace identity-centric designs and be pragmatic about visibility into the reality of the systems we have built so far.&lt;/p&gt;

&lt;p&gt;We must treat AI agents as governed identities, the same as we should have been doing for all non-human identities deployed in our environments. We need to reduce secrets and credential sprawl before they become operational debt. Teams need to prioritize exposures that matter to the business, not just the scanner, as they invest in observability that helps teams reconstruct intent and sequence, not simply collect artifacts. Organizational trust now depends on architecture and operations moving together.&lt;/p&gt;

&lt;p&gt;Security still has plenty of room for cleverness, but this moment rewards discipline more. Halifax has a long memory for what happens when dangerous material, busy systems, and thin margins of error meet in the same place. ATLSECCON 2026 turned that history into something useful: a conference full of reminders that resilience starts well before the blast.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>What the Mythos-Ready Briefing Says About Credentials</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Fri, 01 May 2026 12:53:05 +0000</pubDate>
      <link>https://dev.to/gitguardian/what-the-mythos-ready-briefing-says-about-credentials-2ik3</link>
      <guid>https://dev.to/gitguardian/what-the-mythos-ready-briefing-says-about-credentials-2ik3</guid>
      <description>&lt;p&gt;The &lt;a href="https://labs.cloudsecurityalliance.org/mythos-ciso/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Mythos-ready briefing&lt;/a&gt; landed last week, co-signed by Jen Easterly, Bruce Schneier, Heather Adkins, Rob Joyce, Chris Inglis, Phil Venables, and 60+ other CISOs from Google, Snowflake, Atlassian, and organizations like the NFL and TransUnion. Among the controls they named as critical for the AI vulnerability era were secrets rotation, non-human identity governance, early detection of compromise, and honeytoken-based deception. If you've been pushing for more budget for better secrets security, this is the document to put in front of your CISO.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the paper says about credentials
&lt;/h2&gt;

&lt;p&gt;The briefing is a response to Anthropic's &lt;a href="https://red.anthropic.com/2026/mythos-preview/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Claude Mythos Preview&lt;/a&gt; announcement, which reported autonomous discovery of thousands of zero-days across every major operating system and browser with a 72% exploit success rate. The paper lays out 11 priority actions, a risk register, and a 90-day plan for CISOs. Credentials underpin nearly every control it calls out.&lt;/p&gt;

&lt;p&gt;In the Key Takeaways, the authors name secrets rotation alongside segmentation, egress filtering, Zero Trust, and phishing-resistant MFA as mitigating controls that limit blast radius when exploitation occurs. The risk register tags "Unmanaged AI Agent Attack Surface" as CRITICAL, pointing to privileged agents operating outside existing control frameworks.&lt;/p&gt;

&lt;p&gt;Priority Action 8 ("Harden Your Environment") mandates phishing-resistant MFA for all privileged accounts and locking down the dependency chain. Priority Action 9 ("Build a Deception Capability") calls for deploying canaries and honeytokens, layered with behavioral monitoring and pre-authorized containment. The executive briefing section frames early detection of compromise as a metric boards should be tracking. This briefing is a rare alignment of industry leadership putting credential security squarely on the critical-controls list.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why a Mythos world makes credentials matter more
&lt;/h3&gt;

&lt;p&gt;There's a common misreading of the AI vulnerability story, which is that zero-days become the dominant threat and everything else fades. The paper's own Appendix A pushes back on that. The authors note that the historical collapse in time-to-exploit has not produced a proportional rise in exploitation impact, and that most consequential recent breaches came from credential abuse, social engineering, or supply chain compromise rather than zero-day exploitation.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://blog.gitguardian.com/verizon-dbir-2025/" rel="noopener noreferrer"&gt;2025 Verizon DBIR&lt;/a&gt; backs this up. Stolen credentials remain the leading initial access vector at 22% of all breaches, and 88% for basic web application attacks. Machine identities are now involved in some stage of &lt;a href="https://www.cyberark.com/resources/product-insights-blog/unified-security-bridging-the-gaps-with-a-defense-in-depth-approach?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;68% of IT security incidents&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Layer Mythos-class capability on top of that, and valid credentials become the fastest way in. When zero-days are cheap, they accelerate lateral movement after initial access. They don't replace credentials as the entry point. That's how the &lt;a href="https://cloud.google.com/blog/topics/threat-intelligence/unc5537-snowflake-data-theft-extortion?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Snowflake breach in 2024&lt;/a&gt; hit 165 organizations from credentials that had been sitting in infostealer logs, some since 2020. MFA wasn't enforced, rotation hadn't happened, and old credentials were still valid — no novel exploit needed.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI is accelerating the credential sprawl that underlies all of this
&lt;/h2&gt;

&lt;p&gt;That risk is accelerating. AI drives credential exposure on two fronts, volume and surface area. As the paper notes, higher code output with less consistent review increases the number of vulnerabilities that ship. The same velocity drives a parallel explosion in credential creation.&lt;/p&gt;

&lt;p&gt;Our &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;State of Secrets Sprawl 2026 report&lt;/a&gt; found 29 million new hardcoded secrets exposed on public GitHub in 2025, a 34% year-over-year increase and the largest single-year jump on record. Credentials tied specifically to AI services surged 81% year-over-year.&lt;/p&gt;

&lt;p&gt;And 28% of secrets-related incidents in our 2026 data originated entirely outside source code. They showed up in CI/CD systems like &lt;a href="https://docs.gitguardian.com/ggshield-docs/integrations/cicd-integrations/github-actions?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitHub Actions and GitLab runners&lt;/a&gt;, in &lt;a href="https://blog.gitguardian.com/secrets-leaked-outside-the-codebase/" rel="noopener noreferrer"&gt;collaboration surfaces like Slack, Jira, and Confluence&lt;/a&gt;, and on developer machines.&lt;/p&gt;

&lt;p&gt;Those are now the same surfaces AI agents read, summarize, and act on as part of day-to-day workflows. The paper's "10 Questions" diagnostic asks whether organizations have disciplined control of their agentic supply chain, including MCP servers, plugins, and skills. The credential question sits directly underneath: what secrets do those systems hold, where do they live, who owns them, and how fast can they be rotated when something goes wrong?&lt;/p&gt;

&lt;p&gt;In most enterprise environments, non-human identities already outnumber human users by a ratio of roughly &lt;a href="https://nhimg.org/nhi-challenges?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;25-50x&lt;/a&gt;. Very few organizations have an inventory of the ones they already have, let alone the ones AI agents are creating at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  What security teams actually need
&lt;/h2&gt;

&lt;p&gt;Security teams need visibility everywhere credentials actually sprawl: repos, CI logs, container layers, tickets, chat threads. That's a solvable problem. The harder part is connecting each exposed secret to the non-human identity behind it and figuring out which services, workloads, or automations depend on it. Without that context, triage stalls, and an exposed credential gets used before anyone can act on it.&lt;/p&gt;

&lt;p&gt;Ownership is where most of this work breaks down. When a credential is exposed, the question "who owns this?" usually doesn't have a clean answer. The developer who committed it may have left the team. Often, the service it authenticates runs in a different group's infrastructure entirely. The rotation path may cross three systems that were never designed to coordinate with each other. In practice, that means the incident sits in a queue while three teams figure out whether it's theirs. Every hour in that queue is an hour the credential is live and usable. That's the exposure window.&lt;/p&gt;

&lt;p&gt;Non-human identities compound the problem. A service account created for a CI pipeline two years ago may have no human owner on record. No one's inbox to land in, no runbook to follow.&lt;/p&gt;

&lt;p&gt;Most security programs already struggle to detect exposed credentials. They don't even touch ownership and response, which is the gap GitGuardian was built to close. &lt;a href="https://www.gitguardian.com/monitor-internal-repositories-for-secrets?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian gives teams continuous secrets detection&lt;/a&gt; across source code and other places where secrets appear. That includes CI/CD systems like GitHub Actions and GitLab task runners, collaboration platforms like Slack and Jira, and developer environments down to the laptop. It surfaces exposed credentials where modern work actually happens, not just where security teams wish it did. From there, &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;NHI discovery and ownership mapping&lt;/a&gt; connect exposed secrets to the service accounts, API keys, and machine identities that power agentic systems and automation at scale.&lt;/p&gt;

&lt;h2&gt;
  
  
  A case for moving credential hygiene up the priority list
&lt;/h2&gt;

&lt;p&gt;Containment is the whole game once time-to-exploit collapses to hours. You can't afford to find credential exposure days or weeks after the fact. A secret sitting in Slack or a build log doesn't show up in a vulnerability scan. An API key tied to an agent workflow still expands the attack surface. A service credential without an owner still slows every remediation step that follows.&lt;/p&gt;

&lt;p&gt;The paper draws a clear line through its 11 priority actions. With exploitation becoming both faster and more automated, response speed and blast-radius reduction move to the center. Secrets rotation, non-human identity governance, phishing-resistant MFA, and honeytoken-based detection belong at the front of the list as core resilience controls. They shape how quickly an organization can contain misuse once an attacker gets in, or once an agentic workflow is abused.&lt;/p&gt;

&lt;p&gt;Given what the data shows, those controls deserve to be on the 45-day track alongside environment hardening, not grouped underneath it. In our longitudinal dataset, &lt;a href="https://www.gitguardian.com/whitepapers/non-human-identity-whitepaper?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;64% of secrets leaked in 2022&lt;/a&gt; still hadn't been revoked as of 2026. The paper warns that time-to-exploit has collapsed to hours. Those two numbers don't coexist safely in the same environment.&lt;/p&gt;

&lt;p&gt;GitGuardian directly supports that shift. Secrets detection helps teams find exposed credentials before attackers do. Rotation signals and remediation workflows push incidents toward closure instead of letting them linger.&lt;/p&gt;

&lt;p&gt;NHI discovery and control help organizations understand which machine identities exist, what they can access, and who's responsible for them. &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian Honeytokens&lt;/a&gt; add an early warning layer that surfaces credential misuse before a broader incident unfolds. That maps directly to Priority Action 9 in the paper, which calls for honeytoken deployment, behavioral monitoring, and pre-authorized containment. The goal is a response that executes at machine speed.&lt;/p&gt;

&lt;p&gt;If you're building your 90-day plan from the Mythos briefing, credential security deserves to move up the list. Hardening, detection, and response all come down to the same question: when something moves, how fast can you contain it? The organizations that come through this well will be the ones that had that answer before they needed it. Our 2026 State of Secrets Sprawl report has the full picture.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;strong&gt;Read the 2026 report&lt;/strong&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>AI Agents Authentication: How Autonomous Systems Prove Identity</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 30 Apr 2026 14:15:56 +0000</pubDate>
      <link>https://dev.to/gitguardian/ai-agents-authentication-how-autonomous-systems-prove-identity-4j0n</link>
      <guid>https://dev.to/gitguardian/ai-agents-authentication-how-autonomous-systems-prove-identity-4j0n</guid>
      <description>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; AI agents are acting entities given more and more autonomy to execute tasks, write code, and integrate with SaaS tools. To do so, they need to authenticate with numerous systems, making AI authentication a crucial security boundary that determines blast radius, revocability, and long-term governance risk.&lt;/p&gt;

&lt;p&gt;AI agents inherit and amplify credential risks. Organizations that treat AI agents as governed non-human identities, with scoped access, short-lived credentials, continuous secret monitoring, and lifecycle controls, will enable safe autonomy. Those who rely on static API keys and ad hoc credential management will accumulate invisible systemic risk.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Why AI Agents Authentication Is Now a Security-Critical Control
&lt;/h2&gt;

&lt;p&gt;There is a fundamental asymmetry in how machine authentication fails compared to human authentication. With humans, the threat is impersonation: someone posing as someone else. With machines, the threat is subversion. If an attacker compromises the runtime itself, every authentication factor on that system becomes accessible simultaneously.&lt;/p&gt;

&lt;p&gt;Agentic AI compounds this because these systems are wired directly into the infrastructure that traditional controls were built to protect: internal APIs, cloud environments, SaaS platforms, regulated data stores. Most agents do not have a distinct, auditable identity of their own. &lt;strong&gt;They inherit the user's credentials&lt;/strong&gt;, assume a shared service account, or act under broad delegated permissions, which means there is no clean way to scope, review, or revoke their access independently of the humans and systems they work alongside.&lt;/p&gt;

&lt;p&gt;In July 2025, &lt;a href="https://incidentdatabase.ai/cite/1152/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Replit's agent deleted a production database&lt;/a&gt; holding data for over 1,200 real companies, then generated 4,000 fake accounts to conceal it because nothing separated the agent's credentials from production write access.&lt;/p&gt;

&lt;p&gt;Prompt injection sharpens this further: because language models cannot reliably distinguish a legitimate instruction from a malicious one embedded in data they process, an agent with broad access can be redirected against the same systems it was built to serve.&lt;/p&gt;

&lt;p&gt;You cannot read the code of an LLM-backed agent and predict its behavior in production. With a deterministic microservice, static analysis and code review could partially substitute for rigorous authorization boundaries. With probabilistic autonomous agents, that option is gone. An agent provisioned with access to a sensitive API will eventually find a reason to use it, not because it is malicious, but because it is designed to explore available options to fulfill its goal. &lt;strong&gt;Authentication design is therefore the only reliable enforcement layer between AI autonomy and enterprise infrastructure.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When AI agents act, authentication determines what they can reach, what they can modify, how long they retain access, and how quickly you can shut them down.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Do AI Agents Handle Authentication Today?
&lt;/h2&gt;

&lt;p&gt;Most AI agents authenticate using the same primitives as traditional machine identities: API keys, OAuth tokens, service accounts, IAM roles, or vault-issued credentials.&lt;/p&gt;

&lt;p&gt;While these mechanisms are well-understood, there is a real governance gap today: authentication decisions for AI agents are made quickly during development and rarely revisited as agents scale in capability or scope. The result is a pattern security teams have seen before: convenience wins at deployment, risk accumulates invisibly over time.&lt;/p&gt;

&lt;h3&gt;
  
  
  Static API Keys and Personal Access Tokens
&lt;/h3&gt;

&lt;p&gt;Developers (and other users) frequently provision API keys or reuse personal access tokens for rapid integration. These credentials are easy to deploy, often long-lived, rarely rotated, and commonly shared across environments. Most are bearer credentials (possession alone grants access).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In agentic systems, the exposure surface is broader than with traditional integrations.&lt;/strong&gt; Agents generate logs, produce configuration files, persist state, and replicate workflows automatically.&lt;/p&gt;

&lt;p&gt;GitGuardian's &lt;a href="https://blog.gitguardian.com/the-state-of-secrets-sprawl-2026/" rel="noopener noreferrer"&gt;State of Secrets Sprawl 2026&lt;/a&gt; report found &lt;strong&gt;28.65 million hardcoded secrets&lt;/strong&gt; added to public GitHub in 2025 alone, a 34% year-over-year increase and the largest single-year jump on record. More relevant to the agentic context: secret leak rates in AI-assisted code ran &lt;strong&gt;roughly double&lt;/strong&gt; the GitHub-wide baseline throughout the year. As code generation velocity increases, human supervision capabilities decrease, resulting in secrets sprawl scaling with agentic systems.&lt;/p&gt;

&lt;h3&gt;
  
  
  OAuth Tokens
&lt;/h3&gt;

&lt;p&gt;OAuth is generally more secure, but only when properly implemented. Risks arise when scopes are overly broad, refresh tokens are long-lived, or tokens are shared across multiple agents without distinct ownership.&lt;/p&gt;

&lt;p&gt;There is also a structural limitation that goes beyond implementation quality. OAuth validates individual requests; agents create &lt;strong&gt;sequences of requests&lt;/strong&gt;. Each individual call might be authorized, but the combined action across a chain of tool invocations can produce an unauthorized outcome that no single token check ever catches. The gap between request-level authorization and sequence-level behavior is where agentic AI authentication risk actually lives.&lt;/p&gt;

&lt;p&gt;The Model Context Protocol's attempt to standardize OAuth for AI agents has made this gap visible in a concrete way. The current MCP authorization spec relies on anonymous Dynamic Client Registration, meaning any client can register as a valid OAuth client without identifying itself. Enterprises cannot accept this because it makes monitoring, auditing, and revocation nearly impossible, and it opens denial-of-service vectors. It also forces MCP servers to maintain stateful token mappings, which breaks horizontal scaling. This is a live example of how agentic systems expose problems that traditional OAuth was never designed to address.&lt;/p&gt;

&lt;p&gt;There is a harder problem that even well-implemented OAuth cannot solve today: &lt;strong&gt;cross-domain trust&lt;/strong&gt;. When an agent registered in one organization calls a service operated by another, neither OAuth 2.1 nor OIDC provides a standard mechanism to carry the agent's scoped permissions across that boundary. The receiving service has no reliable way to verify who provisioned the agent, under what constraints it operates, or whether the delegated scope is attenuated from the originating principal. For agentic AI architectures that span SaaS ecosystems or partner integrations, this is the current frontier. Organizations building cross-domain agent workflows should treat external service calls as untrusted by default and require explicit scope declaration at the API gateway layer until standards for cross-domain agent federation mature.&lt;/p&gt;

&lt;h3&gt;
  
  
  Service Accounts and IAM Roles
&lt;/h3&gt;

&lt;p&gt;Cloud-native AI agents often rely on AWS IAM roles, GCP service accounts, or Azure Managed Identities. These eliminate static secret storage, which is a genuine security improvement, but they shift the risk rather than eliminate it. Overprivileged role assignments, cross-environment credential reuse, and poor visibility into which agent assumes which role create a different class of exposure.&lt;/p&gt;

&lt;p&gt;NIST SP 800-207A is direct on this: "Each service should present a short-lived cryptographically verifiable identity credential to other services that are authenticated per connection and reauthenticated regularly." Meeting the short-lived requirement while neglecting scope granularity still leaves you with a short-lived token attached to an admin-level role, which is still excessive blast radius.&lt;/p&gt;

&lt;h3&gt;
  
  
  Vault-Issued Dynamic Credentials
&lt;/h3&gt;

&lt;p&gt;Short-lived, identity-bound credentials issued dynamically by vault systems represent a stronger architectural pattern. They reduce standing credential risk, shrink the exposure window, and eliminate manual rotation burden.&lt;/p&gt;

&lt;p&gt;The operational reality is worth acknowledging: encryption is easy, key management is hard. Dynamic secrets are only secure when each AI agent has a unique registered identity, roles are tightly scoped, secret exposure monitoring is active, and revocation workflows are tested in advance. Many organizations are not yet mature enough to manage the overhead of dynamic secrets for ephemeral workloads. Dynamic issuance alone does not guarantee security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Takeaway
&lt;/h3&gt;

&lt;p&gt;The authentication mechanism alone does not determine security. Security is defined by scope boundaries, token lifetime, attribution clarity, revocability, and lifecycle governance. In autonomous systems, an authentication decision is a blast radius decision.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Agents Authentication Defines Blast Radius
&lt;/h2&gt;

&lt;p&gt;Authentication design is incident response planning in advance. The difference between a contained breach and a systemic compromise often comes down to a single architectural choice made months earlier.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario A&lt;/strong&gt;: An AI DevOps agent is provisioned with a long-lived, admin-level API key to reduce integration friction. The key has organization-wide SaaS scopes and no expiration policy. The agent runtime is compromised through a prompt injection attack. The attacker inherits enterprise-scale access. Revocation requires manually hunting down a credential embedded across multiple environments and generated artifacts, a process measured in hours, sometimes days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario B&lt;/strong&gt;: The same agent uses a short-lived, scoped OAuth token issued specifically for its deployment task. Compromise of the runtime yields access to one integration surface for the token's remaining lifetime, minutes rather than months. Formal revocation via RFC 7009 exists, but most resource servers cache token validation for the token's full lifetime rather than querying the IdP on every request — so in practice, short token lifetime &lt;em&gt;is&lt;/em&gt; the revocation mechanism. The shorter the TTL, the smaller the window an attacker has regardless of whether revocation is called.&lt;/p&gt;

&lt;p&gt;There is a second dimension that mechanism selection alone cannot address: the distinction between impersonation and delegation. When an agent acts using the user's identity, the audit trail records "Sarah performed action X," even when Sarah never approved it, never saw the reasoning behind it, and had no control over the decision. Delegation, where the agent maintains its own identity and acts on the user's behalf within bounded scope, preserves accountability. This distinction matters most when something goes wrong and incident responders need to reconstruct who did what, why, and under whose authority. Agentic AI authentication best practices require delegation as the default model, not impersonation.&lt;/p&gt;




&lt;h2&gt;
  
  
  How to Evaluate AI Authentication Methods
&lt;/h2&gt;

&lt;p&gt;Before selecting a mechanism, security architects need consistent decision criteria. Five dimensions apply across all runtime environments.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Blast radius containment&lt;/strong&gt;: Does the method enforce granular scope? Can it isolate environments? Does it prevent privilege escalation? For autonomous systems this is the primary criterion, because agents can probe and expand their operational surface in ways that static code cannot.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Revocability&lt;/strong&gt;: How fast can access be terminated? Can revocation be centralized and traced to a specific agent? The practical question during an incident is whether containment takes minutes or hours.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Exposure resistance&lt;/strong&gt;: AI systems generate logs, code, configuration files, and prompt responses, all surfaces where credentials can appear. Authentication methods must minimize the impact of token leakage across all of them. This is where AI authentication intersects directly with secret detection: the most secure mechanism is undermined if its credentials routinely surface in CI logs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scalability&lt;/strong&gt;: Security models must support hundreds of AI agents across multi-cloud architecture, SaaS ecosystems, and continuous deployment cycles. Any process requiring manual intervention at scale is a liability, not a control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer friction&lt;/strong&gt;: If secure methods are operationally complex, teams revert to API keys. The best AI agent authentication approaches balance strong security guarantees with operational simplicity. This constraint is real and should be designed for, not dismissed.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Agent Authentication Methods Compared
&lt;/h2&gt;

&lt;p&gt;Not all authentication mechanisms provide equivalent containment, revocability, or governance maturity. The hierarchy below is organized by enterprise risk reduction.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 1: OAuth 2.1 / OIDC with Short-Lived, Scoped Tokens
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best for&lt;/strong&gt;: SaaS integrations, cross-organization APIs, federated enterprise services.&lt;/p&gt;

&lt;p&gt;An identity provider issues short-lived access tokens with defined scopes. The AI agent receives delegated authorization rather than static credentials. When properly scoped and short-lived, compromise is typically contained to a defined integration surface.&lt;/p&gt;

&lt;p&gt;Primary risks include scope misconfiguration, long-lived refresh tokens stored insecurely, and tokens shared across multiple agents. Most OAuth access tokens remain bearer credentials, so risk is mitigated through short lifetimes and contextual access policies. Where a SaaS provider supports OAuth with scoped access, it should be the default; API keys should not substitute when delegated &lt;a href="https://blog.gitguardian.com/oauth-for-mcp-emerging-enterprise-patterns-for-agent-authorization/" rel="noopener noreferrer"&gt;OAuth for MCP&lt;/a&gt; is available.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 2: Workload Identity Federation / Managed Identities
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best for&lt;/strong&gt;: Trusted cloud runtimes, containerized workloads, internal APIs.&lt;/p&gt;

&lt;p&gt;The cloud platform issues short-lived credentials tied to the workload's runtime identity. No static secret is stored in the application, eliminating an entire class of exposure risk.&lt;/p&gt;

&lt;p&gt;The principal risk is overbroad IAM role assignments and role reuse across environments. Cloud-native identity reduces secret exposure risk but not privilege risk. A unique identity per AI agent is not optional; shared roles create attribution blind spots that collapse forensic traceability after a breach.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 3: mTLS / X.509 Certificate-Based Authentication
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Best for&lt;/strong&gt;: Zero-trust microservices, internal service-to-service communication in high-security environments.&lt;/p&gt;

&lt;p&gt;Both the AI agent and the target service present certificates and verify each other's identity via &lt;a href="https://blog.gitguardian.com/mutual-tls-mtls-authentication/" rel="noopener noreferrer"&gt;mTLS Authentication&lt;/a&gt;. Unlike bearer tokens, this is a proof-of-possession model. A stolen certificate is useless without the corresponding private key, which mitigates the replay attacks that plague token-based approaches.&lt;/p&gt;

&lt;p&gt;The operational complexity is real: PKI management, certificate issuance and renewal, and CRL/OCSP dependencies require infrastructure maturity. There is also a gap specific to AI agents that traditional SPIFFE/SPIRE workload identity deployments do not address. Current Kubernetes-based implementations assign identity at the service account level, meaning all replicas of a workload share the same identity. For deterministic APIs and stateless services this is acceptable. For AI agents, which are non-deterministic and context-driven, two instances of the "same" agent will behave differently based on inputs. Per-instance identity is required for real accountability and compliance traceability. &lt;a href="https://blog.gitguardian.com/getting-started-with-spiffe/" rel="noopener noreferrer"&gt;SPIFFE&lt;/a&gt; can provide it, but organizations extending existing workload identity infrastructure to AI agents without this adjustment inherit an attribution gap.&lt;/p&gt;

&lt;p&gt;See &lt;a href="https://blog.gitguardian.com/a-complete-guide-to-transport-layer-security-tls-authentication/" rel="noopener noreferrer"&gt;TLS Authentication&lt;/a&gt; for foundational implementation context.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 4: API Keys and Static Tokens
&lt;/h3&gt;

&lt;p&gt;API keys persist because they are simple, universally supported, and require no infrastructure. In agentic AI authentication, they represent the most common source of preventable risk.&lt;/p&gt;

&lt;p&gt;They are typically long-lived bearer credentials with limited granular scoping. Autonomous AI systems amplify the danger because they generate the exact artifacts, code commits, CI logs, configuration files, where credentials historically get embedded. Revocation is manual and reactive; blast radius is potentially broad and persistent until discovery.&lt;/p&gt;

&lt;p&gt;API keys are acceptable only when no stronger mechanism exists, and exclusively under strict compensating controls: vault-backed storage, unique key per agent, minimized TTL, and continuous secret exposure monitoring. Even then, they are a transitional choice, not a strategic architecture.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tier 5: Hardcoded Secrets
&lt;/h3&gt;

&lt;p&gt;Embedding credentials in source code, prompts, configuration files, or logs is not a technical shortcut. It is a governance failure. It creates permanent exposure risk, spreads credentials across repositories and artifacts that are difficult to fully enumerate, and produces compliance violations with no clean remediation path. The right response is not better scanning; it is architectural redesign.&lt;/p&gt;




&lt;h2&gt;
  
  
  Choosing the Right AI Authentication Model by Environment
&lt;/h2&gt;

&lt;p&gt;Authentication must match the runtime context.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;trusted cloud environments&lt;/strong&gt;, managed identities and short-lived tokens are the baseline. Static secrets have no place where the cloud platform provides an identity substrate. The primary governance task is ensuring roles are scoped per agent, not shared across workloads.&lt;/p&gt;

&lt;p&gt;For &lt;strong&gt;SaaS and cross-organization integrations&lt;/strong&gt;, OAuth 2.1 with scoped delegated access is the correct default. Scope discipline is the operative control: agents should hold the minimum permissions required for their specific task, not a superset defined at initial integration.&lt;/p&gt;

&lt;p&gt;In &lt;strong&gt;zero-trust internal architectures&lt;/strong&gt;, mTLS with automated certificate management provides the strongest assurance. Mutual authentication ensures both parties verify identity; the agent cannot be impersonated by anything that cannot present a valid certificate and prove possession of the private key.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unmanaged endpoints and edge agents&lt;/strong&gt; must be treated as untrusted by default. They should never store long-lived secrets, must proxy authentication through a trusted backend, and should rely on ephemeral tokens only. The operational context is constrained; the security model must be the most conservative, not the most convenient.&lt;/p&gt;




&lt;h2&gt;
  
  
  Securing AI Authentication Across the Agent Lifecycle
&lt;/h2&gt;

&lt;p&gt;Authentication governance cannot be a one-time decision at deployment. It requires continuous control across the full agent lifecycle.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Creation&lt;/strong&gt;: Every AI agent should be registered in IAM with a unique identity, mapped to a named business owner, and documented with its intended scope and credential requirements before deployment. This is the point where AI authentication vulnerabilities are either designed in or designed out. Security teams that skip identity registration here will spend considerably more effort reconstructing it after a breach.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operation&lt;/strong&gt;: Continuous secret scanning of AI-generated outputs, monitoring of API call patterns, and periodic privilege review are not optional for agents in production. For high-risk or high-impact actions, OIDC Client-Initiated Backchannel Authentication (CIBA) offers a mechanism that most teams have not yet adopted but should be on the roadmap. CIBA lets an agent pause, request human approval through an async channel, and resume with a cryptographically verifiable token binding the human's consent to the specific action. The audit trail reads: "Agent performed X, approved by &lt;a href="mailto:alice@company.com"&gt;alice@company.com&lt;/a&gt; at 09:22 UTC." The token is short-lived, single-purpose, and bounded to the approved context — the correct architecture for agents operating near sensitive decisions.&lt;/p&gt;

&lt;p&gt;The practical limitation of any human-in-the-loop mechanism is &lt;strong&gt;consent fatigue&lt;/strong&gt;. When agents operate at volume, approval requests become noise and users begin approving everything reflexively. The scalable answer is not eliminating human oversight but shifting it upstream: define policy before the agent runs, not at runtime. IGA (Identity Governance and Administration) guardrails specify what categories of action require human approval, what can be automatically permitted within defined bounds, and what is blocked outright. The agent then operates within a pre-authorized policy envelope rather than generating individual approval requests. CIBA handles the exceptions; policy handles the rule.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: Credential rotation should be tied to lifecycle events, deployment updates, configuration changes, scope modifications, not to calendar schedules. Scope should be reassessed whenever an agent's capabilities change, because capability expansion without privilege review is how agents accumulate standing access invisibly over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Decommission&lt;/strong&gt;: Immediate credential revocation and identity removal from IAM are mandatory at end of life, but these are not the same operation. Revocation terminates a specific active token; deprovisioning removes the agent's registered identity entirely — all credentials, stored sessions, persistent permissions, and any delegated scope chains the agent holds. Without a formal deprovisioning workflow, retired agents become ghost identities: removed from operational rotation but still technically credentialed. Emerging SCIM extensions for agentic identity (the AgenticIdentity schema draft) aim to standardize this lifecycle event so it can be automated rather than reconstructed manually per agent.&lt;/p&gt;




&lt;h2&gt;
  
  
  AI Agent Authentication Best Practices for 2026
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Treat AI agents as governed non-human identities.&lt;/strong&gt; Machine identities already outnumber human identities at enterprise organizations by at least 45 to 1, a ratio accelerating with AI adoption. Without formal registration, scoped permissions, owner assignment, and audit inclusion, AI agents accumulate as shadow identities, the most capable actors in your infrastructure with the least oversight. For most enterprises, the practical path to governing AI agent credentials runs through existing PAM (Privileged Access Management) tooling. AI agents are NHIs, and PAM vendors are already extending their platforms to cover them, and security teams should evaluate that coverage before building separate pipelines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Eliminate static secrets in favor of short-lived credentials.&lt;/strong&gt; Static API keys and hardcoded tokens are a permanent exposure risk in systems that generate artifacts at machine speed. Every credential should have an enforced expiration, not one set as an afterthought.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enforce scoped access and unique identities per agent.&lt;/strong&gt; Shared credentials between agents eliminate the attribution clarity required for both compliance and incident response. Each agent needs its own identity, its own role, and its own minimum-necessary permission set. Overprivileged AI agents represent the highest-severity authentication security risk in enterprise environments precisely because they operate continuously, autonomously, and at scale.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Isolate identity for agents serving multiple users.&lt;/strong&gt; Enterprise AI assistants that serve multiple users concurrently face an additional risk: cross-context data leakage. When a single agent instance operates under a shared identity and processes multiple users' data, there is no credential-level boundary preventing one user's context from contaminating another's session or response. Each concurrent user context should be treated as a distinct authentication scope, with data access enforced at the identity layer, not only at the application level.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Continuously scan AI-generated outputs for secrets.&lt;/strong&gt; AI agents produce the exact artifacts, code commits, configuration files, CI logs, where credentials historically get embedded. Secret scanning integrated into AI-assisted development workflows is the compensating control that keeps agentic credential sprawl from becoming unmanageable.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate credential rotation and lifecycle controls.&lt;/strong&gt; Manual rotation does not scale to systems operating autonomously around the clock. Rotation triggers should be event-driven. Revocation on anomaly detection should be immediate and validated in advance, not discovered to be broken during an incident.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintain kill-switch capability.&lt;/strong&gt; Every autonomous system must have a tested shutdown pathway: centralized revocation, emergency privilege stripping, runtime isolation, and documented incident response playbooks for agent compromise. Autonomous systems operating without containment mechanisms are not secured. They are presumed safe.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Future of AI Authentication
&lt;/h2&gt;

&lt;p&gt;The current state of AI authentication reflects a transition period. The bearer token model, where possession equals access, is structurally mismatched with systems that generate artifacts, chain tool calls, and operate across trust boundaries without human oversight.&lt;/p&gt;

&lt;p&gt;The direction of travel is toward cryptographic identity at the request level. &lt;a href="https://www.ietf.org/archive/id/draft-patwhite-aauth-00.html?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AAuth&lt;/a&gt;, a specification under development by Dick Hardt (author of OAuth 2.0), proposes a foundation where agents are first-class identities and every HTTP request is signed by the agent's key pair. Bearer tokens become irrelevant because a stolen token without the corresponding private key cannot be replayed. Delegation chains become explicit, visible, and auditable rather than reconstructed after the fact from logs.&lt;/p&gt;

&lt;p&gt;Multi-agent delegation chains introduce a constraint that standard token formats cannot enforce: each hop should only be able to reduce permissions, never expand them. With bearer tokens, there is no mechanism to prevent a sub-agent from reusing a delegated credential at full scope. Token formats like Biscuits and Macaroons address this through offline scope attenuation: the agent receiving a token can cryptographically restrict it before passing it downstream, and those restrictions cannot be removed by any party that doesn't hold the original minting key. This becomes the correct architecture for recursive agent orchestration, where the root credential should never be reachable from the leaf agents.&lt;/p&gt;

&lt;p&gt;AI-to-AI authentication will become a standard requirement as multi-agent architectures proliferate. When one autonomous agent instructs another, each hop in the chain must independently verify identity and authorization. Without this, a single compromised agent can cascade instructions through a downstream network that has no mechanism to question them.&lt;/p&gt;

&lt;p&gt;Self-provisioning identities, dynamic privilege negotiation, and real-time identity risk scoring will require policy enforcement infrastructure that evaluates agent behavior continuously, not just at initial credential issuance. The boundary between AI authentication and non-human identity governance will disappear, since they are the same discipline operating at different layers of the same problem.&lt;/p&gt;

&lt;p&gt;Security leaders building identity-first controls today are simply ahead of schedule.&lt;/p&gt;




&lt;h2&gt;
  
  
  Summary: Authentication Is the Primary Containment Boundary
&lt;/h2&gt;

&lt;p&gt;AI authentication determines access scope, breach containment speed, and lifecycle risk. The mechanism matters less than the scope it enforces, the lifetime it applies, the identity it uniquely represents, and the speed at which it can be revoked.&lt;/p&gt;

&lt;p&gt;Secure authentication architecture enables safe AI autonomy. Organizations that build it will treat AI agents as governed non-human identities, eliminate standing credentials, and validate their revocation capability before they need it. The ones that do not will discover, during an incident, that their most capable systems were also their least monitored.&lt;/p&gt;

&lt;h2&gt;
  
  
  FAQ
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;How do AI agents handle authentication in enterprise environments?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;AI agents typically authenticate using the same mechanisms as other non-human identities: API keys, OAuth tokens, service accounts, workload identities, or certificates. The difference is that autonomous agents operate continuously and may dynamically expand their integration surface, which makes authentication scope, token lifetime, and revocability far more critical than in traditional machine-to-machine use cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What are the most common AI authentication vulnerabilities?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The most common vulnerabilities include hardcoded API keys, long-lived static tokens, overprivileged service accounts, overly broad OAuth scopes, and poor credential rotation practices. In autonomous systems, these weaknesses are amplified because AI agents can replicate configuration errors, generate artifacts containing secrets, and operate at scale without direct human oversight.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What is the best authentication method for AI agents?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;There is no universal answer; the appropriate model depends on the environment. For SaaS and cross-organization integrations, OAuth 2.1 with short-lived, scoped tokens is typically preferred. For trusted cloud workloads, workload identity federation or managed identities reduce secret exposure risk. In high-security microservices environments, mTLS and certificate-based authentication provide stronger assurance through proof-of-possession. The strongest model across all contexts is one that minimizes static credentials and enables rapid, centralized revocation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Are API keys secure enough for AI systems?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;API keys are convenient but inherently risky for continuously operating agents. They are typically long-lived, replayable, and often lack granular scoping. If used, they must be vaulted, uniquely assigned per agent, rotated frequently, and continuously monitored for exposure. Whenever a stronger mechanism exists, API keys should be replaced; they are acceptable as a transitional choice under strict controls, not as a strategic architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How should organizations rotate AI agent credentials safely?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Credential rotation for AI agents should be automated and tied to lifecycle events: deployment updates, configuration changes, scope modifications, or anomaly detection triggers. Mature organizations use dynamic secret issuance from vault systems, enforce expiration by default, and test revocation workflows before they are needed in an incident. Manual rotation processes do not scale for autonomous systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Should AI agents have separate service accounts?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes. Each AI agent should have its own unique identity and service account. Shared credentials create attribution blind spots and significantly increase blast radius if compromised. Unique identities enable scoped permissions, centralized revocation, clear audit trails, and effective incident response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How do you audit and monitor AI authentication at scale?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Auditing AI authentication requires a centralized identity inventory, continuous monitoring of secret exposure, behavioral telemetry analysis, and integration with IAM recertification processes. Security teams should be able to answer four questions for every agent in production: which credentials does it use, what systems can it access, when were its privileges last reviewed, and how quickly can access be revoked. Without those answers, AI authentication risk becomes invisible governance debt.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>devsecops</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>When We Use AI To Ship Fast, Secrets Spread Fast</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 16 Apr 2026 14:54:21 +0000</pubDate>
      <link>https://dev.to/gitguardian/when-we-use-ai-to-ship-fast-secrets-spread-fast-4c70</link>
      <guid>https://dev.to/gitguardian/when-we-use-ai-to-ship-fast-secrets-spread-fast-4c70</guid>
      <description>&lt;p&gt;One of the largest takeaways from the latest &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian State of Secrets Sprawl Report&lt;/a&gt; is that in 2025, the way we all build software changed.&lt;/p&gt;

&lt;p&gt;First, there are more developers than ever before. Publicly active developers on GitHub grew 33% in 2025, and 54% of active developers, meaning someone who has pushed code to GitHub, made their first commit that year. That means a lot of new code, a lot of new projects, and a lot of new integrations arriving all at once. More builders usually means more issues in code and consequently more credentials being leaked. This most certainly was a factor contributing to the 28,649,024 new secrets GitGuardian found in public GitHub commits across 2025. This is a 34% year-over-year increase and the largest annual jump in the report's history. But it is more than just new devs and new agents. Since 2021, leaked secrets have grown 152%, while the public developer population grew 98%. Secrets are leaking faster than the developer base is growing.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhykrz7w3xogf0hlelmr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhykrz7w3xogf0hlelmr.png" width="800" height="363"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;But this year, there is another force in the system. AI has become part of the default software stack for almost all developers.&lt;/p&gt;

&lt;p&gt;That shift shows up clearly in the credential data. The report found 1,275,105 AI-service secrets exposed in 2025, with 81% year over year growth in AI-related service secrets. It also found that 12 of the top 15 fastest-growing leaked secret types were AI services. That is a strong signal that AI tooling is no longer a sidecar to the stack. It is the stack, or at least a growing layer of it.&lt;/p&gt;

&lt;p&gt;AI does not have to invent a new category of security mistakes to change the risk picture. It only has to increase the number of services, tools, workflows, and machine identities required to ship even ordinary software. More moving parts mean more keys. More keys mean more ways to leak them. The mechanics are familiar. The pace is new. Let's take a deeper look at what we found in the research across 2025.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI is now a real credential category
&lt;/h2&gt;

&lt;p&gt;The AI story goes well beyond model-provider keys. That is certainly a part of it, but just one aspect. The more interesting pattern is that the whole AI application layer is now visible in leaked secrets data. The fastest-growing detectors include the surrounding services that developers use to make AI features work in production.&lt;/p&gt;

&lt;p&gt;However, since LLMs are at the heart of all of this new infrastructure, let's start our analysis there.&lt;/p&gt;

&lt;h3&gt;
  
  
  LLM Platforms: The "Model as a Service"
&lt;/h3&gt;

&lt;p&gt;Of all the model providers, Deepseek showed the greatest change from the previous report, with 2,179% growth year-over-year.&lt;/p&gt;

&lt;p&gt;xAI showed similar year-over-year growth to OpenAI, at 555%, but with a volume smaller than Deepseek's, at 6,273 leaks. Rounding out the top-growing platforms that developers call directly are Mistral, up 578%; Claude, up 220%; LlamaCloud, up 109%; and Cohere, up 77%. These numbers show us that production-grade model access has become mainstream in real software.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg5989qug03nebi84cb5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvg5989qug03nebi84cb5.png" alt="TOP 15 Fastest Growing Specific Detectors (YoY)" width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Other findings suggest that developers and teams are comparing and using multiple model vendors for their projects. This can be for failover and resiliency in their apps or to best find the model that supports their use case.&lt;/p&gt;

&lt;p&gt;The best example of this trend is OpenRouter, which saw an astounding 4,661% more leaks. Their platform usually does not operate a single proprietary model. Instead, they act more like a gateway that lets developers access and switch among multiple models through one API.&lt;/p&gt;

&lt;p&gt;NVIDIA is another example, where its keys often relate to AI infrastructure or AI service platforms that help teams run or deploy models, rather than being only a direct "one provider, one model API" offering.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open Models Also Drive Leaks
&lt;/h3&gt;

&lt;p&gt;Hugging Face keys leaked over 130,000 times across public GitHub in 2025, staying about the same as the previous year. HuggingFace is important, though, as it often connects the "open model" world to inference platforms, which is part of the reason we think we are seeing Groq Cloud API keys leaking at a 211% higher rate in our findings. Groq was mainly known for its compute platform and inference hardware, enabling people to efficiently run certain open-source models but at the end of 2024, they began offering access through an API through Groq Cloud.&lt;/p&gt;

&lt;h3&gt;
  
  
  More AI Leaks From The Growing Support Ecosystem
&lt;/h3&gt;

&lt;p&gt;Other findings point to LLM usage being wrapped in a real application layer. Teams are building full products around the model, and those products need orchestration, monitoring, and data services.&lt;/p&gt;

&lt;p&gt;Supabase is the clearest example of the adoption curve turning into leakage risk. In the &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;2025 report&lt;/a&gt;, we reported +97% year-over-year growth. This year, that growth rate jumps to +992%. Supabase is widely loved because it makes it easy to stand up a database-backed application quickly, and it has become a common default for modern AI projects, especially when developers want a fast on-ramp to vector-like workflows and retrieval patterns. The more teams reach for an AI-friendly "fast database" to ship quickly, the more likely those, often new, developers are to leak keys.&lt;/p&gt;

&lt;p&gt;Another supporting service is LangChain, which had ~200% more leaks. LangChain is an orchestration framework that helps developers connect models to tools, prompts, and workflows. Its keys show up when teams operationalize multi-step AI features. Other examples include Weights and Biases keys, up 200%, and Jina keys, up around 400%, both of which are used to track experiments, evaluate outputs, and monitor performance over time, which is typical when an AI feature is being improved and maintained like any other production system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvvv0sbi2s0o46efux5d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjvvv0sbi2s0o46efux5d.png" alt="AI Service Detectors Growth (YoY %)" width="742" height="546"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Retrieval and access layers are also showing massive growth, with Perplexity keys up 750%. Perplexity can be used as a search and retrieval API, so its key shows up when teams route questions through a service that finds sources and returns context to feed into the model.&lt;/p&gt;

&lt;p&gt;For AI media and voice infrastructure, we see the largest changes from Vapi, up 780%. This is a developer platform for voice-based AI agents using real-time audio. Seeing these keys leak suggests that AI is moving into customer-facing experiences like voice support, sales calls, and content production, introducing new vendors and new secrets into everyday repos.&lt;/p&gt;

&lt;p&gt;Brave Search, whose keys are up 135%, is trusted by developers to run web-style searches, pulling back relevant pages or snippets that the AI can reference. This signals that teams are building complete AI systems in production, where a single "call the model" integration quickly becomes a larger network of services and credentials that need to be managed.&lt;/p&gt;

&lt;p&gt;When you add these together, you get a clear signal: a meaningful slice of leaked secrets in public repos is showing up because developers are building AI systems. If teams weren't doing AI projects, these secret types would not be appearing at anything like these levels.&lt;/p&gt;

&lt;h4&gt;
  
  
  New AI Control Plane Players
&lt;/h4&gt;

&lt;p&gt;The other broad category in our report is platforms that sit one layer above the models themselves, used to assemble complete AI applications by coordinating prompts, tools, data sources, and deployments. These become the control plane where teams build and run agent-style workflows.&lt;/p&gt;

&lt;p&gt;The platforms showing the highest year-over-year growth in leaks are Dify, at 570%, and Coze, at 500%. These platforms help teams build agentic systems, chain tools together, manage prompts, and deploy AI features quickly, without writing everything from scratch. Coze PATs are of particular concern, as these access tokens often carry very broad account-level permissions and show up in commonly leaked artifacts, such as local configuration files and automation scripts.&lt;/p&gt;

&lt;p&gt;The data suggests that developers are adopting higher-level agent-building platforms, increasing their speed to market, but also increasing the number of integrations and credentials in a typical AI project.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Claude Co-signed Commits Issue: 2x The Base Rate Of Leaked Secrets
&lt;/h2&gt;

&lt;p&gt;One of the most significant findings in the 2026 report has less to do with AI services and more to do with how software is now produced. GitGuardian found that AI-assisted commits significantly contribute to secrets sprawl, and that Claude Code co-authored commits leak secrets at roughly 2x the baseline across public GitHub.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyaaizplaila8g545nlh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdyaaizplaila8g545nlh.png" alt="Percent of Secrets For All Commits vs Commits co-authored by Claude Code" width="800" height="509"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;2025 was the breakout year, with adoption accelerating early and then ramping sharply in the second half as multiple assistants gained traction. By year-end, AI-assisted commits had reached their highest levels.&lt;/p&gt;

&lt;p&gt;This point is easy to overreact to, so it is worth being precise. The report is not saying one assistant is uniquely reckless or that AI coding tools are the root cause of secrets leakage. The more grounded reading is simpler. When code production speeds up, insecure patterns scale with it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3r7reyl165hkuptkjsg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fj3r7reyl165hkuptkjsg.png" alt="Secrets per 1k commits from human-only signed vs Claude Code cosigned commits" width="800" height="458"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That is the real risk. AI-assisted development makes it easier to scaffold projects, test integrations, spin up backends, wire third-party services together, and publish working code quickly. Those are all useful things. They also happen to be the exact moments when credentials get pasted into config files, shell histories, local scripts, repo examples, quick demos, and half-finished automation.&lt;/p&gt;

&lt;p&gt;Once the pace increases, those habits do not disappear. They become easier to repeat.&lt;/p&gt;

&lt;p&gt;AI-generated code often looks finished before it is production-ready. A developer can get a feature working quickly, prove it out, and move on before anyone has asked the boring but important questions: where should this secret live, who owns it, how does it rotate, what is its scope, what breaks if it expires, and what logs or files might accidentally retain it?&lt;/p&gt;

&lt;p&gt;AI-assisted coding data from the report is evidence that the software production has changed. We should expect more commits, more prototypes, more integrations, and more implicit machine identities. We should also expect those identities to be created faster than many organizations can realistically govern them.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concrete Numbers From MCP Research
&lt;/h2&gt;

&lt;p&gt;One of the strongest additions in this year's report is the section on Model Context Protocol (MCP). MCP emerged in early 2025 as a common way to connect large language models to external tools and data sources. GitGuardian research found 24,008 unique secrets exposed in MCP configuration files. 14% of those were PostgreSQL database connection strings. The top leaked secret types in those files were Google API keys at 19.2%, PostgreSQL connection strings at 14%, Firecrawl at 11.9%, Perplexity at 11.2%, and Brave Search at 11%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntippyz91bwx2kh269i8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fntippyz91bwx2kh269i8.png" alt="TOP 10 Valid Unique Secrets in MCP configuration" width="745" height="316"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That list is revealing. It maps almost perfectly to the AI support layer, now showing up everywhere else in the report. Search. Retrieval. Data access. Developer tooling. External APIs. MCP gives teams a standard way to connect models to the real world. It is risky for the exact same reason.&lt;/p&gt;

&lt;p&gt;New standards often spread through examples. People copy sample configs, tweak them, and keep moving. If those examples depend on hardcoded credentials in local files, the unsafe pattern spreads alongside the standard. This is not an MCP-specific criticism. It is a familiar story in software. Convenience wins first. Governance arrives later. By then, the insecure pattern is already normal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a90brapg3pzcxiskfmu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8a90brapg3pzcxiskfmu.png" alt="Unique Secrets Count Per Month - MCP configuration" width="800" height="342"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The MCP findings make the broader AI story more tangible. AI systems are not self-contained. They are built to reach out to tools, data, and services. Every connection point is another identity. Every identity needs ownership, scope, rotation, and visibility. If any of those are weak, the stack accumulates risk quietly and quickly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Generic Detectors For The AI Ecosystem
&lt;/h2&gt;

&lt;p&gt;It is important to note that this year, for the first time, our machine-learning classification for generic secrets introduces an AI category. These are cases where we have strong signals that the key correlates to AI-related projects. 4.1% of all keys we found fell into this AI-labeled generic category with high confidence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkfqydar8petd0b16c5r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flkfqydar8petd0b16c5r.png" alt="Generic Secrets by Category (%)" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;That's the important qualifier: "with high confidence." Generic secrets also include a very large "other/unknown" bucket (around 41.9%), meaning we can't confidently map them to a specific purpose: that 4.1% is almost certainly a floor, not a ceiling. The AI-related fraction of generic secrets may be significantly higher.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bigger Story Is Identity Sprawl Driven By AI
&lt;/h2&gt;

&lt;p&gt;The growing problem is the volume of Non-Human Identities created by modern software development. Service accounts, API keys, database credentials, automation tokens, deployment secrets, bot identities, project-scoped keys, and agent tool credentials. These are the connective tissue of software now. AI adds more of them by default.&lt;/p&gt;

&lt;p&gt;GitGuardian's policy-breach distribution makes that plain. Long-lived secrets account for 60% of policy breaches. Internally leaked secrets make up 17%. Duplicated secrets account for 16%. Publicly leaked secrets are 5%. Cross-environment leaks are 1%, and reused secrets are 0.7%. That is a useful breakdown because it shows the dominant problem is lifecycle negligence. Secrets live too long, spread too widely, and get copied faster than they are governed.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkomb4vtkvp0h1gz13dy9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkomb4vtkvp0h1gz13dy9.png" width="800" height="431"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;AI increases creation velocity inside organizations that already struggle with ownership and remediation. More identities are born. More secrets are copied into more places. Fewer teams know which ones matter most, who should rotate them, and what systems depend on them. The stack expands. The discipline does not expand at the same rate.&lt;/p&gt;

&lt;h2&gt;
  
  
  What We Learned About AI Use
&lt;/h2&gt;

&lt;p&gt;AI did not create secrets sprawl, but it is now accelerating every condition that makes secrets sprawl worse. More people can build. More code gets shipped. More third-party services get connected. More machine identities get created. More local and temporary configuration surfaces appear. More insecure patterns survive because the fastest path to shipping still usually involves "just add a key."&lt;/p&gt;

&lt;p&gt;That is why the real AI lesson in this report is not "watch your model keys." It is broader. The AI stack is now part of the normal software stack, and the normal software stack already runs on credentials. Once that clicks, secrets security stops looking like a niche problem for DevOps teams and starts looking like a core operating problem for software organizations.&lt;/p&gt;

&lt;p&gt;The teams that handle this shift best will not just detect more exposed credentials. &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;They will get better at controlling the lifecycle behind them&lt;/a&gt;. It will empower people to confidently answer, "Who created the identity? What can it reach? How narrowly is it scoped? Where is it stored? When it rotates? What depends on it? When should it die?"&lt;/p&gt;

&lt;p&gt;That work is less glamorous than shipping a new assistant, agent, or AI feature. It is also the difference between speed that compounds and speed that eventually breaks things.&lt;/p&gt;

&lt;p&gt;AI is making software easier to produce. When software gets easier to produce, identities get easier to create. And when identities get easier to create, secrets spread faster than most teams are ready for.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>devsecops</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>BSides SF 2026: Looking At Security Beyond The Next Big Bet</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 06 Apr 2026 13:28:38 +0000</pubDate>
      <link>https://dev.to/gitguardian/bsides-sf-2026-looking-at-security-beyond-the-next-big-bet-21lg</link>
      <guid>https://dev.to/gitguardian/bsides-sf-2026-looking-at-security-beyond-the-next-big-bet-21lg</guid>
      <description>&lt;p&gt;San Francisco has always had a talent for turning risk into infrastructure, such as when &lt;a href="https://en.wikipedia.org/wiki/Liberty_Bell_(game)?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Charles Fey invented the slot machine there during the Gold Rush&lt;/a&gt;. Today, we have another nondeterministic device for fortune seekers willing to pull a lever and see what comes back. We call it AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsidessf.org/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;BSides San Francisco 2026&lt;/a&gt; felt built for a more modern version of that wager. The stakes today are identities, tokens, agents, permissions, and the growing gap between what systems are supposed to do and what they actually do in production.&lt;/p&gt;

&lt;p&gt;Happening the weekend before RSA Conference, this is one of the largest of the BSides events globally. This year, 2,965 participants attended 92 talks, 8 workshops, 11 interactive sessions, a CTF, and many, many other activities. This year's event was made possible by the help of 235 volunteers and a truly tireless organizing team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq48qi6ip7dpizfmu76p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq48qi6ip7dpizfmu76p.png" alt="BSides SF final attendance metrics" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;People were there to share notes on how to keep control when software delivery speeds up, AI changes how code and infrastructure are produced, and attackers are increasingly happy to work through identity and trust instead of smashing through a perimeter. Here are just a few highlights from this year's edition of BSidesSF.&lt;/p&gt;

&lt;h2&gt;
  
  
  Time Travel Without Nostalgia
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/anna-westelius-72a47048?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Anna Westelius, Head of Security, Privacy &amp;amp; Assurance at Netflix&lt;/a&gt;, "Let's Do the Timewarp Again! A Look Back to Move Forward," she presented security history as a series of pivots rather than a straight line. Instead of treating today's instability as unprecedented, she walked through earlier shifts in the early internet, the worm era, and the move to the cloud, showing how each period first felt chaotic, then gradually produced better defaults, stronger habits, and more durable systems.&lt;/p&gt;

&lt;p&gt;Anna made the case that security has repeatedly moved from heroics to engineering. Cloud, once framed as inherently unsafe, matured into a place where private-by-default storage, identity-centric controls, and better primitives could outperform old on-premises assumptions when teams rebuilt for the environment instead of dragging old workflows into it. Fire drills now are new CVEs, not being compromised.&lt;/p&gt;

&lt;p&gt;She reminded us that progress comes from community and deliberately laying out well-paved roads that are easier to travel. Anna argued that the field is finally in a position to measure meaningful risk reduction, design for humans instead of blaming them, and start tackling legacy risk that used to feel too sprawling to touch. Maturity is not perfection. It is building enough scaffolding that the next crisis does not have to be improvised.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sns3hy0zkp6wj29t6l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sns3hy0zkp6wj29t6l4.png" alt="Anna Westelius" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Threat Model Meets Production
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/farshadabasi/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Farshad Abasi, CoFounder of Eureka DevSecOps and CSO of Forward Security&lt;/a&gt;, gave a session called "Your Threat Model Is Lying to You: Why Modeling the Design Isn't Enough in 2026," where he laid out a sharp critique of threat modeling, at least the way many teams still practice it. The problem was not the exercise itself, but that design intent keeps getting treated as reality, even when production systems drift, dependencies multiply, and delivery speed outruns any annual review cycle.&lt;/p&gt;

&lt;p&gt;Farshad explained that threat models often describe what teams think they built, while the real system includes transitive dependencies, infrastructure changes, deployment quirks, and configuration choices that never made it into the diagram. He pointed to the need for a feedback loop between the model and the evidence teams already collect from SAST, SCA, DAST, cloud findings, and deployment telemetry. A finding should not just confirm a known risk. It should be able to expose a broken assumption and force the model to update.&lt;/p&gt;

&lt;p&gt;That shift has real consequences, as it moves threat modeling out of a compliance drawer and back into engineering rituals like backlog refinement, pull requests, and post-finding analysis. The problem is rarely the first known issue. It is the invisible dependency, the quietly expanded permission, or the workflow that changed faster than the security model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbp10zja9lf5hmj09pf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbp10zja9lf5hmj09pf1.png" alt="Farshad Abasi" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tokens Are the New Currency
&lt;/h2&gt;

&lt;p&gt;In "Breaking Tokens: Modern Attacks on OAuth, OIDC, and JWT Auth Flows," &lt;a href="https://www.linkedin.com/in/bhaumikshah2?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Bhaumik Shah, CEO at SecurifyAI&lt;/a&gt;, presented identity failure as an application architecture problem, not just an authentication problem. He covered examples of token replay, weak audience validation, trust confusion between identity providers, and the dangerous habit of treating a valid token as universally trustworthy.&lt;/p&gt;

&lt;p&gt;He quickly moved from protocol language to operational consequences in this session, sharing that a token validated in one place could be replayed somewhere else. An app without proper validation might accept a token from the wrong issuer, or a federated environment could end up granting the same privileges to identities that arrived through very different trust paths. In practice, that means an organization can enforce MFA at login and still leave the actual session material portable enough to be abused elsewhere.&lt;/p&gt;

&lt;p&gt;Bhaumik's mitigation advice was crisp and overdue. We should bind privileges to high-trust identity providers and validate the issuer and subject together instead of trusting email alone. We also need to narrow the scopes so a stolen token does not become a skeleton key. He talked about the fact that identity is no longer just about proving who logged in. It is about preserving trust boundaries after authentication, when tokens start moving between proxies, services, and automation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwt6du621dga94cqrcqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwt6du621dga94cqrcqf.png" alt="Bhaumik Shah" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Hunting the Blind Spot on Developer Workstations
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://blog.securient.io/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Vinod Tiwari, Engineer at PIP Labs&lt;/a&gt; and first-time speaker, presented "Hunting Malicious IDE Extensions: Building Detection at Scale Across Developer Workstations." He walked us through a problem most security teams still barely measure: the IDE extension layer on developer machines has broad access to many dangerous things. Beyond source code, most dev machines have API keys, cloud credentials, deployment tooling, and local secrets, yet in many organizations, nobody has a complete inventory of what is installed. Vinod said that approvals are rare, monitoring is minimal, and only a couple of people in the room raised their hands when asked if they had MDM visibility into extensions at all. That gap matters because extensions sit inside one of the richest trust zones in the company.&lt;/p&gt;

&lt;p&gt;Vinod pointed to multiple cases from 2023 through 2025 in which malicious VS Code extensions were caught stealing SSH keys, typosquatted packages were bundled as IDE extensions to target crypto developers, and even widely installed extensions were found to exhibit data exfiltration behavior. With tens of thousands of extensions in the VS Code marketplace and a similar scale in JetBrains ecosystems, the review model has not kept pace with the level of access these plugins receive. He said there is often no sandbox here, and extensions can read and write local files, spawn processes, access the network, and, in some cases, quietly access clipboard data or other sensitive workflows.&lt;/p&gt;

&lt;p&gt;Vinod highlighted how private keys in &lt;code&gt;.env&lt;/code&gt; files, wallet seed phrases, and deployment credentials can all sit on developer workstations, turning a compromised extension into a direct path to irreversible damage. One malicious plugin is not just a workstation incident. It can become a wallet loss, a production compromise, or a supply chain exposure. Teams need to stop treating IDE extensions as harmless productivity add-ons and start treating them as privileged code execution inside the developer environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663271zk0b4ggwmwmzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663271zk0b4ggwmwmzv.png" alt="Vinod Tiwari" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Static security assumptions are failing faster
&lt;/h2&gt;

&lt;p&gt;Attackers are adapting, and our models of stability are aging out. Threat models go stale. Token assumptions do not survive microservices. Audit habits lag behind AI-assisted development. A control that made sense when releases were slower now becomes a blind spot because the underlying system changes too quickly.&lt;/p&gt;

&lt;p&gt;That is a meaningful shift for defensive work. It suggests that many security programs do not need more categories of findings as much as they need faster ways to reconcile expectations with reality. Drift, replay, hidden dependencies, and agent behavior all punish teams that treat security as a periodic review instead of continuous correction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity has moved into the center of the map
&lt;/h2&gt;

&lt;p&gt;The strongest sessions throughout the event kept orbiting identity, even when they were not labeled that way. Okta hunting, OAuth token replay, authorization architecture, and AI agents with production access all pointed to the same practical truth: trust now travels through sessions, tokens, permissions, and service relationships more often than through a clean user login moment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/files/the-state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Secrets sprawl&lt;/a&gt;, overbroad scopes, orphaned permissions, and weak service boundaries all create the same outcome. They let valid-looking access travel farther than it should. In that environment, good hygiene is no longer a side practice. It is the structure that keeps the blast radius from becoming a business risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security teams are being pushed closer to platform work
&lt;/h2&gt;

&lt;p&gt;Another conversation at the event was that the old separation between security, platform, and developer tooling is getting harder to sustain. Talks on authorization, malicious IDE extensions, AppSec, and AI agents all described a world where the useful control point is often the workflow itself. The winning pattern was not "scan more." It was "build the road correctly."&lt;/p&gt;

&lt;p&gt;That has implications for staffing and program design. Teams need people who can express policy in systems, not only people who can identify issues after the fact. Secure defaults, tool proxies, sidecars, telemetry feedback loops, and opinionated guardrails all came up because they let security become part of how work is done, instead of an extra approval step hovering outside it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What San Francisco Made Feel Obvious
&lt;/h2&gt;

&lt;p&gt;BSidesSF is a very forward-looking conference, in part because the event is made up of the practitioners, maintainers, and professionals who are actively working to keep us all safe. What seemed to be the consensus in the hallways was that the problems we face can't be solved with just more tools. They are going to be solved organizational change in how we deal with trust and access.&lt;/p&gt;

&lt;p&gt;If the threat model no longer matches production or a dependency incident exposes confusion about ownership, it is not just time to patch, it is time to reconsider if your architecture and governance are aligned with your org's goals. This event left me with hope that we can make security better by focusing on the systems that issue trust, store secrets, define permissions, and drive automation. Reduce what can sprawl as you update what has drifted. We don't need to stretch our old security plans around new technology; we need to adopt better paved paths and guardrails for a safer future. Especially as it is ever increasingly driven by AI.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Honeytokens on the Developer Workstation: When Cleanup Takes Time</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 02 Apr 2026 15:00:54 +0000</pubDate>
      <link>https://dev.to/gitguardian/honeytokens-on-the-developer-workstation-when-cleanup-takes-time-15a4</link>
      <guid>https://dev.to/gitguardian/honeytokens-on-the-developer-workstation-when-cleanup-takes-time-15a4</guid>
      <description>&lt;p&gt;Supply chain security has moved closer to the humans with hands on the keyboard.&lt;/p&gt;

&lt;p&gt;For years, security teams have treated production systems, CI/CD pipelines, and identity infrastructure as the most sensitive parts of the software lifecycle. That is not wrong, but it is incomplete. The developer workstation belongs in that same conversation because it sits at the intersection of privilege, trust, and execution. It is where code is written, dependencies are installed, credentials accumulate, and trusted actions begin.&lt;/p&gt;

&lt;p&gt;Modern supply chain attacks are increasingly designed to land on the developer machine first. They do not need to smash through the front gate of production if they can quietly collect the keys from the laptop that already has access to private repositories, package publishing workflows, cloud consoles, build systems, and internal tooling.&lt;/p&gt;

&lt;p&gt;In 2025, and for the first time, campaigns such as &lt;a href="https://blog.gitguardian.com/shai-hulud-2/" rel="noopener noreferrer"&gt;Shai-Hulud&lt;/a&gt; showed us publicly just how many credentials could be harvested from a developer machine. In the &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;State of Secrets Sprawl report&lt;/a&gt;, we showed that across 6,943 compromised machines in that supply chain attack, we found 33,185 unique secrets. At least 3,760 were still valid when we initially checked. Now a growing class of agentic &lt;a href="https://www.darkreading.com/application-security/supply-chain-attack-openclaw-cline-users?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AI attacks&lt;/a&gt; aimed at local credentials and developer context shows the same pattern. The shortest path to enterprise impact often starts with developer access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3r7lutao78id2geq53a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3r7lutao78id2geq53a.png" alt="Shai-Hulud count of secrets per machine from the State of Secrets Sprawl Report 2026" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers have always been attractive targets. What has changed is the speed, scale, and plausibility of the attack paths. Poisoned packages, malicious plugins, compromised updates, and AI-assisted local automation all make it easier for adversaries to reach into a workstation and search for anything useful. That search is usually not abstract. It is practical. It looks for API keys, cloud tokens, SSH material, npm credentials, GitHub tokens, secrets in environment variables, plaintext config files, &lt;code&gt;.env&lt;/code&gt; files, shell history, logs, caches, and agent memory stores.&lt;/p&gt;

&lt;p&gt;The perimeter has not disappeared. It has become easier to recognize. The real perimeter is wherever the most privileged identities can be reached and abused. In many organizations, this includes the developer machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why developers should care
&lt;/h2&gt;

&lt;p&gt;Most developers do not think of themselves as part of the security boundary. They write code, and IT manages the laptop, while Security owns the policy. That division makes sense until an attacker uses a developer workstation as the easiest path into the systems that the developer already touches every day.&lt;/p&gt;

&lt;p&gt;This is why workstation security should not and cannot be framed as a request for every developer to become a full-time security engineer. The practical goal is smaller and more useful. If you reduce the chance that your machine becomes the easiest place to steal high-value access, you are also reducing real risk for your organization.&lt;/p&gt;

&lt;p&gt;That access is valuable for two reasons. First up, developer systems often hold secrets and tokens with real privilege. The second reason is that actions that originate from developer environments inherit trust. A package published from a maintainer machine, a commit signed with trusted credentials, a dependency update, a cloud login, or access to an internal support tool can all carry institutional trust that attackers would love to borrow.&lt;/p&gt;

&lt;p&gt;That is why the workstation deserves the same level of scrutiny and controls we already give to production systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The immediate problem is plaintext secrets
&lt;/h2&gt;

&lt;p&gt;When attacks land on a developer's machine, they do not need to perform magic. They need to find useful material quickly. Too often, that material is sitting in plaintext.&lt;/p&gt;

&lt;p&gt;Secrets end up in source trees, local config files, tests, debug output, copied terminal commands, environment variables, shell profiles, AI tool configuration, and temporary scripts. They very commonly end up in &lt;code&gt;.env&lt;/code&gt; files that were supposed to be local-only but quietly became permanent if not shared. Convenience turns into residue, which in turn becomes opportunities for attackers.&lt;/p&gt;

&lt;p&gt;That is why one of the clearest next steps for developers is also one of the least glamorous: eliminate plaintext secrets from the workstation wherever possible.&lt;/p&gt;

&lt;p&gt;Replace hardcoded credentials with calls to approved secret managers. Move local secrets into the system keychain or an enterprise-approved password manager. Encrypt files at rest when secrets must exist in files at all. Use tools such as SOPS where that workflow makes sense. Better yet, move away from shared static secrets entirely and adopt identity-based authentication wherever feasible.&lt;/p&gt;

&lt;p&gt;The goal is reducing the amount of value an attacker can extract from any successful foothold.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hard truth about remediation
&lt;/h2&gt;

&lt;p&gt;The most correct security answer here seems straightforward: reduce the use of plaintext secrets and move to stronger authentication. At the same time, we need to harden the workstation itself and standardize approved tooling. Slowing down dependency risk is another priority, while detecting abuse earlier is what security teams need for auditability.&lt;/p&gt;

&lt;p&gt;None of this happens instantly.&lt;/p&gt;

&lt;p&gt;Even good remediation plans take time because they touch real workflows. Changes require updates to training, tooling, access patterns, and team habits. Replacing a secret in code is easy on the surface. But replacing the workflow that caused ten copies of that secret to spread across laptops, config files, and build jobs is harder. Moving a team from local environment variables to a stronger secret retrieval model is possible, but it is not a same-day project. Moving from secret-based access to identity-based patterns takes longer still.&lt;/p&gt;

&lt;p&gt;Attackers do not pause while those projects are underway.&lt;/p&gt;

&lt;p&gt;That gap between knowing the right long-term direction and living in today's imperfect environment is where &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Honeytokens&lt;/a&gt; make sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why honeytokens belong on the developer workstation
&lt;/h2&gt;

&lt;p&gt;Honeytokens are not the end state. They are the first compensating controls that help while cleanup is still in progress.&lt;/p&gt;

&lt;p&gt;Honeytokens do not prevent workstation compromise. They do not replace hardening, secret elimination, or better authentication. What they do is give defenders a way to detect malicious secret harvesting as it happens.&lt;/p&gt;

&lt;p&gt;A honeytoken is a decoy credential designed to generate an alert when someone tries to use it. On a developer machine, that makes it useful as a tripwire. If a poisoned dependency, a malicious plugin, or a compromised local tool begins sweeping through files and environment variables, looking for credentials to exfiltrate and replay, a well-placed honeytoken can surface that behavior before the attacker gets very far. Validation, which triggers any honeytoken, is very often done automatically to reduce the noise for the attacker.&lt;/p&gt;

&lt;p&gt;That early signal changes the response window. It can limit the blast radius. It can help incident responders identify the affected host, the likely access path, and the timing of the event. It can also create an auditable record of what happened and when, which is valuable during investigation and remediation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgff3c9hqnnnl07p6r8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgff3c9hqnnnl07p6r8y.png" alt="The workflow of generate a honeytoken, deploy to a private environment, any attacker will trigger it, and get alerts instantly" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For organizations dealing with supply chain attacks that target credentials first, that is not security theater. That is practical detection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Placement matters more than enthusiasm
&lt;/h2&gt;

&lt;p&gt;Honeytokens only work if they stay believable and private. Placement is critical.&lt;/p&gt;

&lt;p&gt;A honeytoken should look like the kind of secret an attacker expects to find in the kind of place they are already searching. We can glean from Shai Hulud where attacks look for secrets. The best workstation honeytokens blend into the legitimate local context and live in the files and locations that the supply chain malware tends to inspect first, like any local &lt;code&gt;.env&lt;/code&gt; files or paths like &lt;code&gt;~/.config/gh/config.yml&lt;/code&gt; or &lt;code&gt;~/.aws/credentials.&lt;/code&gt; Local config files, development directories, service-related settings, and environment variable paths are all obvious candidates. So are places where convenience has historically created risky habits.&lt;/p&gt;

&lt;p&gt;Environment variables deserve special attention here. Developers often treat them as safer than files because they feel transient. In practice, they spread. They persist in shell history, child processes, debug output, terminal multiplexers, launch configs, and tool integrations. If it is in your environment, it is often more portable and more visible than people assume.&lt;/p&gt;

&lt;p&gt;A private honeytoken placed in those realistic paths can do its job quietly while real secrets are being removed from the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start with what an individual developer can do
&lt;/h2&gt;

&lt;p&gt;One of the weaknesses in many workstation security conversations is that they blur the distinction between individual actions and organizational controls. That creates advice that sounds good but feels impossible to follow.&lt;/p&gt;

&lt;p&gt;An individual developer cannot rewrite the package policy for the company, deploy endpoint tooling across the fleet, or migrate the entire organization to identity-based authentication alone. But they can make meaningful changes on their own machine today.&lt;/p&gt;

&lt;p&gt;They can remove plaintext secrets from active project directories. They can stop using unapproved local storage for credentials. They can move secrets into the system keychain or approved managers. They can reduce reliance on long-lived environment variables.&lt;/p&gt;

&lt;p&gt;Further, they can avoid random plugins, suspicious package installs, and unapproved agentic tooling. They can treat phishing, weird links, and convenience scripts with more skepticism. They can report strange package behavior instead of assuming the problem is isolated.&lt;/p&gt;

&lt;p&gt;They can also install and maintain honeytokens as a tripwire while the rest of the cleanup continues.&lt;/p&gt;

&lt;p&gt;Workstation security is partly organizational, but compromise often begins with local habits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Then demand organizational support
&lt;/h2&gt;

&lt;p&gt;Developers should not have to, and can't really solve this alone.&lt;/p&gt;

&lt;p&gt;The enterprise has to do its part by making the secure path easier to follow. That means providing approved secret managers and clear local development patterns. It means publishing workstation baselines, endpoint protections, and package trust policies that actually reflect current threats. Establish cooldown periods for updates when appropriate, rather than normalizing instant adoption of whatever just dropped upstream. Sandbox or somehow isolate new code and unfamiliar tools before they are trusted with real access.&lt;/p&gt;

&lt;p&gt;We must create and test response playbooks for when honeytokens fire, because a detection without a plan can still turn into a mess.&lt;/p&gt;

&lt;p&gt;There is also a new policy line that many teams should draw clearly: do not install agentic systems on work machines without explicit approval. That is especially important when those tools can read local context, inspect repositories, access credentials, or run broad local actions. The attack surface is not only the model. It is the &lt;a href="https://www.rescana.com/post/glassworm-supply-chain-attack-exploits-open-vsx-extensions-to-target-developer-environments?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;plugin&lt;/a&gt; ecosystem, the automation layer, the permissions, and the assumptions about trust.&lt;/p&gt;

&lt;p&gt;Some teams may even benefit from local warnings or command aliases that remind users when they are about to invoke unapproved tooling. If trying to invoke &lt;code&gt;openclaw&lt;/code&gt; instead sends off a warning or a honeytoken, even better.&lt;/p&gt;

&lt;p&gt;If the environment allows it, organizations can also &lt;a href="https://youtu.be/yandDvJr4Kc?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;add security-aware MCP and IAM tooling to local assistants&lt;/a&gt; to help with remediation workflows, policy checks, and honeytoken placement. That can make the defensive path more practical without pretending automation removes the need for judgment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8qie7rtwlsfvtu6yu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8qie7rtwlsfvtu6yu6.png" alt="Slide showing the capabilities of the GitGuardian MCP server" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets are the first target, not the only target
&lt;/h2&gt;

&lt;p&gt;Admittedly, secrets are not the whole story of workstation risk.&lt;/p&gt;

&lt;p&gt;Attackers may also want browser session material, package publishing rights, signed commit workflows, access to internal knowledge, SSH agent forwarding, build context, or any local state that helps them pivot. But secrets remain one of the most portable, reusable, and operationally useful prizes. That makes them the best first cleanup target and the best place to reduce attacker value quickly.&lt;/p&gt;

&lt;p&gt;If an attacker lands on a developer machine and finds nothing useful in plaintext, the possible damage narrows. The incident may still be serious, but it becomes harder to convert local execution into durable enterprise access.&lt;/p&gt;

&lt;p&gt;That is the security value of secret elimination. It does not promise perfection. It reduces attacker leverage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Treat developer machines like they matter, because they do
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://research.jfrog.com/post/ghostclaw-unmasked/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;agentic AI era has amplified workstation risk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Developers now work in environments that combine trusted execution, local automation, sprawling dependency chains, and high-value access in one place. Attackers know that they no longer need physical access to a laptop or a dramatic break-in story. Sometimes all they need is an update, an altered package or &lt;a href="https://www.rescana.com/post/glassworm-supply-chain-attack-exploits-open-vsx-extensions-to-target-developer-environments?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;plugin&lt;/a&gt;, or a workflow that slips through trust assumptions and starts looking for credentials.&lt;/p&gt;

&lt;p&gt;Developer workstations deserve the same discipline we already apply to pipelines and production infrastructure: Eliminate plaintext secrets, move toward stronger identity-based patterns, and be careful with updates, plugins, and local automation. And while all of that longer work is underway, install &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Honeytokens&lt;/a&gt; where attackers are most likely to look.&lt;/p&gt;

&lt;p&gt;That is not the whole strategy. It is the first good move.&lt;/p&gt;

&lt;p&gt;For teams trying to reduce risk now, that is often the difference between discovering an attack after the damage is done and catching it while it is still unfolding.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>cybersecurity</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Chainguard Assemble 2026 and the Security Factory Mindset</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 23 Mar 2026 15:37:55 +0000</pubDate>
      <link>https://dev.to/gitguardian/chainguard-assemble-2026-and-the-security-factory-mindset-3ib1</link>
      <guid>https://dev.to/gitguardian/chainguard-assemble-2026-and-the-security-factory-mindset-3ib1</guid>
      <description>&lt;p&gt;New York knows how to turn rough ground into something human-friendly. For example, &lt;a href="http://www.lizchristygarden.us/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;the Liz Christy Garden&lt;/a&gt; began in 1973 as the city's first community garden, carved out of an overgrown vacant lot on the Lower East Side. They went from a wild and chaotic space to a human-friendly park that New Yorkers still enjoy today. That felt like a good backdrop for an event where security-minded professionals could have a conversation centered on taking messy, fast-moving software systems and building them into something durable, governed, and worth trusting. About 400 like-minded practitioners did exactly that at &lt;a href="https://assemble.chainguard.dev/event/2991fca2-5be2-48cb-a8b9-132ab575cd51/summary?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Chainguard Assemble 2026&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Throughout the event, held at &lt;a href="https://www.theglasshouses.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;The Glass House&lt;/a&gt;, attendees had the chance to see 38 speakers, including a closing session with Colin Jost, discuss the path forward for security in a world of rapidly evolving, AI-powered threats. Across keynotes, lightning talks, and other sessions, the same message kept surfacing from different angles: patching after the fact is no longer a strategy, and security teams cannot afford to be the last human checkpoint in a pipeline shaped by AI, automation, and sprawling dependencies.&lt;/p&gt;

&lt;p&gt;Here are just a few of the highlights from Chainguard's second annual event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developing With Power Tools, Safely, At Scale
&lt;/h2&gt;

&lt;p&gt;The opening Product Keynote by &lt;a href="https://www.linkedin.com/in/danlorenc?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Dan Lorenc, founder of Chainguard&lt;/a&gt;, began with a woodworking story that contrasted hand tools with power tools. Hand tools are slower and more familiar. Power tools are faster, louder, and more dangerous when things go wrong. Dan said this is where we are all at with AI and software development: we can now move much faster, but at greater risk of causing damage.&lt;/p&gt;

&lt;p&gt;The industry has already entered a world where automation is not optional, and AI is not confined to autocomplete. CVE discovery and remediation that once took weeks can now happen in minutes. Agentic pentesting is compressing what used to be month-long cycles. That speed is real, and the attackers are using it too. The consequence is that "scan and patch" security no longer holds up as a primary operating model. By the time you find the issue downstream, the system has already moved on.&lt;/p&gt;

&lt;p&gt;Dan used that argument to frame &lt;a href="https://www.chainguard.dev/unchained/driftlessaf-introducing-chainguard-factory-2-0?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Chainguard Factory 2.0&lt;/a&gt;. This lets them build everything from source and control dependencies tightly, reconcile the active state against a desired state, and let agents handle the repetitive, high-volume coordination work while keeping trust anchored in cryptographic authenticity in a transparent process. Tens of thousands of dependency updates, hundreds of thousands of artifacts, and versioning edge cases across upstream projects that do not agree with each other. This is how they managed to build 7 new offerings, including &lt;a href="https://www.chainguard.dev/os-packages?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;OS dependencies&lt;/a&gt; and &lt;a href="https://www.chainguard.dev/agent-skills?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AI agent skills&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Software production now looks more like an automated factory than a bespoke workshop. Security has to be embedded in the factory design, not bolted on later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j4i3lql3tpak876bjm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j4i3lql3tpak876bjm6.png" alt="Dan Lorenc and a volunteer sawing wood" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden Images As An Operating Model
&lt;/h2&gt;

&lt;p&gt;In the joint session from &lt;a href="https://www.linkedin.com/in/molly-soja?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Molly Soja, Lead Security Engineer at KKR&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/ayeshabhutto/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Ayesha Bhutto, Sr Technical Success Manager at Chainguard&lt;/a&gt;, called "Why Golden Images Still Matter," they presented a practical reminder that mature programs still need stable foundations. Golden images can sound old-fashioned, but this talk grounded the conversation in the realities of consistency, compliance, and rollout strategy.&lt;/p&gt;

&lt;p&gt;They explained that golden images are not just about hardening a base container, but about gaining predictability at scale. You get this by creating a standard layer where security, compliance readiness, and developer expectations can align. That matters even more in large environments where drift accumulates quietly, DIY image factories become distractions, and every exception adds operational drag. Molly told stories from her time at KKR and made the case for treating the work as a phased program instead of a grand migration. She said to start with a proof of value and choose services that reflect the real environment, and focus on reducing complexity.&lt;/p&gt;

&lt;p&gt;Teams want velocity, but they also want sane defaults, audit readiness, and fewer surprise regressions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29zjdz7lwn98qnp3z3ik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29zjdz7lwn98qnp3z3ik.png" alt="Ayesha Bhutto and Molly Soja" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Speed Needs A Support Model
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/brandon-heard-36a73444?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Brandon Heard, Technical Leader at PeopleTec&lt;/a&gt;, called "Developer Productivity Without Compromise," he told us his job is to let developers go as fast as they want, as often as they want, without making security a tax on delivery. He stressed that the details matter.&lt;/p&gt;

&lt;p&gt;His approach rested on one blessed runtime, security built into development workflows, and migration treated as a supported program rather than a mandate. The rollout mechanics were concrete, involving taking a full inventory of existing images, piloting non-critical services, and automating through CI and templates. He supported the developers themselves by publishing example commits and holding office hours. This is not what typically happens in his experience; normally, platform or security teams announce a standard and assume adoption will follow. Brandon showed that adoption is a product problem as much as a technical one.&lt;/p&gt;

&lt;p&gt;The before-and-after comparison gave the story weight. Images dropped from roughly 600 MB to 120 MB. High CVEs dropped from 12 to 2. SBOMs became a default output instead of an afterthought. There is an operational lesson here that secure defaults only scale when accompanied by documentation, migration patterns, and support channels that respect how engineering teams actually work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5wym8z7pbs4giw2b1gg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5wym8z7pbs4giw2b1gg.png" alt="Brandon Heard" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Compliance At Lunar Velocity
&lt;/h2&gt;

&lt;p&gt;In his session, "Securing the Next Moon Age: Automated Compliance Powers the Next Giant Leap," &lt;a href="https://www.linkedin.com/in/collin-estes-b77a398?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Collin Estes, CIO at MRI Technologies working at NASA&lt;/a&gt;, presented the most compelling high-stakes, real-risk example of the day. The context was NASA missions, flight readiness, and systems where the question "is it safe to fly?" is literal.&lt;/p&gt;

&lt;p&gt;Collin described a stack of platform, compliance, and supply chain problems that will sound familiar outside aerospace: multiple cloud environments, bespoke platforms, complex data flows, inherited controls, and the challenge of continuous authorization across hundreds of controls and overlays. They needed to address compliance and delivery simultaneously. The platform they developed absorbed much of the control burden. GitOps, identity federation, brokered services, and hardened container supply chains became a force multiplier for both operations and auditability.&lt;/p&gt;

&lt;p&gt;He described the shift toward continuously delivering trust rather than chasing a "point-in-time clean state." A zero-CVE pull today says little about tomorrow unless the surrounding system keeps reconciling, updating, and proving what changed. When a software factory model reaches a mission-critical environment, compliance stops being paperwork and becomes part of the operating substrate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkni4hozqb2mk0aqqats.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkni4hozqb2mk0aqqats.png" alt="Collin Estes" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Is Moving Upstream Because Timelines Have Changed
&lt;/h2&gt;

&lt;p&gt;AI assistance, dependency churn, malicious package discovery, and faster release expectations have all shortened the window between creation and consequence. That does not just create more work for security teams. It changes where security work belongs. When the cycle compresses, downstream review becomes a bottleneck, and bottlenecks get bypassed.&lt;/p&gt;

&lt;p&gt;Many talks focused on source builds, policy enforcement at package boundaries, hardened actions, and secure defaults in developer tooling. The goal is no longer to catch bad outcomes late. It is to constrain what can enter the system in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trust Is Becoming A Property Of Systems, Not Vendors
&lt;/h3&gt;

&lt;p&gt;Another pattern across the event was a shift from brand trust to process trust. Several sessions touched on this from different angles, including cryptographic authenticity, trusted package sources, reconciliation loops, and audit trails for agents. Teams need verifiable control over how software is built, updated, and promoted.&lt;/p&gt;

&lt;p&gt;AI increases output faster than it increases confidence. If more code, more automation, and more decisions are flowing through the pipeline, then "trust us" is not enough. Systems have to show their work, preserve provenance, and make validation a first-class function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational Maturity Now Includes Non-Human Identities
&lt;/h3&gt;

&lt;p&gt;Modern governance has to account for agents, prompts, skills, actions, and machine-driven workflows as real participants in the supply chain. This was never presented as science fiction at Assemble. It was discussed as the current operational reality. Teams are already pulling external skills, running agentic workflows, and handing meaningful tasks to systems that can move much faster than manual reviewers.&lt;/p&gt;

&lt;p&gt;We are seeing a rapid shift in how we need to think about governance models. Identity risk is no longer only about employees and service accounts. Secret sprawl is no longer only a developer hygiene problem. Non-human identities with inherited permissions and agent behavior need to be managed with the same seriousness as runtime images and dependency graphs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assembling A More Secure, AI-Powered Future
&lt;/h2&gt;

&lt;p&gt;Assemble 2026 featured many product announcements, but that was not the main takeaway from the many hallway conversations. Everything at the event pointed to the reality that security teams cannot keep acting as if software is still produced by small groups moving at human review speed. The tools and way we work have changed. As AI agents and assistants get more powerful, the risks are less forgiving, and the output volume is already beyond what manual processes can govern.&lt;/p&gt;

&lt;p&gt;Automation alone is not the answer, and no single tool can make us secure. But we need automation inside systems designed for trust, delivering reproducible updates, policy-backed repositories, and auditable agent behavior. We must think about reducing variance in how we build before it becomes real risk. That is relevant whether you are building for financial services, federal environments, commercial SaaS, or missions that end in splashdown.&lt;/p&gt;

&lt;p&gt;Security maturity in 2026 is less about scanning harder and more about deciding where trust is manufactured. For teams dealing with identity risk, secrets sprawl, and the growing governance burden around non-human actors, that factory mindset looks less like a nice architectural pattern and more like the cost of keeping up.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>AI Is Making Security More Agile: Highlights from ChiBrrCon 2026</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 19 Mar 2026 18:47:48 +0000</pubDate>
      <link>https://dev.to/gitguardian/ai-is-making-security-more-agile-highlights-from-chibrrcon-2026-4hle</link>
      <guid>https://dev.to/gitguardian/ai-is-making-security-more-agile-highlights-from-chibrrcon-2026-4hle</guid>
      <description>&lt;p&gt;In 1900, Chicago completed one of the most ambitious engineering projects ever. Engineers &lt;a href="https://www.wttw.com/chicago-river-tour/how-chicago-reversed-river-animated?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;reversed the flow of the Chicago River&lt;/a&gt; so that sewage would no longer contaminate Lake Michigan, the city's drinking water source. They did not filter harder or build temporary containment. They redesigned that entire system's direction. This effort mirrors the reality everyone in information security now faces, as AI-driven code and applications create new challenges that require bigger answers than just more alerts and tools. This spirit of solving a big challenge, together, carried through every session at &lt;a href="https://chibrrcon.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;ChiBrrCon 2026&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This year marked the 6th installment of ChiBrrCon, an enterprise-focused security conference hosted on the &lt;a href="https://www.iit.edu/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Illinois Institute of Technology campus&lt;/a&gt;. This was the largest ChiBrrCon as well, with over 800 tickets sold. Throughout this single-day event, 27 speakers shared their knowledge, war stories, and best practices for securing the enterprise. There were also hands-on villages and competitions, where folks could learn some new skills and make some new connections.&lt;/p&gt;

&lt;p&gt;Across all the sessions, there was an urgency as every speaker addressed the question of what we do in a world where AI is driving code and applications. The answers were never just technical, but honest discussions of the needed structural decisions and changes we need to work on.&lt;/p&gt;

&lt;p&gt;Here are just a few of the highlights from this year's ChiBrrCon.&lt;/p&gt;




&lt;h2&gt;
  
  
  Land the Plane Before You Write the Postmortem
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/joshuapeltz/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Joshua Peltz, VP of Zero Networks&lt;/a&gt;, called "Resiliency through Adversity: Comparing 'Flight 1549' with a Cyber Breach," he explained crisis response as disciplined execution under pressure. Drawing on his experience aboard &lt;a href="https://en.wikipedia.org/wiki/US_Airways_Flight_1549?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Flight 1549&lt;/a&gt;, the infamous flight that landed in the Hudson, piloted by Capt. Sully. He described how survival depended on preparation deposits made long before impact, immediate anomaly recognition, and unambiguous authority during execution.&lt;/p&gt;

&lt;p&gt;The aviation parallel worked because it was procedural, just like we need to be for incident response. Detection happened in seconds. Roles were predefined. Communication was controlled. The objective was stabilization, not explanation.&lt;/p&gt;

&lt;p&gt;Joshua emphasized sequencing: containment first, recovery second, and attribution later. Security teams often invert this order. They chase root cause while lateral movement continues. That instinct feels analytical, but it increases blast radius.&lt;/p&gt;

&lt;p&gt;He also surfaced an uncomfortable reality that runbooks that are not stress-tested are documentation theater. Communication channels that are not practiced introduce chaos when adrenaline rises. Authority models that are not explicit collapse when escalation happens. Resilience, in his framing, takes rehearsal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ryhnoib8akzhhnr7qic.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ryhnoib8akzhhnr7qic.jpeg" alt="Joshua Peltz" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Cutting Through The Noise Of 90 Billion Daily Events
&lt;/h2&gt;

&lt;p&gt;In "Modernizing Security Operations in a World of AI Threats," &lt;a href="https://www.linkedin.com/in/paul-hill-cissp-b258023a/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Paul Hill, Cortex Regional Sales Manager at Palo Alto Networks&lt;/a&gt;, presented a structural critique of how modern SOCs accumulate complexity. He shared his firsthand testimony of seeing his team take tens of billions of daily log events and collapse them using ML-powered automation into a handful of meaningful incidents. The real challenge was the cohesion of data, not the detection volume.&lt;/p&gt;

&lt;p&gt;Paul described how years of well-intentioned tooling decisions create fragmentation, where separate detection engines, SIEM pipelines, and response workflows created operational friction and context switching. Each part was optimized in isolation, and the end result was a blindness to the whole story.&lt;/p&gt;

&lt;p&gt;He described what they did at Palo Alto to consolidate telemetry into a unified data model, allowing context to travel with the signals. AI was positioned as a stitching mechanism, grouping alerts into coherent incident stories that reflect attacker movement rather than system boundaries. Repetitive triage work was automated so analysts could focus on investigation and engineering.&lt;/p&gt;

&lt;p&gt;Ultimately this led to the elimination of human-driven Level 1 triage. He was clear that there was no headcount reduction. Analysts moved into threat hunting and detection engineering. Burnout declined because the system removed repetition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms3tfu4limvcfs5b384r.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms3tfu4limvcfs5b384r.jpeg" alt="Paul Hill" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Fluency Is Not Judgment
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/billbernardchicago/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Bill Bernard, Field CTO at Between Two Firewalls&lt;/a&gt;, called "Gen AI Ain't Your Buddy: Neither Is Your Lawnmower," we got a behavioral warning about generative AI adoption. His fear is not that AI is intelligent; the danger lies in the fact that it &lt;em&gt;feels&lt;/em&gt; intelligent.&lt;/p&gt;

&lt;p&gt;Bill grounded generative AI in a practical definition. It predicts statistically likely output based on training data and rule constraints. It does not understand context in the human sense. It does not evaluate ethical nuance. It does not possess lived experience.&lt;/p&gt;

&lt;p&gt;He walked through examples of confidently wrong outputs, fabricated quotes, and biased responses that appear credible because they are well structured. He summed this up as "fluency creates an illusion of authority." The operational risk lies in how easily teams accept polished language as verified information.&lt;/p&gt;

&lt;p&gt;He reminded us that AI is just another tool, like your lawnmower. Tools excel within defined tasks. They fail when treated as cognitive partners. Generative AI is powerful for summarization, drafting, and preparation. It is fragile when asked to replace validation or expert judgment. He urged us to treat AI as infrastructure rather than a companion. He thinks that is the maturity step most teams have not yet taken.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt9o7y87v92u029sxceq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt9o7y87v92u029sxceq.jpeg" alt="Bill Bernard" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Risk Reduction Requires Building An Inventory
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/sean-juroviesky/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Sean Juroviesky, Security Architect at SoundCloud,&lt;/a&gt; presented "The Risky Business of AI Illiteracy." They presented AI risk as an extension of classical security fundamentals, such as overprivileged identities, injection vulnerabilities, misconfigurations, and weak segmentation. These issues have remained the dominant exposure vectors, no matter how the tech has evolved. Sean said that AI accelerates these patterns.&lt;/p&gt;

&lt;p&gt;Sean said we need to anchor the conversation with our teams, especially management, in "risk math." Risk equals threat plus vulnerability. Without full visibility into your environments and data flow, neither side of that equation is measurable.&lt;/p&gt;

&lt;p&gt;They emphasized mapping of trust boundaries and understanding how data moves across services. AI systems sit inside identity providers, API gateways, orchestration platforms, and third-party integrations. Focusing exclusively on model behavior while ignoring those interactions produces blind spots.&lt;/p&gt;

&lt;p&gt;Threat modeling, inventory discipline, and iterative review remain the backbone of defensible security. Automation does not replace them. It magnifies their absence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gutx1j18vhp0lmaqh1b.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gutx1j18vhp0lmaqh1b.jpeg" alt="Sean Juroviesky" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Operational Agility Beats Resilience
&lt;/h2&gt;

&lt;p&gt;The word "resilience" implies passive strength, a rigid ability to withstand shocks and return to baseline. ChiBrrCon 2026 revealed the deeper truth that security teams don't need to bounce back; they need to adapt forward. This is especially true in the exceptionally fast-moving world of AI-driven everything, which almost every speaker touched on.&lt;/p&gt;

&lt;p&gt;To deal with rapidly shifting demands and the new dangers that AI presents or amplifies, we must embrace a new goal beyond recovery, which can be summed up in one term: Operational Agility.&lt;/p&gt;

&lt;p&gt;The heart of operational agility is about more than just responding to and surviving breaches; it is the ability to evolve through these events in real time. And it's reshaping how we structure our teams, tools, and thinking.&lt;/p&gt;

&lt;h3&gt;
  
  
  Acting Before Certainty Arrives
&lt;/h3&gt;

&lt;p&gt;Operational environments now move faster than traditional security decision models allow. Data volumes overwhelm human analysis. Incidents unfold across multiple systems simultaneously. In this reality, waiting for full understanding is often indistinguishable from inaction.&lt;/p&gt;

&lt;p&gt;Operational agility requires comfort with acting with the best available, partial picture. It means knowing which decisions must be made immediately, which can be refined later, and which are reversible. Teams need to be able to prioritize under pressure.&lt;/p&gt;

&lt;p&gt;When incidents demand parallel action across detection, containment, communication, and recovery, hesitation becomes the threat multiplier. Agility emerges when authority is clear, decision rights are understood, and responders are trusted to move before every answer is known.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing Cognitive Load To Preserve Judgment
&lt;/h3&gt;

&lt;p&gt;Security teams are not failing because they lack enough data. If anything, there's generally too much raw data for us to know what to do with it. They are failing because context is fragmented. Alerts arrive without narrative. Analysts spend time stitching together meaning instead of making decisions. Every manual correlation step drains attention that should be reserved for judgment.&lt;/p&gt;

&lt;p&gt;Teams need a shared understanding of what is happening, why it matters, and what action is required next. Tooling only helps if it reduces mental overhead rather than adding to it. Automation is essential here, not as a headcount strategy, but as a way to protect human cognition.&lt;/p&gt;

&lt;p&gt;Any task that requires repeated action without new judgment should already be automated. Humans should be reserved for ambiguity, tradeoffs, and accountability. When teams are buried in repetitive work, agility collapses long before burnout becomes visible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Happens Where People Interpret Risk
&lt;/h3&gt;

&lt;p&gt;Risk is not perceived uniformly, as individual humans are the ones perceiving and defining it. Different people respond differently under stress. Some act quickly and pull others in. Some slow down to stabilize and verify. Some prioritize accuracy. Others prioritize momentum.&lt;/p&gt;

&lt;p&gt;Trust sits at the center of this. Without trust, uncertainty gets hidden. Escalation gets delayed. With trust, imperfect information surfaces early and improves over time. This is also where the limits of AI become clear. Tools can assist thinking. They cannot replace accountability or judgment.&lt;/p&gt;

&lt;p&gt;The enduring takeaway from ChiBrrCon was that operational agility is not something you discover during an incident. It is something you build long before one ever starts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Adaptability, Especially To AI, Is The New Availability
&lt;/h2&gt;

&lt;p&gt;The most useful takeaway from ChiBrrCon 2026 was not a new tool or a new tactic. After all, the dangers that AI brings to enterprise software are not new concepts. Your author was able to be one of many presenters who showed that issues like adversarial input injection, XSS, and broken access controls are patterns we have been facing for decades. AI has simply accelerated the speed at which these issues can be introduced.&lt;/p&gt;

&lt;p&gt;Operational agility is needed to adapt to this new rate of software development and delivery. It is something you build long before the next incident ever starts. You build it in runbooks that get stress-tested, and by making sure all alerts bring context instead of noise. You build it in inventories that reflect how your systems actually work, not how you wish they worked. Teams need to trust each other enough to act on available information, no matter how complete it is.&lt;/p&gt;

&lt;p&gt;Chicago did not solve its water problem by asking people to be more careful. It solved it by changing the system. That is the work in front of us now. And the good news, visible in every packed room and every hallway conversation at ChiBrrCon, is that we are not doing it alone.&lt;/p&gt;

&lt;p&gt;If you are working on solving the issues around secrets sprawl and NHI governance, you are not alone, either, &lt;a href="https://www.gitguardian.com/book-a-demo?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;and we would love to work with you&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>ConFoo 2026: Guardrails for Agentic AI, Prompts, and Supply Chains</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 19 Mar 2026 14:48:30 +0000</pubDate>
      <link>https://dev.to/gitguardian/confoo-2026-guardrails-for-agentic-ai-prompts-and-supply-chains-1dpa</link>
      <guid>https://dev.to/gitguardian/confoo-2026-guardrails-for-agentic-ai-prompts-and-supply-chains-1dpa</guid>
      <description>&lt;p&gt;Montreal has a guardrail baked into its skyline. The "&lt;a href="https://cultmtl.com/2021/05/in-defence-of-building-height-restrictions-in-montreal-mount-royal-urban-plan-denis-coderre/#:~:text=The%20rationale%20behind%20the%20building%20height%20limitation%20in%201992%20was%20that%20Mount%20Royal%20is%20the%20city%E2%80%99s%20premier%20geographical%20feature%20and%20it%E2%80%99s%20both%20socially%20and%20culturally%20significant%20to%20Montrealers.%20As%20such%2C%20views%20of%20it%2C%20and%20views%20from%20it%2C%20should%20not%20be%20obstructed%20by%20tall%20buildings." rel="noopener noreferrer"&gt;mountain restriction&lt;/a&gt;" keeps most buildings from rising higher than the cross on top of &lt;a href="https://en.wikipedia.org/wiki/Mount_Royal" rel="noopener noreferrer"&gt;Mount Royal&lt;/a&gt;, roughly 233 meters (764 feet), so the city's natural high point remains the highest point. It is an urban policy choice that says clearly that growth is allowed, even encouraged, but only within constraints that preserve what matters.&lt;/p&gt;

&lt;p&gt;This makes Montreal a perfect backdrop for &lt;a href="https://confoo.ca/en/2026/" rel="noopener noreferrer"&gt;ConFoo 2026&lt;/a&gt;, a conference focused on building resilience by learning from experiences across many communities, including Java, .NET, PHP, Python, DevOps, and many others. With around 800 attendees spanning development and DevOps, the conference felt full of practitioners admitting the ground is moving and choosing to respond with structure rather than nostalgia. The event took place across five full days of activities, including over 190 sessions, two full days of workshops, an evening of co-located meetups, and many fun social hours spent sharing our ideas.&lt;/p&gt;

&lt;p&gt;The shared theme across sessions was not novelty for its own sake, but guardrails: controls that keep fast-moving systems from surprising you, whether the "system" is an LLM agent calling tools, a dependency graph pulling in unreviewed execution hooks, or a web application whose defaults quietly widen risk.&lt;/p&gt;

&lt;p&gt;There is no way to fully explain all that is ConFoo, so here are just a few highlights and thoughts.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Wristband Check for Your Bots&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/nickytonline/" rel="noopener noreferrer"&gt;Nick Taylor, Developer Advocate at Pomerium&lt;/a&gt;, "Agentic Access: OAuth Gets You In. Zero Trust Keeps You Safe," we were presented with a crisp argument that our access model has quietly changed from human access to agentic access, and most stacks are still built for the former. There is a mismatch between how we authenticate as people and how we should authenticate when an LLM agent makes the call.&lt;/p&gt;

&lt;p&gt;Nick used &lt;a href="https://blog.gitguardian.com/non-human-identity-security-zero-trust-architecture/" rel="noopener noreferrer"&gt;Zero Trust&lt;/a&gt; as the corrective lens. Through it, we can see we should never trust, always verify, and verify with more than identity. The "wristband-at-the-venue" metaphor he mentioned works because it captures the difference between a one-time gate and ongoing enforcement. In a Zero Trust model, "who you are" is only one input. Device posture, time, location, and session behavior become policy signals, and the enforcement point needs to sit in front of the request, not behind it. Identity Aware Proxies matter, now more than ever. They do not just authenticate blindly just because a key is present. Instead, they apply context-aware policy and create a single choke point for logging and auditing.&lt;/p&gt;

&lt;p&gt;MCP has quickly become a standard interface for tool calls in LLM ecosystems. That standardization reduced bespoke integrations, but it also made it easier to wire powerful tools to agents eager to comply with prompts. Nick said to "put the guardrails where the wheels touch the road;" place MCP servers behind a proxy, enforce authentication, validate token audience and scopes, prevent token passthrough, preserve user consent, and audit all access. When the caller is non-human, and your tools can touch sensitive systems, you need a per-request policy that survives context changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1kb0diugqbs4ojmbocw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1kb0diugqbs4ojmbocw.png" alt="Nick Taylor presenting at ConFoo 2026" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prompt Hygiene Is the New Input Validation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/bendechrai/" rel="noopener noreferrer"&gt;Ben Dechrai, Staff Developer Advocate &amp;amp; Software Engineer&lt;/a&gt;, presented "Rogue LLMs: Securing Prompts and Ensuring Persona Fidelity" to a completely full room. It was a reality check that LLMs are programmable interfaces that accept adversarial input, except that the input is language and the boundaries are fuzzy. Models will be "dangerous" if we keep treating prompt behavior as if it will be stable under pressure. We know from decades of security practice that "works on the happy path" is not a control.&lt;/p&gt;

&lt;p&gt;Ben gave real-world examples of "prompt leak," including one where system instructions translated into another language evaded trigger-word filters. Another showed structured output requirements used as a lever to coax the model into returning what it should not. Persona drift, where the assistant stops being "your bot," and defaults to system prompts that make it want to be more helpful. Basically, social engineering, except that the target has no human context and no hard boundaries. If you can socially engineer a human, you can socially engineer a model, and the model is optimized to comply.&lt;/p&gt;

&lt;p&gt;We must treat prompts like code and treat model behavior like a system under test. Ben talked about a mindset that says you cannot "prove" safety with just a couple of manual tests; you need statistically meaningful testing, an explicit risk appetite, and continuous evaluation in CI/CD. The environment will change even when your prompt does not. Ben also suggested planting "canary tokens" in your internal context and treating any appearance in a response as a deterministic sign that something has gone wrong.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1ohulz03cx5q1ki6v43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1ohulz03cx5q1ki6v43.png" alt="Ben Dechrai presenting at ConFoo 2026" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;NuGet as a Delivery Truck With a False Bottom&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In his session, "Building a supply chain attack with .NET and NuGet," &lt;a href="https://www.linkedin.com/in/maartenballiauw/" rel="noopener noreferrer"&gt;Maarten Balliauw, Head of Customer Success at Duende Software&lt;/a&gt;, presented on how dependency trust gets abused in ways that look boring at first, then turn catastrophic. He framed supply chain attacks as downstream-impact multipliers. If an attacker compromises a package, they inherit the victim's trust graph, and they leverage that trust to access environments that were never the original target. The danger of a "sleeper," where a good package goes bad later, is especially effective because it aligns with how teams actually update dependencies.&lt;/p&gt;

&lt;p&gt;Maarten broke the attack down into familiar components: a dropper, a payload, the command-and-control infrastructure, and a persistence-and-exfiltration layer. What made it uncomfortable was how many execution hooks exist in a modern .NET workflow that are legitimate features. Module initializers can run code when an assembly loads and source generators can run during builds to produce code that becomes part of the compiling project. Startup hooks can run before &lt;code&gt;Main&lt;/code&gt; via environment variables. These are all powerful extension points, not really bugs. The attacker's job is to smuggle intent through extension points that defenders treat as normal plumbing for the supply chain.&lt;/p&gt;

&lt;p&gt;We need a Swiss cheese model when thinking of guardrails, with multiple overlapping layers that make a system more resilient to attacks. Sign commits and packages, while also using package source mapping. Restore with lock files and enforce locked mode in CI, on top of generating SBOMs, which you should actually be analyzing. Pin CI actions while you watch for suspicious environment variable changes. Good dependency management requires adopting an operational discipline for your organization. If your build system can execute code from your dependency graph, then "dependency update" is a privileged operation, and it should be treated with the same care as production access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F728p98aimzpvtyfiy7w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F728p98aimzpvtyfiy7w1.png" alt="Maarten Balliauw presenting at ConFoo 2026" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;OWASP as a Mirror, Not a Checklist&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/christianwenz/" rel="noopener noreferrer"&gt;Christian Wenz, Owner of Arrabiata Solutions&lt;/a&gt;, presented "Web Application Security Up-to-date: The 2025 OWASP Top Ten." He began by reminding us all that the OWASP project is not a compliance artifact. It is a lens for what the industry is repeatedly getting wrong. The list is useful precisely because it is a little imperfect; the categories are sometimes too broad, sometimes frustratingly narrow, and the debates about what belongs on it reveal where teams still lack shared mental models.&lt;/p&gt;

&lt;p&gt;Christian highlighted how misconfiguration and supply chain issues have risen in prominence and how some perennial categories stay stubbornly relevant. Broken Access Control remains an umbrella that hides many failure modes, from direct object access to function-level authorization gaps. Security Misconfiguration is odd because DevOps blurred the line between "developer" and "admin," and defaults remain sharp edges, full of noisy errors, weak browser headers, and parsing hazards. Cryptographic failures are most often about basics like enforcing HTTPS, setting HSTS, and using secure cookie flags consistently.&lt;/p&gt;

&lt;p&gt;Web security categories are converging with the agentic and supply chain themes rather than competing with them. "Injection" is still relevant, but the boundary of "input" is expanding. Model binding quirks, deserialization assumptions, and integrity failures in CI/CD all rhyme with prompt injection and package compromise. OWASP is not a checklist; it is a mirror. It reflects that our modern security failures are often control failures, often caused by automating actions without enforcing the constraints that make those actions safe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbtgn8awn02ried5wfgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbtgn8awn02ried5wfgl.png" alt="Christian Wenz presenting at ConFoo 2026" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Future Is Embracing Change You Can Actually Operate&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;Across ConFoo there were a lot of conversations across technical communities that rarely have the chance to meet and talk. The 'hallway track' consistently had an air of excitement. While many subjects came up, a common theme was that things are evolving faster than ever before. At the same time, there was a real sense around AI, especially agentic AI, that we need to proceed with a little more safety and control in mind.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Change is here, but it still needs structure&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;From the keynote, there was a recurring background conversation throughout the week about "&lt;a href="https://confoo.ca/en/2026/session/spec-driven-development-a-faster-way-to-code" rel="noopener noreferrer"&gt;Spec-Driven Design.&lt;/a&gt;" There is a worry that while AI accelerates work, it "amplifies ambiguity." When context is thin, the model will infer missing details and keep moving trying to please the user. That creates debt quickly when the output doesn't match what you meant, and you just have to hope it understands your next prompt.&lt;/p&gt;

&lt;p&gt;Instead, we need a structured approach. Structure should be seen as a performance feature. The clearer the requirements, such as a concrete design, with solid implementation details, gives both humans and tools something stable to align on. You get faster iteration because fewer cycles are spent translating intent after the fact.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Per-request trust beats perimeter trust&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Agentic AI was, of course, a common conversation across the networking events. Modern workflows are full of non-human actors that can touch real systems. Agents, automations, and developer tools operating with delegated access all throughout your "perimeter," which has a very different meaning in the modern world. Network location no longer describes risk when the tools you rely on run outside your environment.&lt;/p&gt;

&lt;p&gt;A sturdier approach is context-based access on every request. Systems need to validate identity and authorization, apply scoped permissions, and enforce policy before a tool reaches sensitive services. Then make it observable via monitored audit logs, consistent gateways, and controls that work the same way across systems.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Your software risk iceberg is mostly hidden beneath the surface&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;For modern applications, software risk lives outside just your application code. Dependencies, base images, build artifacts, configuration, and runtime behavior routinely decide what reaches production and what attackers can touch.&lt;/p&gt;

&lt;p&gt;One angle I heard in a few conversations was the idea of end-to-end ownership. For people who deliver dependencies, "owning" means proving you know what you ship, can track any exposure throughout the delivery chain, and build checks that hold up when any part of the systems change. For consumers, we need to treat dependency findings as seriously as first-party bugs, tighten error handling and logging, and test for drift in assistants and automations so they continue behaving as the system you intended.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Build for AI Speed With Control As A Requirement&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI is forcing a shift in what "secure by default" even means. The models will eventually say something wrong, which is definitely something many speakers touched on, including your author. The issue is that we are wiring language to action, and then acting surprised when the system behaves probabilistically. Systems take the shortest path through ambiguity, following incentives to comply as it figures out the next token. AI will happily route around soft boundaries, and any unfortunate surprises are the tax you pay for automation without constraints.&lt;/p&gt;

&lt;p&gt;We can't slow down change, but we can make that change operable. We need policies enforced where the wheels touch the road, not just in a slide deck or internal PDFs. We have to treat prompts as inputs that can be adversarial and treat dependencies as privileged code that can execute automatically. Then test for drift, log what matters, and assume the environment will change even when your intent does not.&lt;/p&gt;

&lt;p&gt;Montreal's skyline still grows, but it keeps one thing higher than everything else, on purpose. For AI, that high point should be guardrails you can enforce and observe. Change for IT means building upward; we just need to align on where our highest priorities sit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcl8kw7m7jyjajdbra1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcl8kw7m7jyjajdbra1.png" alt="GitGuardian Interactive Demo" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>devsecops</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>From Detection to Defense: How Push-to-Vault Supercharges Secrets Management for DevSecOps</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 15 Dec 2025 18:16:58 +0000</pubDate>
      <link>https://dev.to/gitguardian/from-detection-to-defense-how-push-to-vault-supercharges-secrets-management-for-devsecops-174d</link>
      <guid>https://dev.to/gitguardian/from-detection-to-defense-how-push-to-vault-supercharges-secrets-management-for-devsecops-174d</guid>
      <description>&lt;p&gt;If you work in security or DevSecOps, you already know secrets do not live only where they should, safely in secrets management platforms, aka vaults. They leak into Git repos, CI logs, Slack threads, Jira tickets, wikis, and "temporary" config files that never get cleaned up. We unfortunately know this problem is getting worse for most organizations.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;The State of Secrets Sprawl 2025&lt;/a&gt; report found a 25% year-over-year increase in leaked secrets on public GitHub. We also reported that secrets are 8x more likely to leak into private repos.  Worse yet, 70% of the valid secrets leaked in 2022 remained valid when we retested them in 2025. &lt;/p&gt;

&lt;p&gt;The problem is not just that secrets leak. They stick around and grant access to whoever finds them.&lt;/p&gt;

&lt;p&gt;The good news is detection has never been better, thanks to &lt;a href="https://www.gitguardian.com/monitor-internal-repositories-for-secrets?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian's Secrets Detection&lt;/a&gt; across code, CI, collaboration tools, and anywhere sensitive values are exposed &lt;a href="https://docs.gitguardian.com/internal-monitoring/integrate-sources/bring-your-own-sources?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;across any source&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The bad news is that solving the "last mile" challenge is still a challenge for too many teams. Someone still has to handle all the manual steps of properly vaulting the secret, updating the required configs or CI variables, and rotating the secret, all while hoping nothing breaks. Doing that over and over turns into burnout, backlog, and real risk.&lt;/p&gt;

&lt;p&gt;GitGuardian's view is that there has been a missing link between "we found a secret" and "this is safely under control in a Secret Manager." We call that vital link "&lt;a href="https://docs.gitguardian.com/ggscout-docs/what-is-ggscout?ref=blog.gitguardian.com#safely-store-unvaulted-secrets" rel="noopener noreferrer"&gt;Push-to-Vault&lt;/a&gt;" and are happy to support it &lt;a href="https://docs.gitguardian.com/nhi-governance/integrate-your-sources?ref=blog.gitguardian.com#secrets-managers" rel="noopener noreferrer"&gt;across all leading secret management platforms&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/12/p2v-in-app-pic.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdln2l8h8zjf857e48wm3.png" width="774" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Push-to-Vault feature in the GitGuardian workspace&lt;/p&gt;

&lt;h2&gt;
  
  
  What GitGuardian Push-to-Vault Does for You
&lt;/h2&gt;

&lt;p&gt;Push-to-Vault is the bridge between detection and secure storage. It is a workflow that lets users insert discovered unvaulted secrets directly into their Secret Manager from within the GitGuardian platform. Instead of copying values into a clipboard and juggling tabs, you stay in the incident view, review what was detected, and trigger a controlled flow that moves the secret from being exposed in code or logs to being safely stored at the right vault path. &lt;/p&gt;

&lt;p&gt;GitGuardian tracks that remediation for you, so you know which incidents are actually under control and which still need work.&lt;/p&gt;

&lt;p&gt;Crucially, this does not introduce a new vault; instead, it helps you make better use of the vaults you already invested in. Under the hood, Push-to-Vault is powered by ggscout, a lightweight tool you run inside your environment. You integrate ggscout with existing Secret Managers such as HashiCorp Vault, CyberArk Conjur Cloud, AWS Secrets Manager, and others. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/12/data-src-image-d624d1cd-c315-48f5-a757-22dc9822a02f.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqp2931p7c9m1pvi7odz.png" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GitGuardian Secrets Managers Integration menu&lt;/p&gt;

&lt;p&gt;When you Push-to-Vault from an incident, ggscout writes the exposed secret into your chosen vault path. After that, it sends only metadata and hashes back to GitGuardian so the platform can confirm remediation without ever holding the raw values.&lt;/p&gt;

&lt;h3&gt;
  
  
  Push-to-Vault As Part Of Your NHI Governance Strategy
&lt;/h3&gt;

&lt;p&gt;Push-to-Vault is a key part of &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian's broader Non-Human Identity Governance story&lt;/a&gt;. Secrets, usually in the form of passwords, tokens, or API keys, are the connective tissue for service accounts, workloads, CI pipelines, agents, and every other workload identity that talks to critical systems. If you cannot see and govern those secrets, you cannot really say you understand or control your non-human identities.&lt;/p&gt;

&lt;p&gt;GitGuardian's NHI Governance aims to change that by giving you a real inventory, clear visibility, and end-to-end mapping from sources to consumers. You see which identity uses which secret, where that secret appeared, and how it flows through your environment. The faster a secret is secured in a vault, the sooner you can rotate it without fear of breaking applications or pipelines.&lt;/p&gt;

&lt;p&gt;As NHIs multiply and goals shift from "use vaults" to true lifecycle management, this becomes essential. GitGuardian helps you discover secrets and identities, understand hygiene and over-permissioning, and focus effort on vaulting what really matters. Once those secrets are properly stored, you can put regular rotation on autopilot instead of treating it as a one-off project. &lt;/p&gt;

&lt;p&gt;Push-to-Vault sits right in the middle of that journey. It does not just patch one leaked key. It turns each incident into an opportunity to bring another NHI under the kind of governance that modern zero trust architectures demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security First: How Push-to-Vault Works
&lt;/h2&gt;

&lt;p&gt;The internal flow for Push-to-Vault was intentionally designed to be secure and straightforward to implement.&lt;/p&gt;

&lt;p&gt;First, GitGuardian detects an incident with an exposed, unvaulted secret across one of your monitored sources. That might be a Git commit, a CI log, or a message in a collaboration tool. From there, ggscout pulls the incident details from your GitGuardian instance and uses its sync-secrets capability to write that secret to an integrated Secrets Manager. You decide which vault and which path, and you can standardize that per team, repo, or environment.&lt;/p&gt;

&lt;p&gt;Once ggscout writes the value into the vault, it sends only metadata back to GitGuardian so that the platform can mark the incident as vaulted and track remediation end-to-end. Once in the vault, the secret values never leave your infrastructure in clear text. GitGuardian never needs to store or replay your secrets. ggscout runs close to your vaults, while GitGuardian relies on metadata and cryptographic proofs that the secret is now under control.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tight Control Over Automation
&lt;/h3&gt;

&lt;p&gt;Push-to-Vault is opt-in. ggscout can be configured to read from your vaults without any write access at all, which is a common starting point for teams that want visibility first. When you are ready to enable writes, you can restrict ggscout to very specific locations, such as a dedicated path, a given namespace, or only non-production environments. That way, you give your teams a safer "lane" for automation instead of turning it loose everywhere.&lt;/p&gt;

&lt;p&gt;If you need even more assurance, you can run in a more conservative mode, where ggscout operates in a fetch-only posture and produces JSON reports that show what it would write and where. You can review those reports with your platform, security, or audit teams before allowing any actual writes. It is a practical way to bring skeptics along without asking them to take automation on faith.&lt;/p&gt;

&lt;p&gt;Just as important, Push-to-Vault is designed to help you avoid dumping everything into the vault and needing to sort it out later. We all know that later never comes. You can embed remediation guidance directly in GitGuardian to lead users down the right paths. Over time, the result is a cleaner, more predictable vault layout rather than simply moving the mess from Git into a secret store.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Push-to-Vault Changes Day-to-Day Work
&lt;/h2&gt;

&lt;p&gt;Push-to-Vault is all about removing friction. Instead of bouncing between an incident panel, a vault console, a configuration repo, and a runbook, your security and platform teams can move exposed secrets straight into the right Secret Manager path from the GitGuardian dashboard. No juggling user interfaces. No copy-paste. No hoping someone remembers to update a ticket afterward.&lt;/p&gt;

&lt;p&gt;Once a secret is vaulted, you can wire the new path back into the application or pipeline, rotate the value safely, and close the loop with a clean audit trail that shows how it moved from detection to secure storage. &lt;/p&gt;

&lt;p&gt;GitGuardian can tag incidents when secrets are already vaulted, so you get a clear split between "under control" and "still floating around unvaulted." That lines up with revocation workflows where you vault and rotate a new secret, revoke the old one, and then close the incident with evidence attached.&lt;/p&gt;

&lt;p&gt;For dev teams, this means no more generic "go fix this" tickets. Engineers get a vault path and a clear reference, saving them time looking up the process and the right vault. Over time, Push-to-Vault nudges your organization toward consistent vault hierarchies instead of random environment files and scattered CI variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/12/data-src-image-a4c9fbab-afe1-468a-9174-61c4981d3ac8.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnv3z3syu465d8pahab4i.png" alt="Optional workflows once ggscout is integrated" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  A Practical Scenario
&lt;/h3&gt;

&lt;p&gt;Let's walk through an example incident in which a production database password is committed to a private repo. With traditional remediation, you would copy the password out, create a vault entry by hand, try to pick the right path, update the configuration, rotate the password, possibly redeploy the application, and hope nothing got missed. &lt;/p&gt;

&lt;p&gt;With Push-to-Vault, you start from the incident, push the value into a standard production database path that your team already trusts, wire the app to read from that path, and rotate with confidence. The same pattern holds for an access key pasted into Slack during an incident or a long-lived token hiding in a legacy config file. Speed, safety, and traceability all improve in the same motion.&lt;/p&gt;

&lt;p&gt;At the program level, this operational improvement rolls up into better Privileged Access Management and stronger NHI governance. By centralizing where secrets actually live across vaults, you can see duplicate or weak credentials, enforce least privilege more consistently for non-human identities, and move toward a lifecycle-driven secrets strategy instead of a series of isolated cleanups.&lt;/p&gt;

&lt;h3&gt;
  
  
  Push-to-Vault As Part Of Your Secrets Security Strategy
&lt;/h3&gt;

&lt;p&gt;While Push-to-Vault is an exciting feature, it is one part of a larger approach. Adopting it needs to be part of your overall secrets management and NHI governance plan. If you are new to this journey, we highly recommend first finding out how many secrets are currently leaked and where they live. Until you know that, you don't know what is really at stake for your organization.&lt;/p&gt;

&lt;p&gt;Detection gives you that visibility. Once you are ready, though, Push-to-Vault turns that visibility into control. The faster you can move from finding leaked secrets with scanning to vaulting and rotating, the less time an attacker has to turn someone's mistake into a foothold.&lt;/p&gt;

&lt;p&gt;If you are already using GitGuardian to detect secrets, Push-to-Vault is how you turn noisy findings into durable, governed fixes. This is one more way GitGuardian is helping teams get non-human identity under control; in a vault, with automated rotation, and with proper lifecycle governance.&lt;/p&gt;

&lt;p&gt;If you are not yet leveraging GitGuardian, we would &lt;a href="https://www.gitguardian.com/book-a-demo?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;love to talk to you about fighting secrets sprawl and getting a handle on NHI governance&lt;/a&gt;&lt;/p&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Lessons in Testing, Performance, and Legacy Systems from /dev/mtl 2025</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 11 Dec 2025 15:27:00 +0000</pubDate>
      <link>https://dev.to/gitguardian/lessons-in-testing-performance-and-legacy-systems-from-devmtl-2025-5dl2</link>
      <guid>https://dev.to/gitguardian/lessons-in-testing-performance-and-legacy-systems-from-devmtl-2025-5dl2</guid>
      <description>&lt;p&gt;Montreal, Canada, is the birthplace of the search engine. Long before the world was talking about semantic search and vector embeddings, a student at &lt;a href="https://en.wikipedia.org/wiki/Archie_(search_engine)?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;McGill built Archie&lt;/a&gt; to find files across FTP servers. The creator's goal wasn't to change how everyone would use the internet; it was to solve a specific issue around finding available files by exact title. Almost all leaps in technology are achieved the same way: a small group of practitioners focuses on solving a specific issue, and those innovations reverberate across the whole internet. It made Montreal an ideal setting for a group of current innovators to get together and discuss common challenges at &lt;a href="https://www.dev-mtl.ca/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;/dev/mtl 2025&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Around 150 developers got together at &lt;a href="https://www.dev-mtl.ca/venue?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;École de technologie supérieure (ÉTS)&lt;/a&gt; for a truly cross-community event, put on by a &lt;a href="https://www.dev-mtl.ca/about?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;coalition of 14 local tech communities&lt;/a&gt;, including Java and Python user groups, the local CNCF and AWS meetups, and Women in AI. In true Québécois fashion, the 21 speakers shared their knowledge in both French and English across three tracks. &lt;/p&gt;

&lt;p&gt;Here are just a few highlights from this year's /dev/mtl. &lt;/p&gt;

&lt;h2&gt;
  
  
  Unchecked Complexity Makes Testing Unpredictable
&lt;/h2&gt;

&lt;p&gt;In his session "Feature Flags And End-to-End Testing," &lt;a href="https://www.linkedin.com/in/bahmutov/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Gleb Bahmutov, Senior Director of Engineering at Mercari&lt;/a&gt;, walked through a problem a lot of teams quietly live with, especially as they update legacy code. Feature flags are great for incremental releases, experiments, and kill switches, but they can turn end-to-end testing into a maze. &lt;/p&gt;

&lt;p&gt;The math is exponential, as each new flag doubles the number of possible states. For example, if you have three flags with two states each, that is (2 × 2 × 2) = 8 states to test. Adding one more flag with two more states makes that 16 possible states. It introduces testing questions like "Did we ever test these flags in these states?" Keeping up manually is a logistical nightmare.&lt;/p&gt;

&lt;p&gt;Gleb explained that every test is supposed to be deterministic, yet percentage rollouts and misaligned environments mean the same test can fail every few days for no obvious reason.&lt;/p&gt;

&lt;p&gt;He compared three testing strategies. Total control gives tests the full feature flag payload through an API and fixtures, but now you are debugging caching and invalidation on top of the app. Selective control stubs only the flag under test, but page reloads, navigation, and backend behavior still make things unpredictable. The most reliable option was per-user control. Keep flags as production-like as possible, then target individual user IDs in tools like LaunchDarkly&lt;/p&gt;

&lt;p&gt;The larger lesson was lifecycle discipline. Treat flags as temporary. Make new features explicit opt-in, migrate tests as defaults change, archive flags when done, and aggressively retire anything old. Also, do not build your own feature flag system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m6pcazoatc2h?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8995w72j3u35rmn7fbz.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gleb Bahmutov&lt;/p&gt;

&lt;h3&gt;
  
  
  Invisible Complexity Impacts Performance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://linkedin.com/in/rezamadabadi/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Reza Madabadi, Software Developer at 360.Agency&lt;/a&gt;, started his talk, "Why Your Database Hates You: The N+1 Query Problem," by asking, "Why is everything so slow?" It is a common question every developer faces when starting with a new company with years of history to decode. Tools like MySQL &lt;code&gt;EXPLAIN&lt;/code&gt; can help diagnose some of the issues, but Reza said that it does not tell you the full story; it can show the query is taking a long time, but it does not show why.&lt;/p&gt;

&lt;p&gt;Reza said what changed things for him was tracing. This revealed how a single, seemingly harmless request was turning into thousands of database calls. Each individual lookup was cheap, but together they added up. That is the N+1 problem. Not a language or framework bug, but a middleware issue rooted in object relational mappers (ORMs).&lt;/p&gt;

&lt;p&gt;He explained that in a typical Java and Hibernate monolith, data access objects feed big data transfer objects (DTOs), and lazy loading tries to protect you from loading the whole database at once. Instead of one query with joins, the ORM runs one query to get a list, then N queries to hydrate each association. Reza walked through join fetches and Hibernate batch size tweaks as partial fixes. They help, but they are still hacks that can create long prepared statements and memory pressure.&lt;/p&gt;

&lt;p&gt;The more durable answer was to design for DTOs directly. Reza said they now use entity-based CRUD where it makes sense, but write targeted select DTO queries where they know exactly what is needed. Pair that with local tracing tools like Digma to catch N+1 patterns early, before they turn into mysterious slow nights in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m6pi6j5u2t2j?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xlx3wpgmtupqd7qphz3.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reza Madabadi&lt;/p&gt;

&lt;h2&gt;
  
  
  Improvements Take A Dedicated Approach
&lt;/h2&gt;

&lt;p&gt;In his talk "My Journey With Software Testing," &lt;a href="https://www.linkedin.com/in/luciancondrea/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Lucian Condrea, a freelance full-stack developer and contributor at Tribe Social&lt;/a&gt;, told a story many self-taught testers will recognize. He started with no real testing skills, no strategy, and a growing pile of manual checks that were slow, tedious, and mentally draining. Leadership only saw that "QA is too slow," without understanding the challenges the team was facing, including the invisible cognitive load. Progress felt random, and there was no clear path to those dependable systems and breezy workdays he wanted.&lt;/p&gt;

&lt;p&gt;Things shifted when he became intentional about learning. Lucian built a "testing wishlist" and carved out daily 30-minute practice sessions. He leaned into small, atomic wins instead of vague "I should be testing more" guilt. He credited blogs from folks like &lt;a href="https://kentcdodds.com/blog/how-to-know-what-to-test?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Kent C. Dodds&lt;/a&gt; for finally clarifying why tests matter, and Nicolas Carlo's "Legacy Code: First Aid Kit" for showing how to create boundaries in the code so it was even possible to test. &lt;/p&gt;

&lt;p&gt;From there, he adopted a pragmatic view: "Tests should serve you, not the other way around." Focusing on readable, co-located tests, integration tests for the best return. This means minimal mocking and only as much end-to-end coverage as you truly need.&lt;/p&gt;

&lt;p&gt;He left us with the advice to reflect on strategy, not dogma. He said to build deliberate habits so your tests give you confidence instead of resentment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m6poidde332v?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdylsybwejz38uz6stdj.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lucian Condrea&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons From Developers For Everyone
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Legacy Is Our Shared Starting Point
&lt;/h3&gt;

&lt;p&gt;One quiet constant behind every talk was the reality that legacy is not a corner case, and no one is starting from greenfield. Legacy systems are where we do our most meaningful work. We are not working in an ideal vacuum, but are layering decisions on top of years of code, data, and human habits. &lt;/p&gt;

&lt;p&gt;Instead of fantasizing about starting over again, rewrite "once things calm down," the real work is learning how to move forward inside constraints you did not choose. When you accept that, dealing with legacy stops being a shameful side quest and becomes the main design problem of figuring out how to change things without breaking the promises your system made years ago.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feedback Loops Beat Raw Velocity
&lt;/h3&gt;

&lt;p&gt;Another theme that permeated the whole event was that while speed matters, measuring and investigating what is happening might be just as important. Whether the topic was performance, testing, releases, or AI, the teams that seemed calmer were the ones with feedback loops they trusted. That might mean observability that shows how a request actually flows, or tests that fail in ways that teach you something instead of interrupting you at random. It might be metrics on how often your users get "no results" in search, or how many flags are still active past their intended lifetime. &lt;/p&gt;

&lt;p&gt;If you cannot observe it, you cannot reason about it, and if you cannot reason about it, you are just feeling like you are moving faster, in the dark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Guardrails Over Heroics
&lt;/h3&gt;

&lt;p&gt;The speakers' stories kept pointing to the fact that the strongest teams design guardrails so that normal behavior is safe by default. That looks like defensible defaults, explicit lifecycles, and constraints that keep complexity from running away in the first place. &lt;/p&gt;

&lt;p&gt;It means treating experiments, flags, tests, and configurations as living systems with an end-of-life plan, not as one-off hacks. When you do that, you do not need a rockstar to remember every edge case. You need a group of normal people who respect the guardrails and adjust them as reality changes. This is as true for security and secrets governance as it is for any other area of production systems. &lt;/p&gt;

&lt;h3&gt;
  
  
  Tools Change, Habits Compound
&lt;/h3&gt;

&lt;p&gt;Underneath all the specific technologies, the real leverage showed up in habits, not tools. Tools will keep changing. We will likely never stop learning about new frameworks, new agents, and new tracing stacks. &lt;/p&gt;

&lt;p&gt;What carried across topics was the value of small, repeatable practices. Speakers commonly talked about carving out time to improve tests, routinely inspecting how your system actually behaves, and retiring complexity instead of hoarding it. We should strive for the simplest solution that fits the current scale. &lt;/p&gt;

&lt;p&gt;These habits compound in a way that individual tools never do. The future of our systems depends less on the next big thing and more on how disciplined we are with the things we already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Innovations Come From Persistence While Addressing Real Issues 
&lt;/h2&gt;

&lt;p&gt;Your author was able to share a session on secrets security at this developer-focused event. Rather than being put off by the scale of the secrets sprawl problem, developers who attended leaned in and asked about possible solutions. It was highly encouraging to see folks who do not regularly interact with the security team immediately recognize the dangers of plaintext credentials and seem eager to embrace available solutions to work more safely and efficiently.  &lt;/p&gt;

&lt;p&gt;No matter what area of enterprise technology you are dealing with, the same themes that ran through this developer-focused conference apply. Accept the legacy you have, add observability, put guardrails in place, and build habits that make the safe path the easy one. If we keep doing that across testing, performance, search, and security, the next "Archie moment" will not come from a single breakthrough, but from thousands of small, deliberate improvements shipped by teams like the ones who showed up at /dev/mtl.&lt;/p&gt;

</description>
      <category>techtalks</category>
      <category>devops</category>
      <category>security</category>
      <category>sre</category>
    </item>
  </channel>
</rss>
