<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dwayne McDaniel</title>
    <description>The latest articles on DEV Community by Dwayne McDaniel (@dwayne_mcdaniel).</description>
    <link>https://dev.to/dwayne_mcdaniel</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F865016%2Fa8b060e5-eccd-496d-909b-6d9c0a5b0202.jpg</url>
      <title>DEV Community: Dwayne McDaniel</title>
      <link>https://dev.to/dwayne_mcdaniel</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dwayne_mcdaniel"/>
    <language>en</language>
    <item>
      <title>BSides SF 2026: Looking At Security Beyond The Next Big Bet</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 06 Apr 2026 13:28:38 +0000</pubDate>
      <link>https://dev.to/gitguardian/bsides-sf-2026-looking-at-security-beyond-the-next-big-bet-21lg</link>
      <guid>https://dev.to/gitguardian/bsides-sf-2026-looking-at-security-beyond-the-next-big-bet-21lg</guid>
      <description>&lt;p&gt;San Francisco has always had a talent for turning risk into infrastructure, such as when &lt;a href="https://en.wikipedia.org/wiki/Liberty_Bell_(game)?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Charles Fey invented the slot machine there during the Gold Rush&lt;/a&gt;. Today, we have another nondeterministic device for fortune seekers willing to pull a lever and see what comes back. We call it AI.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsidessf.org/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;BSides San Francisco 2026&lt;/a&gt; felt built for a more modern version of that wager. The stakes today are identities, tokens, agents, permissions, and the growing gap between what systems are supposed to do and what they actually do in production.&lt;/p&gt;

&lt;p&gt;Held the weekend before the RSA Conference, BSides SF is one of the largest BSides events in the world. This year, 2,965 participants attended 92 talks, 8 workshops, 11 interactive sessions, a CTF, and many, many other activities, all made possible by 235 volunteers and a truly tireless organizing team.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq48qi6ip7dpizfmu76p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsq48qi6ip7dpizfmu76p.png" alt="BSides SF final attendance metrics" width="800" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;People were there to share notes on how to keep control when software delivery speeds up, AI changes how code and infrastructure are produced, and attackers are increasingly happy to work through identity and trust instead of smashing through a perimeter. Here are just a few highlights from this year's edition of BSidesSF.&lt;/p&gt;

&lt;h2&gt;
  
  
  Time Travel Without Nostalgia
&lt;/h2&gt;

&lt;p&gt;In her session "Let's Do the Timewarp Again! A Look Back to Move Forward," &lt;a href="https://www.linkedin.com/in/anna-westelius-72a47048?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Anna Westelius, Head of Security, Privacy &amp;amp; Assurance at Netflix&lt;/a&gt;, presented security history as a series of pivots rather than a straight line. Instead of treating today's instability as unprecedented, she walked through earlier shifts in the early internet, the worm era, and the move to the cloud, showing how each period first felt chaotic, then gradually produced better defaults, stronger habits, and more durable systems.&lt;/p&gt;

&lt;p&gt;Anna made the case that security has repeatedly moved from heroics to engineering. Cloud, once framed as inherently unsafe, matured into a place where private-by-default storage, identity-centric controls, and better primitives could outperform old on-premises assumptions when teams rebuilt for the environment instead of dragging old workflows into it. Today's fire drills are new CVEs, not active compromises.&lt;/p&gt;

&lt;p&gt;She reminded us that progress comes from community and deliberately laying out well-paved roads that are easier to travel. Anna argued that the field is finally in a position to measure meaningful risk reduction, design for humans instead of blaming them, and start tackling legacy risk that used to feel too sprawling to touch. Maturity is not perfection. It is building enough scaffolding that the next crisis does not have to be improvised.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sns3hy0zkp6wj29t6l4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sns3hy0zkp6wj29t6l4.png" alt="Anna Westelius" width="800" height="604"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Threat Model Meets Production
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/farshadabasi/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Farshad Abasi, CoFounder of Eureka DevSecOps and CSO of Forward Security&lt;/a&gt;, gave a session called "Your Threat Model Is Lying to You: Why Modeling the Design Isn't Enough in 2026," where he laid out a sharp critique of threat modeling, at least the way many teams still practice it. The problem was not the exercise itself, but that design intent keeps getting treated as reality, even when production systems drift, dependencies multiply, and delivery speed outruns any annual review cycle.&lt;/p&gt;

&lt;p&gt;Farshad explained that threat models often describe what teams think they built, while the real system includes transitive dependencies, infrastructure changes, deployment quirks, and configuration choices that never made it into the diagram. He pointed to the need for a feedback loop between the model and the evidence teams already collect from SAST, SCA, DAST, cloud findings, and deployment telemetry. A finding should not just confirm a known risk. It should be able to expose a broken assumption and force the model to update.&lt;/p&gt;

&lt;p&gt;That shift has real consequences, as it moves threat modeling out of a compliance drawer and back into engineering rituals like backlog refinement, pull requests, and post-finding analysis. The problem is rarely the first known issue. It is the invisible dependency, the quietly expanded permission, or the workflow that changed faster than the security model.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbp10zja9lf5hmj09pf1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcbp10zja9lf5hmj09pf1.png" alt="Farshad Abasi" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Tokens Are the New Currency
&lt;/h2&gt;

&lt;p&gt;In "Breaking Tokens: Modern Attacks on OAuth, OIDC, and JWT Auth Flows," &lt;a href="https://www.linkedin.com/in/bhaumikshah2?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Bhaumik Shah, CEO at SecurifyAI&lt;/a&gt;, presented identity failure as an application architecture problem, not just an authentication problem. He covered examples of token replay, weak audience validation, trust confusion between identity providers, and the dangerous habit of treating a valid token as universally trustworthy.&lt;/p&gt;

&lt;p&gt;He quickly moved from protocol language to operational consequences, sharing that a token validated in one place could be replayed somewhere else. An app without proper validation might accept a token from the wrong issuer, or a federated environment could end up granting the same privileges to identities that arrived through very different trust paths. In practice, that means an organization can enforce MFA at login and still leave the actual session material portable enough to be abused elsewhere.&lt;/p&gt;

&lt;p&gt;Bhaumik's mitigation advice was crisp and overdue. We should bind privileges to high-trust identity providers and validate the issuer and subject together instead of trusting email alone. We also need to narrow the scopes so a stolen token does not become a skeleton key. He stressed that identity is no longer just about proving who logged in. It is about preserving trust boundaries after authentication, when tokens start moving between proxies, services, and automation.&lt;/p&gt;
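&lt;p&gt;The validation pattern described above can be sketched in plain Python. This is an illustrative check over already-decoded claims (the issuer and audience values below are hypothetical); in a real application, a JWT library would first verify the signature and expiry before any of these checks run.&lt;/p&gt;

```python
# Illustrative sketch: authorize a decoded token by checking issuer,
# audience, and subject together, never email alone. Values are hypothetical.
TRUSTED_ISSUERS = {"https://login.example.com"}  # assumed high-trust IdPs
EXPECTED_AUDIENCE = "my-api"                     # assumed identifier for this service

def authorize(claims: dict) -> bool:
    """Return True only when the token's full trust path checks out."""
    if claims.get("iss") not in TRUSTED_ISSUERS:
        return False  # wrong issuer: reject even if the email claim looks familiar
    if claims.get("aud") != EXPECTED_AUDIENCE:
        return False  # token minted for another service: refuse the replay
    if not claims.get("sub"):
        return False  # no stable subject: privileges cannot be bound safely
    return True

good = {"iss": "https://login.example.com", "aud": "my-api", "sub": "user-42"}
replayed = {"iss": "https://login.example.com", "aud": "other-api", "sub": "user-42"}
print(authorize(good), authorize(replayed))  # True False
```

&lt;p&gt;The point of the sketch is the shape of the check: issuer, audience, and subject are evaluated together, so a token minted for another service fails even though it is cryptographically valid.&lt;/p&gt;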

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwt6du621dga94cqrcqf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwwt6du621dga94cqrcqf.png" alt="Bhaumik Shah" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Hunting the Blind Spot on Developer Workstations
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://blog.securient.io/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Vinod Tiwari, Engineer at PIP Labs&lt;/a&gt; and first-time speaker, presented "Hunting Malicious IDE Extensions: Building Detection at Scale Across Developer Workstations." He walked us through a problem most security teams still barely measure: the IDE extension layer on developer machines has broad access to many dangerous things. Beyond source code, most dev machines have API keys, cloud credentials, deployment tooling, and local secrets, yet in many organizations, nobody has a complete inventory of what is installed. Vinod said that approvals are rare, monitoring is minimal, and only a couple of people in the room raised their hands when asked if they had MDM visibility into extensions at all. That gap matters because extensions sit inside one of the richest trust zones in the company.&lt;/p&gt;

&lt;p&gt;Vinod pointed to multiple cases from 2023 through 2025 in which malicious VS Code extensions were caught stealing SSH keys, typosquatted packages were bundled as IDE extensions to target crypto developers, and even widely installed extensions were found to exhibit data exfiltration behavior. With tens of thousands of extensions in the VS Code marketplace and a similar scale in JetBrains ecosystems, the review model has not kept pace with the level of access these plugins receive. He said there is often no sandbox here, and extensions can read and write local files, spawn processes, access the network, and, in some cases, quietly access clipboard data or other sensitive workflows.&lt;/p&gt;

&lt;p&gt;Vinod highlighted how private keys in &lt;code&gt;.env&lt;/code&gt; files, wallet seed phrases, and deployment credentials can all sit on developer workstations, turning a compromised extension into a direct path to irreversible damage. One malicious plugin is not just a workstation incident. It can become a wallet loss, a production compromise, or a supply chain exposure. Teams need to stop treating IDE extensions as harmless productivity add-ons and start treating them as privileged code execution inside the developer environment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663271zk0b4ggwmwmzv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa663271zk0b4ggwmwmzv.png" alt="Vinod Tiwari" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Static Security Assumptions Are Failing Faster
&lt;/h2&gt;

&lt;p&gt;Attackers are adapting, and our models of stability are aging out. Threat models go stale. Token assumptions do not survive microservices. Audit habits lag behind AI-assisted development. A control that made sense when releases were slower now becomes a blind spot because the underlying system changes too quickly.&lt;/p&gt;

&lt;p&gt;That is a meaningful shift for defensive work. It suggests that many security programs do not need more categories of findings as much as they need faster ways to reconcile expectations with reality. Drift, replay, hidden dependencies, and agent behavior all punish teams that treat security as a periodic review instead of continuous correction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Identity Has Moved Into the Center of the Map
&lt;/h2&gt;

&lt;p&gt;The strongest sessions throughout the event kept orbiting identity, even when they were not labeled that way. Okta hunting, OAuth token replay, authorization architecture, and AI agents with production access all pointed to the same practical truth: trust now travels through sessions, tokens, permissions, and service relationships more often than through a clean user login moment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/files/the-state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Secrets sprawl&lt;/a&gt;, overbroad scopes, orphaned permissions, and weak service boundaries all create the same outcome. They let valid-looking access travel farther than it should. In that environment, good hygiene is no longer a side practice. It is the structure that keeps the blast radius from becoming a business risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Teams Are Being Pushed Closer to Platform Work
&lt;/h2&gt;

&lt;p&gt;Another recurring conversation at the event was that the old separation between security, platform, and developer tooling is getting harder to sustain. Talks on authorization, malicious IDE extensions, AppSec, and AI agents all described a world where the useful control point is often the workflow itself. The winning pattern was not "scan more." It was "build the road correctly."&lt;/p&gt;

&lt;p&gt;That has implications for staffing and program design. Teams need people who can express policy in systems, not only people who can identify issues after the fact. Secure defaults, tool proxies, sidecars, telemetry feedback loops, and opinionated guardrails all came up because they let security become part of how work is done, instead of an extra approval step hovering outside it.&lt;/p&gt;

&lt;h2&gt;
  
  
  What San Francisco Made Feel Obvious
&lt;/h2&gt;

&lt;p&gt;BSidesSF is a very forward-looking conference, in part because the event is made up of the practitioners, maintainers, and professionals who are actively working to keep us all safe. What seemed to be the consensus in the hallways was that the problems we face can't be solved with just more tools. They are going to be solved by organizational changes in how we deal with trust and access.&lt;/p&gt;

&lt;p&gt;If the threat model no longer matches production, or a dependency incident exposes confusion about ownership, it is not just time to patch; it is time to reconsider whether your architecture and governance are aligned with your org's goals. This event left me with hope that we can make security better by focusing on the systems that issue trust, store secrets, define permissions, and drive automation. Reduce what can sprawl as you update what has drifted. We don't need to stretch our old security plans around new technology; we need to adopt better paved paths and guardrails for a safer future, especially as that future is increasingly driven by AI.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>Honeytokens on the Developer Workstation: When Cleanup Takes Time</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 02 Apr 2026 15:00:54 +0000</pubDate>
      <link>https://dev.to/gitguardian/honeytokens-on-the-developer-workstation-when-cleanup-takes-time-15a4</link>
      <guid>https://dev.to/gitguardian/honeytokens-on-the-developer-workstation-when-cleanup-takes-time-15a4</guid>
      <description>&lt;p&gt;Supply chain security has moved closer to the humans with hands on the keyboard.&lt;/p&gt;

&lt;p&gt;For years, security teams have treated production systems, CI/CD pipelines, and identity infrastructure as the most sensitive parts of the software lifecycle. That is not wrong, but it is incomplete. The developer workstation belongs in that same conversation because it sits at the intersection of privilege, trust, and execution. It is where code is written, dependencies are installed, credentials accumulate, and trusted actions begin.&lt;/p&gt;

&lt;p&gt;Modern supply chain attacks are increasingly designed to land on the developer machine first. They do not need to smash through the front gate of production if they can quietly collect the keys from the laptop that already has access to private repositories, package publishing workflows, cloud consoles, build systems, and internal tooling.&lt;/p&gt;

&lt;p&gt;In 2025, and for the first time, campaigns such as &lt;a href="https://blog.gitguardian.com/shai-hulud-2/" rel="noopener noreferrer"&gt;Shai-Hulud&lt;/a&gt; showed us publicly just how many credentials could be harvested from a developer machine. In the &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2026?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;State of Secrets Sprawl report&lt;/a&gt;, we showed that across 6,943 compromised machines in that supply chain attack, we found 33,185 unique secrets. At least 3,760 were still valid when we initially checked. Now a growing class of agentic &lt;a href="https://www.darkreading.com/application-security/supply-chain-attack-openclaw-cline-users?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AI attacks&lt;/a&gt; aimed at local credentials and developer context shows the same pattern. The shortest path to enterprise impact often starts with developer access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3r7lutao78id2geq53a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl3r7lutao78id2geq53a.png" alt="Shai-Hulud count of secrets per machine from the State of Secrets Sprawl Report 2026" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers have always been attractive targets. What has changed is the speed, scale, and plausibility of the attack paths. Poisoned packages, malicious plugins, compromised updates, and AI-assisted local automation all make it easier for adversaries to reach into a workstation and search for anything useful. That search is usually not abstract. It is practical. It looks for API keys, cloud tokens, SSH material, npm credentials, GitHub tokens, secrets in environment variables, plaintext config files, &lt;code&gt;.env&lt;/code&gt; files, shell history, logs, caches, and agent memory stores.&lt;/p&gt;

&lt;p&gt;The perimeter has not disappeared. It has become easier to recognize. The real perimeter is wherever the most privileged identities can be reached and abused. In many organizations, this includes the developer machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why developers should care
&lt;/h2&gt;

&lt;p&gt;Most developers do not think of themselves as part of the security boundary. They write code, and IT manages the laptop, while Security owns the policy. That division makes sense until an attacker uses a developer workstation as the easiest path into the systems that the developer already touches every day.&lt;/p&gt;

&lt;p&gt;This is why workstation security should not and cannot be framed as a request for every developer to become a full-time security engineer. The practical goal is smaller and more useful. If you reduce the chance that your machine becomes the easiest place to steal high-value access, you are also reducing real risk for your organization.&lt;/p&gt;

&lt;p&gt;That access is valuable for two reasons. First, developer systems often hold secrets and tokens with real privilege. Second, actions that originate from developer environments inherit trust. A package published from a maintainer machine, a commit signed with trusted credentials, a dependency update, a cloud login, or access to an internal support tool can all carry institutional trust that attackers would love to borrow.&lt;/p&gt;

&lt;p&gt;That is why the workstation deserves the same level of scrutiny and controls we already give to production systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  The immediate problem is plaintext secrets
&lt;/h2&gt;

&lt;p&gt;When attacks land on a developer's machine, they do not need to perform magic. They need to find useful material quickly. Too often, that material is sitting in plaintext.&lt;/p&gt;

&lt;p&gt;Secrets end up in source trees, local config files, tests, debug output, copied terminal commands, environment variables, shell profiles, AI tool configuration, and temporary scripts. They very commonly end up in &lt;code&gt;.env&lt;/code&gt; files that were supposed to be local-only but quietly became permanent, if not outright shared. Convenience turns into residue, and residue becomes opportunity for attackers.&lt;/p&gt;

&lt;p&gt;That is why one of the clearest next steps for developers is also one of the least glamorous: eliminate plaintext secrets from the workstation wherever possible.&lt;/p&gt;

&lt;p&gt;Replace hardcoded credentials with calls to approved secret managers. Move local secrets into the system keychain or an enterprise-approved password manager. Encrypt files at rest when secrets must exist in files at all. Use tools such as SOPS where that workflow makes sense. Better yet, move away from shared static secrets entirely and adopt identity-based authentication wherever feasible.&lt;/p&gt;
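&lt;p&gt;As a minimal sketch of that first step, here is what replacing a hardcoded credential with runtime retrieval can look like. The variable name &lt;code&gt;EXAMPLE_API_TOKEN&lt;/code&gt; is hypothetical; in practice the value would be injected for the session by an approved secret manager or the OS keychain, not exported by hand.&lt;/p&gt;

```python
import os

def get_api_token() -> str:
    """Fetch a credential at runtime instead of hardcoding it in source.

    EXAMPLE_API_TOKEN is a hypothetical name; the value should be injected
    by an approved secret manager or the OS keychain for the session,
    ideally as a short-lived credential rather than a static shared secret.
    """
    token = os.environ.get("EXAMPLE_API_TOKEN")
    if not token:
        raise RuntimeError(
            "EXAMPLE_API_TOKEN is not set; fetch it from the approved "
            "secret manager instead of pasting it into code or configs."
        )
    return token
```

&lt;p&gt;The failure branch matters as much as the happy path: a loud error when the credential is missing keeps the fallback from quietly becoming another hardcoded value.&lt;/p&gt;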

&lt;p&gt;The goal is reducing the amount of value an attacker can extract from any successful foothold.&lt;/p&gt;

&lt;h2&gt;
  
  
  The hard truth about remediation
&lt;/h2&gt;

&lt;p&gt;The most correct security answer here seems straightforward: reduce the use of plaintext secrets and move to stronger authentication. At the same time, we need to harden the workstation itself and standardize approved tooling. Slowing down dependency risk is another priority, and so is detecting abuse earlier, which also gives security teams the auditability they need.&lt;/p&gt;

&lt;p&gt;None of this happens instantly.&lt;/p&gt;

&lt;p&gt;Even good remediation plans take time because they touch real workflows. Changes require updates to training, tooling, access patterns, and team habits. Replacing a secret in code is easy on the surface. But replacing the workflow that caused ten copies of that secret to spread across laptops, config files, and build jobs is harder. Moving a team from local environment variables to a stronger secret retrieval model is possible, but it is not a same-day project. Moving from secret-based access to identity-based patterns takes longer still.&lt;/p&gt;

&lt;p&gt;Attackers do not pause while those projects are underway.&lt;/p&gt;

&lt;p&gt;That gap between knowing the right long-term direction and living in today's imperfect environment is where &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Honeytokens&lt;/a&gt; make sense.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why honeytokens belong on the developer workstation
&lt;/h2&gt;

&lt;p&gt;Honeytokens are not the end state. They are a compensating control that helps while cleanup is still in progress.&lt;/p&gt;

&lt;p&gt;Honeytokens do not prevent workstation compromise. They do not replace hardening, secret elimination, or better authentication. What they do is give defenders a way to detect malicious secret harvesting as it happens.&lt;/p&gt;

&lt;p&gt;A honeytoken is a decoy credential designed to generate an alert when someone tries to use it. On a developer machine, that makes it useful as a tripwire. If a poisoned dependency, a malicious plugin, or a compromised local tool begins sweeping through files and environment variables, looking for credentials to exfiltrate and replay, a well-placed honeytoken can surface that behavior before the attacker gets very far. Attackers very often validate harvested credentials automatically to cut down on their own noise, and that validation attempt is exactly what triggers the honeytoken.&lt;/p&gt;

&lt;p&gt;That early signal changes the response window. It can limit the blast radius. It can help incident responders identify the affected host, the likely access path, and the timing of the event. It can also create an auditable record of what happened and when, which is valuable during investigation and remediation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgff3c9hqnnnl07p6r8y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgff3c9hqnnnl07p6r8y.png" alt="The workflow of generate a honeytoken, deploy to a private environment, any attacker will trigger it, and get alerts instantly" width="800" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For organizations dealing with supply chain attacks that target credentials first, that is not security theater. That is practical detection.&lt;/p&gt;

&lt;h2&gt;
  
  
  Placement matters more than enthusiasm
&lt;/h2&gt;

&lt;p&gt;Honeytokens only work if they stay believable and private. Placement is critical.&lt;/p&gt;

&lt;p&gt;A honeytoken should look like the kind of secret an attacker expects to find in the kind of place they are already searching. Campaigns like Shai-Hulud show us exactly where attacks look for secrets. The best workstation honeytokens blend into the legitimate local context and live in the files and locations that supply chain malware tends to inspect first, like any local &lt;code&gt;.env&lt;/code&gt; files or paths like &lt;code&gt;~/.config/gh/config.yml&lt;/code&gt; or &lt;code&gt;~/.aws/credentials&lt;/code&gt;. Local config files, development directories, service-related settings, and environment variable paths are all obvious candidates. So are places where convenience has historically created risky habits.&lt;/p&gt;

&lt;p&gt;Environment variables deserve special attention here. Developers often treat them as safer than files because they feel transient. In practice, they spread. They persist in shell history, child processes, debug output, terminal multiplexers, launch configs, and tool integrations. If it is in your environment, it is often more portable and more visible than people assume.&lt;/p&gt;
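&lt;p&gt;That portability is easy to demonstrate: anything exported into a process's environment is inherited by every child process it spawns, with no file access required. A quick illustration, using a made-up variable name:&lt;/p&gt;

```python
import os
import subprocess
import sys

# Any variable set in this process's environment...
os.environ["DEMO_FAKE_TOKEN"] = "not-a-real-secret"

# ...is inherited by every child process it spawns, no file access required.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_FAKE_TOKEN'])"],
    capture_output=True,
    text=True,
)
print(child.stdout.strip())  # not-a-real-secret
```

&lt;p&gt;Swap the child for a build script, a plugin, or an AI assistant and the mechanics are identical, which is why long-lived secrets in the environment travel further than they appear to.&lt;/p&gt;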

&lt;p&gt;A private honeytoken placed in those realistic paths can do its job quietly while real secrets are being removed from the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start with what an individual developer can do
&lt;/h2&gt;

&lt;p&gt;One of the weaknesses in many workstation security conversations is that they blur the distinction between individual actions and organizational controls. That creates advice that sounds good but feels impossible to follow.&lt;/p&gt;

&lt;p&gt;An individual developer cannot rewrite the package policy for the company, deploy endpoint tooling across the fleet, or migrate the entire organization to identity-based authentication alone. But they can make meaningful changes on their own machine today.&lt;/p&gt;

&lt;p&gt;They can remove plaintext secrets from active project directories. They can stop using unapproved local storage for credentials. They can move secrets into the system keychain or approved managers. They can reduce reliance on long-lived environment variables.&lt;/p&gt;

&lt;p&gt;Further, they can avoid random plugins, suspicious package installs, and unapproved agentic tooling. They can treat phishing, weird links, and convenience scripts with more skepticism. They can report strange package behavior instead of assuming the problem is isolated.&lt;/p&gt;

&lt;p&gt;They can also install and maintain honeytokens as a tripwire while the rest of the cleanup continues.&lt;/p&gt;

&lt;p&gt;Workstation security is partly organizational, but compromise often begins with local habits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Then demand organizational support
&lt;/h2&gt;

&lt;p&gt;Developers should not have to solve this alone, and realistically, they can't.&lt;/p&gt;

&lt;p&gt;The enterprise has to do its part by making the secure path easier to follow. That means providing approved secret managers and clear local development patterns. It means publishing workstation baselines, endpoint protections, and package trust policies that actually reflect current threats. It also means establishing cooldown periods for updates when appropriate, rather than normalizing instant adoption of whatever just dropped upstream, and sandboxing or otherwise isolating new code and unfamiliar tools before they are trusted with real access.&lt;/p&gt;

&lt;p&gt;We must create and test response playbooks for when honeytokens fire, because a detection without a plan can still turn into a mess.&lt;/p&gt;

&lt;p&gt;There is also a new policy line that many teams should draw clearly: do not install agentic systems on work machines without explicit approval. That is especially important when those tools can read local context, inspect repositories, access credentials, or run broad local actions. The attack surface is not only the model. It is the &lt;a href="https://www.rescana.com/post/glassworm-supply-chain-attack-exploits-open-vsx-extensions-to-target-developer-environments?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;plugin&lt;/a&gt; ecosystem, the automation layer, the permissions, and the assumptions about trust.&lt;/p&gt;

&lt;p&gt;Some teams may even benefit from local warnings or command aliases that remind users when they are about to invoke unapproved tooling. If an attempt to invoke &lt;code&gt;openclaw&lt;/code&gt; instead triggers a warning, or even a honeytoken, so much the better.&lt;/p&gt;

&lt;p&gt;If the environment allows it, organizations can also &lt;a href="https://youtu.be/yandDvJr4Kc?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;add security-aware MCP and IAM tooling to local assistants&lt;/a&gt; to help with remediation workflows, policy checks, and honeytoken placement. That can make the defensive path more practical without pretending automation removes the need for judgment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8qie7rtwlsfvtu6yu6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffq8qie7rtwlsfvtu6yu6.png" alt="Slide showing the capabilities of the GitGuardian MCP server" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Secrets are the first target, not the only target
&lt;/h2&gt;

&lt;p&gt;Admittedly, secrets are not the whole story of workstation risk.&lt;/p&gt;

&lt;p&gt;Attackers may also want browser session material, package publishing rights, signed commit workflows, access to internal knowledge, SSH agent forwarding, build context, or any local state that helps them pivot. But secrets remain one of the most portable, reusable, and operationally useful prizes. That makes them the best first cleanup target and the best place to reduce attacker value quickly.&lt;/p&gt;

&lt;p&gt;If an attacker lands on a developer machine and finds nothing useful in plaintext, the possible damage narrows. The incident may still be serious, but it becomes harder to convert local execution into durable enterprise access.&lt;/p&gt;

&lt;p&gt;That is the security value of secret elimination. It does not promise perfection. It reduces attacker leverage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Treat developer machines like they matter, because they do
&lt;/h2&gt;

&lt;p&gt;The &lt;a href="https://research.jfrog.com/post/ghostclaw-unmasked/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;agentic AI era has amplified workstation risk&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Developers now work in environments that combine trusted execution, local automation, sprawling dependency chains, and high-value access in one place. Attackers know that they no longer need physical access to a laptop or a dramatic break-in story. Sometimes all they need is an update, an altered package or &lt;a href="https://www.rescana.com/post/glassworm-supply-chain-attack-exploits-open-vsx-extensions-to-target-developer-environments?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;plugin&lt;/a&gt;, or a workflow that slips through trust assumptions and starts looking for credentials.&lt;/p&gt;

&lt;p&gt;Developer workstations deserve the same discipline we already apply to pipelines and production infrastructure: Eliminate plaintext secrets, move toward stronger identity-based patterns, and be careful with updates, plugins, and local automation. And while all of that longer work is underway, install &lt;a href="https://www.gitguardian.com/honeytoken?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Honeytokens&lt;/a&gt; where attackers are most likely to look.&lt;/p&gt;
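&lt;p&gt;A first placement can be as simple as a decoy &lt;code&gt;.env&lt;/code&gt; file in a plausible-looking project directory. This is only a sketch: the directory name is made up, and the credential values are placeholders for a token generated by your honeytoken provider, so that any attempt to use them fires an alert.&lt;/p&gt;

```shell
# Sketch: drop a decoy .env where a credential-hunting script would look.
# The directory and values below are placeholders; paste a generated
# honeytoken (e.g. an AWS-style key pair from your provider) in their place.
DECOY_DIR="${HOME}/projects/legacy-billing-api"
mkdir -p "$DECOY_DIR"
cat > "${DECOY_DIR}/.env" <<'EOF'
AWS_ACCESS_KEY_ID=AKIAPLACEHOLDERPLACEHOLDER
AWS_SECRET_ACCESS_KEY=paste-your-generated-honeytoken-secret-here
EOF
chmod 600 "${DECOY_DIR}/.env"
```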

&lt;p&gt;That is not the whole strategy. It is the first good move.&lt;/p&gt;

&lt;p&gt;For teams trying to reduce risk now, that is often the difference between discovering an attack after the damage is done and catching it while it is still unfolding.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>cybersecurity</category>
      <category>devsecops</category>
    </item>
    <item>
      <title>Chainguard Assemble 2026 and the Security Factory Mindset</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 23 Mar 2026 15:37:55 +0000</pubDate>
      <link>https://dev.to/gitguardian/chainguard-assemble-2026-and-the-security-factory-mindset-3ib1</link>
      <guid>https://dev.to/gitguardian/chainguard-assemble-2026-and-the-security-factory-mindset-3ib1</guid>
      <description>&lt;p&gt;New York knows how to turn rough ground into something human-friendly. For example, &lt;a href="http://www.lizchristygarden.us/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;the Liz Christy Garden&lt;/a&gt; began in 1973 as the city's first community garden, carved out of an overgrown vacant lot on the Lower East Side. They went from a wild and chaotic space to a human-friendly park that New Yorkers still enjoy today. That felt like a good backdrop for an event where security-minded professionals could have a conversation centered on taking messy, fast-moving software systems and building them into something durable, governed, and worth trusting. About 400 like-minded practitioners did exactly that at &lt;a href="https://assemble.chainguard.dev/event/2991fca2-5be2-48cb-a8b9-132ab575cd51/summary?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Chainguard Assemble 2026&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Throughout the event, held at &lt;a href="https://www.theglasshouses.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;The Glass House&lt;/a&gt;, attendees had the chance to see 38 speakers, including Colin Jost in the closing session, discuss the path forward for security in a world of rapidly evolving, AI-powered threats. Across keynotes, lightning talks, and other sessions, the same message kept surfacing from different angles: patching after the fact is no longer a strategy, and security teams cannot afford to be the last human checkpoint in a pipeline shaped by AI, automation, and sprawling dependencies.&lt;/p&gt;

&lt;p&gt;Here are just a few of the highlights from Chainguard's second annual event.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developing With Power Tools, Safely, At Scale
&lt;/h2&gt;

&lt;p&gt;The opening Product Keynote by &lt;a href="https://www.linkedin.com/in/danlorenc?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Dan Lorenc, founder of Chainguard&lt;/a&gt;, began with a woodworking story that contrasted hand tools with power tools. Hand tools are slower and more familiar. Power tools are faster, louder, and more dangerous when things go wrong. Dan said this is where we all are with AI and software development: we can now move much faster, but at greater risk of causing damage.&lt;/p&gt;

&lt;p&gt;The industry has already entered a world where automation is not optional, and AI is not confined to autocomplete. CVE discovery and remediation that once took weeks can now happen in minutes. Agentic pentesting is compressing what used to be month-long cycles. That speed is real, and the attackers are using it too. The consequence is that "scan and patch" security no longer holds up as a primary operating model. By the time you find the issue downstream, the system has already moved on.&lt;/p&gt;

&lt;p&gt;Dan used that argument to frame &lt;a href="https://www.chainguard.dev/unchained/driftlessaf-introducing-chainguard-factory-2-0?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Chainguard Factory 2.0&lt;/a&gt;. This lets them build everything from source and control dependencies tightly, reconcile the active state against a desired state, and let agents handle the repetitive, high-volume coordination work while keeping trust anchored in cryptographic authenticity in a transparent process. The factory absorbs tens of thousands of dependency updates, hundreds of thousands of artifacts, and versioning edge cases across upstream projects that do not agree with each other. This is how they managed to build seven new offerings, including &lt;a href="https://www.chainguard.dev/os-packages?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;OS dependencies&lt;/a&gt; and &lt;a href="https://www.chainguard.dev/agent-skills?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;AI agent skills&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Software production now looks more like an automated factory than a bespoke workshop. Security has to be embedded in the factory design, not bolted on later.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j4i3lql3tpak876bjm6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2j4i3lql3tpak876bjm6.png" alt="Dan Lorenc and a volunteer sawing wood" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Golden Images As An Operating Model
&lt;/h2&gt;

&lt;p&gt;In the joint session from &lt;a href="https://www.linkedin.com/in/molly-soja?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Molly Soja, Lead Security Engineer at KKR&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/ayeshabhutto/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Ayesha Bhutto, Sr Technical Success Manager at Chainguard&lt;/a&gt;, called "Why Golden Images Still Matter," they presented a practical reminder that mature programs still need stable foundations. Golden images can sound old-fashioned, but this talk grounded the conversation in the realities of consistency, compliance, and rollout strategy.&lt;/p&gt;

&lt;p&gt;They explained that golden images are not just about hardening a base container, but about gaining predictability at scale. You get this by creating a standard layer where security, compliance readiness, and developer expectations can align. That matters even more in large environments where drift accumulates quietly, DIY image factories become distractions, and every exception adds operational drag. Molly told stories from her time at KKR and made the case for treating the work as a phased program instead of a grand migration. She said to start with a proof of value, choose services that reflect the real environment, and focus on reducing complexity.&lt;/p&gt;

&lt;p&gt;Teams want velocity, but they also want sane defaults, audit readiness, and fewer surprise regressions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29zjdz7lwn98qnp3z3ik.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F29zjdz7lwn98qnp3z3ik.png" alt="Ayesha Bhutto and Molly Soja" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Speed Needs A Support Model
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/brandon-heard-36a73444?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Brandon Heard, Technical Leader at PeopleTec&lt;/a&gt;, called "Developer Productivity Without Compromise," he told us his job is to let developers go as fast as they want, as often as they want, without making security a tax on delivery. He stressed that the details matter.&lt;/p&gt;

&lt;p&gt;His approach rested on one blessed runtime, security built into development workflows, and migration treated as a supported program rather than a mandate. The rollout mechanics were concrete: take a full inventory of existing images, pilot non-critical services, and automate through CI and templates. He supported the developers themselves by publishing example commits and holding office hours. This is not what typically happens in his experience; normally, platform or security teams announce a standard and assume adoption will follow. Brandon showed that adoption is a product problem as much as a technical one.&lt;/p&gt;

&lt;p&gt;The before-and-after comparison gave the story weight. Images dropped from roughly 600 MB to 120 MB. High CVEs dropped from 12 to 2. SBOMs became a default output instead of an afterthought. There is an operational lesson here that secure defaults only scale when accompanied by documentation, migration patterns, and support channels that respect how engineering teams actually work.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5wym8z7pbs4giw2b1gg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5wym8z7pbs4giw2b1gg.png" alt="Brandon Heard" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Compliance At Lunar Velocity
&lt;/h2&gt;

&lt;p&gt;In his session, "Securing the Next Moon Age: Automated Compliance Powers the Next Giant Leap," &lt;a href="https://www.linkedin.com/in/collin-estes-b77a398?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Collin Estes, CIO at MRI Technologies working at NASA&lt;/a&gt;, presented the most compelling high-stakes, real-risk example of the day. The context was NASA missions, flight readiness, and systems where the question "is it safe to fly?" is literal.&lt;/p&gt;

&lt;p&gt;Collin described a stack of platform, compliance, and supply chain problems that will sound familiar outside aerospace: multiple cloud environments, bespoke platforms, complex data flows, inherited controls, and the challenge of continuous authorization across hundreds of controls and overlays. They needed to address compliance and delivery simultaneously. The platform they developed absorbed much of the control burden. GitOps, identity federation, brokered services, and hardened container supply chains became a force multiplier for both operations and auditability.&lt;/p&gt;

&lt;p&gt;He described the shift toward continuously delivering trust rather than chasing a "point-in-time clean state." A zero-CVE pull today says little about tomorrow unless the surrounding system keeps reconciling, updating, and proving what changed. When a software factory model reaches a mission-critical environment, compliance stops being paperwork and becomes part of the operating substrate.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkni4hozqb2mk0aqqats.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhkni4hozqb2mk0aqqats.png" alt="Collin Estes" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Is Moving Upstream Because Timelines Have Changed
&lt;/h2&gt;

&lt;p&gt;AI assistance, dependency churn, malicious package discovery, and faster release expectations have all shortened the window between creation and consequence. That does not just create more work for security teams. It changes where security work belongs. When the cycle compresses, downstream review becomes a bottleneck, and bottlenecks get bypassed.&lt;/p&gt;

&lt;p&gt;Many talks focused on source builds, policy enforcement at package boundaries, hardened actions, and secure defaults in developer tooling. The goal is no longer to catch bad outcomes late. It is to constrain what can enter the system in the first place.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trust Is Becoming A Property Of Systems, Not Vendors
&lt;/h3&gt;

&lt;p&gt;Another pattern across the event was a shift from brand trust to process trust. Several sessions touched on this from different angles, including cryptographic authenticity, trusted package sources, reconciliation loops, and audit trails for agents. Teams need verifiable control over how software is built, updated, and promoted.&lt;/p&gt;

&lt;p&gt;AI increases output faster than it increases confidence. If more code, more automation, and more decisions are flowing through the pipeline, then "trust us" is not enough. Systems have to show their work, preserve provenance, and make validation a first-class function.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational Maturity Now Includes Non-Human Identities
&lt;/h3&gt;

&lt;p&gt;Modern governance has to account for agents, prompts, skills, actions, and machine-driven workflows as real participants in the supply chain. This was never presented as science fiction at Assemble. It was discussed as the current operational reality. Teams are already pulling external skills, running agentic workflows, and handing meaningful tasks to systems that can move much faster than manual reviewers.&lt;/p&gt;

&lt;p&gt;We are seeing a rapid shift in how we need to think about governance models. Identity risk is no longer only about employees and service accounts. Secret sprawl is no longer only a developer hygiene problem. Non-human identities with inherited permissions and agent behavior need to be managed with the same seriousness as runtime images and dependency graphs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Assembling A More Secure, AI-Powered Future
&lt;/h2&gt;

&lt;p&gt;Assemble 2026 featured many product announcements, but that was not the main takeaway from the many hallway conversations. Everything at the event pointed to the reality that security teams cannot keep acting as if software is still produced by small groups moving at human review speed. The tools and the way we work have changed. As AI agents and assistants get more powerful, the risks are less forgiving, and the output volume is already beyond what manual processes can govern.&lt;/p&gt;

&lt;p&gt;Automation alone is not the answer, and no single tool can make us secure. But we need automation inside systems designed for trust, delivering reproducible updates, policy-backed repositories, and auditable agent behavior. We must think about reducing variance in how we build before it becomes real risk. That is relevant whether you are building for financial services, federal environments, commercial SaaS, or missions that end in splashdown.&lt;/p&gt;

&lt;p&gt;Security maturity in 2026 is less about scanning harder and more about deciding where trust is manufactured. For teams dealing with identity risk, secrets sprawl, and the growing governance burden around non-human actors, that factory mindset looks less like a nice architectural pattern and more like the cost of keeping up.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>AI Is Making Security More Agile: Highlights from ChiBrrCon 2026</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 19 Mar 2026 18:47:48 +0000</pubDate>
      <link>https://dev.to/gitguardian/ai-is-making-security-more-agile-highlights-from-chibrrcon-2026-4hle</link>
      <guid>https://dev.to/gitguardian/ai-is-making-security-more-agile-highlights-from-chibrrcon-2026-4hle</guid>
      <description>&lt;p&gt;In 1900, Chicago completed one of the most ambitious engineering projects ever. Engineers &lt;a href="https://www.wttw.com/chicago-river-tour/how-chicago-reversed-river-animated?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;reversed the flow of the Chicago River&lt;/a&gt; so that sewage would no longer contaminate Lake Michigan, the city's drinking water source. They did not filter harder or build temporary containment. They redesigned that entire system's direction. This effort mirrors the reality everyone in information security now faces, as AI-driven code and applications create new challenges that require bigger answers than just more alerts and tools. This spirit of solving a big challenge, together, carried through every session at &lt;a href="https://chibrrcon.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;ChiBrrCon 2026&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This year marked the 6th installment of ChiBrrCon, an enterprise-focused security conference hosted on the &lt;a href="https://www.iit.edu/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Illinois Institute of Technology campus&lt;/a&gt;. This was the largest ChiBrrCon as well, with over 800 tickets sold. Throughout this single-day event, 27 speakers shared their knowledge, war stories, and best practices for securing the enterprise. There were also hands-on villages and competitions, where folks could learn some new skills and make some new connections.&lt;/p&gt;

&lt;p&gt;Across all the sessions, there was an urgency as every speaker addressed the question of what we do in a world where AI is driving code and applications. The answers were never just technical, but honest discussions of the needed structural decisions and changes we need to work on.&lt;/p&gt;

&lt;p&gt;Here are just a few of the highlights from this year's ChiBrrCon.&lt;/p&gt;




&lt;h2&gt;
  
  
  Land the Plane Before You Write the Postmortem
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/joshuapeltz/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Joshua Peltz, VP of Zero Networks&lt;/a&gt;, called "Resiliency through Adversity: Comparing 'Flight 1549' with a Cyber Breach," he explained crisis response as disciplined execution under pressure. Drawing on his experience aboard &lt;a href="https://en.wikipedia.org/wiki/US_Airways_Flight_1549?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Flight 1549&lt;/a&gt;, the infamous flight that landed in the Hudson, piloted by Capt. Sully. He described how survival depended on preparation deposits made long before impact, immediate anomaly recognition, and unambiguous authority during execution.&lt;/p&gt;

&lt;p&gt;The aviation parallel worked because it was procedural, just like we need to be for incident response. Detection happened in seconds. Roles were predefined. Communication was controlled. The objective was stabilization, not explanation.&lt;/p&gt;

&lt;p&gt;Joshua emphasized sequencing: containment first, recovery second, and attribution later. Security teams often invert this order. They chase root cause while lateral movement continues. That instinct feels analytical, but it increases blast radius.&lt;/p&gt;

&lt;p&gt;He also surfaced an uncomfortable reality: runbooks that are not stress-tested are documentation theater. Communication channels that are not practiced introduce chaos when adrenaline rises. Authority models that are not explicit collapse when escalation happens. Resilience, in his framing, takes rehearsal.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ryhnoib8akzhhnr7qic.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ryhnoib8akzhhnr7qic.jpeg" alt="Joshua Peltz" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Cutting Through The Noise Of 90 Billion Daily Events
&lt;/h2&gt;

&lt;p&gt;In "Modernizing Security Operations in a World of AI Threats," &lt;a href="https://www.linkedin.com/in/paul-hill-cissp-b258023a/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Paul Hill, Cortex Regional Sales Manager at Palo Alto Networks&lt;/a&gt;, presented a structural critique of how modern SOCs accumulate complexity. He shared his firsthand testimony of seeing his team take tens of billions of daily log events and collapse them using ML-powered automation into a handful of meaningful incidents. The real challenge was the cohesion of data, not the detection volume.&lt;/p&gt;

&lt;p&gt;Paul described how years of well-intentioned tooling decisions create fragmentation: separate detection engines, SIEM pipelines, and response workflows introduce operational friction and context switching. Each part was optimized in isolation, and the end result was blindness to the whole story.&lt;/p&gt;

&lt;p&gt;He described what they did at Palo Alto to consolidate telemetry into a unified data model, allowing context to travel with the signals. AI was positioned as a stitching mechanism, grouping alerts into coherent incident stories that reflect attacker movement rather than system boundaries. Repetitive triage work was automated so analysts could focus on investigation and engineering.&lt;/p&gt;

&lt;p&gt;Ultimately this led to the elimination of human-driven Level 1 triage. He was clear that there was no headcount reduction. Analysts moved into threat hunting and detection engineering. Burnout declined because the system removed repetition.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms3tfu4limvcfs5b384r.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fms3tfu4limvcfs5b384r.jpeg" alt="Paul Hill" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Fluency Is Not Judgment
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/billbernardchicago/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Bill Bernard, Field CTO at Between Two Firewalls&lt;/a&gt;, called "Gen AI Ain't Your Buddy: Neither Is Your Lawnmower," we got a behavioral warning about generative AI adoption. His fear is not that AI is intelligent; the danger lies in the fact that it &lt;em&gt;feels&lt;/em&gt; intelligent.&lt;/p&gt;

&lt;p&gt;Bill grounded generative AI in a practical definition. It predicts statistically likely output based on training data and rule constraints. It does not understand context in the human sense. It does not evaluate ethical nuance. It does not possess lived experience.&lt;/p&gt;

&lt;p&gt;He walked through examples of confidently wrong outputs, fabricated quotes, and biased responses that appear credible because they are well structured. He summed this up as "fluency creates an illusion of authority." The operational risk lies in how easily teams accept polished language as verified information.&lt;/p&gt;

&lt;p&gt;He reminded us that AI is just another tool, like your lawnmower. Tools excel within defined tasks. They fail when treated as cognitive partners. Generative AI is powerful for summarization, drafting, and preparation. It is fragile when asked to replace validation or expert judgment. He urged us to treat AI as infrastructure rather than a companion. He thinks that is the maturity step most teams have not yet taken.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt9o7y87v92u029sxceq.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyt9o7y87v92u029sxceq.jpeg" alt="Bill Bernard" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Risk Reduction Requires Building An Inventory
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/sean-juroviesky/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Sean Juroviesky, Security Architect at SoundCloud,&lt;/a&gt; presented "The Risky Business of AI Illiteracy." They presented AI risk as an extension of classical security fundamentals, such as overprivileged identities, injection vulnerabilities, misconfigurations, and weak segmentation. These issues have remained the dominant exposure vectors, no matter how the tech has evolved. Sean said that AI accelerates these patterns.&lt;/p&gt;

&lt;p&gt;Sean said we need to anchor the conversation with our teams, especially management, in "risk math." Risk equals threat plus vulnerability. Without full visibility into your environments and data flow, neither side of that equation is measurable.&lt;/p&gt;

&lt;p&gt;They emphasized mapping of trust boundaries and understanding how data moves across services. AI systems sit inside identity providers, API gateways, orchestration platforms, and third-party integrations. Focusing exclusively on model behavior while ignoring those interactions produces blind spots.&lt;/p&gt;

&lt;p&gt;Threat modeling, inventory discipline, and iterative review remain the backbone of defensible security. Automation does not replace them. It magnifies their absence.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gutx1j18vhp0lmaqh1b.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0gutx1j18vhp0lmaqh1b.jpeg" alt="Sean Juroviesky" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Operational Agility Beats Resilience
&lt;/h2&gt;

&lt;p&gt;The word "resilience" implies passive strength, a rigid ability to withstand shocks and return to baseline. ChiBrrCon 2026 revealed the deeper truth that security teams don't need to bounce back; they need to adapt forward. This is especially true in the exceptionally fast-moving world of AI-driven everything, which almost every speaker touched on.&lt;/p&gt;

&lt;p&gt;To deal with rapidly shifting demands and the new dangers that AI presents or amplifies, we must embrace a new goal beyond recovery, which can be summed up in one term: Operational Agility.&lt;/p&gt;

&lt;p&gt;The heart of operational agility is about more than just responding to and surviving breaches; it is the ability to evolve through these events in real time. And it's reshaping how we structure our teams, tools, and thinking.&lt;/p&gt;

&lt;h3&gt;
  
  
  Acting Before Certainty Arrives
&lt;/h3&gt;

&lt;p&gt;Operational environments now move faster than traditional security decision models allow. Data volumes overwhelm human analysis. Incidents unfold across multiple systems simultaneously. In this reality, waiting for full understanding is often indistinguishable from inaction.&lt;/p&gt;

&lt;p&gt;Operational agility requires comfort with acting with the best available, partial picture. It means knowing which decisions must be made immediately, which can be refined later, and which are reversible. Teams need to be able to prioritize under pressure.&lt;/p&gt;

&lt;p&gt;When incidents demand parallel action across detection, containment, communication, and recovery, hesitation becomes the threat multiplier. Agility emerges when authority is clear, decision rights are understood, and responders are trusted to move before every answer is known.&lt;/p&gt;

&lt;h3&gt;
  
  
  Reducing Cognitive Load To Preserve Judgment
&lt;/h3&gt;

&lt;p&gt;Security teams are not failing because they lack data; if anything, there is generally more raw data than anyone knows what to do with. They are failing because context is fragmented. Alerts arrive without narrative. Analysts spend time stitching together meaning instead of making decisions. Every manual correlation step drains attention that should be reserved for judgment.&lt;/p&gt;

&lt;p&gt;Teams need a shared understanding of what is happening, why it matters, and what action is required next. Tooling only helps if it reduces mental overhead rather than adding to it. Automation is essential here, not as a headcount strategy, but as a way to protect human cognition.&lt;/p&gt;

&lt;p&gt;Any task that requires repeated action without new judgment should already be automated. Humans should be reserved for ambiguity, tradeoffs, and accountability. When teams are buried in repetitive work, agility collapses long before burnout becomes visible.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security Happens Where People Interpret Risk
&lt;/h3&gt;

&lt;p&gt;Risk is not perceived uniformly, as individual humans are the ones perceiving and defining it. Different people respond differently under stress. Some act quickly and pull others in. Some slow down to stabilize and verify. Some prioritize accuracy. Others prioritize momentum.&lt;/p&gt;

&lt;p&gt;Trust sits at the center of this. Without trust, uncertainty gets hidden. Escalation gets delayed. With trust, imperfect information surfaces early and improves over time. This is also where the limits of AI become clear. Tools can assist thinking. They cannot replace accountability or judgment.&lt;/p&gt;

&lt;p&gt;The enduring takeaway from ChiBrrCon was that operational agility is not something you discover during an incident. It is something you build long before one ever starts.&lt;/p&gt;




&lt;h2&gt;
  
  
  Adaptability, Especially To AI, Is The New Availability
&lt;/h2&gt;

&lt;p&gt;The most useful takeaway from ChiBrrCon 2026 was not a new tool or a new tactic. After all, the dangers that AI brings to enterprise software are not new concepts. Your author was one of many presenters who showed that issues like adversarial input injection, XSS, and broken access controls are patterns we have been facing for decades. AI has simply accelerated the speed at which these issues can be introduced.&lt;/p&gt;

&lt;p&gt;Operational agility is needed to adapt to this new rate of software development and delivery. It is something you build long before the next incident ever starts. You build it in runbooks that get stress-tested, and by making sure all alerts bring context instead of noise. You build it in inventories that reflect how your systems actually work, not how you wish they worked. Teams need to trust each other enough to act on available information, no matter how incomplete it is.&lt;/p&gt;

&lt;p&gt;Chicago did not solve its water problem by asking people to be more careful. It solved it by changing the system. That is the work in front of us now. And the good news, visible in every packed room and every hallway conversation at ChiBrrCon, is that we are not doing it alone.&lt;/p&gt;

&lt;p&gt;If you are working on solving the issues around secrets sprawl and NHI governance, you are not alone, either, &lt;a href="https://www.gitguardian.com/book-a-demo?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;and we would love to work with you&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>ai</category>
      <category>cybersecurity</category>
    </item>
    <item>
      <title>ConFoo 2026: Guardrails for Agentic AI, Prompts, and Supply Chains</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 19 Mar 2026 14:48:30 +0000</pubDate>
      <link>https://dev.to/gitguardian/confoo-2026-guardrails-for-agentic-ai-prompts-and-supply-chains-1dpa</link>
      <guid>https://dev.to/gitguardian/confoo-2026-guardrails-for-agentic-ai-prompts-and-supply-chains-1dpa</guid>
      <description>&lt;p&gt;Montreal has a guardrail baked into its skyline. The "&lt;a href="https://cultmtl.com/2021/05/in-defence-of-building-height-restrictions-in-montreal-mount-royal-urban-plan-denis-coderre/#:~:text=The%20rationale%20behind%20the%20building%20height%20limitation%20in%201992%20was%20that%20Mount%20Royal%20is%20the%20city%E2%80%99s%20premier%20geographical%20feature%20and%20it%E2%80%99s%20both%20socially%20and%20culturally%20significant%20to%20Montrealers.%20As%20such%2C%20views%20of%20it%2C%20and%20views%20from%20it%2C%20should%20not%20be%20obstructed%20by%20tall%20buildings." rel="noopener noreferrer"&gt;mountain restriction&lt;/a&gt;" keeps most buildings from rising higher than the cross on top of &lt;a href="https://en.wikipedia.org/wiki/Mount_Royal" rel="noopener noreferrer"&gt;Mount Royal&lt;/a&gt;, roughly 233 meters (764 feet), so the city's natural high point remains the highest point. It is an urban policy choice that says clearly that growth is allowed, even encouraged, but only within constraints that preserve what matters.&lt;/p&gt;

&lt;p&gt;This makes Montreal a perfect backdrop for &lt;a href="https://confoo.ca/en/2026/" rel="noopener noreferrer"&gt;ConFoo 2026&lt;/a&gt;, a conference focused on building resilience by learning from experiences across many communities, including Java, .NET, PHP, Python, DevOps, and many others. With around 800 attendees spanning development and DevOps, the conference felt full of practitioners admitting the ground is moving and choosing to respond with structure rather than nostalgia. The event took place across five full days of activities, including over 190 sessions, two full days of workshops, an evening of co-located meetups, and many fun social hours spent sharing our ideas.&lt;/p&gt;

&lt;p&gt;The shared theme across sessions was not novelty for its own sake, but guardrails: controls that keep fast-moving systems from surprising you, whether the "system" is an LLM agent calling tools, a dependency graph pulling in unreviewed execution hooks, or a web application whose defaults quietly widen risk.&lt;/p&gt;

&lt;p&gt;There is no way to fully explain all that is ConFoo, so here are just a few highlights and thoughts.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;The Wristband Check for Your Bots&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In the session from &lt;a href="https://www.linkedin.com/in/nickytonline/" rel="noopener noreferrer"&gt;Nick Taylor, Developer Advocate at Pomerium&lt;/a&gt;, "Agentic Access: OAuth Gets You In. Zero Trust Keeps You Safe," we were presented with a crisp argument that our access model has quietly changed from human access to agentic access, and most stacks are still built for the former. There is a mismatch between how we authenticate as people and how we should authenticate when an LLM agent makes the call.&lt;/p&gt;

&lt;p&gt;Nick used &lt;a href="https://blog.gitguardian.com/non-human-identity-security-zero-trust-architecture/" rel="noopener noreferrer"&gt;Zero Trust&lt;/a&gt; as the corrective lens. Through it, we can see we should never trust, always verify, and verify with more than identity. The "wristband-at-the-venue" metaphor he mentioned works because it captures the difference between a one-time gate and ongoing enforcement. In a Zero Trust model, "who you are" is only one input. Device posture, time, location, and session behavior become policy signals, and the enforcement point needs to sit in front of the request, not behind it. Identity Aware Proxies matter, now more than ever. They do not authenticate blindly just because a key is present. Instead, they apply context-aware policy and create a single choke point for logging and auditing.&lt;/p&gt;

&lt;p&gt;MCP has quickly become a standard interface for tool calls in LLM ecosystems. That standardization reduced bespoke integrations, but it also made it easier to wire powerful tools to agents eager to comply with prompts. Nick said to "put the guardrails where the wheels touch the road": place MCP servers behind a proxy, enforce authentication, validate token audience and scopes, prevent token passthrough, preserve user consent, and audit all access. When the caller is non-human, and your tools can touch sensitive systems, you need a per-request policy that survives context changes.&lt;/p&gt;
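&lt;p&gt;As a rough illustration of that per-request posture, here is a minimal Python sketch of the kind of check an identity-aware proxy could run before forwarding an MCP tool call. The claim names, tool names, and policy table are hypothetical, not drawn from any specific product:&lt;/p&gt;

```python
import time

# Hypothetical sketch of the per-request check an identity-aware proxy
# could run in front of an MCP server. The claim names ("aud", "exp",
# "scope") follow common OAuth token shapes; the policy table and the
# "mcp-gateway" audience are illustrative values, not a product API.
POLICY = {
    # tool name mapped to the scopes required to invoke it
    "read_docs": {"docs:read"},
    "delete_records": {"records:delete", "admin"},
}

def authorize_tool_call(token_claims, tool_name):
    """Allow the call only when the token targets this gateway, is
    still valid, and carries every scope the tool demands."""
    if token_claims.get("aud") != "mcp-gateway":
        return False  # token minted for another audience: no passthrough
    if not token_claims.get("exp", 0) > time.time():
        return False  # an expired (or missing) expiry never passes
    required = POLICY.get(tool_name)
    if required is None:
        return False  # unknown tools are denied by default
    granted = set(token_claims.get("scope", "").split())
    return required.issubset(granted)
```

&lt;p&gt;The point is that the decision runs on every request: a valid key alone is never enough without the right audience, a current expiry, and scopes that cover the specific tool being invoked.&lt;/p&gt;
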

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1kb0diugqbs4ojmbocw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc1kb0diugqbs4ojmbocw.png" alt="Nick Taylor presenting at ConFoo 2026" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Prompt Hygiene Is the New Input Validation&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/bendechrai/" rel="noopener noreferrer"&gt;Ben Dechrai, Staff Developer Advocate &amp;amp; Software Engineer&lt;/a&gt;, presented "Rogue LLMs: Securing Prompts and Ensuring Persona Fidelity" to a completely full room. It was a reality check that LLMs are programmable interfaces that accept adversarial input, except that the input is language and the boundaries are fuzzy. Models will be "dangerous" if we keep treating prompt behavior as if it will be stable under pressure. We know from decades of security practice that "works on the happy path" is not a control.&lt;/p&gt;

&lt;p&gt;Ben gave real-world examples of "prompt leak," including one where system instructions translated into another language evaded trigger-word filters. Another showed structured output requirements used as a lever to coax the model into returning what it should not. Persona drift happens when the assistant stops being "your bot" and falls back to system defaults that push it to be ever more helpful. It is basically social engineering, except that the target has no human context and no hard boundaries. If you can socially engineer a human, you can socially engineer a model, and the model is optimized to comply.&lt;/p&gt;

&lt;p&gt;We must treat prompts like code and treat model behavior like a system under test. Ben talked about a mindset that says you cannot "prove" safety with just a couple of manual tests; you need statistically meaningful testing, an explicit risk appetite, and continuous evaluation in CI/CD. The environment will change even when your prompt does not. Ben also suggested planting "canary tokens" in your internal context and treating any appearance in a response as a deterministic sign that something has gone wrong.&lt;/p&gt;
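&lt;p&gt;The canary-token suggestion is easy to prototype. This minimal sketch (the marker format and function names are ours) plants a random marker in internal-only context and treats any verbatim appearance in a response as a deterministic failure signal:&lt;/p&gt;

```python
import secrets

# Minimal sketch of the canary-token idea from the talk. The marker
# format and function names are ours; the principle is simply that a
# value planted in hidden context should never surface in output.
def make_canary():
    """Create a random marker to embed in internal-only context."""
    return "canary-" + secrets.token_hex(8)

def response_leaked_canary(response_text, canary):
    """Deterministic leak check: any verbatim appearance is a failure,
    suitable as a hard assertion in CI/CD evaluation runs."""
    return canary in response_text
```

&lt;p&gt;Unlike probabilistic behavior tests, this check never produces a judgment call: if the canary appears, hidden context leaked, full stop.&lt;/p&gt;
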

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1ohulz03cx5q1ki6v43.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb1ohulz03cx5q1ki6v43.png" alt="Ben Dechrai presenting at ConFoo 2026" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;NuGet as a Delivery Truck With a False Bottom&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;In his session, "Building a supply chain attack with .NET and NuGet," &lt;a href="https://www.linkedin.com/in/maartenballiauw/" rel="noopener noreferrer"&gt;Maarten Balliauw, Head of Customer Success at Duende Software&lt;/a&gt;, presented on how dependency trust gets abused in ways that look boring at first, then turn catastrophic. He framed supply chain attacks as downstream-impact multipliers. If an attacker compromises a package, they inherit the victim's trust graph, and they leverage that trust to access environments that were never the original target. The danger of a "sleeper," where a good package goes bad later, is especially effective because it aligns with how teams actually update dependencies.&lt;/p&gt;

&lt;p&gt;Maarten broke the attack down into familiar components: a dropper, a payload, the command-and-control infrastructure, and a persistence-and-exfiltration layer. What made it uncomfortable was how many execution hooks exist in a modern .NET workflow that are legitimate features. Module initializers can run code when an assembly loads and source generators can run during builds to produce code that becomes part of the compiling project. Startup hooks can run before &lt;code&gt;Main&lt;/code&gt; via environment variables. These are all powerful extension points, not really bugs. The attacker's job is to smuggle intent through extension points that defenders treat as normal plumbing for the supply chain.&lt;/p&gt;

&lt;p&gt;We need a Swiss cheese model when thinking of guardrails, with multiple overlapping layers that make a system more resilient to attacks. Sign commits and packages, while also using package source mapping. Restore with lock files and enforce locked mode in CI, on top of generating SBOMs, which you should actually be analyzing. Pin CI actions while you watch for suspicious environment variable changes. Good dependency management requires adopting an operational discipline for your organization. If your build system can execute code from your dependency graph, then "dependency update" is a privileged operation, and it should be treated with the same care as production access.&lt;/p&gt;
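&lt;p&gt;One of those layers is easy to sketch. Assuming a NuGet-style packages.lock.json layout, a small CI guard could fail the build whenever a dependency lacks a pinned content hash; treat the field names here as illustrative rather than the official schema:&lt;/p&gt;

```python
import json

# A sketch in the spirit of NuGet's locked mode: scan a
# packages.lock.json-style document and report any dependency that
# lacks a pinned content hash. The layout mirrors NuGet's lock file
# format, but treat this as an illustration, not the official schema.
def unpinned_dependencies(lock_text):
    lock = json.loads(lock_text)
    missing = []
    for framework, deps in lock.get("dependencies", {}).items():
        for name, info in deps.items():
            if not info.get("contentHash"):
                missing.append(framework + ":" + name)
    return missing  # a CI step could fail the build if this is non-empty
```
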

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F728p98aimzpvtyfiy7w1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F728p98aimzpvtyfiy7w1.png" alt="Maarten Balliauw presenting at ConFoo 2026" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;OWASP as a Mirror, Not a Checklist&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/christianwenz/" rel="noopener noreferrer"&gt;Christian Wenz, Owner of Arrabiata Solutions&lt;/a&gt;, presented "Web Application Security Up-to-date: The 2025 OWASP Top Ten." He began by reminding us all that the OWASP project is not a compliance artifact. It is a lens for what the industry is repeatedly getting wrong. The list is useful precisely because it is a little imperfect; the categories are sometimes too broad, sometimes frustratingly narrow, and the debates about what belongs on it reveal where teams still lack shared mental models.&lt;/p&gt;

&lt;p&gt;Christian highlighted how misconfiguration and supply chain issues have risen in prominence and how some perennial categories stay stubbornly relevant. Broken Access Control remains an umbrella that hides many failure modes, from direct object access to function-level authorization gaps. Security Misconfiguration is odd because DevOps blurred the line between "developer" and "admin," and defaults remain sharp edges, full of noisy errors, weak browser headers, and parsing hazards. Cryptographic failures are most often about basics like enforcing HTTPS, setting HSTS, and using secure cookie flags consistently.&lt;/p&gt;
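&lt;p&gt;Those basics are checkable in a few lines. This illustrative sketch audits a set of response headers and Set-Cookie values for the controls mentioned above; the required cookie attributes are an example policy choice, not an official OWASP tool:&lt;/p&gt;

```python
# Illustrative audit of the "basics" named above: enforced HSTS and
# consistently flagged cookies. The header names are standard; the
# set of required cookie attributes is an example policy choice.
def header_findings(headers, set_cookie_values):
    """Return a list of findings for one response's security headers."""
    findings = []
    hsts = headers.get("Strict-Transport-Security", "")
    if "max-age" not in hsts:
        findings.append("missing HSTS max-age directive")
    for cookie in set_cookie_values:
        attrs = cookie.lower()
        for flag in ("secure", "httponly", "samesite"):
            if flag not in attrs:
                name = cookie.split("=")[0]
                findings.append(name + " cookie lacks " + flag)
    return findings
```
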

&lt;p&gt;Web security categories are converging with the agentic and supply chain themes rather than competing with them. "Injection" is still relevant, but the boundary of "input" is expanding. Model binding quirks, deserialization assumptions, and integrity failures in CI/CD all rhyme with prompt injection and package compromise. OWASP is not a checklist; it is a mirror. It reflects that our modern security failures are often control failures, often caused by automating actions without enforcing the constraints that make those actions safe.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbtgn8awn02ried5wfgl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmbtgn8awn02ried5wfgl.png" alt="Christian Wenz presenting at ConFoo 2026" width="800" height="435"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;The Future Is Embracing Change You Can Actually Operate&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;ConFoo hosted many conversations between technical communities that rarely get the chance to meet and talk. The 'hallway track' consistently had an air of excitement. While many subjects came up, a common theme was that things are evolving faster than ever before. At the same time, there was a real sense around AI, especially agentic AI, that we need to proceed with a little more safety and control in mind.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Change is here, but it still needs structure&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Starting with the keynote, there was a recurring background conversation throughout the week about "&lt;a href="https://confoo.ca/en/2026/session/spec-driven-development-a-faster-way-to-code" rel="noopener noreferrer"&gt;Spec-Driven Design.&lt;/a&gt;" There is a worry that while AI accelerates work, it "amplifies ambiguity." When context is thin, the model will infer missing details and keep moving, trying to please the user. That creates debt quickly when the output does not match what you meant, and you just have to hope it understands your next prompt.&lt;/p&gt;

&lt;p&gt;Instead, we need a structured approach, and structure should be seen as a performance feature. Clear requirements, a concrete design, and solid implementation details give both humans and tools something stable to align on. You get faster iteration because fewer cycles are spent translating intent after the fact.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Per-request trust beats perimeter trust&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Agentic AI was, of course, a common conversation across the networking events. Modern workflows are full of non-human actors that can touch real systems. Agents, automations, and developer tools operate with delegated access throughout your "perimeter," a word that means something very different in the modern world. Network location no longer describes risk when the tools you rely on run outside your environment.&lt;/p&gt;

&lt;p&gt;A sturdier approach is context-based access on every request. Systems need to validate identity and authorization, apply scoped permissions, and enforce policy before a tool reaches sensitive services. Then make it observable via monitored audit logs, consistent gateways, and controls that work the same way across systems.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Your software risk iceberg is mostly hidden beneath the surface&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;For modern applications, software risk lives outside just your application code. Dependencies, base images, build artifacts, configuration, and runtime behavior routinely decide what reaches production and what attackers can touch.&lt;/p&gt;

&lt;p&gt;One angle I heard in a few conversations was the idea of end-to-end ownership. For people who deliver dependencies, "owning" means proving you know what you ship, being able to track any exposure throughout the delivery chain, and building checks that hold up when any part of the system changes. For consumers, we need to treat dependency findings as seriously as first-party bugs, tighten error handling and logging, and test for drift in assistants and automations so they continue behaving as the system you intended.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Build for AI Speed With Control As A Requirement&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;AI is forcing a shift in what "secure by default" even means. The models will eventually say something wrong, a point many speakers touched on, including your author. The issue is that we are wiring language to action and then acting surprised when the system behaves probabilistically. Systems take the shortest path through ambiguity, following incentives to comply as they figure out the next token. AI will happily route around soft boundaries, and any unfortunate surprises are the tax you pay for automation without constraints.&lt;/p&gt;

&lt;p&gt;We can't slow down change, but we can make that change operable. We need policies enforced where the wheels touch the road, not just in a slide deck or internal PDFs. We have to treat prompts as inputs that can be adversarial and treat dependencies as privileged code that can execute automatically. Then test for drift, log what matters, and assume the environment will change even when your intent does not.&lt;/p&gt;

&lt;p&gt;Montreal's skyline still grows, but it keeps one thing higher than everything else, on purpose. For AI, that high point should be guardrails you can enforce and observe. Change for IT means building upward; we just need to align on where our highest priorities sit.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcl8kw7m7jyjajdbra1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfcl8kw7m7jyjajdbra1.png" alt="GitGuardian Interactive Demo" width="800" height="331"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>devsecops</category>
      <category>devops</category>
      <category>github</category>
    </item>
    <item>
      <title>From Detection to Defense: How Push-to-Vault Supercharges Secrets Management for DevSecOps</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Mon, 15 Dec 2025 18:16:58 +0000</pubDate>
      <link>https://dev.to/gitguardian/from-detection-to-defense-how-push-to-vault-supercharges-secrets-management-for-devsecops-174d</link>
      <guid>https://dev.to/gitguardian/from-detection-to-defense-how-push-to-vault-supercharges-secrets-management-for-devsecops-174d</guid>
      <description>&lt;p&gt;If you work in security or DevSecOps, you already know secrets do not live only where they should, safely in secrets management platforms, aka vaults. They leak into Git repos, CI logs, Slack threads, Jira tickets, wikis, and "temporary" config files that never get cleaned up. We unfortunately know this problem is getting worse for most organizations.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;The State of Secrets Sprawl 2025&lt;/a&gt; report found a 25% year-over-year increase in leaked secrets on public GitHub. We also reported that secrets are 8x more likely to leak into private repos.  Worse yet, 70% of the valid secrets leaked in 2022 remained valid when we retested them in 2025. &lt;/p&gt;

&lt;p&gt;The problem is not just that secrets leak. They stick around and grant access to whoever finds them.&lt;/p&gt;

&lt;p&gt;The good news is detection has never been better, thanks to &lt;a href="https://www.gitguardian.com/monitor-internal-repositories-for-secrets?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian's Secrets Detection&lt;/a&gt; across code, CI, collaboration tools, and anywhere sensitive values are exposed &lt;a href="https://docs.gitguardian.com/internal-monitoring/integrate-sources/bring-your-own-sources?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;across any source&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;The bad news is that the "last mile" of remediation is still a challenge for too many teams. Someone still has to handle all the manual steps of properly vaulting the secret, updating the required configs or CI variables, and rotating the secret, all while hoping nothing breaks. Doing that over and over turns into burnout, backlog, and real risk.&lt;/p&gt;

&lt;p&gt;GitGuardian's view is that there has been a missing link between "we found a secret" and "this is safely under control in a Secret Manager." We call that vital link "&lt;a href="https://docs.gitguardian.com/ggscout-docs/what-is-ggscout?ref=blog.gitguardian.com#safely-store-unvaulted-secrets" rel="noopener noreferrer"&gt;Push-to-Vault&lt;/a&gt;" and are happy to support it &lt;a href="https://docs.gitguardian.com/nhi-governance/integrate-your-sources?ref=blog.gitguardian.com#secrets-managers" rel="noopener noreferrer"&gt;across all leading secret management platforms&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/12/p2v-in-app-pic.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdln2l8h8zjf857e48wm3.png" width="774" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Push-to-Vault feature in the GitGuardian workspace&lt;/p&gt;

&lt;h2&gt;
  
  
  What GitGuardian Push-to-Vault Does for You
&lt;/h2&gt;

&lt;p&gt;Push-to-Vault is the bridge between detection and secure storage. It is a workflow that lets users insert discovered unvaulted secrets directly into their Secret Manager from within the GitGuardian platform. Instead of copying values into a clipboard and juggling tabs, you stay in the incident view, review what was detected, and trigger a controlled flow that moves the secret from being exposed in code or logs to being safely stored at the right vault path. &lt;/p&gt;

&lt;p&gt;GitGuardian tracks that remediation for you, so you know which incidents are actually under control and which still need work.&lt;/p&gt;

&lt;p&gt;Crucially, this does not introduce a new vault; instead, it helps you make better use of the vaults you already invested in. Under the hood, Push-to-Vault is powered by ggscout, a lightweight tool you run inside your environment. You integrate ggscout with existing Secret Managers such as HashiCorp Vault, CyberArk Conjur Cloud, AWS Secrets Manager, and others. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/12/data-src-image-d624d1cd-c315-48f5-a757-22dc9822a02f.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnqp2931p7c9m1pvi7odz.png" width="800" height="392"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GitGuardian Secrets Managers Integration menu&lt;/p&gt;

&lt;p&gt;When you Push-to-Vault from an incident, ggscout writes the exposed secret into your chosen vault path. After that, it sends only metadata and hashes back to GitGuardian so the platform can confirm remediation without ever holding the raw values.&lt;/p&gt;
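&lt;p&gt;To make the "metadata and hashes" idea concrete, here is an illustrative sketch of reporting a vaulted secret by fingerprint only. This is not ggscout's actual implementation, and the field names are hypothetical:&lt;/p&gt;

```python
import hashlib

# Illustrative sketch of a metadata-and-hashes handoff: the raw secret
# is fingerprinted locally, and only the digest plus context leaves
# the environment. This is not ggscout's implementation; the field
# names here are hypothetical.
def remediation_report(secret_value, vault_path):
    digest = hashlib.sha256(secret_value.encode("utf-8")).hexdigest()
    return {
        "vault_path": vault_path,     # where the value now lives
        "secret_sha256": digest,      # enough to match future findings
        "raw_value_included": False,  # the plaintext never leaves
    }
```

&lt;p&gt;A digest like this is enough for a platform to recognize the same secret if it leaks again, without ever holding the plaintext itself.&lt;/p&gt;
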

&lt;h3&gt;
  
  
  Push-to-Vault As Part Of Your NHI Governance Strategy
&lt;/h3&gt;

&lt;p&gt;Push-to-Vault is a key part of &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian's broader Non-Human Identity Governance story&lt;/a&gt;. Secrets, usually in the form of passwords, tokens, or API keys, are the connective tissue for service accounts, workloads, CI pipelines, agents, and every other workload identity that talks to critical systems. If you cannot see and govern those secrets, you cannot really say you understand or control your non-human identities.&lt;/p&gt;

&lt;p&gt;GitGuardian's NHI Governance aims to change that by giving you a real inventory, clear visibility, and end-to-end mapping from sources to consumers. You see which identity uses which secret, where that secret appeared, and how it flows through your environment. The faster a secret is secured in a vault, the sooner you can rotate it without fear of breaking applications or pipelines.&lt;/p&gt;

&lt;p&gt;As NHIs multiply and goals shift from "use vaults" to true lifecycle management, this becomes essential. GitGuardian helps you discover secrets and identities, understand hygiene and over-permissioning, and focus effort on vaulting what really matters. Once those secrets are properly stored, you can put regular rotation on autopilot instead of treating it as a one-off project. &lt;/p&gt;

&lt;p&gt;Push-to-Vault sits right in the middle of that journey. It does not just patch one leaked key. It turns each incident into an opportunity to bring another NHI under the kind of governance that modern zero trust architectures demand.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security First: How Push-to-Vault Works
&lt;/h2&gt;

&lt;p&gt;The internal flow for Push-to-Vault was intentionally designed to be secure and straightforward to implement.&lt;/p&gt;

&lt;p&gt;First, GitGuardian detects an incident with an exposed, unvaulted secret across one of your monitored sources. That might be a Git commit, a CI log, or a message in a collaboration tool. From there, ggscout pulls the incident details from your GitGuardian instance and uses its sync-secrets capability to write that secret to an integrated Secrets Manager. You decide which vault and which path, and you can standardize that per team, repo, or environment.&lt;/p&gt;

&lt;p&gt;Once ggscout writes the value into the vault, it sends only metadata back to GitGuardian so that the platform can mark the incident as vaulted and track remediation end-to-end. Once in the vault, the secret values never leave your infrastructure in clear text. GitGuardian never needs to store or replay your secrets. ggscout runs close to your vaults, while GitGuardian relies on metadata and cryptographic proofs that the secret is now under control.&lt;/p&gt;

&lt;h3&gt;
  
  
  Tight Control Over Automation
&lt;/h3&gt;

&lt;p&gt;Push-to-Vault is opt-in. ggscout can be configured to read from your vaults without any write access at all, which is a common starting point for teams that want visibility first. When you are ready to enable writes, you can restrict ggscout to very specific locations, such as a dedicated path, a given namespace, or only non-production environments. That way, you give your teams a safer "lane" for automation instead of turning it loose everywhere.&lt;/p&gt;

&lt;p&gt;If you need even more assurance, you can run in a more conservative mode, where ggscout operates in a fetch-only posture and produces JSON reports that show what it would write and where. You can review those reports with your platform, security, or audit teams before allowing any actual writes. It is a practical way to bring skeptics along without asking them to take automation on faith.&lt;/p&gt;

&lt;p&gt;Just as important, Push-to-Vault is designed to help you avoid dumping everything into the vault and needing to sort it out later. We all know that later never comes. You can embed remediation guidance directly in GitGuardian to lead users down the right paths. Over time, the result is a cleaner, more predictable vault layout rather than simply moving the mess from Git into a secret store.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Push-to-Vault Changes Day-to-Day Work
&lt;/h2&gt;

&lt;p&gt;Push-to-Vault is all about removing friction. Instead of bouncing between an incident panel, a vault console, a configuration repo, and a runbook, your security and platform teams can move exposed secrets straight into the right Secret Manager path from the GitGuardian dashboard. No juggling user interfaces. No copy-paste. No hoping someone remembers to update a ticket afterward.&lt;/p&gt;

&lt;p&gt;Once a secret is vaulted, you can wire the new path back into the application or pipeline, rotate the value safely, and close the loop with a clean audit trail that shows how it moved from detection to secure storage. &lt;/p&gt;

&lt;p&gt;GitGuardian can tag incidents when secrets are already vaulted, so you get a clear split between "under control" and "still floating around unvaulted." That lines up with revocation workflows where you vault and rotate a new secret, revoke the old one, and then close the incident with evidence attached.&lt;/p&gt;

&lt;p&gt;For dev teams, this means no more generic "go fix this" tickets. Engineers get a vault path and a clear reference, saving them time looking up the process and the right vault. Over time, Push-to-Vault nudges your organization toward consistent vault hierarchies instead of random environment files and scattered CI variables.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/12/data-src-image-a4c9fbab-afe1-468a-9174-61c4981d3ac8.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnv3z3syu465d8pahab4i.png" alt="Optional workflows once ggscout is integrated" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  A Practical Scenario
&lt;/h3&gt;

&lt;p&gt;Let's walk through an example incident in which a production database password is committed to a private repo. With traditional remediation, you would copy the password out, create a vault entry by hand, try to pick the right path, update the configuration, rotate the password, possibly redeploy the application, and hope nothing got missed. &lt;/p&gt;

&lt;p&gt;With Push-to-Vault, you start from the incident, push the value into a standard production database path that your team already trusts, wire the app to read from that path, and rotate with confidence. The same pattern holds for an access key pasted into Slack during an incident or a long-lived token hiding in a legacy config file. Speed, safety, and traceability all improve in the same motion.&lt;/p&gt;
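&lt;p&gt;As a rough sketch of that loop, the shape is: vault the leaked value, rewire the consumer, rotate, and close with evidence. Every function, path, and field name below is invented for illustration; GitGuardian's actual product and any real vault SDK differ:&lt;/p&gt;

```python
# Illustrative only: the "vault" is a dict and all names are hypothetical.
import secrets

def remediate(incident, vault):
    path = f"prod/db/{incident['service']}/password"  # assumed standard hierarchy
    vault[path] = incident["exposed_value"]           # 1. push the leaked value into the vault
    vault[path] = secrets.token_urlsafe(32)           # 2. rotate so the leaked value is dead
    incident["status"] = "resolved"                   # 3. close the loop with an audit trail
    incident["audit"] = [f"vaulted at {path}", "rotated", "old value revoked"]
    return incident

vault = {}
incident = {"service": "billing", "exposed_value": "hunter2", "status": "open"}
remediate(incident, vault)
print(incident["status"])  # resolved
```

&lt;p&gt;The detail that matters is ordering: the new vault path exists and the application reads from it before the old value is revoked, so rotation never causes an outage.&lt;/p&gt;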

&lt;p&gt;At the program level, this operational improvement rolls up into better Privileged Access Management and stronger NHI governance. By centralizing where secrets actually live across vaults, you can see duplicate or weak credentials, enforce least privilege more consistently for non-human identities, and move toward a lifecycle-driven secrets strategy instead of a series of isolated cleanups.&lt;/p&gt;

&lt;h3&gt;
  
  
  Push-to-Vault As Part Of Your Secrets Security Strategy
&lt;/h3&gt;

&lt;p&gt;While Push-to-Vault is an exciting feature, it is one part of a larger approach. Adopting it needs to be part of your overall secrets management and NHI governance plan. If you are new to this journey, we highly recommend first finding out how many secrets are currently leaked and where they live. Until you know that, you don't know what is really at stake for your organization.&lt;/p&gt;

&lt;p&gt;Detection gives you that visibility. Once you are ready, though, Push-to-Vault turns that visibility into control. The faster you can move from finding leaked secrets with scanning to vaulting and rotating, the less time an attacker has to turn someone's mistake into a foothold.&lt;/p&gt;

&lt;p&gt;If you are already using GitGuardian to detect secrets, Push-to-Vault is how you turn noisy findings into durable, governed fixes. This is one more way GitGuardian is helping teams get non-human identities under control: in a vault, with automated rotation, and with proper lifecycle governance.&lt;/p&gt;

&lt;p&gt;If you are not yet leveraging GitGuardian, we would &lt;a href="https://www.gitguardian.com/book-a-demo?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;love to talk to you about fighting secrets sprawl and getting a handle on NHI governance&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>programming</category>
    </item>
    <item>
      <title>Lessons in Testing, Performance, and Legacy Systems from /dev/mtl 2025</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 11 Dec 2025 15:27:00 +0000</pubDate>
      <link>https://dev.to/gitguardian/lessons-in-testing-performance-and-legacy-systems-from-devmtl-2025-5dl2</link>
      <guid>https://dev.to/gitguardian/lessons-in-testing-performance-and-legacy-systems-from-devmtl-2025-5dl2</guid>
      <description>&lt;p&gt;Montreal, Canada, is the birthplace of the search engine. Long before the world was talking about semantic search and vector embeddings, a student at &lt;a href="https://en.wikipedia.org/wiki/Archie_(search_engine)?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;McGill built Archie&lt;/a&gt; to find files across FTP servers. The creator's goal wasn't to change how everyone would use the internet; it was to solve a specific issue around finding available files by exact title. Almost all leaps in technology are achieved the same way: a small group of practitioners focuses on solving a specific issue, and those innovations reverberate across the whole internet. It made Montreal an ideal setting for a group of current innovators to get together and discuss common challenges at &lt;a href="https://www.dev-mtl.ca/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;/dev/mtl 2025&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Around 150 developers got together at &lt;a href="https://www.dev-mtl.ca/venue?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;École de technologie supérieure (ÉTS)&lt;/a&gt; for a truly cross-community event, put on by a &lt;a href="https://www.dev-mtl.ca/about?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;coalition of 14 local tech communities&lt;/a&gt;, including Java and Python user groups, the local CNCF and AWS meetups, and Women in AI. In true Québécois fashion, the 21 speakers shared their knowledge in both French and English across three tracks. &lt;/p&gt;

&lt;p&gt;Here are just a few highlights from this year's /dev/mtl. &lt;/p&gt;

&lt;h2&gt;
  
  
  Unchecked Complexity Makes Testing Unpredictable
&lt;/h2&gt;

&lt;p&gt;In his session "Feature Flags And End-to-End Testing," &lt;a href="https://www.linkedin.com/in/bahmutov/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Gleb Bahmutov, Senior Director of Engineering at Mercari&lt;/a&gt;, walked through a problem a lot of teams quietly live with, especially as they update legacy code. Feature flags are great for incremental releases, experiments, and kill switches, but they can turn end-to-end testing into a maze. &lt;/p&gt;

&lt;p&gt;The math is exponential, as each new flag doubles the number of possible states. For example, if you have three flags with two states each, that is (2 × 2 × 2) = 8 states to test. Adding a fourth two-state flag makes that 16 possible states. It introduces testing questions like "Did we ever test these flags in these states?" Keeping up manually is a logistical nightmare.&lt;/p&gt;
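&lt;p&gt;The arithmetic is easy to check with a few lines of Python; the flag names here are made up for illustration and not tied to any particular flag tool:&lt;/p&gt;

```python
from itertools import product

def flag_states(flags):
    """Enumerate every on/off combination for a set of boolean feature flags."""
    names = sorted(flags)
    return [dict(zip(names, values))
            for values in product([False, True], repeat=len(names))]

three_flags = flag_states(["new_checkout", "dark_mode", "beta_search"])
four_flags = flag_states(["new_checkout", "dark_mode", "beta_search", "fast_cart"])
print(len(three_flags))  # 8 combinations to cover
print(len(four_flags))   # 16 once a fourth flag lands
```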

&lt;p&gt;Gleb explained that every test is supposed to be deterministic, yet percentage rollouts and misaligned environments mean the same test can fail every few days for no obvious reason.&lt;/p&gt;

&lt;p&gt;He compared three testing strategies. Total control gives tests the full feature flag payload through an API and fixtures, but now you are debugging caching and invalidation on top of the app. Selective control stubs only the flag under test, but page reloads, navigation, and backend behavior still make things unpredictable. The most reliable option was per-user control: keep flags as production-like as possible, then target individual user IDs in tools like LaunchDarkly.&lt;/p&gt;

&lt;p&gt;The larger lesson was lifecycle discipline. Treat flags as temporary. Make new features explicit opt-in, migrate tests as defaults change, archive flags when done, and aggressively retire anything old. Also, do not build your own feature flag system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m6pcazoatc2h?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh8995w72j3u35rmn7fbz.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Gleb Bahmutov&lt;/p&gt;

&lt;h3&gt;
  
  
  Invisible Complexity Impacts Performance
&lt;/h3&gt;

&lt;p&gt;&lt;a href="http://linkedin.com/in/rezamadabadi/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Reza Madabadi, Software Developer at 360.Agency&lt;/a&gt;, started his talk, "Why Your Database Hates You: The N+1 Query Problem," by asking, "Why is everything so slow?" It is a common question every developer faces when starting with a new company with years of history to decode. Tools like MySQL &lt;code&gt;EXPLAIN&lt;/code&gt; can help diagnose some of the issues, but Reza said that it does not tell you the full story; it can show the query is taking a long time, but it does not show why.&lt;/p&gt;

&lt;p&gt;Reza said what changed things for him was tracing. This revealed how a single, seemingly harmless request was turning into thousands of database calls. Each individual lookup was cheap, but together they added up. That is the N+1 problem. Not a language or framework bug, but a middleware issue rooted in object-relational mappers (ORMs).&lt;/p&gt;

&lt;p&gt;He explained that in a typical Java and Hibernate monolith, data access objects feed big data transfer objects (DTOs), and lazy loading tries to protect you from loading the whole database at once. Instead of one query with joins, the ORM runs one query to get a list, then N queries to hydrate each association. Reza walked through join fetches and Hibernate batch size tweaks as partial fixes. They help, but they are still hacks that can create long prepared statements and memory pressure.&lt;/p&gt;
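&lt;p&gt;The N+1 shape is easy to reproduce outside of Hibernate. This small Python/sqlite3 sketch, with a toy schema invented for the example, counts the queries each approach issues:&lt;/p&gt;

```python
import sqlite3

# Toy schema (hypothetical, not the speaker's code) just to show query counts.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bo'), (3, 'Cy');
    INSERT INTO book VALUES (1, 1, 'A'), (2, 2, 'B'), (3, 3, 'C');
""")

# N+1 shape: one query for the list, then one lazy query per association.
query_count = 1
authors = conn.execute("SELECT id, name FROM author").fetchall()
for author_id, _name in authors:
    conn.execute("SELECT title FROM book WHERE author_id = ?", (author_id,)).fetchall()
    query_count += 1
print(query_count)  # 1 + N = 4 queries for just 3 authors

# Join shape: the same data in a single round trip.
rows = conn.execute(
    "SELECT author.name, book.title FROM author "
    "JOIN book ON book.author_id = author.id"
).fetchall()
print(len(rows))  # 3
```

&lt;p&gt;With three rows the difference is invisible; with thousands of rows behind a lazy-loading ORM, the 1 + N pattern is what turns one "harmless" request into thousands of database calls.&lt;/p&gt;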

&lt;p&gt;The more durable answer was to design for DTOs directly. Reza said they now use entity-based CRUD where it makes sense, but write targeted select DTO queries where they know exactly what is needed. Pair that with local tracing tools like Digma to catch N+1 patterns early, before they turn into mysterious slow nights in production.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m6pi6j5u2t2j?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xlx3wpgmtupqd7qphz3.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Reza Madabadi&lt;/p&gt;

&lt;h2&gt;
  
  
  Improvements Take A Dedicated Approach
&lt;/h2&gt;

&lt;p&gt;In his talk "My Journey With Software Testing," &lt;a href="https://www.linkedin.com/in/luciancondrea/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Lucian Condrea, a freelance full-stack developer and contributor at Tribe Social&lt;/a&gt;, told a story many self-taught testers will recognize. He started with no real testing skills, no strategy, and a growing pile of manual checks that were slow, tedious, and mentally draining. Leadership only saw that "QA is too slow," without understanding the challenges the team was facing, including the invisible cognitive load. Progress felt random, and there was no clear path to those dependable systems and breezy workdays he wanted.&lt;/p&gt;

&lt;p&gt;Things shifted when he became intentional about learning. Lucian built a "testing wishlist" and carved out daily 30-minute practice sessions. He leaned into small, atomic wins instead of vague "I should be testing more" guilt. He credited blogs from folks like &lt;a href="https://kentcdodds.com/blog/how-to-know-what-to-test?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Kent C. Dodds&lt;/a&gt; for finally clarifying why tests matter, and Nicolas Carlo's "Legacy Code: First Aid Kit" for showing how to create boundaries in the code so it was even possible to test. &lt;/p&gt;

&lt;p&gt;From there, he adopted a pragmatic view: "Tests should serve you, not the other way around." He focuses on readable, co-located tests, with integration tests offering the best return. This means minimal mocking and only as much end-to-end coverage as you truly need.&lt;/p&gt;

&lt;p&gt;He left us with the advice to reflect on strategy, not dogma. He said to build deliberate habits so your tests give you confidence instead of resentment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m6poidde332v?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwdylsybwejz38uz6stdj.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Lucian Condrea&lt;/p&gt;

&lt;h2&gt;
  
  
  Lessons From Developers For Everyone
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Legacy Is Our Shared Starting Point
&lt;/h3&gt;

&lt;p&gt;One quiet constant behind every talk was the reality that legacy is not a corner case, and no one is starting from greenfield. Legacy systems are where we do our most meaningful work. We are not working in an ideal vacuum, but are layering decisions on top of years of code, data, and human habits. &lt;/p&gt;

&lt;p&gt;Instead of fantasizing about starting over with a rewrite "once things calm down," the real work is learning how to move forward inside constraints you did not choose. When you accept that, dealing with legacy stops being a shameful side quest and becomes the main design problem of figuring out how to change things without breaking the promises your system made years ago.&lt;/p&gt;

&lt;h3&gt;
  
  
  Feedback Loops Beat Raw Velocity
&lt;/h3&gt;

&lt;p&gt;Another theme that permeated the whole event was that while speed matters, measuring and investigating what is happening might be just as important. Whether the topic was performance, testing, releases, or AI, the teams that seemed calmer were the ones with feedback loops they trusted. That might mean observability that shows how a request actually flows, or tests that fail in ways that teach you something instead of interrupting you at random. It might be metrics on how often your users get "no results" in search, or how many flags are still active past their intended lifetime. &lt;/p&gt;

&lt;p&gt;If you cannot observe it, you cannot reason about it, and if you cannot reason about it, you are only guessing that you are moving faster, in the dark.&lt;/p&gt;

&lt;h3&gt;
  
  
  Guardrails Over Heroics
&lt;/h3&gt;

&lt;p&gt;The speakers' stories kept pointing to the fact that the strongest teams design guardrails so that normal behavior is safe by default. That looks like defensible defaults, explicit lifecycles, and constraints that keep complexity from running away in the first place. &lt;/p&gt;

&lt;p&gt;It means treating experiments, flags, tests, and configurations as living systems with an end-of-life plan, not as one-off hacks. When you do that, you do not need a rockstar to remember every edge case. You need a group of normal people who respect the guardrails and adjust them as reality changes. This is as true for security and secrets governance as it is for any other area of production systems. &lt;/p&gt;

&lt;h3&gt;
  
  
  Tools Change, Habits Compound
&lt;/h3&gt;

&lt;p&gt;Underneath all the specific technologies, the real leverage showed up in habits, not tools. Tools will keep changing. We will likely never stop learning about new frameworks, new agents, and new tracing stacks. &lt;/p&gt;

&lt;p&gt;What carried across topics was the value of small, repeatable practices. Speakers commonly talked about carving out time to improve tests, routinely inspecting how your system actually behaves, and retiring complexity instead of hoarding it. We should strive for the simplest solution that fits the current scale. &lt;/p&gt;

&lt;p&gt;These habits compound in a way that individual tools never do. The future of our systems depends less on the next big thing and more on how disciplined we are with the things we already have.&lt;/p&gt;

&lt;h2&gt;
  
  
  Innovations Come From Persistence While Addressing Real Issues 
&lt;/h2&gt;

&lt;p&gt;Your author was able to share a session on secrets security at this developer-focused event. Rather than being put off by the scale of the secrets sprawl problem, developers who attended leaned in and asked about possible solutions. It was highly encouraging to see folks who do not regularly interact with the security team immediately recognize the dangers of plaintext credentials and seem eager to embrace available solutions to work more safely and efficiently.  &lt;/p&gt;

&lt;p&gt;No matter what area of enterprise technology you are dealing with, the same themes that ran through this developer-focused conference apply. Accept the legacy you have, add observability, put guardrails in place, and build habits that make the safe path the easy one. If we keep doing that across testing, performance, search, and security, the next "Archie moment" will not come from a single breakthrough, but from thousands of small, deliberate improvements shipped by teams like the ones who showed up at /dev/mtl.&lt;/p&gt;

</description>
      <category>techtalks</category>
      <category>devops</category>
      <category>security</category>
      <category>sre</category>
    </item>
    <item>
      <title>Workload And Agentic Identity at Scale: Insights From CyberArk's Workload Identity Day Zero</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Fri, 28 Nov 2025 16:10:36 +0000</pubDate>
      <link>https://dev.to/gitguardian/workload-and-agentic-identity-at-scale-insights-from-cyberarks-workload-identity-day-zero-55k</link>
      <guid>https://dev.to/gitguardian/workload-and-agentic-identity-at-scale-insights-from-cyberarks-workload-identity-day-zero-55k</guid>
      <description>&lt;p&gt;What do the terms identity, AI, workload, access, SPIFFE, and secrets all have in common? These were the most common words used at &lt;a href="https://lp.cyberark.com/20251110-cyberark-workload-identity-day-zero-atlanta-registration.html?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;CyberArk's Workload Identity Day Zero in Atlanta&lt;/a&gt; ahead of KubeCon 2025. &lt;/p&gt;

&lt;p&gt;Across an evening full of talks and hallway conversations, the discussion kept coming back to the fact that we have built our infrastructures, tools, and standards around humans, then quietly handed the keys to a fast-multiplying universe of non-human identities (NHIs). However, the evening didn't dwell on what we have gotten wrong, but instead on what we are getting right as we look towards a brighter future of workload identity.&lt;/p&gt;

&lt;h2&gt;
  
  
  State of Workload Authentication
&lt;/h2&gt;

&lt;p&gt;Every speaker discussed what has happened so far and how so many companies have reached their current state. These workload identities, in the form of services, agents, CI jobs, Lambdas, and so on, are mostly authenticated today with long-lived API keys that are, more likely than not, overprivileged. We have granted standing access too often. And while some teams have embraced PKI setups at scale, they are trapped in a complexity that only a handful of experts truly understand.&lt;/p&gt;

&lt;p&gt;The result is explosive complexity as teams face multi-cloud and hybrid environments, multiple languages, and increasingly complex org charts. It is no wonder that every siloed team has come up with its own ad hoc solutions over the years. But that means it is unlikely that any governance model will be a good fit, preventing us from leveraging a single management system. There is also the fact that if an attacker gets hold of a single NHI credential, they gain a huge, often invisible foothold with a massive blast radius.&lt;/p&gt;

&lt;p&gt;At the same time, scale and AI are turning this from an "annoying" problem into an "existential" threat. Workloads now spin up, talk to each other, cross trust domains, and die off in seconds, while organizations want billions of attestations per day without hiring an army just to rotate secrets. Now agentic AI shows up and starts acting on our behalf. It calls APIs, touches sensitive data, and hops across providers. We often can no longer tell whether a user triggered an action or an autonomous agent did.&lt;/p&gt;

&lt;p&gt;We are beginning to recognize the need for clean attribution, access governance, and logging. Everyone on stage is essentially describing the same issue: we can't keep handing out magic tokens to non-human actors and hope spreadsheets, YAML, and "best effort" PKI will save us.&lt;/p&gt;

&lt;h2&gt;
  
  
  Weaving A Shared Workload Identity Story
&lt;/h2&gt;

&lt;p&gt;The opening keynote from &lt;a href="https://www.linkedin.com/in/andrew-moore-681b1114a/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Andrew Moore, Staff Software Engineer at Uber&lt;/a&gt;, "From Bet to Backbone, Securing Uber with SPIRE," set the tone for the evening. For Uber, all of this work comes down to external customer trust. Their SPIRE journey really began when they admitted the impossibility of governing "thousands of solutions at scale."&lt;/p&gt;

&lt;p&gt;Andrew's team moved toward a single, SPIFFE-based workload identity fabric that can handle hundreds of thousands to billions of attestations per day. They treat &lt;a href="https://spiffe.io/book/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;SPIRE as the "bottom turtle&lt;/a&gt;." This means trusted boot, agent validation, centralized signing, and tight, well-designed SPIFFE IDs that don't accumulate junk.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m5ck5fli5427?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyhn1ny8zygdliqeusag9.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Andrew Moore&lt;/p&gt;

&lt;h2&gt;
  
  
  Tying AI To Workloads
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/bcaley/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Brett Caley, Senior Software Security Engineer at Block&lt;/a&gt;, echoed a similar story arc as Uber's in his talk "WIMSE, OAUTH and SPIFFE: A Standards-Based Blueprint for Securing Workloads at Scale." The core question they needed to answer was how to prove a workload "Is who they say they are." Their team went from plaintext keys in Git and bespoke OIDC hacks to "x509 everywhere," SPIRE-driven attestations. They have rolled out systems that can issue credentials where the workload is, and at the speed developers demand. He explained that they deployed SPIRE "where it should exist."&lt;/p&gt;

&lt;p&gt;Brett also tied all of this to agentic AI and our tendency to anthropomorphize, that is, giving AI agents names in Slack. He said this might invoke Black Mirror storylines, but we need to remember that these 'human-like agents' are all still workloads that need narrowly scoped permissions, explicit authorization of actions, and confirmation of intent.&lt;/p&gt;

&lt;p&gt;When something goes wrong, he argues, the right question isn't "What did the AI do to us?" but "How did our &lt;em&gt;system&lt;/em&gt; fail in governing the AI's workload identity and permissions?"&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m5csatle632j?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcuvlc8v1v4f8psfsfk42.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Brett Caley&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Agents Are Workloads That Need Identities
&lt;/h2&gt;

&lt;p&gt;In their talk "AI agent communication across cloud providers with SPIFFE universal identities," &lt;a href="https://www.linkedin.com/in/dchoi19/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Dan Choi, Senior Product Manager, AWS Cryptography&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/bdpaul99/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Brendan Paul, Sr. Security Solutions Architect from AWS&lt;/a&gt;, highlighted that Agentic AIs are comprised of workloads and need to communicate across clouds without ever touching long-lived secrets.&lt;/p&gt;

&lt;p&gt;They said if you can establish two-way trust between your authorization servers and your SPIFFE roots of trust, via SPIRE, you can treat SPIFFE Verifiable Identity Documents (SVIDs) as universal, short-lived identities for AI agents.&lt;/p&gt;

&lt;p&gt;From there, they walked through some concrete use cases, including an AI agent acting on behalf of a user. Framed as an "AI-enabled coffee shop" demo, they showed how you start with an authenticated web app, then propagate both user identity and workload identity so the agent can check inventory, update systems, and call tools with clear attribution and least privilege.&lt;/p&gt;

&lt;p&gt;They stressed that you don't need MCP or any particular orchestration framework for this; the pattern is always "get the agent an SVID, then exchange it for scoped cloud credentials," whether you are interfacing with an S3 bucket or anything else. They closed by telling us that AI agents naturally span trust domains, and SPIFFE gives them a common identity fabric. The future work will refine how we do token proof-of-possession and delegation at scale for all of these non-human workloads.&lt;/p&gt;
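&lt;p&gt;In spirit, the "SVID in, scoped credentials out" pattern looks like the sketch below. None of these are real SPIRE or AWS SDK calls; every function, trust domain, and scope string is a stand-in for illustration:&lt;/p&gt;

```python
from dataclasses import dataclass
import time

@dataclass
class SVID:
    spiffe_id: str
    expires_at: float  # SVIDs are short-lived by design

def fetch_svid(workload: str) -> SVID:
    # Stand-in for the SPIRE Workload API: attest the workload, issue an SVID.
    return SVID(f"spiffe://example.org/agent/{workload}", time.time() + 300)

def exchange_for_cloud_creds(svid: SVID, scope: str) -> dict:
    # Stand-in for a token exchange: the authorization server trusts the
    # SPIFFE root, so it swaps the SVID for scoped, expiring credentials.
    if svid.expires_at <= time.time():
        raise ValueError("SVID expired; re-attest the workload")
    return {"principal": svid.spiffe_id, "scope": scope, "expires_at": svid.expires_at}

svid = fetch_svid("coffee-orders")
creds = exchange_for_cloud_creds(svid, scope="read:inventory")
print(creds["principal"])  # spiffe://example.org/agent/coffee-orders
```

&lt;p&gt;The agent never holds a long-lived secret: it holds an attestable identity, and everything else is derived, scoped, and expiring.&lt;/p&gt;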

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m5cq4eocui22?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4tb762ussk8ilsf3iaqo.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Brendan Paul and Dan Choi&lt;/p&gt;

&lt;h2&gt;
  
  
  Where We Go From Here
&lt;/h2&gt;

&lt;p&gt;If these talks are any indication, the next few years are about moving workload identity from heroic projects to boring infrastructure. SPIFFE/SPIRE, WIMSE, OAuth token-exchange patterns, and transaction tokens will quietly become the plumbing. This will define how we deploy CI/CD, microservices, and AI agents securely at scale for the next generation of applications and platforms. &lt;/p&gt;

&lt;p&gt;Enterprises are realizing that "API keys in Git" and "service account sprawl" are no longer acceptable risks. The time is now to adopt internal identity fabrics that attest workloads, issue short-lived credentials, centralize policy, and log everything, enriching those logs with as much context as possible. The UX lesson from all of these talks is that identity is currently painful, and people route around it. If we can do the work now to make workload identity automatic and invisible, teams will lean into it.&lt;/p&gt;

&lt;p&gt;At the same time, agentic AI will force us to sharpen our thinking about representation and blame. We need to ask "What is this workload allowed to do, on whose behalf, and with what guardrails?" The future is building golden paths where every non-human identity, no matter what shape or function, comes pre-wired with a strong, attestable identity and tightly scoped access. The creative tension will be keeping that world flexible enough for experimentation, and safe enough that "exploding rockets" stay in test environments, not production.&lt;/p&gt;

&lt;p&gt;No matter where your path towards better NHI governance leads, everyone at Workload Identity Day Zero agreed that having insight into your current inventory of workloads and machine identities is mandatory. We at GitGuardian firmly believe that the future can mean fewer leaked secrets, better secrets management, and scalable &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;NHI governance&lt;/a&gt;, but we can't get there without first understanding the scope of the issue in the org today. &lt;a href="https://www.gitguardian.com/book-a-demo?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;We would love to talk to you and your team about that&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>techtalks</category>
      <category>devops</category>
      <category>security</category>
    </item>
    <item>
      <title>Queen City Con 0x3: Hacking And Embracing Resiliency</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Wed, 19 Nov 2025 17:11:45 +0000</pubDate>
      <link>https://dev.to/gitguardian/queen-city-con-0x3-hacking-and-embracing-resiliency-7hl</link>
      <guid>https://dev.to/gitguardian/queen-city-con-0x3-hacking-and-embracing-resiliency-7hl</guid>
      <description>&lt;p&gt;Cincinnati holds the distinction of being the &lt;a href="https://www.cincinnati-oh.gov/fire/about-fire/history/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;first in the United States to establish a municipal fire department in 1853&lt;/a&gt;, as well as the first to install a fire‑station pole. This marked a turning point in the history of firefighting, as the new technology of the steam pump let small dedicated groups of professionals stop fires much faster than ever before. But the arrival of the steam pump was not immediately embraced by the public, &lt;a href="https://www.americanheritage.com/how-steam-blew-rowdies-out-fire-departments?ref=blog.gitguardian.com#:~:text=The%20firemen%20would%20have%20nothing%20to%20do%20with%20the%20%E2%80%9Csham%20squirt%2C%E2%80%9D%20as%20they%20derisively%20called%20the%20steam%20pumper." rel="noopener noreferrer"&gt;as many people distrusted this new disruptive technology&lt;/a&gt;. Over 120 years later, we are once again seeing defenders leveraging new technology, namely AI, that is also being met with a lot of skepticism. This parallel made "Cincy" the perfect backdrop for hackers to get together to talk security and trends at &lt;a href="http://queencitycon.org/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Queen City Con 0x3&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Hundreds of security pros, compliance experts, students, and hackers got together for the third installment of "Cincinnati's Premier Security Conference." Over three days, 71 speakers presented talks alongside hands-on labs, workshops, and 10 different villages, which many participants noted made this event feel very similar to DEF CON, but without the infamously long lines.&lt;/p&gt;

&lt;p&gt;Here are just a few highlights from this year's QCC.&lt;/p&gt;

&lt;h2&gt;
  
  
  Machines Now Define Your Perimeter
&lt;/h2&gt;

&lt;p&gt;In his session "Non-Human Identity Management (NHI Management)," &lt;a href="https://www.linkedin.com/in/scott-d-smith-cissp-cisa-ccsp-hcispp-crisc-3364235/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Scott Smith, Principal Consultant at New Era Technology&lt;/a&gt;, talked about how risk has shifted from people to non-human identities (NHIs). He defined this to include tokens, API and OAuth keys, certificates, bots, and service accounts, spread across our systems, code, and environments. Creation is automated, but ownership is unclear, and visibility is decentralized. Attackers know this. The common breach path is simple: a developer hardcodes a key, commits to a public GitHub repo, and automated scanners find it within minutes. Scott said that over half of organizations report breaches tied to machine identities, and that now 77% of web app attacks start with stolen credentials.&lt;/p&gt;

&lt;p&gt;Scott explained that traditional identity and access management (IAM) programs do not cover this terrain. Secret scanning helps, but really, what you need are better processes. He reminded us not to boil the ocean; we should start where DevOps already has traction and keep secrets out of code. Treat machine access like data risk. If an NHI can reach a critical system, govern it like regulated data.&lt;/p&gt;

&lt;p&gt;We need a pragmatic approach here. First, discover and inventory NHIs and classify them. Next, prioritize and rotate static credentials, right-size permissions, and integrate automated secret scanning into CI/CD to stop new leaks. Finally, establish governance that survives growth. Expect the footprint to expand as microservices, automation, IoT, and AI agents multiply identities and introduce drift, especially in MLOps. The perimeter is identity now. It only works if it protects more than your workforce.&lt;/p&gt;
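&lt;p&gt;A toy triage pass over a hypothetical inventory shows the idea behind "discover, classify, prioritize." The record format, thresholds, and names below are invented; real discovery would pull this data from scanners and cloud APIs:&lt;/p&gt;

```python
from datetime import date

# Hypothetical inventory rows produced by an NHI discovery pass.
inventory = [
    {"name": "ci-deploy-key", "created": date(2021, 3, 1), "scopes": ["admin"]},
    {"name": "metrics-token", "created": date(2025, 9, 1), "scopes": ["read:metrics"]},
]

def needs_rotation(nhi, today=date(2025, 11, 19), max_age_days=90):
    """Flag static credentials that are old or over-scoped for rotation first."""
    too_old = (today - nhi["created"]).days > max_age_days
    over_scoped = "admin" in nhi["scopes"]
    return too_old or over_scoped

rotate_first = [n["name"] for n in inventory if needs_rotation(n)]
print(rotate_first)  # ['ci-deploy-key']
```

&lt;p&gt;Even a crude classifier like this turns an unbounded cleanup into a ranked queue, which is what makes governance survivable as the NHI footprint grows.&lt;/p&gt;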

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m54y56huyk22?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fprgbzi8bh3rl4sundq24.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Scott Smith&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Security's Preventable Failures
&lt;/h2&gt;

&lt;p&gt;In his session "Cloud Security and Other Assorted Cautionary Tales," &lt;a href="https://www.linkedin.com/in/mattscheurer/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Matt Scheurer, VP, Computer Security &amp;amp; Incident Response&lt;/a&gt;, walked through the kinds of mistakes that still drive incidents in AWS, Azure, and Google Cloud. It starts with a simple posture check, using threat models like STRIDE and data-flow mapping to see how information moves in and out of systems. Then he verifies that basic controls exist. In AWS that means GuardDuty for detection, CloudTrail for activity logging, and CloudWatch for performance signals. In Azure, lean on Defender for Cloud, Sentinel, and Entra ID for identity. In Google Cloud, Security Command Center is the anchor. Training matters too. He said he relies on Microsoft's Cloud Security Explorer and Kusto Detective Agency to make it easier to find issues before attackers do.&lt;/p&gt;

&lt;p&gt;Matt introduced us to the acronym "SaaD," Stupidity-as-a-Disservice. He does not mean this as an insult, but as a reminder that many cloud failures are avoidable if we think things through and communicate. For example, he told the story of a storage bucket marked public, where a developer insisted that since uploading to it required a login, it was safe. It was not. Anyone could discover and download anything in that bucket, which contained receipt images with full credit card payment data. A misunderstanding and misconfiguration turned into a privacy incident. Another case hinged on default credentials left unchanged after a penetration test. A third story Matt told was about an engineer who opened Remote Desktop on a cloud host to bypass a broken VPN, creating a jump host that risked full environment compromise.&lt;/p&gt;

&lt;p&gt;Matt said the fixes for most issues are straightforward. For example, avoid making public buckets by default. Update credentials and test that default creds no longer work on any given system. Classify the data before migrating it to determine the precautions you need to take and the scope of potential incidents during and after the move. Ultimately, we can't trade security for convenience. &lt;/p&gt;
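
&lt;p&gt;A pre-migration check for the public-bucket failure mode can be sketched as a simple policy evaluator. The policy shape below mimics an AWS-style statement but is deliberately simplified, and real evaluation must also consider ACLs, account-level Block Public Access settings, and condition keys:&lt;/p&gt;

```python
def is_publicly_readable(policy: dict) -> bool:
    """Flag AWS-style bucket policies that allow anonymous reads.
    Illustrative only: real public-access checks have many more inputs."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        open_to_world = principal == "*" or principal == {"AWS": "*"}
        if open_to_world and any(a in ("s3:GetObject", "s3:*", "*") for a in actions):
            return True
    return False

public_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::example-bucket/*"}
    ]
}
print(is_publicly_readable(public_policy))  # True
```

&lt;p&gt;The point of the developer's "it requires a login to upload" story is exactly what this kind of check catches: write access and read access are governed by different statements, and only evaluating the policy tells you who can read.&lt;/p&gt;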

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m52vbbgvru27?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6m0d0bhhyw4ia6ip4ex8.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Matt Scheurer&lt;/p&gt;

&lt;h2&gt;
  
  
  Defaults That Let Users Own Your Forest
&lt;/h2&gt;

&lt;p&gt;In their session "Making $ With COMPUTER$," our &lt;a href="https://www.linkedin.com/in/jakehildreth/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;, Principal Security Consultant at Semperis&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/sk3w/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;John Askew, Hacker and Founder of Terrapin Labs&lt;/a&gt;, showed how a plain user can join a machine to Active Directory (AD) and pivot to compromise the whole forest in minutes. Two default settings were at the heart of this issue. First, the ms-DS-MachineAccountQuota attribute lets certain users add up to 10 computers to the domain. The other is  &lt;code&gt;SeMachineAccountPrivilege&lt;/code&gt; user right, which lets any Authenticated User add those computers. &lt;/p&gt;

&lt;p&gt;The presenters said that this made sense 25 years ago, but today machine accounts are attacker gold. They face less scrutiny, carry different permissions, and can even be created via relay without credentials. Both speakers see these defaults everywhere; Jake estimates roughly 80 percent of AD instances have never updated these settings. John said he has never seen anything but default for these in any pentest he has done.&lt;/p&gt;

&lt;p&gt;The duo explained that the fix is simple and disruptive in the right ways. Set MachineAccountQuota to 0 and restrict SeMachineAccountPrivilege so only admins can add computers. Teams should follow the newer domain join model with trusted computer account owners. Pre-create the object in a controlled Organizational Unit (OU), which is a specialized container, then let a designated joiner attach the host, or even go as far as performing &lt;a href="https://learn.microsoft.com/en-us/windows-server/remote/remote-access/directaccess/directaccess-offline-domain-join?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Offline Domain Joins&lt;/a&gt; for tighter control. Monitor for new machine accounts with &lt;a href="https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4741?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Event ID 4741&lt;/a&gt; and investigate the creator if it ever shows up unexpectedly. The pair urged us all to start the conversation with the server and identity teams now, before any incidents occur.&lt;/p&gt;
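
&lt;p&gt;The monitoring step can be sketched as a simple rule over parsed security events. The dict fields and the allow-list of designated joiner accounts below are hypothetical; in practice these events would come from your SIEM:&lt;/p&gt;

```python
# Event ID 4741 = "A computer account was created."
# ALLOWED_JOINERS is an assumed allow-list of designated joiner accounts.
ALLOWED_JOINERS = {"svc-domain-join"}

def unexpected_machine_creations(events):
    """Return 4741 events whose creator is not a designated joiner account."""
    return [
        e for e in events
        if e.get("EventID") == 4741
        and e.get("SubjectUserName") not in ALLOWED_JOINERS
    ]

events = [
    {"EventID": 4741, "SubjectUserName": "svc-domain-join", "NewComputer": "WS-0042$"},
    {"EventID": 4741, "SubjectUserName": "jdoe", "NewComputer": "ROGUE-HOST$"},
    {"EventID": 4624, "SubjectUserName": "jdoe"},  # unrelated logon event
]
for alert in unexpected_machine_creations(events):
    print("Investigate:", alert["SubjectUserName"], "created", alert["NewComputer"])
```

&lt;p&gt;Once MachineAccountQuota is 0 and only designated accounts hold the join right, any 4741 from outside the allow-list is worth an immediate look.&lt;/p&gt;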

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m52rqbknrj2z?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5j1ohpda33ivy76udwr.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;John Askew and Jake Hildreth&lt;/p&gt;

&lt;h2&gt;
  
  
  Detection By Design Means Resilience First
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/trent-liffick-b162536/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Trent Liffick, Principal Cyber Threat Analyst at Fifth Third Bank&lt;/a&gt;, in his session "Detection by Design: Engineering Resilience Against Evolving Threats," argued that most teams believe they are designing for detection, yet many lack dedicated practitioners. He framed detection engineering as a lifecycle: gather intel, design, develop, test and deploy, monitor, and keep testing. The goal is coverage that balances integrity, operational cost, risk, and utility.&lt;/p&gt;

&lt;p&gt;Trent drew a clear line between brittle and resilient logic, where brittle rules break when attackers rename binaries, obfuscate command lines, or swap tools. Resilient detections describe behavior: for example, instead of matching powershell.exe by name, use OriginalFileName to see what the binary was originally called, and Script Block Logging, which records all PowerShell script runs regardless of how they are invoked. His principle is "shift down, not left." Ask whether a rule holds over time, resists small tactic changes, and models attacker behavior rather than a string seen once. Prefer generalized patterns and abstractions. Trent said that we need to evolve faster than adversaries. We should aim to catch the mistakes attackers make while taking shortcuts to speed up their attacks, and design detections that survive evasion.&lt;/p&gt;
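
&lt;p&gt;The brittle-versus-resilient distinction can be shown with toy rules over process-creation events. The field names follow common EDR telemetry such as Sysmon's OriginalFileName, but the events here are mock data and the rule logic is only a sketch:&lt;/p&gt;

```python
def brittle_rule(event: dict) -> bool:
    """Breaks as soon as the attacker renames the binary."""
    return event.get("Image", "").lower().endswith("powershell.exe")

def resilient_rule(event: dict) -> bool:
    """Matches PE metadata (which survives a rename) plus encoded command
    lines, regardless of what the binary is called on disk."""
    orig = event.get("OriginalFileName", "").lower()
    cmd = event.get("CommandLine", "").lower()
    return orig == "powershell.exe" or "-encodedcommand" in cmd

# A renamed PowerShell binary running an encoded payload:
renamed = {
    "Image": r"C:\temp\updater.exe",
    "OriginalFileName": "PowerShell.EXE",
    "CommandLine": "updater.exe -NoProfile -EncodedCommand SQBFAFgA...",
}
print(brittle_rule(renamed), resilient_rule(renamed))  # False True
```

&lt;p&gt;The brittle rule misses the rename entirely; the resilient one still fires because it models what the process is and does, not what it happens to be named.&lt;/p&gt;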

&lt;p&gt;&lt;a href="https://bsky.app/profile/did:plc:ytw3m436pntb4tbom424ua2d/post/3m54umwejt52s?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fesa71oqrjydjzdboqf8c.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Trent Liffick&lt;/p&gt;

&lt;h2&gt;
  
  
  Discipline Over Defaults
&lt;/h2&gt;

&lt;p&gt;The subtext across QCC was simple. Our biggest risks are not zero-days. They are defaults, drift, and decisions we delay because they feel inconvenient. Each session pointed at the same nerve. Identity is the perimeter, and our lack of guardrails in the cloud is an attacker's best friend. Detection is celebrated, yet often built on fragile strings instead of durable behaviors. We must develop the habit of turning principles into practice through the dull, daily work of building out and enriching our asset inventories, logging, rotating credentials, and reviewing permissions to ensure we follow the principle of least privilege everywhere.&lt;/p&gt;

&lt;h3&gt;
  
  
  Model Over Mechanism
&lt;/h3&gt;

&lt;p&gt;The best teams are the ones that think in lifecycles and patterns, defining what "good" looks like, then keeping the systems inside those bounds. Measure change. Log what matters and read it to detect the behavior, not the binary names. When you do adopt tools, pick the ones that reinforce the model rather than distract with dashboards. The tool is not the control. The control is the rule you enforce every day.&lt;/p&gt;

&lt;h3&gt;
  
  
  Culture Decides Outcomes
&lt;/h3&gt;

&lt;p&gt;Convenience is a persistent attacker's ally. Public buckets, default creds, and brittle rules that break on a filename change are not sophisticated concepts, but they are very common patterns in modern organizations. Resilience comes from teams that choose to embrace friction early so they avoid catastrophe later. Identity governance might be a boring grind to get right, but it is mandatory. &lt;/p&gt;

&lt;h2&gt;
  
  
  Pull The Alarm, Prove The Control
&lt;/h2&gt;

&lt;p&gt;Firefighting in Cincinnati turned a corner when steam met discipline. Security is at the same bend now. The lesson from Queen City Con 0x3 is not another tool. It is posture. Your author got to give a talk along these same lines, raising awareness of the risks of poorly governed NHIs and the steps we can take, right now and into the future, to improve our security posture in this area.&lt;/p&gt;

&lt;p&gt;Treat identity as the perimeter, especially for non-human accounts, and focus on nailing down cloud security basics. We should stop pretending that giving in to convenience is neutral. Fixing the issues we create by taking the easy path takes work, but for most problems, the solutions are boringly straightforward. Cincinnati moved faster once the city trusted a new technology and trained professionals on how to use it. We will too, if we choose discipline over drift.&lt;/p&gt;

</description>
      <category>techtalks</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>devops</category>
    </item>
    <item>
      <title>Working Towards Improved PAM: Widening The Scope And Taking Control</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Thu, 06 Nov 2025 13:19:53 +0000</pubDate>
      <link>https://dev.to/gitguardian/working-towards-improved-pam-widening-the-scope-and-taking-control-fbd</link>
      <guid>https://dev.to/gitguardian/working-towards-improved-pam-widening-the-scope-and-taking-control-fbd</guid>
      <description>&lt;p&gt;Recently, &lt;a href="https://cloud.google.com/blog/topics/threat-intelligence/privileged-account-monitoring/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Mandiant published an in-depth look at PAM&lt;/a&gt;. We applaud Mandiant and Google as they continue to push toward a safer world, and their PAM guidance is packed with grounded, hard-won advice. It is also good to see non-human identities included in the scope. Service accounts APIs, workloads, and automation have real entitlements. Leaving NHIs out of PAM creates blind spots that attackers exploit and auditors eventually find.&lt;/p&gt;

&lt;p&gt;We also appreciate their PAM maturity guidance. Their four levels (uninitiated, ad hoc, repeatable, and iterative optimization) help organizations of any size find the next steps in the right direction. This approach aligns with what we have seen in our &lt;a href="https://blog.gitguardian.com/a-maturity-model-for-secrets-management/" rel="noopener noreferrer"&gt;Secrets Management Maturity research&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;GitGuardian is happy to help teams as they align with these objectives, no matter where you are on your path.&lt;/p&gt;

&lt;h2&gt;
  
  
  Start By Knowing What NHIs You Have
&lt;/h2&gt;

&lt;p&gt;Traditionally, the field of privileged access management has focused on humans who require a higher level of privilege. As Mandiant defines it, "A privileged account is any human or non-human identity (NHI) whose entitlements can change system state."&lt;/p&gt;

&lt;p&gt;Attackers love NHIs, as they tend to be long-lived and lack MFA. Worse yet, detecting unusual behavior from a machine identity is much harder than spotting a human account that has been taken over.&lt;/p&gt;

&lt;p&gt;We must ask ourselves, can this NHI impact my organization? What would be affected if it were revoked? Is the data it accesses privileged?&lt;/p&gt;

&lt;p&gt;But those questions are hard to ask if you don't know the NHI exists. GitGuardian is helping teams answer exactly this with our NHI Governance and Secrets Security platform. We help teams find access keys, commonly called secrets, in plaintext where they likely should not be, and give visibility into vault contents, where secrets should safely live.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.gitguardian.com/monitor-internal-repositories-for-secrets?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian can help you discover all secrets leaked into code&lt;/a&gt;, but also when they get pasted and stored in Jira, Slack, Confluence, and any &lt;a href="https://blog.gitguardian.com/scanning-github-gists-for-secrets/" rel="noopener noreferrer"&gt;other system you need to monitor&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/10/data-src-image-83fcc276-ed5a-420e-944b-c79a91611d47.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0lgqcl6rowqqn5ftjt3g.png" width="800" height="455"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;GitGuardian Internal Secrets Monitoring Integrations&lt;/p&gt;

&lt;h2&gt;
  
  
  Automating Rotation Requires Cross-Vault Insight
&lt;/h2&gt;

&lt;p&gt;One sign you are on your way beyond the "Uninitiated" level of PAM maturity is that any shared credentials are vaulted and rotated. Thankfully, every provider, from &lt;a href="https://blog.gitguardian.com/handling-secrets-with-aws-secrets-manager/" rel="noopener noreferrer"&gt;AWS Secrets Manager&lt;/a&gt; to &lt;a href="https://blog.gitguardian.com/the-hidden-challenges-of-automating-secrets-rotation/" rel="noopener noreferrer"&gt;CyberArk&lt;/a&gt;, makes rotation easy once the secret is accounted for within the vault.&lt;/p&gt;

&lt;p&gt;This assumes, though, that any particular secret exists in only one vault and is used only once throughout your code or infrastructure. What would the results be from rotating this secret in one vault, but not all vaults? Does anything actually call this secret? &lt;/p&gt;
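
&lt;p&gt;That cross-vault question can be sketched as a simple cross-reference over exported fingerprints of secret values, so plaintext never needs to leave the collector. The vault names, paths, and values below are invented for illustration:&lt;/p&gt;

```python
from collections import defaultdict
from hashlib import sha256

def fingerprint(value: str) -> str:
    """Compare secrets by hash rather than by plaintext value."""
    return sha256(value.encode()).hexdigest()[:12]

# Hypothetical per-vault exports of secret paths and values:
vault_exports = {
    "aws-secrets-manager": {"db-password": "hunter2", "stripe-key": "sk_live_abc"},
    "hashicorp-vault": {"legacy/db-pass": "hunter2"},
}

locations = defaultdict(list)
for vault, secrets in vault_exports.items():
    for path, value in secrets.items():
        locations[fingerprint(value)].append(f"{vault}:{path}")

# Any fingerprint stored in more than one place is a rotation hazard:
duplicated = {fp: locs for fp, locs in locations.items() if len(locs) > 1}
for fp, locs in duplicated.items():
    print("same secret in multiple vaults:", locs)
```

&lt;p&gt;Here, rotating the database password in one vault would leave a live copy behind in the other, which is exactly the blind spot cross-vault inventory is meant to close.&lt;/p&gt;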

&lt;p&gt;&lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian NHI's Governance platform&lt;/a&gt; can help you safely inventory all of your secrets across all of your vaults, mapping paths and usage throughout your systems. By tying into your &lt;a href="https://docs.gitguardian.com/nhi-governance/integrate-your-sources?ref=blog.gitguardian.com#supported-integrations" rel="noopener noreferrer"&gt;IAM, CI, and other operational tools&lt;/a&gt;, GitGuardian can help you enforce rules around how these secrets are used and what state your NHIs are in. We want to help all teams get visibility into who accesses which credentials and from where. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/10/data-src-image-3bd5dad5-dc59-4b28-bbca-072fe24882fe.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkdg7jewutrm0eszuzlj8.png" width="800" height="543"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The GitGuardian NHI Governance Inventory dashboard showing policy violations and risk scores.&lt;/p&gt;

&lt;p&gt;Have confidence, when you do rotate those credentials, that all instances and access are accounted for. This approach also gives you more certainty that a secret is being consumed by the right entities in the correct environments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Aiming High While Dealing With Operational Reality
&lt;/h2&gt;

&lt;p&gt;For mature organizations, Mandiant says that the PAM maturity path should result in "Human standing privilege trends toward zero; service/API identities move to group Managed Service Account (gMSA)/managed identities."&lt;/p&gt;

&lt;p&gt;We strongly agree and support teams as they embrace Zero Trust Architectures and federated identities in their enterprise. We also know that improving security posture takes time. Teams should focus on the near-term goal of identifying which identities, human or NHI, can access and affect your most mission-critical systems.&lt;/p&gt;

&lt;p&gt;Without visibility into your current NHI inventory, it is difficult to move towards a world of just-in-time (JIT) access and away from the current world of long-lived, overprivileged keys, connection URLs, and tokens. Getting a baseline of what exists is the first needed step before assigning any entity a tier or criticality score.&lt;/p&gt;

&lt;p&gt;Combining GitGuardian's dual approach, finding all secrets sitting in plaintext throughout your systems where they should not be, and giving you insight into the state of the secrets that are properly housed in vaults, gives your team unprecedented visibility. This visibility is critical for administering any governance plan.&lt;/p&gt;

&lt;p&gt;And, with &lt;a href="https://docs.gitguardian.com/ggscout-docs/what-is-ggscout?ref=blog.gitguardian.com#safely-store-unvaulted-secrets" rel="noopener noreferrer"&gt;GitGuardian's Push to Vault feature&lt;/a&gt;, we can help your team move secrets found outside of your &lt;a href="https://docs.gitguardian.com/ggscout-docs/integrations/secret-managers?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;preferred secrets management platform&lt;/a&gt; into the right vault your governance processes require. &lt;/p&gt;

&lt;h2&gt;
  
  
  Shifting Left On Credential Exposure
&lt;/h2&gt;

&lt;p&gt;GitGuardian agrees with Mandiant that an "uninitiated password reset or account setting change on a privileged account" is indeed a sign of compromise. The credentials for those accounts should be rotated or revoked as soon as the danger emerges. However, we think that teams should be responding to exposed credentials much earlier: as soon as they appear in plaintext in the first place.&lt;/p&gt;

&lt;p&gt;GitGuardian is helping teams shift left with credential exposure response in some major ways.&lt;/p&gt;

&lt;p&gt;First, the platform can alert you instantly to any incident of credentials showing up in plaintext across code, messaging systems, ticketing platforms, documentation, and any other source. We are not limited to detecting NHIs, which the platform famously helps teams govern. Any password or credential that contextually grants access will be detected in seconds, allowing you to respond before an attacker is ever aware it exists.&lt;/p&gt;

&lt;p&gt;We know from research and experience that once a credential is exposed in plaintext, it is only a matter of time before it is exploited. This is especially true of all the secrets that make their way into &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;public GitHub repositories&lt;/a&gt;. But the danger is just as real once an attacker gets inside your trusted systems, where we know secrets leak at a much higher rate, as much as 8x more often, &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;according to our research&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://blog.gitguardian.com/content/images/2025/10/data-src-image-982cefb6-c4cc-43c1-ba22-77e7b863e977.png" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwax0hutqbleg9xfy8zrw.png" alt="" width="800" height="432"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The other way GitGuardian can help teams towards better PAM security is by providing early warning canaries, which we call &lt;a href="https://www.gitguardian.com/nhi-governance?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;GitGuardian Honeytoken&lt;/a&gt;s.&lt;/p&gt;

&lt;p&gt;By offering attackers alluring decoy credentials in the form of AWS keys, we can tell you instantly if someone is poking around and scanning for secrets within your private code and environments, long before they attempt to reset a password or change privileges.&lt;/p&gt;

&lt;h2&gt;
  
  
  It Is Time To Harden Our Tactical Stance
&lt;/h2&gt;

&lt;p&gt;Towards the end of Mandiant's comprehensive guide is the advice to: "Prepare before an incident: Map every service account to owner and workload, run continuous discovery cycles to find systems and credentials, and onboard all human and non-human privileged identities into PAM; enforce unique credentials, MFA, and API-based rotation."&lt;/p&gt;

&lt;p&gt;The time has come to bring non-human identities into our privileged access management strategy. We must realize that non-human identities have the same potential to affect our mission-critical systems as our administrators or domain controllers.&lt;/p&gt;

&lt;p&gt;The only real way we can move towards a future of "Iterative Optimization," the highest level of PAM maturity, is to deal with the operational realities of today. This starts with getting a baseline of your NHI security posture health today. Put GitGuardian on the front line where PAM cannot see. Stop secrets at the source. Rotate fast when exposure happens. Correlate across systems. Turn every response into proof.&lt;/p&gt;

&lt;p&gt;Again, we thank Mandiant and Google for guidance that keeps the field moving forward. &lt;/p&gt;

&lt;p&gt;Your next move is clear. Widen the scope, get visibility, and move to a world where any leaks are caught well before an attacker can exploit them. GitGuardian is here to help you on your journey to PAM maturity.&lt;/p&gt;

</description>
      <category>security</category>
      <category>pam</category>
      <category>iam</category>
      <category>google</category>
    </item>
    <item>
      <title>Rethinking Security Resilience And Getting Back To Basics At CornCon 11</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Wed, 29 Oct 2025 14:53:51 +0000</pubDate>
      <link>https://dev.to/gitguardian/rethinking-security-resilience-and-getting-back-to-basics-at-corncon-11-4lec</link>
      <guid>https://dev.to/gitguardian/rethinking-security-resilience-and-getting-back-to-basics-at-corncon-11-4lec</guid>
      <description>&lt;p&gt;The &lt;a href="https://en.wikipedia.org/wiki/Government_Bridge?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;first railroad bridge to span the Mississippi River&lt;/a&gt; was built between Davenport, IA, and Rock Island, IL in 1856. It burned down just 15 days later. It was the victim of a steamboat collision stemming from a simmering conflict between rival modes of commerce. In hindsight, this disaster wasn't a structural failure; it was a breakdown in communication and a clash of trust boundaries. Like that doomed bridge, many of our cybersecurity defenses are one misalignment away from collapse. Fixing those issues and making our systems more resilient was very top of mind for everyone who gathered in Davenport for &lt;a href="https://corncon.net/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;CornCon 11&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Over 400 security practitioners gathered to take part in this three-day event that featured over 50 sessions from more than 70 subject matter experts, 4 workshops, and a capture-the-flag. The first of the three days was a CISO summit, gathering the security leads of dozens of companies to talk about how to lead companies towards better security posture while dealing with the realities of tighter budgets and AI everywhere. CornCon also hosts a &lt;a href="https://www.zeffy.com/ticketing/the-children-of-the-corncon-kids-hacker-camp-oct--2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Kids' Hacker Camp&lt;/a&gt;, helping the next generation of defenders get an early start. And of course, there was a lot of fun and amazing hallway conversations. Here are just a few highlights from CornCon 11. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/posts/nullsession_corncon-cybersecurity-conference-still-has-activity-7376294285563846656-P9wt?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8miv8adfz6lj6m0p000d.png" alt="CornCon logo" width="800" height="262"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Sustainability Over Perfection
&lt;/h2&gt;

&lt;p&gt;In "Cybersecurity Doesn't Need a Six-Pack: Embrace the Dadbod," &lt;a href="https://www.linkedin.com/in/douglasabrush/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Douglas Brush, Cyber &amp;amp; Privacy Personal Trainer and CISO Whisperer at Brush Cyber&lt;/a&gt;, started with what honestly sounded like a joke: "Cybersecurity needs to embrace the Dadbod." But as Douglas walked through his story, his point became rather serious, as he laid down a challenge to the entire aesthetic of security success.&lt;/p&gt;

&lt;p&gt;At the heart of his talk was culture, and that we've been fed a lie for decades, that elite security looks flawless. We worship hacker heroes and burn ourselves out chasing zero-risk illusions. He took aim at the Gen X "train through the pain" mentality and the toxic pride in 80-hour work weeks. Douglas argued that this mindset is unsustainable, and it's killing programs and people alike. &lt;/p&gt;

&lt;p&gt;After a health crisis, he realized he'd been tracking the wrong metrics, vanity measures that didn't improve outcomes in his personal life. He realized he had also been focused on the wrong metrics as a CISO. He took a step back and realized he needed to refocus on fundamentals, such as defensibility over decoration and things like rolling out and enforcing MFA everywhere before going after SOC2. In his analogy, AI tools are like steroids, powerful, flashy, even transformative, but dangerously easy to abuse when the core isn't strong.&lt;/p&gt;

&lt;p&gt;Douglas left us with the reminder that security programs should look less like Marvel superheroes and more like reliable, resilient dads. Not flawless, but always there when it counts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2ulawvj6m2u?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkj3d9bj94qe9bq1az8u2.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Douglas Brush&lt;/p&gt;

&lt;h2&gt;
  
  
  When Conditional Access Becomes Conditional Exposure
&lt;/h2&gt;

&lt;p&gt;In his talk "Abusing Holes in Entra Conditional Access," &lt;a href="https://www.linkedin.com/in/techbrandon/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Brandon Colley, Senior Security Consultant at TrustedSec&lt;/a&gt;, explained how Microsoft Entra Conditional Access policies are meant to enforce boundaries, but misconfigurations can open some unexpected security holes. The way most orgs implement Conditional Access is unmanageable, inconsistent, and dangerously complex. &lt;/p&gt;

&lt;p&gt;The first major issue Brandon discussed was "policy sprawl," where teams keep adding new policies, sometimes hitting the tenant's upper limit of 195. The more policies that exist, the harder the whole set is to understand. To make matters worse, teams tie multiple policies together using the "AND" operator, leading to very complex situations where it is hard to tell if a role is covered in all circumstances.&lt;/p&gt;

&lt;p&gt;He encouraged everyone in the audience to, at a minimum, require MFA for real admins. Entra ships with over 100 roles, not just the 14 defaults. Check to make sure you have locked down the critical ones: Authentication Policy Administrator, Directory Writers, External Identity Provider Administrator, Hybrid Identity Administrator, Identity Governance Administrator, and Intune Administrator.&lt;/p&gt;
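
&lt;p&gt;A coverage check along those lines can be sketched over mock policy objects. The shape below loosely mirrors a Conditional Access policy, but the policy names and fields are invented, and real policies have far more dimensions (apps, locations, device state):&lt;/p&gt;

```python
# Roles the talk called out as critical (a subset, for illustration):
CRITICAL_ROLES = {
    "Authentication Policy Administrator",
    "Directory Writers",
    "Intune Administrator",
}

# Hypothetical simplified policies: each requires MFA for a set of roles.
policies = [
    {"name": "MFA-for-GA", "requires_mfa": True,
     "roles": {"Global Administrator"}},
    {"name": "MFA-core-admins", "requires_mfa": True,
     "roles": {"Intune Administrator", "Directory Writers"}},
]

def uncovered_roles(critical, policies):
    """Return critical roles that no MFA-requiring policy covers."""
    covered = set()
    for p in policies:
        if p["requires_mfa"]:
            covered |= p["roles"]
    return critical - covered

print("roles with no MFA policy:", uncovered_roles(CRITICAL_ROLES, policies))
```

&lt;p&gt;The gap this surfaces, a critical role that no policy touches, is exactly the kind of hole that tools like the What If tool and Maester help find against real tenant data.&lt;/p&gt;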

&lt;p&gt;Brandon also walked through some of the tools he uses for finding security issues and tuning Entra policies. The &lt;a href="https://learn.microsoft.com/en-us/entra/identity/conditional-access/what-if-tool?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Conditional Access What If tool&lt;/a&gt; gives you a playground to test things, telling you how a policy evaluates for a specific user and client. &lt;a href="https://github.com/dafthack/MFASweep?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;MFASweep&lt;/a&gt; is a security tool that can flag accounts without MFA enabled and show when PowerShell can be accessed. He also encouraged us to check out &lt;a href="https://maester.dev/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;maester.dev&lt;/a&gt;, which provides a test automation framework for tuning Microsoft Conditional Access.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2whpftjfn25?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftxgjd9quvudp68see6kt.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Brandon Colley&lt;/p&gt;

&lt;h2&gt;
  
  
  Every Villain Has an Origin Story
&lt;/h2&gt;

&lt;p&gt;In the intro to her entertaining and informative session "Cyber Super Villains: The Real-Life Threats Lurking in Our Digital World," &lt;a href="https://www.linkedin.com/in/hello-paige-hanson/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Paige Hanson, Co-Founder of SecureLabs&lt;/a&gt;, asked us "what makes a great adversary?" Throughout the rest of her talk, she used comic book archetypal villains to explain how real-world threat actors work.&lt;/p&gt;

&lt;p&gt;She started with the X-Men villain Mystique, whose shape-shifting maps to impersonation scams and voice cloning. Magneto's obsession with control parallels ransomware gangs and crime-as-a-service economies. From the DC universe, Scarecrow's fear tactics echo the psychological chaos of water plant shutdowns and phishing-driven panic. Even Brainiac's data weaponization now feels tangible in the age of AI-enhanced fraud.&lt;/p&gt;

&lt;p&gt;Paige reminded us that, unlike these fictional villains, attackers are not solo geniuses; they're part of modular, scalable crime ecosystems where we have seen do-it-yourself fraud turned into a franchise. Crime today is composable and AI-driven.&lt;/p&gt;

&lt;p&gt;But there are parallels to heroes on the other side, trying to protect us all. This included an obvious comparison of Captain America with governance bodies like CISA and NIST. Batman and Deadpool, who work in unconventional ways to get the job done, mirror our real-world heroes who focus on ethical hacking, uncovering vulnerabilities and exploits before the attackers can succeed. &lt;/p&gt;

&lt;p&gt;Ultimately, we need to act like the Avengers, where everyone brings their skills to the table to defeat evil, no matter what form it takes.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2ucwqz6u42p?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fchb2oqvaiax7xq5zra2y.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Paige Hanson&lt;/p&gt;

&lt;h2&gt;
  
  
  Offensive Insight With A Defensive Heart
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.linkedin.com/in/sean-juroviesky/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Sean Juroviesky, Senior Security Engineer at SoundCloud&lt;/a&gt;, delivered an honest assessment of offensive security's current failings in their talk "Bridging the Gap: Delivering Offensive insights to the Blue Team." They explained that we keep finding the same things year after year in our pentesting reports because the current model is performative, not transformational.&lt;/p&gt;

&lt;p&gt;Penetration tests are far too often checkbox exercises. Insurance policies and many customer contracts demand them, but they are run as one-off or annual exercises where the blue team is not adequately involved. The result is that too few issues get patched and the needed work keeps getting deprioritized. On average, they found, it took teams 3.2 years to resolve the most critical findings.&lt;/p&gt;

&lt;p&gt;Sean advocated for a new model, one where red teams work with blue teams to identify targets that are both impactful and fixable. They emphasized aligning findings with business risk, not just technical severity. If a vulnerability could lead to a revenue-impacting event (a RIE), then it matters. If not, it's probably noise.&lt;/p&gt;

&lt;p&gt;Sean stressed that red teams can't just drop reports, but instead need to diagram downstream impacts, highlight system interdependencies, and acknowledge when fixes are unrealistic. They can realistically only do that by working with the rest of the org to get the needed context. They reminded us that security is a long game, and every pentest should leave the organization not just knowing where they are exposed, but better equipped to make meaningful security changes.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2ufcotjjm2p?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2y3xlvr5cchb8zrsfros.png" width="800" height="427"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sean Juroviesky&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting The Basics Right Has Never Been More Important
&lt;/h2&gt;

&lt;p&gt;Throughout CornCon, one theme kept coming up: the tools matter, but the people using them matter more. Multiple speakers called for returning to the fundamentals as the industry keeps accelerating. Tech will not save us by itself. Security is a human discipline, and it gets stronger when we practice together and learn from each other.&lt;/p&gt;

&lt;p&gt;The back-to-basics theme showed up in identity and operations. Conditional Access works when teams standardize names, require MFA for real admins, keep location rules precise, and treat exclusions as temporary. The CISO panel echoed that we need to communicate up and across the business. Putting a human in the loop means automation can accelerate good judgment instead of scaling bad processes. &lt;/p&gt;
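&lt;p&gt;As a rough illustration of that hygiene checklist, here is a tiny lint pass over simplified policy records. The field names are hypothetical and do not follow any vendor's actual Conditional Access schema; this is a sketch of the discipline, not a real integration.&lt;/p&gt;

```python
# Hypothetical, simplified policy records. These field names are
# illustrative only, not a real Conditional Access API schema.
policies = [
    {"name": "CA01-Admins-Require-MFA", "require_mfa": True,
     "exclusions": [], "exclusion_expiry": None},
    {"name": "temp policy 7", "require_mfa": False,
     "exclusions": ["breakglass@example.com"], "exclusion_expiry": None},
]

def lint_policy(policy):
    """Return a list of hygiene findings for one access policy."""
    findings = []
    # Standardized names make policies auditable at a glance.
    if not policy["name"].startswith("CA"):
        findings.append("non-standard name")
    # Real admin access should always require MFA.
    if not policy["require_mfa"]:
        findings.append("MFA not required")
    # Exclusions are acceptable only when they carry an expiry date.
    if policy["exclusions"] and policy["exclusion_expiry"] is None:
        findings.append("permanent exclusion")
    return findings

for p in policies:
    issues = lint_policy(p)
    if issues:
        print(p["name"], "->", ", ".join(issues))
```

&lt;p&gt;Running a check like this on every policy change is one way to make "exclusions are temporary" an enforced rule instead of a wish.&lt;/p&gt;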

&lt;p&gt;Speakers talked about reframing programs around balance and sustainability instead of perfection theater. We need to build succession plans and coach our teams to speak revenue, risk, and recovery time so security reads as a partner, not a cost center. We need to work together to translate the red team's efforts into actionable plans, not just more reports to file away. &lt;/p&gt;

&lt;p&gt;The theme of the weekend was not a new silver bullet. It was a return to craft. Basics done well. Teams equipped and trusted. Communities that make people better at their jobs and kinder to themselves. That is how you build security that lasts.&lt;/p&gt;

&lt;p&gt;Your author was also able to give a talk about getting back to basics, where I argued for focusing on taming secrets sprawl. If we can focus on empowering our developers and DevOps teams to make safer authentication decisions, such as not hardcoding credentials, then we have a real hope of getting ahead of the machine identity mess we are facing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Focusing On Simple Practices For Durable Resilience
&lt;/h2&gt;

&lt;p&gt;CornCon 11 was a reminder that resilience is not a new tool. It is fundamentals practiced with discipline by people who trust each other. Earning that trust requires patience and real effort to bridge the gap between teams. We need to teach the 'why' behind the changes we want others to make, so the fixes stick when you are not in the room.&lt;/p&gt;

&lt;p&gt;If you want lasting gains, invest in the humans. Let's work to build more places where practitioners can learn, vent, and trade playbooks without posturing. Tie findings to business outcomes so leaders see security as a partner. Keep the craft simple, repeatable, and humane. That is how you build defenses that hold when the river rises.&lt;/p&gt;

</description>
      <category>techtalks</category>
      <category>security</category>
      <category>cybersecurity</category>
      <category>devrel</category>
    </item>
    <item>
      <title>SREday SF 2025: Human Centered SRE In An AI World</title>
      <dc:creator>Dwayne McDaniel</dc:creator>
      <pubDate>Sun, 19 Oct 2025 20:53:49 +0000</pubDate>
      <link>https://dev.to/dwayne_mcdaniel/sreday-sf-2025-human-centered-sre-in-an-ai-world-4a6k</link>
      <guid>https://dev.to/dwayne_mcdaniel/sreday-sf-2025-human-centered-sre-in-an-ai-world-4a6k</guid>
<description>&lt;p&gt;&lt;a href="https://www.sfmta.com/getting-around/muni/cable-cars?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;San Francisco's cable cars&lt;/a&gt; are the only moving National Historic Landmark in the United States, a century-old system that delivers modern reliability because skilled people guide the machinery. Watching a gripman work the brake down Powell Street is a lesson in human-centered control. You can mechanize the track and instrument the descent, but you still rely on a person to make the emergency calls that keep riders safe. This made the city a perfect backdrop for &lt;a href="https://sreday.com/2025-san-francisco-q4/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;SRE Day San Francisco 2025&lt;/a&gt;. &lt;/p&gt;

&lt;p&gt;Throughout the day, around one hundred Site Reliability Engineers, DevOps professionals, and other IT folks gathered for two tracks of talks. Across the 20 sessions, we heard a recurring sentiment: tools matter and telemetry matters, but human judgment is the boundary between graceful resilience and quiet catastrophe. The best automation behaves like that cable car system, ever present and predictable, but designed to work with people rather than erase them. That is the posture security and Site Reliability must take as AI systems, non-human identities, and agentic automation push deeper into production. Here are just a few highlights from the latest event, organized by the &lt;a href="https://sreday.com/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;SREday&lt;/a&gt; team around the world.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automation that listens to people
&lt;/h2&gt;

&lt;p&gt;In the session "The Human Factor in Site Reliability: Designing Automation That Amplifies Engineering," from &lt;a href="https://www.linkedin.com/in/jimmykatiyar/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Jimmy Katiyar, Senior Product Manager at SiriusXM,&lt;/a&gt; the focus was that automation without context can be spectacular at the wrong thing. Jimmy's stories from SiriusXM cut through the promises-only view of full autonomy. He didn't deride automation, but said that human judgment must stay in the loop for decisions that can damage customer trust or compound blast radius.&lt;/p&gt;

&lt;p&gt;He explained that concrete operational outcomes need humans to remain at the center. Teams see faster mean time to recovery (MTTR) by pairing runbook automation and enriched alerts with explicit human decision points. There are fewer repeated incidents when engineers look past retries and ask whether a deeper change in a model or schema has broken a hidden contract. And he has seen teams achieve higher reliability by allowing human judgment to pause automation when confidence is low and ambiguity is high.&lt;/p&gt;

&lt;p&gt;AI and rules engines can do the heavy lifting and toil, but people must make the calls that depend on the broader context the model cannot see.&lt;/p&gt;
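&lt;p&gt;A decision gate in that spirit might be sketched as below. The confidence floor, field names, and runbook labels are all illustrative assumptions, not details from the talk.&lt;/p&gt;

```python
# Sketch of an automation step that pauses for human approval when
# confidence is low or the blast radius is wide. All values are invented.
AUTO_CONFIDENCE_FLOOR = 0.9

def next_action(diagnosis):
    """Decide whether automation may proceed or a human must approve.

    `diagnosis` is a hypothetical dict produced by alert enrichment,
    e.g. {"confidence": 0.95, "blast_radius": "single-host",
          "runbook": "restart-worker"}.
    """
    risky = diagnosis["blast_radius"] != "single-host"
    confident = diagnosis["confidence"] >= AUTO_CONFIDENCE_FLOOR
    if confident and not risky:
        # High confidence, contained impact: let automation run.
        return ("execute", diagnosis["runbook"])
    # Low confidence or wide blast radius: surface the plan, don't act.
    return ("page-human", diagnosis["runbook"])
```

&lt;p&gt;The point is not the threshold itself but that the escalation path is explicit: the system knows in advance which decisions it is not allowed to make alone.&lt;/p&gt;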

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2cmfivhy52b?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpd0auzzx3dullb2adec3.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jimmy Katiyar&lt;/p&gt;

&lt;h2&gt;
  
  
  Observability that drives decisions
&lt;/h2&gt;

&lt;p&gt;In the session "From Dashboard to Defense: Automating Resilience at Large Scale," &lt;a href="https://www.linkedin.com/in/sureshkumar-karuppuchamy-51282b7/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Sureshkumar Karuppuchamy, Engineering Lead at eBay&lt;/a&gt;, explained that dashboards do not act. People carrying a pager at two in the morning cannot parse a million metrics, and heroic incident responses burn out the best of us. He described a shift that starts with SLI-driven thinking. Measure latency and checkout success rather than chasing vanity counters. Make OpenTelemetry the first principle so metrics, traces, and logs tell the same story. Follow a user's pain through the system without hoping a heatmap reveals meaning.&lt;/p&gt;

&lt;p&gt;The pivot from static thresholds to change-aware observability is especially relevant for AI systems. Static alarms treat the world like yesterday's traffic and miss the shape of a new release or a sudden client-side error burst. Sureshkumar urged us to watch for meaningful deviation and to price it in operational risk terms, using capacity anomaly detectors, browser error spikes, and AI models for forecasting. We also need control-loop security: policy enforcement through agents, and guards against adversarial inputs that can skew telemetry.&lt;/p&gt;

&lt;p&gt;Sureshkumar walked us through a "staged autonomy pattern." First is 'shadow' mode to learn and build trust. Then comes 'suggest' mode, which keeps humans approving the plan. Finally, 'autonomous' mode adds guardrails for transparency and reversibility. It mirrors the policy posture we want for non-human identities and privileged access, and it sets a standard for how agentic automation should behave in production: as a teammate that shows its work.&lt;/p&gt;
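&lt;p&gt;The staged autonomy pattern could be sketched like this. The mode names come from the talk; the class shape and audit mechanics are illustrative assumptions.&lt;/p&gt;

```python
# Minimal sketch of staged autonomy: shadow, suggest, autonomous.
MODES = ("shadow", "suggest", "autonomous")

class Remediator:
    def __init__(self, mode="shadow"):
        assert mode in MODES
        self.mode = mode
        self.audit_log = []  # transparency: every decision is recorded

    def handle(self, incident, plan, approved=False):
        """Return what actually happens to `plan` under the current mode."""
        self.audit_log.append((self.mode, incident, plan))
        if self.mode == "shadow":
            # Learn and build trust: record what we *would* have done.
            return "logged only"
        if self.mode == "suggest":
            # Humans stay in the loop and must approve the plan.
            return "executed" if approved else "awaiting approval"
        # Autonomous, but reversible: act and keep a rollback handle.
        return "executed with rollback point"
```

&lt;p&gt;Promotion from one mode to the next is earned by the audit log, not assumed, which is what makes the agent a teammate that shows its work.&lt;/p&gt;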

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2cuqoii4g25?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frpqb128ul7ep003rqh5f.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Sureshkumar Karuppuchamy&lt;/p&gt;

&lt;h2&gt;
  
  
  Chaos that proves or disproves what you believe
&lt;/h2&gt;

&lt;p&gt;In "Transform chaos experiments into actionable insights using generative AI," from &lt;a href="https://www.linkedin.com/in/saurabh-kumar-58945a34/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Saurabh Kumar, Principal Solution Architect&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/in/ruskindantra/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Ruskin Dantra, Solutions Aarchitect, both from AWS&lt;/a&gt;, showed that the problem was not the tooling. Anyone can spin up failure. The hard part is creating a good hypothesis and a disciplined verification step. They described a loop that starts with a steady state, builds a hypothesis informed by architecture and dashboards, then runs experiments through AWS Fault Injection Service, and finally feeds the rich telemetry back into an LLM for synthesis.&lt;/p&gt;

&lt;p&gt;They exposed a frequent failure pattern: vague hypotheses that lean on words like 'significantly' and 'consistent' collapse under verification. When they tied the hypothesis to business-relevant metrics, such as API gateway errors and user experience measures, the loop improved. &lt;/p&gt;

&lt;p&gt;If you want AI to help you verify, you must write down what good looks like in terms that reflect the user, the application, and the system. Better yet, translate steady state into business observability so a deviation has a dollar sign. That is how you keep chaos from becoming a stunt. It becomes security observability and a threat modeling tool that exercises real trust boundaries.&lt;/p&gt;
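&lt;p&gt;One way to write down "what good looks like" is to encode the steady state as an explicit, falsifiable threshold check. The metric name and threshold below are illustrative assumptions, not figures from the session.&lt;/p&gt;

```python
# A chaos hypothesis as a falsifiable check rather than vague words
# like "significantly". Names and numbers are invented for illustration.
hypothesis = {
    "steady_state": "checkout API stays healthy during AZ failure",
    "metric": "api_gateway_5xx_rate",
    "threshold": 0.01,  # at most 1% errors while the fault is injected
}

def verify(hypothesis, observed_rate):
    """Pass only if the observed error rate stays within the bound."""
    return not observed_rate > hypothesis["threshold"]
```

&lt;p&gt;With the bound written down, an LLM summarizing the experiment has something concrete to verify against instead of adjudicating adjectives.&lt;/p&gt;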

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2d3iabpbj2p?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcv3o12psnlq1dro9q1dc.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Saurabh Kumar (foreground) &amp;amp; Ruskin Dantra&lt;/p&gt;

&lt;h2&gt;
  
  
  Data that tells the truth about usage
&lt;/h2&gt;

&lt;p&gt;In "10 Billion Downloads: Insights and Trends in Open Source," &lt;a href="https://www.linkedin.com/in/avi-press-4437a356/?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;Avi Press Founder &amp;amp; CEO at Scarf&lt;/a&gt;, explained that downloads are not users. In his work, they keep metrics on open source package downloads and usage. While impressive download numbers might generate buzz, automated systems pull down images and artifacts constantly, and a single project can rack up astronomical counts while active usage remains flat or even declines. The ratio of downloads to unique users can span orders of magnitude and often hides behind proxies, mirrors, and CI noise. If you are making security and product decisions based on download charts, you are probably flying blind.&lt;/p&gt;

&lt;p&gt;Avi's larger point was about observability and identity in the supply chain. Registries and gateways can see what gets downloaded, when, and from where. Rich user agents and CI flags help distinguish humans from automation. That distinction matters when you are prioritizing vulnerability response, dependency policy, and how you invest in controls that protect your distribution. It also matters when you evaluate the risk of secrets sprawl in build systems and the behavior of agents that authenticate to registries on your behalf.&lt;/p&gt;

&lt;p&gt;Avi also said people do not like to upgrade, or even think about upgrading. From their research, they estimate 80% of traffic is pulling the &lt;code&gt;:latest&lt;/code&gt; version of a package. This can introduce new vulnerabilities, especially in fast-moving attacks like we recently saw with &lt;a href="https://blog.gitguardian.com/shai-hulud-a-persistent-secret-leaking-campaign/" rel="noopener noreferrer"&gt;Shai-Hulud&lt;/a&gt;. On the other end of the spectrum, once a project downloads a specific "pinned" version of a package, it tends to never download another version of that package again. That likely means many packages with known vulnerabilities are reused over and over, simply because the pinned version has proved stable so far.&lt;/p&gt;
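&lt;p&gt;A toy version of that kind of traffic analysis might look like the sketch below. The record fields and user-agent heuristics are hypothetical, not Scarf's actual schema or methodology.&lt;/p&gt;

```python
# Separate automation pulls from likely-human downloads and flag
# floating ":latest" tags. All field names are invented for illustration.
def classify(download):
    ua = download.get("user_agent", "").lower()
    bots = ("ci", "bot", "docker", "github-actions")
    kind = "automation" if any(b in ua for b in bots) else "human"
    pinned = not download["tag"] == "latest"
    return {"kind": kind, "pinned": pinned}

sample = [
    {"user_agent": "github-actions/2.0", "tag": "latest"},
    {"user_agent": "Mozilla/5.0", "tag": "1.4.2"},
]
```

&lt;p&gt;Even a crude split like this reveals the pattern Avi described: automation tends to float on &lt;code&gt;:latest&lt;/code&gt;, while humans pin and then never move.&lt;/p&gt;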

&lt;p&gt;&lt;a href="https://bsky.app/profile/mdwayne-real.bsky.social/post/3m2d5huizh62u?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3oo6ghh8ot3wjemi9ipm.png" width="800" height="602"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Avi Press&lt;/p&gt;

&lt;h2&gt;
  
  
  Human-centered automation
&lt;/h2&gt;

&lt;p&gt;Complex, automated systems are always socio-technical. Every outage that matters crosses a human boundary. A pager decision, a rollback judgment, a missing intent record. The universal truth beneath the sessions is that human judgment is the only control that adapts fast enough to ambiguity. We keep people in the loop because ambiguity beats rules, and production is mostly ambiguity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Ambiguity Defeats Static Policy
&lt;/h3&gt;

&lt;p&gt;Dashboards, thresholds, and runbooks fall short at the worst moments because production signals are underspecified representations of reality. Metrics compress experience. Traces compress causality. Logs compress narrative. Compression discards context. When AI or rules act on compressed inputs without recovering context, they optimize the wrong objective. Security feels this first. Adversaries live in the discarded context. They weaponize edge cases that static policy cannot name.&lt;/p&gt;

&lt;h3&gt;
  
  
  Incentives Warp Signals
&lt;/h3&gt;

&lt;p&gt;Incentives across teams and AI systems push speed and simplicity, not fidelity. Teams collect telemetry that is cheap to compute and easy to visualize. Vendors sell abstractions that look clean while burying the messy parts. Open source download counts become a proxy for adoption because they are easy to chart. The same dynamic exists in access control where broad roles and shared secrets move faster than granular IAM for machine identities. Speed without fidelity produces brittle control and operational risk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification Lags Change
&lt;/h3&gt;

&lt;p&gt;We discover governance gaps after damage, as verification still moves slower than change. Hypotheses remain vague in many experiments and POCs. Evidence remains scattered. Postmortems reconstruct truth from fragments. LLMs can summarize, but they cannot replace a precise definition of steady state or explicit success criteria. Until verification is codified as a fast loop tied to business outcomes, risk signals arrive too late to shape decisions and we will continue to see timid automation or reckless autonomy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Trust Is The Real SLO
&lt;/h3&gt;

&lt;p&gt;Trust is the currency customers spend to stay when something goes wrong. Trust is also the currency engineers spend to grant autonomy to a system. If a rollout can explain itself, if an agent can justify access, if a failure can be reversed cleanly, people keep trusting. If not, they route around automation. Security is trust formalized as policy and evidence, just as reliability is trust expressed as steady state behavior. The universal truth is that both disciplines are negotiating the same contract.&lt;/p&gt;

&lt;h2&gt;
  
  
  The craft of shared control
&lt;/h2&gt;

&lt;p&gt;San Francisco's steep terrain runs on small rituals of control, and SRE follows the same rhythm. We need to purposefully refine our automation to achieve the metrics that matter to the business. The message was to build systems where humans and machines share responsibility through transparent, auditable, and reversible decisions.&lt;/p&gt;

&lt;p&gt;Your author was able to talk about another kind of observability, as I laid out the findings of &lt;a href="https://www.gitguardian.com/state-of-secrets-sprawl-report-2025?ref=blog.gitguardian.com" rel="noopener noreferrer"&gt;the State of Secrets Sprawl&lt;/a&gt;. Just as with observability, teams need to create a plan to identify, store, and automate the rotation of secrets at scale. Those plans have a common path: Start small by choosing one decision boundary, make it observable, practice the rollback, and repeat. Resilience grows not from total autonomy but from the craft of earning trust one reliable action at a time.&lt;/p&gt;

</description>
      <category>techtalks</category>
      <category>sre</category>
      <category>security</category>
      <category>cybersecurity</category>
    </item>
  </channel>
</rss>
