<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Daniel Glover</title>
    <description>The latest articles on DEV Community by Daniel Glover (@danieljglover).</description>
    <link>https://dev.to/danieljglover</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3791348%2F50689bfd-7ecc-4794-b9eb-d4224abcd8d7.png</url>
      <title>DEV Community: Daniel Glover</title>
      <link>https://dev.to/danieljglover</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/danieljglover"/>
    <language>en</language>
    <item>
      <title>Building a Cybersecurity Culture That Actually Sticks</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:47:09 +0000</pubDate>
      <link>https://dev.to/danieljglover/building-a-cybersecurity-culture-that-actually-sticks-4k5o</link>
      <guid>https://dev.to/danieljglover/building-a-cybersecurity-culture-that-actually-sticks-4k5o</guid>
      <description>&lt;p&gt;Every IT leader I have worked with has the same complaint: "We run the training, people pass the quiz, and then click the phishing link the following week."&lt;/p&gt;

&lt;p&gt;Building a cybersecurity culture is not about compliance checkboxes or annual e-learning modules. It is about fundamentally shifting how your organisation thinks about risk. The difference between organisations that suffer catastrophic breaches and those that catch threats early almost always comes down to culture, not technology.&lt;/p&gt;

&lt;p&gt;This guide shares practical, battle-tested strategies for building a cybersecurity culture that changes behaviour, not just awareness scores.&lt;/p&gt;




&lt;h2&gt;Why Most Security Awareness Programmes Fail&lt;/h2&gt;

&lt;p&gt;Traditional security awareness training follows a predictable pattern. Once a year, employees sit through a presentation or click through an online module. They learn about password hygiene, phishing red flags, and data classification. They pass a quiz. Then nothing changes.&lt;/p&gt;

&lt;p&gt;The problem is not the content. It is the approach. Annual training treats cybersecurity as an event rather than a habit. Behavioural science tells us that one-off interventions rarely produce lasting change. You would not expect a single gym session to make someone fit, yet we expect a single training session to make someone security-conscious.&lt;/p&gt;

&lt;p&gt;Research from the &lt;a href="https://www.sans.org/security-awareness-training/" rel="noopener noreferrer"&gt;SANS Institute&lt;/a&gt; consistently shows that organisations with mature security cultures experience 70% fewer security incidents than those relying on compliance-driven training alone. The difference is not budget or tools. It is sustained, embedded, culturally reinforced behaviour.&lt;/p&gt;

&lt;h3&gt;The Three Failure Modes&lt;/h3&gt;

&lt;p&gt;From my experience leading IT teams, security awareness programmes typically fail in one of three ways:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The compliance trap&lt;/strong&gt; - Training exists solely to satisfy audit requirements. Nobody measures whether behaviour actually changes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The fear approach&lt;/strong&gt; - Security teams use scare tactics and punishment. Employees learn to hide mistakes instead of reporting them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The technology fallacy&lt;/strong&gt; - Leadership assumes tools will compensate for human error. They invest in email filtering but neglect the human layer.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each of these creates a false sense of security. The organisation believes it has addressed the human element when, in reality, it has simply documented that it tried.&lt;/p&gt;




&lt;h2&gt;What a Strong Cybersecurity Culture Looks Like&lt;/h2&gt;

&lt;p&gt;A genuine cybersecurity culture has observable characteristics. You can walk into an organisation and sense it within days.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;People report incidents without fear.&lt;/strong&gt; In a healthy security culture, employees flag suspicious emails, admit to clicking dodgy links, and ask questions about unfamiliar processes. There is no shame in making a mistake. The shame is in hiding one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security is part of business decisions.&lt;/strong&gt; When launching a new product or onboarding a new vendor, security considerations are raised naturally, not bolted on as an afterthought. Project managers ask about data flows. Marketing teams question third-party tracking scripts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Language matters.&lt;/strong&gt; Security teams talk about risk in business terms, not technical jargon. They say "this could cost us three days of revenue" rather than "the CVE has a CVSS score of 9.1." Communication bridges the gap between technical reality and business impact.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Leaders model the behaviour.&lt;/strong&gt; When the CEO uses multi-factor authentication, when directors challenge suspicious requests, when the board asks informed questions about cyber risk, it signals that security matters at every level.&lt;/p&gt;




&lt;h2&gt;A Practical Framework for Culture Change&lt;/h2&gt;

&lt;p&gt;Building cybersecurity culture is a long game. You will not transform an organisation in a quarter. But you can make measurable progress with a structured approach. Here is the framework I have used across multiple organisations.&lt;/p&gt;

&lt;h3&gt;1. Measure Your Starting Point&lt;/h3&gt;

&lt;p&gt;Before changing anything, understand where you are. Run an anonymous survey measuring:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How comfortable employees feel reporting security incidents&lt;/li&gt;
&lt;li&gt;Whether they can identify common threats (phishing, social engineering, suspicious USB devices)&lt;/li&gt;
&lt;li&gt;Their perception of the security team (helpful partner or bureaucratic blocker?)&lt;/li&gt;
&lt;li&gt;How often they encounter security guidance in their daily work&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This baseline gives you something concrete to improve against. Without it, you are guessing.&lt;/p&gt;

&lt;h3&gt;2. Make Reporting Safe and Simple&lt;/h3&gt;

&lt;p&gt;The single most impactful change you can make is removing barriers to incident reporting. If someone clicks a phishing link and is afraid to tell anyone, you have lost your most valuable detection mechanism: the human sensor.&lt;/p&gt;

&lt;p&gt;Implement a one-click reporting button in your email client. Create a dedicated Slack or Teams channel for security questions. And critically, publicly thank people who report incidents. "Sarah in finance spotted a sophisticated phishing attempt this morning and reported it within minutes. That is exactly what we need" goes further than any training module.&lt;/p&gt;

&lt;p&gt;I have seen organisations reduce their average incident detection time by over 60% simply by making reporting psychologically safe.&lt;/p&gt;

&lt;h3&gt;3. Embed Security Into Daily Workflows&lt;/h3&gt;

&lt;p&gt;Security awareness should not be a separate activity. It should be woven into existing processes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Onboarding:&lt;/strong&gt; New starters get a 30-minute security orientation on day one, covering real examples of attacks that targeted the organisation (anonymised if needed).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team meetings:&lt;/strong&gt; A two-minute "security moment" at the start of monthly team meetings. Share a recent phishing attempt, a news story about a breach, or a quick tip.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Project kickoffs:&lt;/strong&gt; Include a security checklist in project templates. Not a 50-page risk assessment, but five practical questions about data handling and access control.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance reviews:&lt;/strong&gt; Include security awareness as a competency. Not as a stick, but as recognition that security-conscious behaviour is valued.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is frequency and relevance. Short, regular touchpoints beat long, infrequent sessions every time.&lt;/p&gt;

&lt;h3&gt;4. Use Phishing Simulations Intelligently&lt;/h3&gt;

&lt;p&gt;Phishing simulations are valuable when done correctly and counterproductive when done badly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do:&lt;/strong&gt; Run monthly simulations with increasing sophistication. Track improvement over time. Use failures as coaching opportunities, not punishment. Share aggregate results transparently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Do not:&lt;/strong&gt; Name and shame individuals who fail. Use simulations that are deliberately misleading to the point of being unfair. Run simulations without follow-up education for those who click.&lt;/p&gt;

&lt;p&gt;The best approach I have seen combines simulation with immediate, constructive feedback. When someone clicks a simulated phish, they see a brief, friendly explanation of what they missed and one specific thing to look for next time. No alarm bells. No disciplinary notes. Just learning.&lt;/p&gt;

&lt;p&gt;Over 12 months of well-run simulations, most organisations see click rates drop from 25-30% to under 5%.&lt;/p&gt;

&lt;h3&gt;5. Empower Security Champions&lt;/h3&gt;

&lt;p&gt;You cannot scale culture change through the security team alone. You need allies embedded across the business.&lt;/p&gt;

&lt;p&gt;Identify volunteers from each department who are interested in security. Give them additional training, access to threat intelligence briefings, and a direct line to the security team. These security champions become the local experts who answer questions, spot risks, and reinforce good practice within their teams.&lt;/p&gt;

&lt;p&gt;The champion model works because people are more likely to listen to a trusted colleague than to someone from a central IT function they rarely interact with. A security champion in the sales team who says "watch out for this invoice scam doing the rounds" carries more weight than an email from IT.&lt;/p&gt;

&lt;h3&gt;6. Communicate Like a Human&lt;/h3&gt;

&lt;p&gt;The fastest way to kill a cybersecurity culture is to communicate like a compliance department. Nobody reads 2,000-word security policies. Nobody remembers 47 rules for password creation.&lt;/p&gt;

&lt;p&gt;Instead:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tell stories.&lt;/strong&gt; "Last month, a company similar to ours lost two million pounds because someone approved a payment based on a spoofed email. Here is how to spot one."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Keep it short.&lt;/strong&gt; Weekly security tips should be three sentences maximum.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Use humour.&lt;/strong&gt; A funny poster in the kitchen about password reuse will do more than a formal policy document.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Be visual.&lt;/strong&gt; Infographics, short videos, and memes are more shareable and memorable than text-heavy guidance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The UK's &lt;a href="https://www.ncsc.gov.uk/" rel="noopener noreferrer"&gt;National Cyber Security Centre&lt;/a&gt; produces excellent, plain-English guidance that serves as a model for how to communicate security concepts without drowning people in jargon.&lt;/p&gt;




&lt;h2&gt;Measuring Cultural Change&lt;/h2&gt;

&lt;p&gt;You cannot manage what you do not measure. Track these indicators quarterly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Phishing simulation click rates&lt;/strong&gt; - Your most direct behavioural measure&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Incident reporting volume&lt;/strong&gt; - An increase is good (it means people are reporting more, not that more incidents are happening)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time to report&lt;/strong&gt; - How quickly do employees flag suspicious activity?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Survey scores&lt;/strong&gt; - Repeat your baseline survey annually to track perception changes&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Shadow IT discovery rate&lt;/strong&gt; - Are employees proactively disclosing unapproved tools?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Present these metrics to the board alongside traditional security metrics. Executives respond well to trend lines showing cultural improvement. It demonstrates that the organisation is building resilience, not just buying tools.&lt;/p&gt;

&lt;p&gt;The shift from technical guardian to business partner is directly relevant here. For a deeper look, see how the &lt;a href="https://dev.to/blog/2025-12-26-ciso-business-partner-evolution/"&gt;CISO role is evolving to encompass this kind of strategic work&lt;/a&gt;.&lt;/p&gt;




&lt;h2&gt;The Role of Leadership&lt;/h2&gt;

&lt;p&gt;Culture flows downhill. If the executive team treats security as someone else's problem, the rest of the organisation will follow suit.&lt;/p&gt;

&lt;p&gt;IT leaders must secure visible executive sponsorship for cybersecurity culture initiatives. This means:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Board-level reporting&lt;/strong&gt; on security culture metrics alongside financial and operational metrics&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Executive participation&lt;/strong&gt; in security exercises and tabletop scenarios&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Budget allocation&lt;/strong&gt; that reflects the priority placed on human factors, not just technology&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Consistent messaging&lt;/strong&gt; that security is everyone's responsibility, backed by action&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When I have seen cybersecurity culture programmes fail, the root cause is almost always lack of sustained leadership commitment. The initial enthusiasm fades, the budget gets redirected, and the programme quietly dies.&lt;/p&gt;

&lt;p&gt;The same principle applies to &lt;a href="https://dev.to/blog/2026-01-17-zero-trust-architecture-reality/"&gt;Zero Trust Architecture&lt;/a&gt;, where success depends on organisational buy-in rather than technology procurement.&lt;/p&gt;

&lt;p&gt;If you need help designing a security awareness programme that goes beyond compliance checkboxes, my &lt;a href="https://dev.to/services/it-compliance"&gt;IT compliance services&lt;/a&gt; cover programme design, policy development, and audit preparation alongside the technical controls.&lt;/p&gt;




&lt;h2&gt;Quick Wins to Start This Week&lt;/h2&gt;

&lt;p&gt;If you are reading this and thinking "where do I even begin?", here are five actions you can take immediately:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Set up a one-click phishing report button&lt;/strong&gt; in your email client. Most platforms (Outlook, Gmail) support this natively or through add-ons.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run a baseline phishing simulation&lt;/strong&gt; to understand your current click rate. Services like KnowBe4, Proofpoint, or even open-source tools like GoPhish make this straightforward.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create a security tips channel&lt;/strong&gt; in your messaging platform. Post one tip per week. Keep it casual.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Add a security moment&lt;/strong&gt; to your next team meeting. Share a recent real-world breach story and ask "could this happen to us?"&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Thank someone publicly&lt;/strong&gt; for reporting a security concern. Set the tone that reporting is valued.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;None of these require budget approval. None require new technology. They require intent, consistency, and a genuine belief that your people are your strongest defence, not your weakest link.&lt;/p&gt;




&lt;h2&gt;The Long Game&lt;/h2&gt;

&lt;p&gt;Building a cybersecurity culture is not a project with a start and end date. It is an ongoing commitment that evolves as threats change, as your organisation grows, and as new technologies create new risks.&lt;/p&gt;

&lt;p&gt;The organisations that get this right treat cybersecurity culture as a core business capability, not an IT initiative. They invest in it continuously, measure it rigorously, and celebrate it visibly.&lt;/p&gt;

&lt;p&gt;Your security tools will stop a significant proportion of threats. But some threats get through: sophisticated social engineering, carefully crafted phishing campaigns, insider risks. These are the ones where culture is your last and best line of defence.&lt;/p&gt;

&lt;p&gt;Start small. Be consistent. Measure everything. And remember: the goal is not perfect security. The goal is an organisation where everyone understands their role in protecting it.&lt;/p&gt;

&lt;p&gt;That is a cybersecurity culture that actually works.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/contact"&gt;Get in touch&lt;/a&gt;&lt;/strong&gt; to discuss how to build a security culture that produces real behaviour change in your organisation.&lt;/p&gt;

</description>
      <category>security</category>
      <category>culture</category>
      <category>leadership</category>
    </item>
    <item>
      <title>Slopsquatting: The AI Supply Chain Attack Vector You Are Not Monitoring</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:46:04 +0000</pubDate>
      <link>https://dev.to/danieljglover/slopsquatting-the-ai-supply-chain-attack-vector-you-are-not-monitoring-14im</link>
      <guid>https://dev.to/danieljglover/slopsquatting-the-ai-supply-chain-attack-vector-you-are-not-monitoring-14im</guid>
      <description>&lt;p&gt;Slopsquatting is a new class of software supply chain attack that exploits a fundamental flaw in AI coding assistants: their tendency to hallucinate package names that do not exist. When an AI model recommends installing a package that has never been published, attackers can register that name on public repositories like PyPI or npm and inject malicious code into any developer who follows the AI's suggestion.&lt;/p&gt;

&lt;p&gt;The term was coined by Seth Larson, Python Software Foundation developer in residence, as a play on typosquatting - where attackers register misspelled versions of popular packages. But slopsquatting is potentially worse. While typosquatting relies on human error, slopsquatting exploits trust in AI tools that &lt;a href="https://dev.to/blog/2025-12-10-vibecoding-impact-web-development"&gt;92% of US developers now use daily&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;The Scale of AI Package Hallucinations&lt;/h2&gt;

&lt;p&gt;Research from Virginia Tech, the University of Oklahoma, and the University of Texas reveals the alarming scope of this problem. The team tested 16 code-generation LLMs by prompting them to generate 576,000 Python and JavaScript code samples. Their findings, &lt;a href="https://arxiv.org/abs/2406.10279" rel="noopener noreferrer"&gt;published in March 2025&lt;/a&gt;, should concern every organisation relying on AI-assisted development.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Metric&lt;/th&gt;&lt;th&gt;Finding&lt;/th&gt;&lt;th&gt;Implication&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Hallucination rate&lt;/td&gt;&lt;td&gt;19.7% (roughly 1 in 5)&lt;/td&gt;&lt;td&gt;One in five package recommendations points to nothing&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Unique fake packages&lt;/td&gt;&lt;td&gt;205,474 names&lt;/td&gt;&lt;td&gt;Massive attack surface for adversaries to exploit&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Reproducibility&lt;/td&gt;&lt;td&gt;43% appear every time&lt;/td&gt;&lt;td&gt;Attackers can predict which fake names will be suggested&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Repeat appearances&lt;/td&gt;&lt;td&gt;58% appear more than once&lt;/td&gt;&lt;td&gt;Majority of hallucinations are not random noise&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The reproducibility finding is particularly concerning. Attackers do not need to scrape massive prompt logs or brute force potential names. They can simply observe LLM behaviour, identify commonly hallucinated names, and register them on public package registries.&lt;/p&gt;

&lt;h3&gt;Not All Models Are Equal&lt;/h3&gt;

&lt;p&gt;The research found significant variation between AI models. Open-source LLMs like CodeLlama, DeepSeek, WizardCoder, and Mistral showed the highest hallucination rates. Commercial tools performed better but still posed risk - GPT-4 hallucinated package names in approximately 5% of cases.&lt;/p&gt;

&lt;p&gt;For organisations processing thousands of AI-generated code suggestions daily, even a 5% error rate translates into substantial exposure.&lt;/p&gt;

&lt;h2&gt;How Slopsquatting Attacks Work&lt;/h2&gt;

&lt;p&gt;The attack chain is straightforward, which makes it dangerous:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 1: Discovery&lt;/strong&gt; - An attacker prompts popular AI coding assistants with common development tasks: "Create a Python script to process CSV files", "Build a Node.js authentication module", or similar requests. They record every package name suggested.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Verification&lt;/strong&gt; - The attacker checks which suggested packages actually exist. Research shows roughly 20% will not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 3: Registration&lt;/strong&gt; - For non-existent packages that appear reproducibly (43% of hallucinations), the attacker registers matching names on PyPI, npm, or other public registries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Payload&lt;/strong&gt; - The attacker publishes functional-looking code that includes malicious payloads - credential harvesting, backdoors, cryptocurrency miners, or data exfiltration routines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Exploitation&lt;/strong&gt; - When another developer prompts the same AI model with a similar request, receives the same hallucinated package name, and runs &lt;code&gt;pip install&lt;/code&gt; or &lt;code&gt;npm install&lt;/code&gt;, the malicious package enters their development environment.&lt;/p&gt;
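&lt;p&gt;The discovery and verification steps above can be sketched in a few lines. This is an illustrative sketch only: the &lt;code&gt;registry&lt;/code&gt; set stands in for a real index lookup (in practice you would query the registry's API, such as PyPI's JSON endpoint, per name), and &lt;code&gt;csv-parser-pro&lt;/code&gt; is a hypothetical hallucinated name, not a real package.&lt;/p&gt;

```python
# Step 2 (verification) in miniature: which AI-suggested names do not
# resolve to anything in the registry? Those are the names an attacker
# could register, and the ones a defender must block.

def find_unregistered(suggested, registry_index):
    """Return AI-suggested names absent from the registry index, sorted."""
    return sorted(set(suggested) - set(registry_index))

# Stand-in data: a real check would hit the registry's API instead.
registry = {"requests", "flask", "numpy"}
suggestions = ["requests", "csv-parser-pro", "flask"]
print(find_unregistered(suggestions, registry))  # ['csv-parser-pro']
```

&lt;p&gt;The same set difference, run from the defender's side against an internal allowlist, turns the attacker's reconnaissance step into a pre-install control.&lt;/p&gt;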

&lt;p&gt;According to &lt;a href="https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/slopsquatting-when-ai-agents-hallucinate-malicious-packages" rel="noopener noreferrer"&gt;Trend Micro's analysis&lt;/a&gt;, the hallucinated package names are "semantically convincing" - they look like real packages with plausible naming conventions. Developers cannot easily spot the deception by sight alone.&lt;/p&gt;

&lt;h2&gt;Why Slopsquatting Is Worse Than Typosquatting&lt;/h2&gt;

&lt;p&gt;Traditional typosquatting attacks rely on developers making typing mistakes: &lt;code&gt;reqeusts&lt;/code&gt; instead of &lt;code&gt;requests&lt;/code&gt;, or &lt;code&gt;lodahs&lt;/code&gt; instead of &lt;code&gt;lodash&lt;/code&gt;. Security teams have developed defences against this - spell checkers, package manager warnings, and developer training.&lt;/p&gt;

&lt;p&gt;Slopsquatting sidesteps all of these protections:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;No typing errors to catch&lt;/strong&gt; - The AI suggests a package name that looks entirely legitimate. There is no misspelling to flag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Trust in AI tools&lt;/strong&gt; - Developers increasingly accept AI suggestions without verification, a practice amplified by &lt;a href="https://dev.to/blog/2025-12-15-vibe-coding-security"&gt;vibe coding culture&lt;/a&gt; where developers "give in to the vibes" and trust AI outputs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reproducibility aids attackers&lt;/strong&gt; - Because 43% of hallucinated packages appear consistently, attackers can target specific AI models and specific prompts with high confidence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scale of AI adoption&lt;/strong&gt; - With &lt;a href="https://dev.to/blog/2025-12-10-vibecoding-impact-web-development"&gt;41% of global code now AI-generated&lt;/a&gt;, the attack surface is massive and growing.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://www.bleepingcomputer.com/news/security/ai-hallucinated-code-dependencies-become-new-supply-chain-risk/" rel="noopener noreferrer"&gt;BleepingComputer's reporting&lt;/a&gt;, slopsquatting incidents will continue as AI coding adoption accelerates. The combination of widespread AI tool usage and inherent model limitations creates a persistent vulnerability.&lt;/p&gt;

&lt;h2&gt;Real-World Risk Scenarios&lt;/h2&gt;

&lt;p&gt;Consider how slopsquatting could affect your organisation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1: Developer Workstation Compromise&lt;/strong&gt; - A developer uses an AI assistant to generate a data processing script. The AI suggests a hallucinated package that an attacker has registered. Installation grants the attacker access to the developer's machine, source code repositories, and potentially credentials for production systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2: CI/CD Pipeline Infiltration&lt;/strong&gt; - A build process pulls dependencies based on AI-generated requirements files. A slopsquatted package enters the pipeline, gaining access to deployment credentials, secrets, and the ability to inject malicious code into production builds.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 3: Compliance Violation&lt;/strong&gt; - Regulated industries require software bill of materials (SBOM) documentation and supply chain verification. AI-hallucinated packages lack provenance, auditable histories, or security reviews. Their presence in production code creates compliance gaps.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 4: Intellectual Property Theft&lt;/strong&gt; - A malicious slopsquatted package exfiltrates source code, configuration files, or API keys to attacker-controlled infrastructure. By the time the breach is discovered, sensitive data has already been compromised.&lt;/p&gt;

&lt;h2&gt;Mitigation Strategies for CISOs and IT Leaders&lt;/h2&gt;

&lt;p&gt;Defending against slopsquatting requires a multi-layered approach that spans developer practices, tooling, and governance.&lt;/p&gt;

&lt;h3&gt;1. Mandatory Dependency Verification&lt;/h3&gt;

&lt;p&gt;Establish a policy requiring developers to verify every AI-suggested package before installation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Check the package exists on the official registry&lt;/li&gt;
&lt;li&gt;Review the package's publish history and maintainer details&lt;/li&gt;
&lt;li&gt;Examine download statistics - legitimate popular packages have substantial usage&lt;/li&gt;
&lt;li&gt;Look for security audits or vulnerability disclosures&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This adds friction to the development process, but the alternative is accepting unvetted code from potentially malicious sources.&lt;/p&gt;
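&lt;p&gt;A minimal pre-install gate along these lines can automate the checklist. The metadata fields, thresholds, and package details below are illustrative assumptions, not any registry's actual schema; a production version would populate them from the registry and a download-statistics service.&lt;/p&gt;

```python
# Hypothetical pre-install gate: reject AI-suggested dependencies that
# fail basic provenance checks. Field names and thresholds are
# illustrative, not a real API.

MIN_WEEKLY_DOWNLOADS = 1000  # illustrative floor for "substantial usage"
MIN_AGE_DAYS = 90            # very new packages deserve extra scrutiny

def vet_package(meta):
    """Return (approved, reasons) for a suggested dependency."""
    reasons = []
    if not meta.get("exists"):
        reasons.append("not found on the official registry")
    if MIN_WEEKLY_DOWNLOADS > meta.get("weekly_downloads", 0):
        reasons.append("unusually low download count")
    if MIN_AGE_DAYS > meta.get("age_days", 0):
        reasons.append("registered very recently")
    if not meta.get("maintainer_verified"):
        reasons.append("no identifiable maintainer history")
    return (not reasons, reasons)

# A freshly registered, barely downloaded package fails on three counts.
ok, why = vet_package({"exists": True, "weekly_downloads": 12,
                       "age_days": 3, "maintainer_verified": False})
print(ok, len(why))  # False 3
```

&lt;p&gt;Wiring a gate like this into CI means an AI-suggested dependency is checked once, automatically, instead of relying on each developer to remember the checklist.&lt;/p&gt;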

&lt;h3&gt;2. Implement Dependency Scanning Tools&lt;/h3&gt;

&lt;p&gt;Deploy software composition analysis (SCA) tools that can identify suspicious packages. Solutions from &lt;a href="https://snyk.io/articles/slopsquatting-mitigation-strategies/" rel="noopener noreferrer"&gt;Snyk&lt;/a&gt;, Socket, and similar vendors now include specific detection capabilities for slopsquatted packages, looking for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Newly registered packages with names similar to common hallucination patterns&lt;/li&gt;
&lt;li&gt;Packages with minimal download history but wide AI-suggested distribution&lt;/li&gt;
&lt;li&gt;Code that exhibits suspicious behaviour (network calls, file system access, credential reading)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;3. Use Lockfiles and Hash Verification&lt;/h3&gt;

&lt;p&gt;Package lockfiles (&lt;code&gt;package-lock.json&lt;/code&gt;, &lt;code&gt;Pipfile.lock&lt;/code&gt;, &lt;code&gt;poetry.lock&lt;/code&gt;) pin dependencies to specific versions with cryptographic hashes. This prevents the silent substitution of legitimate packages with malicious ones and makes supply chain tampering detectable.&lt;/p&gt;

&lt;p&gt;Require all projects to maintain lockfiles and fail builds if lockfile verification fails.&lt;/p&gt;
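&lt;p&gt;The mechanism behind lockfile hash pinning is easy to demonstrate with the standard library. This is a sketch of the idea only: the "artifact" here is a byte string rather than a downloaded wheel, and real lockfiles record these hashes at dependency-resolution time.&lt;/p&gt;

```python
# Hash verification as lockfiles apply it: an artifact is acceptable
# only if its SHA-256 digest exactly matches the pinned value.
import hashlib

def sha256_hex(data):
    return hashlib.sha256(data).hexdigest()

def verify_artifact(artifact_bytes, pinned_hash):
    """Fail closed: anything other than an exact match is rejected."""
    return sha256_hex(artifact_bytes) == pinned_hash

good = b"legitimate package contents"
pin = sha256_hex(good)          # the value a lockfile would have recorded
print(verify_artifact(good, pin))                  # True
print(verify_artifact(b"tampered contents", pin))  # False
```

&lt;p&gt;A slopsquatted package substituted after resolution produces a different digest and fails the build rather than reaching the environment.&lt;/p&gt;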

&lt;h3&gt;4. Configure AI Tool Settings&lt;/h3&gt;

&lt;p&gt;Research shows that LLM "temperature" settings affect hallucination rates. Higher temperature (more randomness) increases hallucinations. Where configurable, set AI coding assistants to lower temperature settings to reduce the frequency of fabricated package suggestions.&lt;/p&gt;

&lt;p&gt;This is not a complete solution - even low-temperature models hallucinate - but it reduces exposure.&lt;/p&gt;

&lt;h3&gt;5. Establish AI Code Governance&lt;/h3&gt;

&lt;p&gt;Integrate slopsquatting defence into your broader &lt;a href="https://dev.to/blog/2026-01-08-ai-governance-controls"&gt;AI governance controls&lt;/a&gt;. Define policies for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Which AI coding assistants are approved for use&lt;/li&gt;
&lt;li&gt;Required verification steps before accepting AI-generated dependency lists&lt;/li&gt;
&lt;li&gt;Audit trails for AI-suggested code entering production systems&lt;/li&gt;
&lt;li&gt;Incident response procedures if a slopsquatted package is detected&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;6. Sandbox AI-Generated Code&lt;/h3&gt;

&lt;p&gt;Never run AI-generated code directly in production environments or on developer workstations with access to sensitive resources. Test all AI suggestions in isolated sandboxes where malicious code cannot reach valuable targets.&lt;/p&gt;

&lt;p&gt;Container-based development environments and virtual machines provide isolation layers that limit blast radius if a slopsquatted package is accidentally installed.&lt;/p&gt;

&lt;h3&gt;7. Educate Development Teams&lt;/h3&gt;

&lt;p&gt;Developers need to understand that AI coding assistants are not security tools. &lt;a href="https://cset.georgetown.edu/publication/cybersecurity-risks-of-ai-generated-code/" rel="noopener noreferrer"&gt;Georgetown's CSET research&lt;/a&gt; highlights that AI models do not understand your application's risk model, internal standards, or threat landscape. Every AI suggestion - especially package recommendations - requires human verification.&lt;/p&gt;

&lt;p&gt;Training should cover:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What slopsquatting is and how it works&lt;/li&gt;
&lt;li&gt;How to verify package legitimacy before installation&lt;/li&gt;
&lt;li&gt;Red flags that indicate suspicious packages&lt;/li&gt;
&lt;li&gt;Reporting procedures for potential supply chain attacks&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Bigger Picture: AI Code Security&lt;/h2&gt;

&lt;p&gt;Slopsquatting is one manifestation of a broader challenge. AI coding assistants introduce multiple security risks beyond hallucinated packages - including &lt;a href="https://dev.to/blog/2025-12-15-vibe-coding-security"&gt;insecure code patterns, missing security controls, and logic errors&lt;/a&gt; that can compromise applications.&lt;/p&gt;

&lt;p&gt;The research is clear: &lt;a href="https://www.endorlabs.com/learn/the-most-common-security-vulnerabilities-in-ai-generated-code" rel="noopener noreferrer"&gt;over 40% of AI-generated code contains security vulnerabilities&lt;/a&gt;, and this rate has not improved as models have scaled. Security must be a deliberate, human-driven layer on top of AI productivity gains - not an afterthought.&lt;/p&gt;

&lt;p&gt;For CISOs and IT leaders, the imperative is clear. AI coding tools are here to stay, and their adoption will only accelerate. The organisations that thrive will be those that harness AI's productivity benefits while implementing robust controls to catch the security gaps that AI cannot see.&lt;/p&gt;

&lt;p&gt;Slopsquatting is a solvable problem. But solving it requires acknowledging that AI assistants, however helpful, can be vectors for supply chain attacks - and building defences accordingly.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Want to discuss AI security strategy for your organisation? &lt;a href="https://www.linkedin.com/in/danieljamesglover/" rel="noopener noreferrer"&gt;Connect with me on LinkedIn&lt;/a&gt; or explore more on &lt;a href="https://dev.to/blog/2025-12-15-vibe-coding-security"&gt;vibe coding security risks&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>supplychain</category>
    </item>
    <item>
      <title>AI Washing and Layoffs: What Is Real and What Is Hype</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:45:50 +0000</pubDate>
      <link>https://dev.to/danieljglover/ai-washing-and-layoffs-what-is-real-and-what-is-hype-j93</link>
      <guid>https://dev.to/danieljglover/ai-washing-and-layoffs-what-is-real-and-what-is-hype-j93</guid>
      <description>&lt;p&gt;The headlines write themselves. Amazon cuts thousands. Pinterest restructures. Across the board, the message is the same: AI is here, and humans are no longer needed.&lt;/p&gt;

&lt;p&gt;Except that is not quite what is happening.&lt;/p&gt;

&lt;p&gt;A Forrester report published in January 2026 made a bold claim that should give every IT leader pause. Many companies announcing AI-related layoffs, the analysts found, do not actually have mature AI applications ready to fill those roles. What they have is a convenient narrative.&lt;/p&gt;

&lt;p&gt;Welcome to the era of AI-washing.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is AI-washing?
&lt;/h2&gt;

&lt;p&gt;The term borrows from "greenwashing" - where companies exaggerate their environmental credentials. AI-washing works the same way, but in reverse. Instead of overstating what AI can do for the planet, companies are overstating what AI can do for their workforce. They are attributing financially motivated cuts to future AI implementation that may never materialise.&lt;/p&gt;

&lt;p&gt;As Molly Kinder, a senior research fellow at the Brookings Institution, put it: saying layoffs are caused by AI is a "very investor-friendly message." The alternative might mean admitting the business is struggling, that pandemic-era over-hiring created bloated teams, or that revenue targets were missed.&lt;/p&gt;

&lt;p&gt;AI makes for a better press release than "we hired too many people in 2021."&lt;/p&gt;

&lt;h2&gt;
  
  
  The numbers tell a different story
&lt;/h2&gt;

&lt;p&gt;More than 50,000 layoffs in 2025 were publicly attributed to AI. That is a staggering figure. But dig into the specifics and the picture gets murkier.&lt;/p&gt;

&lt;p&gt;Many of the companies making these cuts had not deployed AI tools at scale. Some had pilot programmes. Others had vague roadmaps. Very few had production-ready systems capable of replacing the roles they were eliminating.&lt;/p&gt;

&lt;p&gt;This matters because it shapes public perception. Every AI-attributed layoff reinforces the narrative that automation is an unstoppable force already displacing workers en masse. That narrative creates anxiety, influences policy, and - critically for IT leaders - distorts strategic planning.&lt;/p&gt;

&lt;p&gt;If your board reads that Amazon cut roles "because of AI" and then asks why your department has not done the same, you need to be armed with the reality behind the headlines.&lt;/p&gt;

&lt;h2&gt;
  
  
  But genuine disruption is real
&lt;/h2&gt;

&lt;p&gt;Here is where it gets complicated. While AI-washing is absolutely happening, genuine AI disruption is also accelerating.&lt;/p&gt;

&lt;p&gt;In the first week of February 2026, Anthropic launched a suite of AI tools targeting legal, sales, and customer support workflows. The market reaction was immediate and brutal. Shares in Pearson fell 8%. Relx plunged 14%. Thomson Reuters lost 18% in a single session. The FTSE 100, which had hit a record high that morning, was dragged into the red.&lt;/p&gt;

&lt;p&gt;These are not speculative startups. These are established companies with decades of market dominance, and investors wiped billions off their valuations overnight because an AI company released a product.&lt;/p&gt;

&lt;p&gt;Clifford Chance, one of the world's largest law firms, had already cut 10% of its London business services staff in November 2025, citing increased AI use as a genuine factor. According to Morgan Stanley, the UK is losing more jobs than it is creating as companies adopt AI tools - and is being hit harder than rival economies including the US, Japan, Germany, and Australia.&lt;/p&gt;

&lt;p&gt;So the truth is uncomfortable but important: some companies are using AI as cover for unrelated cuts, while others are experiencing real displacement. The challenge for IT leaders is telling the difference.&lt;/p&gt;

&lt;h2&gt;
  
  
  How to spot AI-washing in your organisation
&lt;/h2&gt;

&lt;p&gt;As someone who manages IT strategy across a complex business, I have learned to ask pointed questions when "AI" gets thrown around in boardroom conversations. Here is what I look for:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Is there a production-ready system?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If someone claims AI will replace a function, ask to see the tool. Not a demo. Not a proof of concept. A production system with documented accuracy, error rates, and a support model. If it does not exist yet, the layoff is not AI-driven. It is cost-driven with AI branding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. What is the transition plan?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Genuine AI-driven restructuring comes with a transition plan. Retraining programmes. Phased rollouts. Fallback procedures. If the plan is "cut now, automate later," that is a red flag.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Who benefits from the narrative?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Follow the incentives. If the company's share price responds positively to "AI transformation" messaging, there is a financial incentive to frame every cost reduction as AI-related - whether it is or not.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Are the affected roles actually automatable?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is the most basic question and it is surprising how rarely it gets asked. Some roles being cut involve complex judgment, relationship management, or creative problem-solving that current AI simply cannot replicate. If those are the roles being eliminated "because of AI," something else is going on.&lt;/p&gt;

&lt;h2&gt;
  
  
  What IT leaders should actually be doing
&lt;/h2&gt;

&lt;p&gt;Rather than getting swept up in the hype cycle - or dismissing AI entirely because of the washing problem - there is a pragmatic middle ground.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit your AI readiness honestly.&lt;/strong&gt; Map your current processes against what AI can genuinely automate today, not in some theoretical future. Be specific. Contract review? Possibly, depending on complexity. Strategic vendor negotiation? Not even close. My series on &lt;a href="https://dev.to/blog/2026-01-03-business-ai-enablement-matters"&gt;business AI enablement&lt;/a&gt; provides a structured framework for this assessment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build a skills transition framework.&lt;/strong&gt; The roles that AI genuinely threatens tend to be repetitive, data-heavy, and rules-based. The people in those roles often have institutional knowledge that is incredibly valuable. A good IT leader finds ways to redeploy that knowledge rather than simply cutting headcount. Understanding the &lt;a href="https://dev.to/blog/2026-01-04-shadow-ai-governance-crisis"&gt;shadow AI governance crisis&lt;/a&gt; is a key part of this - you cannot manage AI's workforce impact if you do not even know where AI is being used.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Push back on performative AI strategy.&lt;/strong&gt; If your C-suite wants to announce an "AI transformation" without the underlying capability to deliver it, that is your moment to provide honest counsel. The short-term PR benefit is not worth the long-term credibility damage when the promised efficiencies fail to materialise.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Watch the tooling market closely.&lt;/strong&gt; The Anthropic announcement moved markets because it represented a genuine capability leap. These moments will keep coming. Your job is to evaluate each one on its merits - not to dismiss them all as hype, and not to panic-adopt because a competitor made a press release.&lt;/p&gt;

&lt;h2&gt;
  
  
  The honest conversation we need
&lt;/h2&gt;

&lt;p&gt;Twenty-seven percent of UK workers are worried their jobs will disappear within five years because of AI. That anxiety is understandable, but it is being amplified by companies that are muddying the waters between genuine automation and convenient excuses.&lt;/p&gt;

&lt;p&gt;As IT leaders, we have a responsibility to cut through the noise. That means being honest about what AI can and cannot do today. It means challenging AI-washing when we see it, even when the narrative is politically convenient. And it means preparing our organisations for real disruption - not the imagined kind that makes for good investor calls. If you are looking for a practical starting point, &lt;a href="https://dev.to/blog/2026-01-06-ai-training-employee-skills"&gt;AI training and closing the skills gap&lt;/a&gt; covers how to equip your workforce for the genuine changes that are coming.&lt;/p&gt;

&lt;p&gt;The companies that will thrive are not the ones making the boldest AI claims. They are the ones doing the hard work of genuine implementation, responsible workforce transition, and honest strategic planning.&lt;/p&gt;

&lt;p&gt;AI is transforming business. But the transformation is messier, slower, and more nuanced than the headlines suggest. And if we are going to navigate it well, we need to start by telling the truth about what is actually happening.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Daniel Glover is Head of IT Services at a major UK e-commerce business, managing a team across infrastructure, security, and digital transformation. He writes about technology leadership, AI strategy, and the realities of enterprise IT.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>strategy</category>
      <category>leadership</category>
    </item>
    <item>
      <title>AI Governance Controls: A Framework for Enterprise AI Deployment</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:19:40 +0000</pubDate>
      <link>https://dev.to/danieljglover/ai-governance-controls-a-framework-for-enterprise-ai-deployment-3j67</link>
      <guid>https://dev.to/danieljglover/ai-governance-controls-a-framework-for-enterprise-ai-deployment-3j67</guid>
      <description>&lt;p&gt;&lt;em&gt;This is Part 6 of a 7-part series on Business AI Enablement for IT Leaders. The series covers &lt;a href="https://dev.to/blog/2026-01-03-business-ai-enablement-matters"&gt;why enablement matters&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-04-shadow-ai-governance-crisis"&gt;shadow AI risks&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-05-ai-enablement-framework"&gt;building an enablement framework&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-06-ai-training-employee-skills"&gt;employee training&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-07-selecting-ai-tools-business"&gt;tool selection&lt;/a&gt;, and concludes with a &lt;a href="https://dev.to/blog/2026-01-09-ai-enablement-roadmap"&gt;90-day implementation roadmap&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;Governance has an image problem. To many employees, governance means bureaucracy, delays, and the word "no" repeated in various forms. This reputation is often deserved.&lt;/p&gt;

&lt;p&gt;But AI governance done well looks different. It provides clarity that enables faster decision-making. It creates guardrails that allow confident action. It removes uncertainty that otherwise paralyses adoption.&lt;/p&gt;

&lt;p&gt;The 68% of organisations without formal AI controls are not more innovative than those with them. They are simply operating blind, accumulating risks they cannot see, and missing opportunities to learn from experience.&lt;/p&gt;

&lt;p&gt;This article provides a governance framework that enables rather than restricts.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Governance Paradox
&lt;/h2&gt;

&lt;p&gt;Governance and enablement seem opposed. More rules mean less freedom. Tighter controls mean slower action. This framing is wrong.&lt;/p&gt;

&lt;p&gt;Consider an analogy. Traffic lights slow individual vehicles at intersections. But traffic lights enable higher overall throughput because they eliminate the chaos of uncoordinated intersections. The constraint creates capability.&lt;/p&gt;

&lt;p&gt;AI governance works the same way. Clear rules about data handling eliminate the uncertainty that makes employees hesitant. Defined approval paths are faster than ad hoc escalation. Established verification processes catch errors before they cause damage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Governance that fails:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prohibits without providing alternatives&lt;/li&gt;
&lt;li&gt;Requires approval for everything regardless of risk&lt;/li&gt;
&lt;li&gt;Changes frequently without clear communication&lt;/li&gt;
&lt;li&gt;Punishes compliance failures without addressing root causes&lt;/li&gt;
&lt;li&gt;Creates burdens disproportionate to risks addressed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Governance that works:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enables safe action by clarifying boundaries&lt;/li&gt;
&lt;li&gt;Matches controls to actual risk levels&lt;/li&gt;
&lt;li&gt;Remains stable with predictable updates&lt;/li&gt;
&lt;li&gt;Treats failures as learning opportunities&lt;/li&gt;
&lt;li&gt;Creates minimal overhead for low-risk activities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is not minimal governance. The goal is appropriate governance - controls proportionate to risk that enable productive work.&lt;/p&gt;

&lt;h2&gt;
  
  
  The AI Policy Framework
&lt;/h2&gt;

&lt;p&gt;Every organisation needs a foundational AI policy. This document establishes principles, defines responsibilities, and points to detailed guidance for specific areas.&lt;/p&gt;

&lt;h3&gt;
  
  
  Policy Structure
&lt;/h3&gt;

&lt;p&gt;An effective AI policy includes:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose and scope.&lt;/strong&gt; Why the policy exists and who it covers. This should emphasise enablement alongside risk management.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Principles.&lt;/strong&gt; The values guiding AI use in the organisation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Transparency about when AI is used&lt;/li&gt;
&lt;li&gt;Human responsibility for AI outputs&lt;/li&gt;
&lt;li&gt;Privacy and data protection priority&lt;/li&gt;
&lt;li&gt;Continuous learning and improvement&lt;/li&gt;
&lt;li&gt;Safety and quality standards&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Roles and responsibilities.&lt;/strong&gt; Who is accountable for what:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Executive sponsor for AI enablement&lt;/li&gt;
&lt;li&gt;IT responsibility for approved tools and security&lt;/li&gt;
&lt;li&gt;Business unit responsibility for appropriate use&lt;/li&gt;
&lt;li&gt;Individual employee responsibility for policy adherence&lt;/li&gt;
&lt;li&gt;Champions and their support role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Approved tools.&lt;/strong&gt; Reference to the authorised tool catalogue with access procedures.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prohibited uses.&lt;/strong&gt; Clear boundaries on what is not permitted:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Using non-approved tools for business purposes&lt;/li&gt;
&lt;li&gt;Processing prohibited data types with AI&lt;/li&gt;
&lt;li&gt;Automated decisions without human review for specified categories&lt;/li&gt;
&lt;li&gt;Representing AI output as human work where disclosure is required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Compliance requirements.&lt;/strong&gt; How the policy relates to regulations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GDPR and data protection obligations&lt;/li&gt;
&lt;li&gt;Industry-specific requirements&lt;/li&gt;
&lt;li&gt;Emerging AI regulations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enforcement and exceptions.&lt;/strong&gt; How violations are addressed and how exceptions are requested:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Progressive response to violations&lt;/li&gt;
&lt;li&gt;No-blame approach to good-faith errors&lt;/li&gt;
&lt;li&gt;Exception request process with criteria&lt;/li&gt;
&lt;li&gt;Appeals mechanism&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Review and updates.&lt;/strong&gt; How the policy stays current:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Annual review minimum&lt;/li&gt;
&lt;li&gt;Trigger events for interim updates&lt;/li&gt;
&lt;li&gt;Communication of changes&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Policy Principles
&lt;/h3&gt;

&lt;p&gt;Several principles make AI policies effective:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Clarity over comprehensiveness.&lt;/strong&gt; A shorter policy that employees actually read and understand beats a comprehensive policy they ignore. Link to detailed guidance rather than including everything.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Principles over rules.&lt;/strong&gt; Rules cannot cover every situation. Principles help employees make good decisions when specific guidance is absent.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enable by default.&lt;/strong&gt; The policy should help employees do things, not primarily stop them. Prohibitions should be few and justified.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proportionate enforcement.&lt;/strong&gt; Minor infractions should not receive the same treatment as serious violations. Good faith matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Classification for AI Inputs
&lt;/h2&gt;

&lt;p&gt;Data classification determines what information can be used with which AI tools. Without clear classification, employees either avoid AI entirely (missing value) or use it recklessly (creating risk).&lt;/p&gt;

&lt;h3&gt;
  
  
  Classification Tiers
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Tier&lt;/th&gt;&lt;th&gt;Description&lt;/th&gt;&lt;th&gt;AI Use Guidance&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Public&lt;/td&gt;&lt;td&gt;Information intended for public distribution&lt;/td&gt;&lt;td&gt;Any approved tool&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Internal&lt;/td&gt;&lt;td&gt;Business information not intended for public disclosure&lt;/td&gt;&lt;td&gt;Enterprise-grade tools with data protection agreements&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Confidential&lt;/td&gt;&lt;td&gt;Sensitive information requiring protection&lt;/td&gt;&lt;td&gt;Restricted tools with enhanced controls; may require approval&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Prohibited&lt;/td&gt;&lt;td&gt;Information that must not be processed by external AI&lt;/td&gt;&lt;td&gt;No external AI processing permitted&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;&lt;strong&gt;Public data&lt;/strong&gt; includes published content, marketing materials, and information already in the public domain. This data can be used with any approved AI tool without restriction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Internal data&lt;/strong&gt; includes business information, internal communications, and operational data. Enterprise-grade tools with appropriate data protection agreements can process this data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Confidential data&lt;/strong&gt; requires heightened protection. This may include customer personal information, strategic plans, financial projections, and proprietary methods. Use with AI requires either restricted tools with enhanced controls or case-by-case approval.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Prohibited data&lt;/strong&gt; must not be processed by external AI under any circumstances. This category typically includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Highly sensitive personal data (health records, financial details)&lt;/li&gt;
&lt;li&gt;Trade secrets and critical intellectual property&lt;/li&gt;
&lt;li&gt;Security credentials and access keys&lt;/li&gt;
&lt;li&gt;Information subject to specific regulatory restrictions&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Classification in Practice
&lt;/h3&gt;

&lt;p&gt;Employees need practical guidance to classify data quickly:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Provide examples&lt;/strong&gt; for each classification tier relevant to common work&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Create decision trees&lt;/strong&gt; that help with ambiguous cases&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Establish escalation paths&lt;/strong&gt; when classification is uncertain&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Train on classification&lt;/strong&gt; as part of AI enablement programmes&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is confident, rapid classification that does not slow work but does prevent inappropriate exposure.&lt;/p&gt;
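&lt;p&gt;A decision tree of this kind can also be encoded directly, which keeps the guidance consistent wherever it is applied. The sketch below assumes the four tiers described above; the tool categories are illustrative placeholders, not product recommendations.&lt;/p&gt;

```python
# Hedged sketch of a tier-to-guidance lookup for the four-tier
# classification above. Tool categories are illustrative placeholders.
TIER_POLICY = {
    "public": {"tools": "any approved tool", "approval_required": False},
    "internal": {"tools": "enterprise-grade tools with data protection "
                          "agreements", "approval_required": False},
    "confidential": {"tools": "restricted tools with enhanced controls",
                     "approval_required": True},
    "prohibited": {"tools": "none", "approval_required": True},
}

def ai_use_guidance(tier):
    """Return the policy entry for a tier; escalate unknown tiers."""
    policy = TIER_POLICY.get(tier.strip().lower())
    if policy is None:
        # Uncertain classification: escalate rather than guess.
        return {"tools": "none", "approval_required": True, "escalate": True}
    return policy
```

&lt;p&gt;The important behaviour is the default: when classification is uncertain, the answer is escalation, never silent permission.&lt;/p&gt;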

&lt;h2&gt;
  
  
  Output Verification and Quality Controls
&lt;/h2&gt;

&lt;p&gt;AI outputs are not automatically trustworthy. Verification requirements should match the risk of using unverified output.&lt;/p&gt;

&lt;h3&gt;
  
  
  Verification Levels
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Level 1: Spot Check (Low Risk)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For internal productivity outputs where errors have limited consequences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Review output for obvious errors&lt;/li&gt;
&lt;li&gt;Confirm general alignment with intent&lt;/li&gt;
&lt;li&gt;No formal documentation required&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Examples: Internal email drafts, meeting notes, personal research summaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 2: Quality Review (Medium Risk)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For outputs that influence decisions or reach internal audiences:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Verify factual claims&lt;/li&gt;
&lt;li&gt;Check logical consistency&lt;/li&gt;
&lt;li&gt;Confirm alignment with organisational standards&lt;/li&gt;
&lt;li&gt;Brief documentation of review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Examples: Internal reports, analysis summaries, policy drafts.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 3: Expert Review (High Risk)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For outputs affecting customers or significant decisions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Subject matter expert verification&lt;/li&gt;
&lt;li&gt;Comprehensive fact-checking&lt;/li&gt;
&lt;li&gt;Consistency review against standards&lt;/li&gt;
&lt;li&gt;Documented sign-off&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Examples: Customer communications, published content, code deployment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 4: Formal Validation (Critical Risk)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For outputs with significant business or regulatory implications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multi-person review&lt;/li&gt;
&lt;li&gt;Compliance verification&lt;/li&gt;
&lt;li&gt;Full documentation and audit trail&lt;/li&gt;
&lt;li&gt;Leadership sign-off&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Examples: Financial statements, legal documents, regulatory submissions.&lt;/p&gt;
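&lt;p&gt;Choosing a level can itself follow a simple rule. The mapping below is an illustrative assumption based on the level descriptions above, not a prescribed standard; adapt the inputs to your own risk taxonomy.&lt;/p&gt;

```python
# Hedged sketch: pick a verification level from output characteristics.
# The mapping mirrors the four levels above and is illustrative only.
def verification_level(reaches_external_audience,
                       influences_decisions,
                       regulatory_or_financial_impact):
    """Return the minimum verification level (1-4) for an AI output."""
    if regulatory_or_financial_impact:
        return 4  # formal validation: multi-person review, audit trail
    if reaches_external_audience:
        return 3  # expert review before anything reaches customers
    if influences_decisions:
        return 2  # quality review: verify facts, document briefly
    return 1      # spot check for low-risk internal productivity output
```

&lt;p&gt;The ordering matters: regulatory impact dominates, so an output that is both external-facing and regulated lands at Level 4, not Level 3.&lt;/p&gt;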

&lt;h3&gt;
  
  
  Common Verification Failures
&lt;/h3&gt;

&lt;p&gt;Training should address frequent verification mistakes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Trusting confident tone.&lt;/strong&gt; AI presents incorrect information as confidently as correct information. Confidence is not an accuracy signal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Skipping source verification.&lt;/strong&gt; AI may cite sources that do not exist or do not support claimed conclusions. Sources require independent verification.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Assuming consistency.&lt;/strong&gt; AI may contradict itself within the same output. Long outputs need internal consistency review.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Overlooking omissions.&lt;/strong&gt; AI may not mention important considerations it does not know about. Outputs are not comprehensive without explicit prompting.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Monitoring and Compliance Enforcement
&lt;/h2&gt;

&lt;p&gt;Governance without monitoring is just documentation. Effective monitoring provides visibility without surveillance overreach.&lt;/p&gt;

&lt;h3&gt;
  
  
  What to Monitor
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Usage patterns.&lt;/strong&gt; Aggregate analytics on AI tool adoption:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Number of active users&lt;/li&gt;
&lt;li&gt;Frequency of use&lt;/li&gt;
&lt;li&gt;Use case patterns&lt;/li&gt;
&lt;li&gt;Tool-specific adoption&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This data informs capacity planning, training priorities, and tool evaluation. It does not require content inspection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Policy indicators.&lt;/strong&gt; Signals that suggest policy issues:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access attempts to non-approved tools&lt;/li&gt;
&lt;li&gt;Unusual data volumes&lt;/li&gt;
&lt;li&gt;Off-hours usage patterns&lt;/li&gt;
&lt;li&gt;Error rates suggesting untrained users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These indicators prompt investigation without requiring comprehensive surveillance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Compliance events.&lt;/strong&gt; Specific incidents requiring attention:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reported violations&lt;/li&gt;
&lt;li&gt;Security incidents involving AI&lt;/li&gt;
&lt;li&gt;Customer complaints related to AI use&lt;/li&gt;
&lt;li&gt;Audit findings&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Monitoring Boundaries
&lt;/h3&gt;

&lt;p&gt;Monitoring should respect employee privacy and trust:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Be transparent.&lt;/strong&gt; Employees should know what is monitored and why.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Focus on patterns, not content.&lt;/strong&gt; Usage statistics are less intrusive than content inspection.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Investigate with cause.&lt;/strong&gt; Deep inspection should be triggered by indicators, not applied universally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protect whistleblowers.&lt;/strong&gt; Employees reporting concerns should not face retaliation.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Heavy-handed monitoring damages the trust that effective AI enablement requires. The goal is sufficient visibility for governance, not comprehensive surveillance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Enforcement Approach
&lt;/h3&gt;

&lt;p&gt;Policy violations need appropriate response. The approach should be:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Proportionate.&lt;/strong&gt; Minor first-time violations do not warrant the same response as repeated serious violations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Educational.&lt;/strong&gt; Many violations result from misunderstanding, not malice. Response should include training.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consistent.&lt;/strong&gt; Similar violations should receive similar responses regardless of who commits them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Documented.&lt;/strong&gt; Actions taken should be recorded for consistency and appeals.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Progressive.&lt;/strong&gt; Responses should escalate for repeated violations:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Informal guidance and training&lt;/li&gt;
&lt;li&gt;Formal warning with documentation&lt;/li&gt;
&lt;li&gt;Restricted access pending retraining&lt;/li&gt;
&lt;li&gt;Disciplinary action as per HR policy&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;No-blame for good faith.&lt;/strong&gt; Employees who make genuine mistakes while trying to work appropriately should be supported, not punished.&lt;/p&gt;

&lt;h2&gt;
  
  
  Incident Response for AI Failures
&lt;/h2&gt;

&lt;p&gt;AI-related incidents will occur. Preparation enables effective response.&lt;/p&gt;

&lt;h3&gt;
  
  
  Incident Categories
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Data exposure.&lt;/strong&gt; Sensitive information shared with inappropriate AI tools.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Immediate assessment of data involved&lt;/li&gt;
&lt;li&gt;Vendor notification if relevant&lt;/li&gt;
&lt;li&gt;Regulatory reporting if required&lt;/li&gt;
&lt;li&gt;Affected party notification if necessary&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Quality failures.&lt;/strong&gt; AI outputs that caused business problems.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Document the failure and impact&lt;/li&gt;
&lt;li&gt;Identify root cause&lt;/li&gt;
&lt;li&gt;Implement verification improvements&lt;/li&gt;
&lt;li&gt;Update training if needed&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Security incidents.&lt;/strong&gt; Compromises involving AI tools or data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Standard security incident response applies&lt;/li&gt;
&lt;li&gt;Additional focus on data scope and AI-specific factors&lt;/li&gt;
&lt;li&gt;Vendor involvement as appropriate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Compliance violations.&lt;/strong&gt; Regulatory requirements breached through AI use.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Legal and compliance engagement&lt;/li&gt;
&lt;li&gt;Regulatory notification as required&lt;/li&gt;
&lt;li&gt;Remediation planning&lt;/li&gt;
&lt;li&gt;Control enhancements&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Incident Response Process
&lt;/h3&gt;

&lt;p&gt;A structured process ensures consistent handling:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Detection and reporting.&lt;/strong&gt; Clear channels for identifying and escalating issues.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Initial assessment.&lt;/strong&gt; Rapid evaluation of scope and severity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Containment.&lt;/strong&gt; Immediate actions to prevent further impact.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Investigation.&lt;/strong&gt; Understanding what happened and why.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Remediation.&lt;/strong&gt; Addressing immediate damage and preventing recurrence.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Review.&lt;/strong&gt; Learning lessons and improving controls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Documentation.&lt;/strong&gt; Recording the incident for compliance and learning.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;As I explored in the &lt;a href="https://dev.to/blog/2025-12-31-ciso-cyber-resilience-2026-part-5"&gt;incident response discussion&lt;/a&gt; in the CISO series, effective incident response is a core organisational capability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Reference: AI Governance Policy Template
&lt;/h2&gt;

&lt;p&gt;Use this template as a starting point for your organisation's AI policy:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Purpose&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Statement of policy intent emphasising enablement and safety&lt;/li&gt;
&lt;li&gt;[ ] Scope covering all employees and AI use&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Principles&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Transparency in AI use&lt;/li&gt;
&lt;li&gt;[ ] Human accountability for outputs&lt;/li&gt;
&lt;li&gt;[ ] Data protection priority&lt;/li&gt;
&lt;li&gt;[ ] Continuous improvement commitment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Approved Tools&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Reference to tool catalogue&lt;/li&gt;
&lt;li&gt;[ ] Access request procedures&lt;/li&gt;
&lt;li&gt;[ ] Criteria for new tool requests&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Classification&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Four-tier classification with descriptions&lt;/li&gt;
&lt;li&gt;[ ] Examples for each tier&lt;/li&gt;
&lt;li&gt;[ ] AI use guidance by tier&lt;/li&gt;
&lt;li&gt;[ ] Escalation for uncertain classification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Acceptable Use&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Permitted use cases&lt;/li&gt;
&lt;li&gt;[ ] Prohibited uses with rationale&lt;/li&gt;
&lt;li&gt;[ ] Output verification requirements&lt;/li&gt;
&lt;li&gt;[ ] Attribution and disclosure requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Roles and Responsibilities&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Executive sponsor&lt;/li&gt;
&lt;li&gt;[ ] IT responsibilities&lt;/li&gt;
&lt;li&gt;[ ] Business unit responsibilities&lt;/li&gt;
&lt;li&gt;[ ] Individual responsibilities&lt;/li&gt;
&lt;li&gt;[ ] Champion role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Compliance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Regulatory alignment statements&lt;/li&gt;
&lt;li&gt;[ ] Audit and reporting requirements&lt;/li&gt;
&lt;li&gt;[ ] Training requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Enforcement&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Violation response framework&lt;/li&gt;
&lt;li&gt;[ ] Exception request process&lt;/li&gt;
&lt;li&gt;[ ] Appeals mechanism&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Governance&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Policy owner&lt;/li&gt;
&lt;li&gt;[ ] Review schedule&lt;/li&gt;
&lt;li&gt;[ ] Change communication process&lt;/li&gt;
&lt;/ul&gt;
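
&lt;p&gt;If your team keeps policy under version control, the template above can double as a machine-readable skeleton. Here is a minimal Python sketch - the section and field names mirror the checklist but are not any standard schema:&lt;/p&gt;

```python
# Illustrative policy-as-code skeleton mirroring the template sections.
# Section and field names are hypothetical; adapt them to your own schema.
AI_POLICY_TEMPLATE = {
    "purpose": ["intent_statement", "scope"],
    "principles": ["transparency", "human_accountability",
                   "data_protection", "continuous_improvement"],
    "approved_tools": ["tool_catalogue_ref", "access_requests",
                       "new_tool_criteria"],
    "data_classification": ["tier_definitions", "tier_examples",
                            "ai_use_by_tier", "escalation_path"],
    "acceptable_use": ["permitted_cases", "prohibited_cases",
                       "output_verification", "attribution"],
    "roles": ["executive_sponsor", "it", "business_units",
              "individuals", "champions"],
    "compliance": ["regulatory_alignment", "audit_reporting", "training"],
    "enforcement": ["violation_response", "exceptions", "appeals"],
    "governance": ["policy_owner", "review_schedule", "change_comms"],
}

def missing_sections(policy):
    """Return template sections absent from a draft policy dict."""
    return sorted(set(AI_POLICY_TEMPLATE) - set(policy))
```

&lt;p&gt;A completeness check like this can run in CI against the policy repository and flag drafts that silently drop a section.&lt;/p&gt;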

&lt;h2&gt;
  
  
  Governance Evolution
&lt;/h2&gt;

&lt;p&gt;AI governance is not static. As AI capabilities evolve, governance must adapt.&lt;/p&gt;

&lt;h3&gt;
  
  
  Near-Term Developments
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Regulatory pressure.&lt;/strong&gt; The EU AI Act and emerging regulations will create new compliance requirements. Governance frameworks need to accommodate these requirements.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agentic AI.&lt;/strong&gt; AI systems that take autonomous actions raise governance questions current frameworks do not address. Decision authority, override mechanisms, and accountability need clarification.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Embedded AI.&lt;/strong&gt; As AI becomes invisible within other tools, governance must account for AI use employees may not recognise as AI.&lt;/p&gt;

&lt;h3&gt;
  
  
  Governance Maturity
&lt;/h3&gt;

&lt;p&gt;Organisations progress through governance maturity levels:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 1: Ad Hoc.&lt;/strong&gt; No formal governance. Decisions made case by case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 2: Defined.&lt;/strong&gt; Policies exist but are inconsistently applied.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 3: Managed.&lt;/strong&gt; Policies are consistently enforced with monitoring and improvement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Level 4: Optimised.&lt;/strong&gt; Governance continuously improves based on experience and external developments.&lt;/p&gt;

&lt;p&gt;Most organisations are at Level 1 or 2. The framework in this article targets Level 3, with processes for progressing to Level 4.&lt;/p&gt;

&lt;h3&gt;
  
  
  Building Governance Capability
&lt;/h3&gt;

&lt;p&gt;Effective governance requires investment:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dedicated ownership.&lt;/strong&gt; Someone accountable for AI governance as a significant responsibility.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Cross-functional coordination.&lt;/strong&gt; Regular engagement across IT, legal, compliance, HR, and business units.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Continuous learning.&lt;/strong&gt; Staying current with regulatory, technology, and industry developments.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Employee engagement.&lt;/strong&gt; Governance that does not consider employee needs will not be followed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Governance is a capability, not a document. The documents enable the capability but are not the capability themselves.&lt;/p&gt;




&lt;h2&gt;
  
  
  Developing Your AI Governance Framework
&lt;/h2&gt;

&lt;p&gt;Building governance that enables rather than restricts requires balancing business needs with risk management. My &lt;a href="https://dev.to/services/it-compliance"&gt;IT compliance services&lt;/a&gt; help organisations develop AI governance frameworks that support productive adoption while maintaining appropriate controls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/contact"&gt;Get in touch&lt;/a&gt;&lt;/strong&gt; to discuss how to build AI governance that works.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Previous: &lt;a href="https://dev.to/blog/2026-01-07-selecting-ai-tools-business"&gt;Part 5 - Selecting AI Tools for Business Units&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next: &lt;a href="https://dev.to/blog/2026-01-09-ai-enablement-roadmap"&gt;Part 7 - AI Enablement: Your 90-Day Roadmap&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>enterprise</category>
    </item>
    <item>
      <title>Dry Run Engineering: The Practice That Reduces Production Incidents</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:19:26 +0000</pubDate>
      <link>https://dev.to/danieljglover/dry-run-engineering-the-practice-that-reduces-production-incidents-1okf</link>
      <guid>https://dev.to/danieljglover/dry-run-engineering-the-practice-that-reduces-production-incidents-1okf</guid>
      <description>&lt;p&gt;There is a post trending on Hacker News today about the &lt;code&gt;--dry-run&lt;/code&gt; flag. Henrik Warne writes about adding it to a reporting application early in development and being surprised by how useful it became. I have been nodding along because this matches my experience exactly.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;--dry-run&lt;/code&gt; pattern is one of those deceptively simple engineering practices that punches well above its weight. If you have ever run &lt;code&gt;rsync --dry-run&lt;/code&gt; before committing to a massive file sync, or used &lt;code&gt;terraform plan&lt;/code&gt; before &lt;code&gt;terraform apply&lt;/code&gt;, you already know the value.&lt;/p&gt;

&lt;h2&gt;
  
  
  What dry-run actually means
&lt;/h2&gt;

&lt;p&gt;A dry-run flag tells your script to show what it would do without actually doing it. Print the files that would be deleted. Log the API calls that would be made. Display the database rows that would be updated. Then exit without changing anything.&lt;/p&gt;

&lt;p&gt;The key principle: &lt;strong&gt;make it safe to run without thinking&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When a colleague asks "what will this script do?", you should be able to run it with &lt;code&gt;--dry-run&lt;/code&gt; and show them. No risk. No cleanup needed afterwards.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where this matters most
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Database migrations
&lt;/h3&gt;

&lt;p&gt;Before running a migration that modifies production data, a dry-run should output:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;How many rows will be affected&lt;/li&gt;
&lt;li&gt;Sample of the changes (first 10 rows, perhaps)&lt;/li&gt;
&lt;li&gt;Any constraints that might fail&lt;/li&gt;
&lt;li&gt;Estimated execution time&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  File operations
&lt;/h3&gt;

&lt;p&gt;Scripts that move, rename, or delete files should preview the operations. I once watched a junior engineer accidentally delete a week of customer uploads because a cleanup script had no preview mode. That script has a &lt;code&gt;--dry-run&lt;/code&gt; flag now.&lt;/p&gt;

&lt;h3&gt;
  
  
  API integrations
&lt;/h3&gt;

&lt;p&gt;When your script calls external services - sending emails, posting to Slack, updating CRM records - a dry-run should log what would be sent without actually sending it. This is invaluable for testing integrations without spamming real systems.&lt;/p&gt;
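
&lt;p&gt;A minimal version of that pattern for an email integration might look like this - &lt;code&gt;send_fn&lt;/code&gt; stands in for whatever real transport you use:&lt;/p&gt;

```python
def notify(recipient, subject, body, send_fn, dry_run=False):
    """Send a message through send_fn, or just describe it in dry-run mode.

    send_fn is injected (an SMTP wrapper, an API client, ...) so the
    dry-run path can never touch the real service by accident.
    """
    summary = f"to={recipient} subject={subject!r} body={len(body)} chars"
    if dry_run:
        print(f"[dry-run] Would send email: {summary}")
        return None
    print(f"Sending email: {summary}")
    return send_fn(recipient, subject, body)
```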

&lt;h3&gt;
  
  
  Infrastructure changes
&lt;/h3&gt;

&lt;p&gt;Terraform popularised &lt;code&gt;plan&lt;/code&gt; before &lt;code&gt;apply&lt;/code&gt;. Ansible has &lt;code&gt;--check&lt;/code&gt; mode. Kubernetes has &lt;code&gt;--dry-run=client&lt;/code&gt;. These tools understood that showing the diff before making changes reduces incidents significantly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implementation patterns
&lt;/h2&gt;

&lt;p&gt;The simplest approach is a global flag that gates all side effects:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;delete_old_files&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;directory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dry_run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;files&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;find_files_older_than&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;directory&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;days&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="nb"&gt;file&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;dry_run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Would delete: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;remove&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Deleted: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nb"&gt;file&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Total: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;len&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;files&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; files &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;would be&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;dry_run&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="sh"&gt;''&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; deleted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For more complex scripts, consider a transaction-style approach where you collect all intended actions, display them, then execute only if not in dry-run mode:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;ActionPlan&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;__init__&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;actions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;add&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;execute_fn&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;((&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;execute_fn&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;preview&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;desc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;  - &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;desc&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;dry_run&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;dry_run&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Dry run - the following actions would be taken:&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;preview&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt;

        &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;desc&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;fn&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;self&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;actions&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Executing: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;desc&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="nf"&gt;fn&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  The hidden benefit: better logging
&lt;/h2&gt;

&lt;p&gt;Adding dry-run support forces you to think about what your script actually does. You cannot preview an action without first describing it clearly. This naturally improves your logging, error messages, and overall observability.&lt;/p&gt;

&lt;p&gt;Scripts with good dry-run output tend to have good production logging too. The same descriptions you write for preview mode become your audit trail.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common objections
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;"It adds complexity"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, but minimal complexity. A single boolean flag and some conditional prints. The alternative - running scripts blind and hoping for the best - creates far more complexity when things go wrong.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Our scripts are simple enough"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Until they are not. Adding dry-run early is trivial. Retrofitting it after an incident is embarrassing and often incomplete.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"We have staging environments"&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Staging helps, but it is not the same as previewing against production data. A dry-run against your actual database shows you what will really happen, not what would happen to synthetic test data.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making it the default
&lt;/h2&gt;

&lt;p&gt;I have started making &lt;code&gt;--dry-run&lt;/code&gt; the default for destructive scripts. You have to explicitly pass &lt;code&gt;--execute&lt;/code&gt; or &lt;code&gt;--no-dry-run&lt;/code&gt; to make changes. This inverts the safety model - accidents require extra effort.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;./cleanup-old-data.py
&lt;span class="nv"&gt;$ &lt;/span&gt;./cleanup-old-data.py &lt;span class="nt"&gt;--execute&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is particularly valuable for scripts that run via cron or automation. A misconfigured job that runs in dry-run mode by default produces logs instead of damage.&lt;/p&gt;
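
&lt;p&gt;In Python, inverting the default takes a few lines of &lt;code&gt;argparse&lt;/code&gt; - the &lt;code&gt;--execute&lt;/code&gt; flag name is just my convention, not anything standard:&lt;/p&gt;

```python
import argparse

def parse_args(argv=None):
    """Dry-run by default: changes happen only with an explicit --execute."""
    parser = argparse.ArgumentParser(description="Clean up old data")
    parser.add_argument("--execute", action="store_true",
                        help="actually perform changes (default: dry run)")
    args = parser.parse_args(argv)
    args.dry_run = not args.execute
    return args
```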

&lt;h2&gt;
  
  
  The small investment that pays dividends
&lt;/h2&gt;

&lt;p&gt;Henrik Warne added &lt;code&gt;--dry-run&lt;/code&gt; on a whim and found himself using it daily. That matches my experience. Once you have it, you use it constantly - before deployments, while debugging, when demonstrating to stakeholders, during incident response.&lt;/p&gt;

&lt;p&gt;The pattern is old. Subversion had it. rsync has had it for decades. But it remains underused in custom scripts and internal tools. Every automation you write that modifies state should have this escape hatch.&lt;/p&gt;

&lt;p&gt;Add the flag. Your future self will thank you.&lt;/p&gt;

&lt;p&gt;If you are building automation that touches production systems, dry-run is just one layer of defence. Pair it with &lt;a href="https://dev.to/blog/2026-01-08-ai-governance-controls"&gt;proper governance controls&lt;/a&gt; and a &lt;a href="https://dev.to/blog/2026-02-17-true-cost-technical-debt"&gt;solid engineering practice&lt;/a&gt; to keep technical debt from creeping in.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Inspired by &lt;a href="https://henrikwarne.com/2026/01/31/in-praise-of-dry-run/" rel="noopener noreferrer"&gt;Henrik Warne's post&lt;/a&gt; which is worth reading in full.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>engineering</category>
      <category>devops</category>
      <category>bestpractices</category>
    </item>
    <item>
      <title>Securing AI Agents: A Practical Guide for IT Leaders</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:19:00 +0000</pubDate>
      <link>https://dev.to/danieljglover/securing-ai-agents-a-practical-guide-for-it-leaders-37p</link>
      <guid>https://dev.to/danieljglover/securing-ai-agents-a-practical-guide-for-it-leaders-37p</guid>
      <description>&lt;p&gt;Securing AI agents is no longer a theoretical exercise - it is an immediate operational requirement. Following the &lt;a href="https://dev.to/blog/2026-01-28-clawdbot-ai-agent-security-risks"&gt;ClawdBot security concerns I outlined yesterday&lt;/a&gt;, I have had dozens of conversations with IT leaders asking the same question: "I understand the risks, but how do I actually secure this thing?"&lt;/p&gt;

&lt;p&gt;Fair question. Most security coverage has focused on what can go wrong without explaining what to do about it. This post bridges that gap. I run ClawdBot daily and have spent considerable time hardening my own deployment. Here is what I have learned about securing AI agents in practice.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why AI Agent Security Differs from Traditional Applications
&lt;/h2&gt;

&lt;p&gt;Before diving into specific controls, it is worth understanding why securing AI agents requires a different mental model than traditional application security.&lt;/p&gt;

&lt;p&gt;Conventional applications have predictable behaviour. A web server handles HTTP requests. A database stores and retrieves data. Their attack surfaces are well understood and their behaviours are deterministic.&lt;/p&gt;

&lt;p&gt;AI agents are fundamentally different. They make decisions autonomously based on natural language input. They interact with multiple external services. They maintain persistent state that influences future behaviour. Most importantly, their actions are not fully predictable - the same input might produce different outputs depending on context, memory, and the underlying model's reasoning.&lt;/p&gt;

&lt;p&gt;This non-determinism creates security challenges that traditional controls were not designed to address. You cannot simply firewall an AI agent because it legitimately needs broad access to function. You cannot audit every action because the agent generates thousands of micro-decisions. You cannot prevent all malicious input because the agent must process untrusted content to be useful.&lt;/p&gt;

&lt;p&gt;Securing AI agents requires defence in depth - multiple overlapping controls that together reduce risk to acceptable levels.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Five Layers of AI Agent Defence
&lt;/h2&gt;

&lt;p&gt;I have found it helpful to think about AI agent security across five distinct layers. Each layer addresses different threat categories, and weakness in one layer should be compensated by strength in others.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 1: Network Isolation
&lt;/h3&gt;

&lt;p&gt;The most fundamental control is limiting what your AI agent can reach. Jamieson O'Reilly's &lt;a href="https://www.theregister.com/2026/01/27/clawdbot_moltbot_security_concerns/" rel="noopener noreferrer"&gt;research on exposed ClawdBot instances&lt;/a&gt; demonstrated that hundreds of deployments were directly accessible from the public internet - a configuration that should never exist for a tool with this level of system access.&lt;/p&gt;

&lt;p&gt;At minimum, AI agents should run on an isolated network segment with explicit egress rules. My deployment sits behind a reverse proxy that requires authentication for any external access. The host machine itself has no direct internet exposure - all traffic routes through defined channels.&lt;/p&gt;

&lt;p&gt;For organisations, this means treating AI agent hosts like you would treat privileged access workstations. They should not share network space with general user devices. They should have monitored egress paths. They should absolutely not be reachable from the public internet without VPN authentication.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 2: Credential Segmentation
&lt;/h3&gt;

&lt;p&gt;The plaintext credential storage that &lt;a href="https://www.infostealers.com/article/clawdbot-the-new-primary-target-for-infostealers-in-the-ai-era/" rel="noopener noreferrer"&gt;Hudson Rock documented&lt;/a&gt; is a genuine concern. AI agents need credentials to function, but those credentials should be scoped, rotated, and monitored.&lt;/p&gt;

&lt;p&gt;My approach uses dedicated service accounts for everything the AI agent touches. These are not my personal credentials - they are purpose-created accounts with minimal permissions for specific tasks. My agent can read my calendar but cannot delete events. It can send emails but cannot modify forwarding rules. It can access specific files but not my entire filesystem.&lt;/p&gt;

&lt;p&gt;When possible, use short-lived tokens rather than persistent credentials. OAuth tokens that expire and require refresh are significantly less valuable if stolen than static API keys. Where static credentials are unavoidable, store them in a secrets manager rather than in the agent's configuration files.&lt;/p&gt;

&lt;p&gt;The goal is ensuring that even if the agent is compromised, the attacker gains access to limited, auditable, revocable permissions rather than unfettered access to everything.&lt;/p&gt;
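
&lt;p&gt;The shape of that approach is easy to sketch: resolve credentials at start-up from the environment or a secrets manager, never from the agent's config directory, and track expiry so refresh is routine. This illustration uses only the environment - a real deployment would call your secrets manager's SDK instead, and the function names here are made up:&lt;/p&gt;

```python
import os
import time

def load_token(name, default_ttl=3600):
    """Fetch a credential from the environment with an expiry attached.

    Returns (token, expires_at). Swap the os.environ lookup for your
    secrets manager's client in a real deployment.
    """
    token = os.environ.get(name)
    if token is None:
        raise RuntimeError(f"credential {name} not provisioned")
    return token, time.time() + default_ttl

def needs_refresh(expires_at, margin=60):
    """True once the token is inside the refresh margin."""
    return time.time() + margin >= expires_at
```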

&lt;h3&gt;
  
  
  Layer 3: Execution Boundaries
&lt;/h3&gt;

&lt;p&gt;AI agents can execute code and shell commands. This is what makes them powerful. It is also what makes them dangerous if those capabilities are not bounded.&lt;/p&gt;

&lt;p&gt;ClawdBot and similar tools support command allowlists - explicit definitions of what commands the agent may execute. This is essential. Without allowlists, a prompt injection attack could instruct the agent to execute arbitrary shell commands on the host system.&lt;/p&gt;

&lt;p&gt;My configuration uses a strict allowlist that permits only specific, vetted commands. The agent can run &lt;code&gt;git status&lt;/code&gt; but not &lt;code&gt;rm -rf&lt;/code&gt;. It can invoke specific scripts I have written but not arbitrary code. Any command outside the allowlist requires explicit approval before execution.&lt;/p&gt;
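
&lt;p&gt;The core of an allowlist check is straightforward. A simplified sketch - real agent frameworks each have their own configuration format, and the commands here are examples:&lt;/p&gt;

```python
import shlex

# Example allowlist: the first token of the command must match exactly.
# Deliberately crude - a production check would also vet subcommands
# and arguments, not just the executable name.
ALLOWED_COMMANDS = {"git", "ls", "cat"}

def is_allowed(command):
    """Return True only if the command's executable is on the allowlist."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS
```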

&lt;p&gt;Beyond command restrictions, consider sandbox isolation for the agent runtime. Running the agent in a container or VM provides an additional boundary that limits blast radius if other controls fail. Even if an attacker achieves code execution within the sandbox, they face another barrier before reaching the host system or network.&lt;/p&gt;

&lt;h3&gt;
  
  
  Layer 4: Input Validation and Filtering
&lt;/h3&gt;

&lt;p&gt;Prompt injection remains the most discussed attack vector for AI systems, and &lt;a href="https://prompt.security/blog/what-clawdbots-virality-reveals-about-the-risks-of-agentic-ai" rel="noopener noreferrer"&gt;agentic deployments&lt;/a&gt; make it particularly dangerous. When an agent processes untrusted input while retaining execution privileges, that input can influence behaviour in unexpected ways.&lt;/p&gt;

&lt;p&gt;Complete prevention of prompt injection is not currently possible - it is a fundamental challenge with how large language models process instructions. What you can do is reduce exposure and limit consequences.&lt;/p&gt;

&lt;p&gt;First, minimise processing of untrusted content. If your agent does not need to summarise arbitrary web pages, do not give it that capability. Every external data source is a potential injection vector.&lt;/p&gt;

&lt;p&gt;Second, implement output filtering for sensitive operations. Before the agent sends an email or executes a command, have it explain what it is about to do. This creates a natural checkpoint that makes manipulation more difficult and more detectable.&lt;/p&gt;

&lt;p&gt;Third, use separate contexts for different trust levels. My agent processes my direct messages with full permissions but handles external content in a restricted mode that cannot trigger privileged actions.&lt;/p&gt;
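
&lt;p&gt;One way to make that separation concrete is to tag every input with its source and gate privileged actions accordingly. A simplified sketch - the sources and action names are illustrative, not any particular framework's API:&lt;/p&gt;

```python
# Actions that can change state outside the agent's sandbox.
PRIVILEGED_ACTIONS = {"send_email", "run_command", "modify_files"}

def permitted_actions(source):
    """Map an input source to the actions the agent may take for it."""
    if source == "owner":  # direct messages from the operator
        return PRIVILEGED_ACTIONS | {"summarise", "search"}
    # External content (web pages, inbound email) gets read-only handling.
    return {"summarise", "search"}
```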

&lt;h3&gt;
  
  
  Layer 5: Monitoring and Anomaly Detection
&lt;/h3&gt;

&lt;p&gt;No security control is perfect. The final layer is detecting when something has gone wrong.&lt;/p&gt;

&lt;p&gt;AI agents generate extensive logs - every interaction, every decision, every action. These logs are security telemetry. A sudden spike in API calls, unusual command execution patterns, or unexpected network connections may indicate compromise.&lt;/p&gt;

&lt;p&gt;I export my agent's activity logs to a centralised monitoring system that alerts on anomalies. This caught an issue last month where a misconfigured skill was making excessive API calls - not malicious, but exactly the pattern a compromised agent might exhibit.&lt;/p&gt;

&lt;p&gt;For organisations, integrate AI agent monitoring into your existing SIEM infrastructure. The logs are there. Use them.&lt;/p&gt;
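
&lt;p&gt;Even a crude volume check catches that pattern. A sketch that flags when recent activity runs well above an earlier baseline - the window and multiplier are illustrative and need tuning against your own telemetry:&lt;/p&gt;

```python
def is_anomalous(call_counts, window=5, factor=3.0):
    """Flag a spike: recent call volume far above the earlier baseline.

    call_counts is API calls per interval, oldest first. The window
    and factor are illustrative starting points, not recommendations.
    """
    if window >= len(call_counts):
        return False  # not enough history to form a baseline
    recent = call_counts[-window:]
    baseline = call_counts[:-window]
    baseline_avg = max(sum(baseline) / len(baseline), 1.0)
    recent_avg = sum(recent) / len(recent)
    return recent_avg > factor * baseline_avg
```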




&lt;h2&gt;
  
  
  Practical Hardening Checklist
&lt;/h2&gt;

&lt;p&gt;Based on the layered defence model, here are specific actions to secure an AI agent deployment:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Network Controls&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Never expose the agent control interface directly to the internet&lt;/li&gt;
&lt;li&gt;Require VPN or zero-trust access for remote management&lt;/li&gt;
&lt;li&gt;Implement egress filtering to known-good destinations&lt;/li&gt;
&lt;li&gt;Monitor for unexpected outbound connections&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Authentication and Access&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use dedicated service accounts with minimal permissions&lt;/li&gt;
&lt;li&gt;Enable multi-factor authentication on the management interface&lt;/li&gt;
&lt;li&gt;Rotate credentials on a defined schedule&lt;/li&gt;
&lt;li&gt;Audit which services have been granted access&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Execution Boundaries&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Enable command allowlisting and review it regularly&lt;/li&gt;
&lt;li&gt;Run the agent in a container or isolated VM where possible&lt;/li&gt;
&lt;li&gt;Disable capabilities you do not actively use&lt;/li&gt;
&lt;li&gt;Require approval for sensitive operations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Data Protection&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encrypt configuration files at rest where supported&lt;/li&gt;
&lt;li&gt;Do not store production credentials in memory files&lt;/li&gt;
&lt;li&gt;Regularly purge conversation logs containing sensitive data&lt;/li&gt;
&lt;li&gt;Back up agent state to detect unauthorised modifications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Monitoring&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Export activity logs to centralised monitoring&lt;/li&gt;
&lt;li&gt;Alert on unusual patterns - volume, timing, destinations&lt;/li&gt;
&lt;li&gt;Review agent actions periodically, not just when problems occur&lt;/li&gt;
&lt;li&gt;Test your detection capabilities with benign anomalies&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  When an AI Agent Is the Wrong Choice
&lt;/h2&gt;

&lt;p&gt;Not every use case justifies the security overhead of an AI agent. Before deploying one, honestly assess whether the productivity benefits outweigh the risks.&lt;/p&gt;

&lt;p&gt;AI agents make sense when you need autonomous action across multiple services and have the infrastructure to secure them properly. They make less sense when simpler automation would suffice or when the data involved is particularly sensitive.&lt;/p&gt;

&lt;p&gt;If your AI agent would require access to financial systems, health records, or credentials for critical infrastructure, the risk calculus changes significantly. In those scenarios, the controls required to secure the agent may exceed the effort the agent would save.&lt;/p&gt;

&lt;p&gt;I use my agent for email triage, research, and task coordination - valuable but not catastrophic if compromised. I do not give it access to production systems, financial accounts, or anything where a security incident would cause material harm.&lt;/p&gt;

&lt;p&gt;As I discussed in my piece on &lt;a href="https://dev.to/blog/2026-01-08-ai-governance-controls"&gt;AI governance controls&lt;/a&gt;, the key is matching the tool to the risk tolerance. AI agents are powerful. Power requires proportionate controls.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Path Forward
&lt;/h2&gt;

&lt;p&gt;Securing AI agents is not a solved problem. The tools are evolving rapidly. The threat landscape is shifting as attackers recognise the value of these systems. Best practices will continue to develop.&lt;/p&gt;

&lt;p&gt;What we can do today is apply sound security principles to this new tool category. Isolate networks. Segment credentials. Bound execution. Filter input. Monitor everything.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://dev.to/blog/2026-01-27-ai-agents-insider-threat"&gt;AI agents as insider threat&lt;/a&gt; framing is useful here. Treat your AI agent like a new employee with broad access - trust but verify, grant minimum necessary permissions, and maintain visibility into their actions.&lt;/p&gt;

&lt;p&gt;Done well, AI agents can be both powerful and secure. Done poorly, they become exactly the attack surface that security researchers have been warning about. The choice is in the implementation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Need a production-safe AI agent rollout plan?
&lt;/h2&gt;

&lt;p&gt;Most teams do not need another generic AI policy. They need concrete boundaries around tools, credentials, network reach, approvals, and monitoring before the first agent quietly acquires too much access. My &lt;a href="https://dev.to/services/security-consulting"&gt;security consulting services&lt;/a&gt; and &lt;a href="https://dev.to/services/technical-consulting"&gt;technical consulting support&lt;/a&gt; focus on that operational layer.&lt;/p&gt;

&lt;p&gt;If you are evaluating agent deployment more broadly, pair this with my articles on &lt;a href="https://dev.to/blog/2026-01-08-ai-governance-controls"&gt;AI governance controls&lt;/a&gt; and &lt;a href="https://dev.to/blog/2026-01-27-ai-agents-insider-threat"&gt;AI agents as an insider threat&lt;/a&gt; so policy, architecture, and day-to-day controls line up.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;If you want help stress-testing an AI agent design before it reaches production, &lt;a href="https://dev.to/contact"&gt;book a free consultation&lt;/a&gt; and I will help you identify the highest-risk gaps first.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>agents</category>
    </item>
    <item>
      <title>Shadow AI Governance Crisis: The Uncontrolled AI Tool Threat</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:18:41 +0000</pubDate>
      <link>https://dev.to/danieljglover/shadow-ai-governance-crisis-the-uncontrolled-ai-tool-threat-k7e</link>
      <guid>https://dev.to/danieljglover/shadow-ai-governance-crisis-the-uncontrolled-ai-tool-threat-k7e</guid>
      <description>&lt;p&gt;&lt;em&gt;This is Part 2 of a 7-part series on Business AI Enablement for IT Leaders. The series covers &lt;a href="https://dev.to/blog/2026-01-03-business-ai-enablement-matters"&gt;why enablement matters&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-05-ai-enablement-framework"&gt;building an enablement framework&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-06-ai-training-employee-skills"&gt;employee training&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-07-selecting-ai-tools-business"&gt;tool selection&lt;/a&gt;, &lt;a href="https://dev.to/blog/2026-01-08-ai-governance-controls"&gt;governance controls&lt;/a&gt;, and concludes with a &lt;a href="https://dev.to/blog/2026-01-09-ai-enablement-roadmap"&gt;90-day implementation roadmap&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;




&lt;p&gt;There is an AI revolution happening inside your organisation. You probably cannot see it.&lt;/p&gt;

&lt;p&gt;According to &lt;a href="https://www.cbiz.com/insights/article/how-shadow-ai-compromises-security-and-raises-risk-and-what-to-do-about-it" rel="noopener noreferrer"&gt;Cisco's 2025 research&lt;/a&gt;, approximately 60% of organisations feel they may be unable to identify shadow AI usage. This is not a failure of security tools. It is a fundamental visibility gap created by how employees access AI - through personal accounts, browser extensions, and mobile applications that never touch corporate infrastructure.&lt;/p&gt;

&lt;p&gt;Shadow AI is the natural successor to shadow IT. But where shadow IT typically involved file sharing or project management tools, shadow AI involves systems that process sensitive information, generate business content, and increasingly make recommendations that influence decisions.&lt;/p&gt;

&lt;p&gt;The stakes are higher. The visibility is worse. And the problem is growing faster than most IT leaders realise.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Invisible AI Revolution
&lt;/h2&gt;

&lt;p&gt;Shadow AI has reached a scale that should concern every technology leader. &lt;a href="https://www.cybersecuritydive.com/news/shadow-ai-security-risks-netskope/808860/" rel="noopener noreferrer"&gt;Netskope research&lt;/a&gt; found that more than 73% of work-related ChatGPT queries were processed using accounts not approved for corporate use. Employees are not just experimenting with AI - they are integrating it into daily work through channels IT cannot monitor.&lt;/p&gt;

&lt;p&gt;The speed of growth is remarkable. In sectors like healthcare, manufacturing, and financial services, shadow AI tool usage surged more than 200% year over year according to Zendesk's CX Trends 2025 report. This is not a gradual adoption curve. It is a flood.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How shadow AI enters organisations:&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Entry Point&lt;/th&gt;
      &lt;th&gt;Visibility&lt;/th&gt;
      &lt;th&gt;Common Examples&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;&lt;td&gt;Personal accounts on corporate devices&lt;/td&gt;&lt;td&gt;None to minimal&lt;/td&gt;&lt;td&gt;ChatGPT Plus, Claude Pro, Gemini Advanced&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Browser extensions&lt;/td&gt;&lt;td&gt;None without endpoint monitoring&lt;/td&gt;&lt;td&gt;AI writing assistants, grammar tools with AI&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Mobile applications&lt;/td&gt;&lt;td&gt;None on personal devices&lt;/td&gt;&lt;td&gt;AI chatbots, voice assistants, productivity apps&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;Embedded AI in approved tools&lt;/td&gt;&lt;td&gt;Partial&lt;/td&gt;&lt;td&gt;AI features in email, documents, CRM systems&lt;/td&gt;&lt;/tr&gt;
    &lt;tr&gt;&lt;td&gt;API integrations by power users&lt;/td&gt;&lt;td&gt;Varies&lt;/td&gt;&lt;td&gt;Custom scripts, Zapier automations, low-code tools&lt;/td&gt;&lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;The challenge is that each entry point has different visibility characteristics. Corporate-managed devices might reveal browser extension usage, but only if endpoint monitoring is configured for it. Mobile devices used for work bypass corporate controls entirely.&lt;/p&gt;

&lt;p&gt;The result is that IT leaders are making governance decisions based on incomplete information about actual AI usage patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Shadow AI Hides
&lt;/h2&gt;

&lt;p&gt;Understanding where shadow AI concentrates helps prioritise governance efforts. Different business functions have different AI use cases - and different risk profiles.&lt;/p&gt;

&lt;h3&gt;
  
  
  Marketing and Communications
&lt;/h3&gt;

&lt;p&gt;Marketing teams adopted generative AI faster than almost any other function. The use cases are obvious: content creation, social media posts, email campaigns, ad copy. Shadow AI in marketing typically involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Content drafting and editing with ChatGPT or Claude&lt;/li&gt;
&lt;li&gt;Image generation for campaigns&lt;/li&gt;
&lt;li&gt;Competitive analysis from AI-summarised research&lt;/li&gt;
&lt;li&gt;Customer persona development&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk profile:&lt;/strong&gt; Moderate. The primary risks are brand inconsistency, factual errors in public content, and potential copyright issues with AI-generated material.&lt;/p&gt;

&lt;h3&gt;
  
  
  Analytics and Business Intelligence
&lt;/h3&gt;

&lt;p&gt;Data analysts discovered that AI could accelerate insight generation significantly. Shadow AI in analytics includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Natural language queries against datasets&lt;/li&gt;
&lt;li&gt;Automated report narrative generation&lt;/li&gt;
&lt;li&gt;Pattern identification in unstructured data&lt;/li&gt;
&lt;li&gt;Code generation for analysis scripts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk profile:&lt;/strong&gt; High. Analysts often work with sensitive business data. Uploading datasets to public AI tools creates substantial data leakage risk.&lt;/p&gt;

&lt;h3&gt;
  
  
  Customer Service and Support
&lt;/h3&gt;

&lt;p&gt;Frontline support staff use AI to handle volume and complexity. Common shadow uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Drafting customer responses&lt;/li&gt;
&lt;li&gt;Summarising case history&lt;/li&gt;
&lt;li&gt;Troubleshooting assistance&lt;/li&gt;
&lt;li&gt;Translation for international customers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk profile:&lt;/strong&gt; High. Customer data frequently flows through these interactions. Regulatory implications vary by industry but are significant in financial services and healthcare.&lt;/p&gt;

&lt;h3&gt;
  
  
  Software Development
&lt;/h3&gt;

&lt;p&gt;Developers were early adopters of AI coding assistants. Shadow AI in development involves:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Code generation and completion&lt;/li&gt;
&lt;li&gt;Debugging assistance&lt;/li&gt;
&lt;li&gt;Documentation creation&lt;/li&gt;
&lt;li&gt;Code review and refactoring suggestions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk profile:&lt;/strong&gt; Critical. Proprietary code and system architecture details may be exposed. As I explored in my analysis of &lt;a href="https://dev.to/blog/2025-12-15-vibe-coding-security"&gt;vibe coding security&lt;/a&gt;, AI-generated code also introduces security vulnerabilities if not properly reviewed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Human Resources
&lt;/h3&gt;

&lt;p&gt;HR teams use AI for tasks ranging from job descriptions to policy drafting. Shadow uses include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing and improving job postings&lt;/li&gt;
&lt;li&gt;Drafting employee communications&lt;/li&gt;
&lt;li&gt;Performance review assistance&lt;/li&gt;
&lt;li&gt;Policy document creation&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk profile:&lt;/strong&gt; High. Employee data is sensitive, and AI-assisted hiring decisions may have legal implications around bias and discrimination.&lt;/p&gt;

&lt;h3&gt;
  
  
  Finance and Procurement
&lt;/h3&gt;

&lt;p&gt;Finance teams leverage AI for analysis and documentation. Shadow applications:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Financial report drafting&lt;/li&gt;
&lt;li&gt;Contract review and summarisation&lt;/li&gt;
&lt;li&gt;Vendor research and comparison&lt;/li&gt;
&lt;li&gt;Budget modelling assistance&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk profile:&lt;/strong&gt; Critical. Financial data and contract terms are highly sensitive. Errors in AI-assisted financial analysis could have material business impact.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Risks of Unmanaged AI
&lt;/h2&gt;

&lt;p&gt;Shadow AI creates risks that compound over time. The longer unmanaged AI operates, the more these risks accumulate.&lt;/p&gt;

&lt;h3&gt;
  
  
  Data Leakage
&lt;/h3&gt;

&lt;p&gt;Every prompt sent to a public AI service potentially exposes information. For consumer-grade AI tools, this data may be used for model training, stored for extended periods, or accessible to the AI provider's employees.&lt;/p&gt;

&lt;p&gt;Consider what employees routinely share with AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Customer names and account details&lt;/li&gt;
&lt;li&gt;Proprietary business strategies&lt;/li&gt;
&lt;li&gt;Unreleased product information&lt;/li&gt;
&lt;li&gt;Employee performance data&lt;/li&gt;
&lt;li&gt;Financial projections and results&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A single employee pasting customer data into ChatGPT may not seem catastrophic. But multiply that by hundreds of employees across months of usage, and the aggregate exposure becomes substantial.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://cybernews.com/ai-news/bring-your-own-ai-rise-shadow-ai-workplace/" rel="noopener noreferrer"&gt;Samsung incident in 2023&lt;/a&gt; - where employees inadvertently exposed proprietary source code through ChatGPT - demonstrated how quickly a shadow AI problem can become a security incident. The company subsequently banned ChatGPT entirely, a reactive measure that created its own productivity costs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Compliance Violations
&lt;/h3&gt;

&lt;p&gt;Regulatory frameworks increasingly address AI specifically. The EU AI Act, with full enforcement in 2026, creates obligations that shadow AI directly undermines:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Documentation requirements:&lt;/strong&gt; Organisations must document AI systems used for certain purposes. Shadow AI is undocumented by definition.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Risk assessment obligations:&lt;/strong&gt; High-risk AI applications require assessment. Shadow AI bypasses this entirely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data protection integration:&lt;/strong&gt; GDPR requirements apply to data processed by AI. Shadow AI often processes personal data without appropriate safeguards.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For organisations in regulated industries, the compliance risk is acute. Healthcare organisations using AI for any patient-related purpose face HIPAA implications. Financial services firms face supervisory scrutiny of AI in customer interactions and decision-making.&lt;/p&gt;

&lt;h3&gt;
  
  
  Quality and Consistency Failures
&lt;/h3&gt;

&lt;p&gt;AI outputs require verification. Without training and governance, employees often accept AI outputs uncritically. This creates quality risks:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Factual errors:&lt;/strong&gt; AI confidently generates incorrect information. Without verification processes, these errors propagate into business documents, customer communications, and decisions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inconsistency:&lt;/strong&gt; Different employees using different AI tools produce inconsistent outputs. Brand voice varies. Data interpretations differ. Customer experiences diverge.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hallucinations in critical contexts:&lt;/strong&gt; AI fabricating citations, statistics, or precedents can have serious consequences in legal, financial, or customer-facing contexts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Security Vulnerabilities
&lt;/h3&gt;

&lt;p&gt;Shadow AI creates security exposure beyond data leakage:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Credential exposure:&lt;/strong&gt; Employees may share API keys, passwords, or access tokens with AI to get assistance. These credentials become exposed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Malicious output:&lt;/strong&gt; AI can be manipulated to produce harmful code, phishing content, or misleading information. Without security awareness training, employees may not recognise these risks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Supply chain risk:&lt;/strong&gt; Browser extensions and third-party AI integrations may themselves be security risks, operating with permissions that exceed their apparent function.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Why Employees Turn to Shadow AI
&lt;/h2&gt;

&lt;p&gt;Understanding why employees bypass official channels is essential for solving the problem. Blame is counterproductive. Employees using shadow AI are typically trying to do their jobs better.&lt;/p&gt;

&lt;h3&gt;
  
  
  Legitimate Options Are Missing
&lt;/h3&gt;

&lt;p&gt;The most common driver of shadow AI is simply that organisations have not provided approved alternatives. When IT has no official AI tools available - or the approval process takes months - employees find their own solutions.&lt;/p&gt;

&lt;p&gt;This is particularly acute when employees see competitors or peers at other companies using AI effectively. The productivity gap creates pressure to find solutions regardless of official policy.&lt;/p&gt;

&lt;h3&gt;
  
  
  Approved Tools Do Not Meet Needs
&lt;/h3&gt;

&lt;p&gt;Sometimes organisations have approved AI tools, but they do not serve the use cases employees actually need. A customer service AI that cannot help with marketing content creation pushes marketing teams toward shadow alternatives.&lt;/p&gt;

&lt;h3&gt;
  
  
  Friction Is Too High
&lt;/h3&gt;

&lt;p&gt;Even when appropriate tools exist, friction kills adoption. If using the approved AI requires multiple approvals, VPN connections, or cumbersome interfaces, employees will gravitate toward the consumer tool that works immediately.&lt;/p&gt;

&lt;h3&gt;
  
  
  Employees Do Not Know Alternatives Exist
&lt;/h3&gt;

&lt;p&gt;Poor communication about approved tools drives shadow AI as much as poor tools themselves. Employees may not know what is available, how to access it, or what use cases it supports.&lt;/p&gt;

&lt;h3&gt;
  
  
  Fear of Appearing Incompetent
&lt;/h3&gt;

&lt;p&gt;Some employees hide AI use because they fear it will be seen as cheating or a sign they cannot do their jobs. This creates a particularly insidious form of shadow AI where employees actively conceal their usage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Gaining Visibility Without Surveillance
&lt;/h2&gt;

&lt;p&gt;The immediate reaction to shadow AI is often surveillance: deploy monitoring tools, scan network traffic, inspect browser history. This approach has severe limitations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical constraints:&lt;/strong&gt; Much shadow AI occurs through personal devices, personal accounts, and encrypted connections. Traditional monitoring cannot see what it cannot access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cultural damage:&lt;/strong&gt; Aggressive surveillance destroys trust. Employees who feel monitored become less likely to engage transparently with IT - the opposite of what you need for effective AI governance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;False precision:&lt;/strong&gt; Even comprehensive monitoring only shows tool usage, not the content or risk level of that usage. An employee querying ChatGPT about lunch options looks the same in logs as one uploading customer data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Better approaches:&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Anonymous Usage Surveys
&lt;/h3&gt;

&lt;p&gt;Ask employees directly what AI tools they use and for what purposes. Anonymous surveys generate more honest responses than identifiable ones. The goal is understanding patterns, not identifying individuals.&lt;/p&gt;

&lt;h3&gt;
  
  
  Network-Level Discovery
&lt;/h3&gt;

&lt;p&gt;While you cannot inspect encrypted content, you can identify connections to known AI services. This provides aggregate usage data without content surveillance. Useful for understanding scale, not individual behaviour.&lt;/p&gt;
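&lt;p&gt;A minimal sketch of that aggregate approach: count hits against a watch-list of AI service hostnames in proxy log lines, deliberately ignoring who connected. The domain list and log format here are illustrative assumptions - build your own list from current service inventories:&lt;/p&gt;

```python
import re
from collections import Counter

# Hypothetical starter watch-list - extend with services relevant to you.
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "api.openai.com", "api.anthropic.com",
}

def summarise_ai_traffic(proxy_log_lines):
    """Aggregate hits to known AI services from proxy log lines.

    Assumes destination hostnames appear somewhere in each line.
    Returns counts per service - scale data, not individual behaviour.
    """
    hits = Counter()
    host_pattern = re.compile(r"[\w.-]+\.[a-z]{2,}")
    for line in proxy_log_lines:
        for host in host_pattern.findall(line.lower()):
            if host in KNOWN_AI_DOMAINS:
                hits[host] += 1
    return dict(hits)
```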

&lt;h3&gt;
  
  
  Expense Analysis
&lt;/h3&gt;

&lt;p&gt;Many employees expense AI subscriptions. Expense reports reveal shadow AI spend that licensing reviews miss.&lt;/p&gt;
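&lt;p&gt;This lends itself to a simple keyword scan over an expense export. The vendor keywords and field names below are assumptions to adapt to your own expense system:&lt;/p&gt;

```python
# Hypothetical vendor keywords - expand from your own supplier data.
AI_VENDOR_KEYWORDS = ("openai", "chatgpt", "anthropic", "claude",
                      "midjourney", "jasper", "copilot")

def find_ai_subscriptions(expense_rows):
    """Return expense rows whose description mentions a known AI vendor.

    `expense_rows` is a list of dicts with 'description' and 'amount'
    keys - adapt the field names to your expense system's export.
    """
    flagged = []
    for row in expense_rows:
        description = row.get("description", "").lower()
        if any(keyword in description for keyword in AI_VENDOR_KEYWORDS):
            flagged.append(row)
    return flagged
```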

&lt;h3&gt;
  
  
  Access Log Analysis
&lt;/h3&gt;

&lt;p&gt;For corporate-managed identity systems, authentication logs to AI services reveal usage patterns. This works for services that support SSO, though it misses employees using personal accounts.&lt;/p&gt;

&lt;h3&gt;
  
  
  Open Dialogue
&lt;/h3&gt;

&lt;p&gt;Often the most effective approach is simply asking. Town halls, team meetings, and informal conversations can surface shadow AI usage when employees feel safe discussing it.&lt;/p&gt;

&lt;p&gt;The goal of visibility is not punishment. It is understanding current state so you can design solutions that actually address employee needs. This is particularly important when &lt;a href="https://dev.to/blog/2026-02-10-ai-is-eating-software"&gt;AI is reshaping how software itself gets built&lt;/a&gt; - the governance challenge extends well beyond chat-based AI tools into the development pipeline itself.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 32% Control Gap
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://www.cybersecuritydive.com/news/shadow-ai-security-risks-netskope/808860/" rel="noopener noreferrer"&gt;Netskope research&lt;/a&gt; found that only 32% of organisations have formal controls in place for AI usage. This control gap explains much of the shadow AI problem.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What formal controls typically include:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Acceptable use policies specific to AI&lt;/li&gt;
&lt;li&gt;Data classification guidance for AI inputs&lt;/li&gt;
&lt;li&gt;Approved tool catalogues with access provisioning&lt;/li&gt;
&lt;li&gt;Training requirements before AI access&lt;/li&gt;
&lt;li&gt;Monitoring and audit capabilities&lt;/li&gt;
&lt;li&gt;Incident response procedures for AI-related issues&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Why controls lag:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Speed of adoption:&lt;/strong&gt; AI adoption outpaced governance development at most organisations&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Uncertainty about risk:&lt;/strong&gt; Without clear risk frameworks, governance teams hesitated&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lack of ownership:&lt;/strong&gt; AI governance often falls between IT, security, legal, and business - and belongs clearly to none&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Competing priorities:&lt;/strong&gt; Other security and compliance priorities consumed governance capacity&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;What happens without controls:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Without formal controls, AI governance becomes ad hoc. Different business units develop different practices. Inconsistent risk management creates compliance gaps. Employees lack clear guidance on appropriate use.&lt;/p&gt;

&lt;p&gt;The 32% figure is not just a governance gap. It is a value gap. Organisations without controls cannot systematically improve AI effectiveness because they lack the feedback loops that governance provides.&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Reference: Shadow AI Discovery Checklist
&lt;/h2&gt;

&lt;p&gt;Use these steps to assess shadow AI in your organisation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Baseline Assessment:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Survey employees anonymously about AI tool usage&lt;/li&gt;
&lt;li&gt;[ ] Analyse network traffic for connections to known AI services&lt;/li&gt;
&lt;li&gt;[ ] Review expense reports for AI-related subscriptions&lt;/li&gt;
&lt;li&gt;[ ] Audit authentication logs for AI service access&lt;/li&gt;
&lt;li&gt;[ ] Interview business unit leaders about team AI practices&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Risk Identification:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Map shadow AI by department and use case&lt;/li&gt;
&lt;li&gt;[ ] Identify data types flowing through shadow channels&lt;/li&gt;
&lt;li&gt;[ ] Assess regulatory implications by usage type&lt;/li&gt;
&lt;li&gt;[ ] Evaluate security exposure from identified tools&lt;/li&gt;
&lt;li&gt;[ ] Prioritise risks by likelihood and impact&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Root Cause Analysis:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Document why employees chose shadow tools over alternatives&lt;/li&gt;
&lt;li&gt;[ ] Identify gaps in official tool offerings&lt;/li&gt;
&lt;li&gt;[ ] Assess friction in approved tool access&lt;/li&gt;
&lt;li&gt;[ ] Review communication about available resources&lt;/li&gt;
&lt;li&gt;[ ] Understand cultural factors driving concealment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Immediate Actions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Address highest-risk shadow AI with interim controls&lt;/li&gt;
&lt;li&gt;[ ] Communicate current policies clearly to all employees&lt;/li&gt;
&lt;li&gt;[ ] Establish safe reporting channel for AI concerns&lt;/li&gt;
&lt;li&gt;[ ] Begin planning for comprehensive enablement programme&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Shadow AI discovery is not a one-time exercise. As new AI tools emerge and employee needs evolve, regular reassessment is essential.&lt;/p&gt;

&lt;h2&gt;
  
  
  From Visibility to Action
&lt;/h2&gt;

&lt;p&gt;Discovering shadow AI is only the first step. The insight is valuable only if it drives action.&lt;/p&gt;

&lt;p&gt;The temptation is to respond with restrictions - blocking services, banning tools, enforcing compliance. This approach fails for the same reasons that drove shadow AI in the first place. Employees have needs that AI addresses. Blocking tools does not eliminate those needs.&lt;/p&gt;

&lt;p&gt;The effective response is enablement that makes shadow AI unnecessary. Provide approved tools that meet employee needs. Offer training that builds confidence. Implement governance that enables rather than restricts.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/blog/2026-01-05-ai-enablement-framework"&gt;Part 3&lt;/a&gt; provides the framework for this enablement approach - a structured method for building the access, training, governance, and support that make shadow AI obsolete.&lt;/p&gt;

&lt;p&gt;The goal is not zero shadow AI. Some experimentation with new tools will always occur, and that experimentation often identifies valuable capabilities. The goal is reducing shadow AI to the point where visibility is achievable and risks are manageable.&lt;/p&gt;

&lt;p&gt;Organisations that achieve this balance gain the benefits of AI adoption without the accumulating risks of unmanaged usage.&lt;/p&gt;




&lt;h2&gt;
  
  
  Addressing Shadow AI in Your Organisation
&lt;/h2&gt;

&lt;p&gt;Understanding your shadow AI landscape is the first step toward effective governance. My &lt;a href="https://dev.to/services/it-compliance"&gt;IT compliance services&lt;/a&gt; help organisations assess current AI usage, identify risks, and develop governance frameworks that enable rather than restrict.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/contact"&gt;Get in touch&lt;/a&gt;&lt;/strong&gt; to discuss how to gain visibility into AI usage and build controls that actually work.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Previous: &lt;a href="https://dev.to/blog/2026-01-03-business-ai-enablement-matters"&gt;Part 1 - Why Business AI Enablement Matters Now&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Next: &lt;a href="https://dev.to/blog/2026-01-05-ai-enablement-framework"&gt;Part 3 - Building Your AI Enablement Framework&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>governance</category>
      <category>security</category>
    </item>
    <item>
      <title>The True Cost of Technical Debt: A Framework for IT Leaders</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:18:27 +0000</pubDate>
      <link>https://dev.to/danieljglover/the-true-cost-of-technical-debt-a-framework-for-it-leaders-b2a</link>
      <guid>https://dev.to/danieljglover/the-true-cost-of-technical-debt-a-framework-for-it-leaders-b2a</guid>
      <description>&lt;p&gt;Every IT leader knows the feeling. You are staring at a system that works - technically - but every change takes three times longer than it should. Deployments that should take minutes take hours. New starters spend their first week trying to understand why the codebase has four different ways of doing the same thing. And somewhere in the back of your mind, you know that one critical integration is held together by a script nobody fully understands.&lt;/p&gt;

&lt;p&gt;That is technical debt. And if you cannot put a number on it, you will never get the budget to fix it.&lt;/p&gt;

&lt;p&gt;I have spent years managing IT infrastructure and development teams supporting over 250 users at an e-commerce company. Along the way, I have learned that the biggest barrier to tackling technical debt is not technical - it is communication. Engineers understand the problem intuitively. The board needs it in pounds and pence.&lt;/p&gt;

&lt;p&gt;Here is the framework I use to measure technical debt, prioritise what to fix first, and build a business case that actually gets approved.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Technical Debt Really Is (and Is Not)
&lt;/h2&gt;

&lt;p&gt;Ward Cunningham coined the term "technical debt" in 1992 as a metaphor for the shortcuts teams take to ship faster. Like financial debt, it accrues interest - every future change costs more because of the shortcuts taken earlier.&lt;/p&gt;

&lt;p&gt;But not all technical debt is bad. Just as a mortgage lets you buy a house before you have saved the full price, deliberate technical debt lets you ship features quickly when speed matters. The problem is &lt;strong&gt;unmanaged&lt;/strong&gt; debt - the kind that accumulates silently until it becomes a crisis.&lt;/p&gt;

&lt;p&gt;In my experience, technical debt falls into four categories:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Deliberate and prudent&lt;/strong&gt; - "We know this is not ideal, but shipping this week is worth more than perfection next month." This is strategic debt with a clear payoff.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deliberate and reckless&lt;/strong&gt; - "We do not have time for tests." This is corner-cutting that always costs more later.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inadvertent and prudent&lt;/strong&gt; - "Now we know what we should have built." This is the natural result of learning and is unavoidable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inadvertent and reckless&lt;/strong&gt; - "What is a design pattern?" This comes from lack of skill or oversight and is the most expensive to fix.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Understanding which type you are dealing with changes how you prioritise and communicate the problem. I explored the strategic mindset shift required in my earlier piece on &lt;a href="https://dev.to/blog/2025-12-20-reframing-tech-debt-2026"&gt;reframing tech debt as a leadership challenge&lt;/a&gt; - this post focuses on the practical measurement and business case side.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why the Board Does Not Care (Yet)
&lt;/h2&gt;

&lt;p&gt;I have sat in boardrooms where I tried to explain technical debt using engineering language. Terms like "code smell," "coupling," and "architectural drift" got blank stares. The conversation moved on quickly.&lt;/p&gt;

&lt;p&gt;The board cares about three things: revenue, risk, and cost. If you cannot connect technical debt to at least one of those, you will not get budget. Full stop. That is exactly where &lt;a href="https://dev.to/services/it-management"&gt;IT management consulting&lt;/a&gt; adds value - turning technical concerns into commercial decisions leaders can actually act on.&lt;/p&gt;

&lt;p&gt;Here is what changed my approach: I stopped talking about the debt itself and started talking about its &lt;strong&gt;consequences&lt;/strong&gt; in business terms.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Our deployment pipeline takes 4 hours instead of 20 minutes, which means we can only ship once a day instead of continuously. That cost us two weeks of lost sales during the Black Friday promotion because we could not push a critical pricing fix fast enough."&lt;/li&gt;
&lt;li&gt;"Three of our five senior developers spend roughly 30% of their time working around legacy code rather than building new features. That is the equivalent of burning one full-time salary on maintenance that should not exist."&lt;/li&gt;
&lt;li&gt;"Our payment integration uses an API version that reaches end-of-life in six months. If we do not migrate, we lose the ability to process card payments."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Numbers. Timelines. Business impact. That is the language that gets attention.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Framework for Measuring Technical Debt
&lt;/h2&gt;

&lt;p&gt;Quantifying something as nebulous as technical debt sounds impossible, but it is not. You just need to measure the right proxies. Here is the framework I developed and refined over several years.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Identify the Debt
&lt;/h3&gt;

&lt;p&gt;Start with a structured audit. I use a simple spreadsheet with the following columns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;System/component&lt;/strong&gt; - What is affected&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Type of debt&lt;/strong&gt; - Using the four categories above&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Description&lt;/strong&gt; - Plain-English explanation of the problem&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Impact area&lt;/strong&gt; - Velocity, reliability, security, or scalability&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Severity&lt;/strong&gt; - High, medium, or low&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Get your engineering team involved. Run a session where everyone can contribute anonymously - you will be surprised what surfaces. In our case, we identified 47 distinct items of technical debt across our platform in the first audit.&lt;/p&gt;
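&lt;p&gt;If you prefer a script to a spreadsheet, the register rows above translate directly into a small data structure. This is just a sketch - the class and field names are my own, chosen to mirror the audit columns:&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class DebtItem:
    """One row of the technical debt register."""
    component: str     # System/component affected
    debt_type: str     # Category of debt
    description: str   # Plain-English explanation of the problem
    impact_area: str   # Velocity, reliability, security, or scalability
    severity: str      # "high", "medium", or "low"

# Example entry captured during an audit session
item = DebtItem(
    component="order-processing",
    debt_type="architectural",
    description="Legacy monolith blocks independent deployments",
    impact_area="velocity",
    severity="high",
)
print(item.severity)  # high
```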

&lt;h3&gt;
  
  
  Step 2: Quantify the Cost of Inaction
&lt;/h3&gt;

&lt;p&gt;For each item, estimate the ongoing cost using these metrics:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Developer time tax&lt;/strong&gt; - How much extra time does this debt add to every related task? If a developer spends an extra 2 hours per week working around a legacy authentication module, that is roughly 100 hours per year. Multiply by your loaded cost per developer hour (salary plus benefits plus overhead, typically between £50 and £80 per hour for mid-level engineers in the UK), and you have a tangible annual cost.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Incident cost&lt;/strong&gt; - How often does this debt cause incidents? What is the cost per incident in terms of engineer time, lost revenue, and customer impact? We tracked this over six months and found that our legacy order processing system was responsible for 60% of our on-call pages - roughly 3 incidents per week, each costing an average of 4 hours of engineer time plus measurable revenue impact during downtime.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Opportunity cost&lt;/strong&gt; - What features or improvements cannot be delivered because the team is fighting fires or working around limitations? This is harder to quantify but often the most compelling. We estimated that technical debt was consuming roughly 35% of our total engineering capacity - meaning for every three engineers we employed, one was effectively doing nothing but servicing debt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Risk cost&lt;/strong&gt; - What is the probability and impact of a catastrophic failure? An end-of-life dependency with no migration plan is a ticking time bomb. Multiply the probability of failure by the estimated business impact to get a risk-adjusted cost.&lt;/p&gt;
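&lt;p&gt;To make the arithmetic concrete, here is a minimal sketch that folds the time tax, incident cost, and risk cost into a single annual figure. The function and the example numbers are illustrative, and opportunity cost is deliberately left out - it rarely reduces to a formula:&lt;/p&gt;

```python
def annual_cost_of_inaction(
    weekly_hours_tax: float,     # extra developer hours per week caused by the debt
    hourly_rate: float,          # loaded cost per developer hour (salary + overhead)
    incidents_per_year: float,   # incidents attributable to this debt
    cost_per_incident: float,    # engineer time plus revenue impact per incident
    failure_probability: float,  # chance of catastrophic failure this year (0-1)
    failure_impact: float,       # estimated business impact if it fails
) -> float:
    """Combine developer time tax, incident cost, and risk cost."""
    time_tax = weekly_hours_tax * 50 * hourly_rate  # ~50 working weeks per year
    incident_cost = incidents_per_year * cost_per_incident
    risk_cost = failure_probability * failure_impact
    return time_tax + incident_cost + risk_cost

# The legacy authentication module example: 2 extra hours/week at £60/hour,
# plus one incident a month and a 10% chance of a £50k failure
cost = annual_cost_of_inaction(2, 60, 12, 960, 0.1, 50_000)
print(round(cost))  # 22520
```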

&lt;h3&gt;
  
  
  Step 3: Calculate the Cost of Remediation
&lt;/h3&gt;

&lt;p&gt;For each item, get engineering estimates for the fix. Be honest about uncertainty - use ranges rather than single numbers. A database migration might take "between 3 and 6 weeks of one engineer's time," which translates to a cost range you can present with confidence.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 4: Prioritise Using a Debt Ratio
&lt;/h3&gt;

&lt;p&gt;I use a simple ratio to prioritise:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Debt Ratio = Annual Cost of Inaction / One-Time Cost of Remediation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A ratio above 2.0 means the remediation pays for itself within six months. Anything above 1.0 pays for itself within a year. Below 1.0, you are looking at a longer payback period - which might still be worth it for risk reduction, but requires a different argument.&lt;/p&gt;

&lt;p&gt;Sort your debt items by this ratio. The top of the list is where you start.&lt;/p&gt;

&lt;p&gt;In our case, the top three items had ratios of 4.2, 3.8, and 3.1. That made the business case almost trivial - we were spending four times more per year living with the debt than it would cost to fix it.&lt;/p&gt;
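&lt;p&gt;The prioritisation step takes a few lines of code. The register entries below are hypothetical figures, chosen so the ratios match the 4.2, 3.8, and 3.1 mentioned above:&lt;/p&gt;

```python
def debt_ratio(annual_cost_of_inaction: float, remediation_cost: float) -> float:
    """Above 1.0 pays back within a year; above 2.0 within six months."""
    return annual_cost_of_inaction / remediation_cost

# (item, annual cost of inaction, one-time remediation cost) - illustrative
register = [
    ("legacy auth module", 84_000, 20_000),
    ("deployment pipeline", 95_000, 25_000),
    ("EOL payment API", 62_000, 20_000),
]

# Highest ratio first: the top of this list is where you start
ranked = sorted(register, key=lambda r: debt_ratio(r[1], r[2]), reverse=True)
for name, inaction, fix in ranked:
    print(f"{name}: ratio {debt_ratio(inaction, fix):.1f}")
```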

&lt;h2&gt;
  
  
  Building the Business Case
&lt;/h2&gt;

&lt;p&gt;With your data in hand, building the business case becomes a straightforward exercise. Here is the structure I use.&lt;/p&gt;

&lt;h3&gt;
  
  
  The One-Page Executive Summary
&lt;/h3&gt;

&lt;p&gt;The board does not want a 30-page document. They want clarity. I use a single page with four sections:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;The problem&lt;/strong&gt; - "Technical debt is costing us an estimated £X per year in lost productivity, incidents, and delayed features."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The evidence&lt;/strong&gt; - Two or three specific, quantified examples from your audit.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The proposal&lt;/strong&gt; - "We recommend a 12-week remediation programme targeting the top 10 debt items, requiring £Y in dedicated engineering time."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The return&lt;/strong&gt; - "This will reduce ongoing costs by £Z per year, improve deployment frequency by X%, and eliminate our three highest-risk dependencies."&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Framing It as Investment, Not Cost
&lt;/h3&gt;

&lt;p&gt;Language matters enormously. "We need to refactor our codebase" sounds like an expense with no return. "We are proposing an £80,000 investment that will save £200,000 annually and reduce critical incident risk by 60%" sounds like a no-brainer.&lt;/p&gt;

&lt;p&gt;I always present technical debt remediation alongside feature work, not as an alternative to it. The pitch is not "instead of building features, we want to fix old code." It is "by investing 20% of our capacity in debt reduction, we will increase our feature delivery speed by 40% within two quarters."&lt;/p&gt;

&lt;h3&gt;
  
  
  Using Leading Indicators
&lt;/h3&gt;

&lt;p&gt;Once you have approval, you need to show progress. Track and report on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Deployment frequency&lt;/strong&gt; - How often can you ship?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Lead time for changes&lt;/strong&gt; - From commit to production&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Change failure rate&lt;/strong&gt; - What percentage of deployments cause incidents?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Mean time to recovery (MTTR)&lt;/strong&gt; - How quickly can you fix problems?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the four DORA metrics, and they are well-understood benchmarks that translate directly to business value. Report on them monthly. When the board sees deployment frequency double and incident rates halve, the next budget request becomes much easier. For more on translating technical outcomes into language the board understands, see my guide to &lt;a href="https://dev.to/blog/2026-01-12-data-storytelling-it-value"&gt;data storytelling for IT value&lt;/a&gt;.&lt;/p&gt;
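&lt;p&gt;If you already log deployments, the four DORA metrics fall out of a simple report script. This is a sketch under an assumed record shape - (commit time, deploy time, caused an incident, minutes to recover) - not any particular tool's API:&lt;/p&gt;

```python
from datetime import datetime
from statistics import mean

# One week of deployment records (hypothetical data)
deployments = [
    (datetime(2026, 1, 5, 9), datetime(2026, 1, 5, 13), False, 0),
    (datetime(2026, 1, 6, 10), datetime(2026, 1, 6, 12), True, 45),
    (datetime(2026, 1, 7, 9), datetime(2026, 1, 7, 10), False, 0),
    (datetime(2026, 1, 8, 14), datetime(2026, 1, 8, 18), True, 90),
]

failures = [d for d in deployments if d[2]]

deploy_frequency = len(deployments) / 7  # deployments per day over the week
lead_time_hours = mean(
    (deploy - commit).total_seconds() / 3600 for commit, deploy, _, _ in deployments
)
change_failure_rate = len(failures) / len(deployments)
mttr_minutes = mean(d[3] for d in failures)  # mean time to recovery

print(f"Change failure rate: {change_failure_rate:.0%}")  # 50%
```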

&lt;h2&gt;
  
  
  Making It Sustainable
&lt;/h2&gt;

&lt;p&gt;The biggest mistake I see is treating technical debt remediation as a one-off project. It is not. Debt accumulates continuously, and you need a continuous process to manage it.&lt;/p&gt;

&lt;h3&gt;
  
  
  The 20% Rule
&lt;/h3&gt;

&lt;p&gt;We allocate 20% of every sprint to debt reduction. This is non-negotiable and built into our capacity planning. It is not glamorous, but it keeps debt from compounding. Think of it like making minimum payments on a credit card - it prevents the balance from growing while you tackle the big items strategically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Debt Retrospectives
&lt;/h3&gt;

&lt;p&gt;Every quarter, we run a technical debt retrospective. The engineering team reviews the current debt register, updates estimates, and identifies new items. This keeps the data fresh and ensures nothing festers unnoticed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Architectural Decision Records (ADRs)
&lt;/h3&gt;

&lt;p&gt;When we take on deliberate debt - and sometimes we should - we document it in an ADR. This includes what the debt is, why we are taking it on, and when we plan to pay it back. This creates accountability and prevents "temporary" solutions from becoming permanent fixtures.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Objections (and How to Handle Them)
&lt;/h2&gt;

&lt;p&gt;Over the years, I have heard every objection. Here are the most common and how I address them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"We cannot afford to slow down feature delivery."&lt;/strong&gt;&lt;br&gt;
You are already slower than you should be. Technical debt is the reason features take longer than estimated. Investing in debt reduction accelerates future delivery.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"If it is not broken, why fix it?"&lt;/strong&gt;&lt;br&gt;
It is broken - you just cannot see it yet. Show them the incident data. Show them the developer time tax. Show them the end-of-life dependencies. "Not broken" and "not yet catastrophic" are very different things.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"Can we just rewrite the whole thing?"&lt;/strong&gt;&lt;br&gt;
Almost certainly not. Full rewrites are expensive, risky, and take far longer than anyone estimates. Incremental remediation delivers value continuously and reduces risk at every step.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;"How do we know this will actually deliver the savings you are projecting?"&lt;/strong&gt;&lt;br&gt;
You do not, with certainty. But you have data, benchmarks, and a clear measurement framework. Propose a three-month pilot targeting the highest-ratio items and measure the results before committing to the full programme.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Cost of Doing Nothing
&lt;/h2&gt;

&lt;p&gt;Technical debt does not stay static. It compounds. The longer you leave it, the more expensive it becomes to fix, and the more it constrains your ability to respond to market changes.&lt;/p&gt;

&lt;p&gt;I have seen companies where technical debt became so severe that they could not adopt new payment providers, could not scale for peak trading, and could not meet regulatory deadlines. At that point, the cost is not measured in developer hours - it is measured in lost contracts and failed audits. This is exactly why &lt;a href="https://dev.to/blog/2025-12-25-boring-it-infrastructure-reliability"&gt;boring, reliable infrastructure&lt;/a&gt; beats cutting-edge every time - it reduces the kind of accumulated complexity that makes technical debt unmanageable.&lt;/p&gt;

&lt;p&gt;The true cost of technical debt is not what it costs today. It is what it will cost you when you can no longer afford to ignore it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Technical debt is a business problem, not just a technical one. Frame it accordingly.&lt;/li&gt;
&lt;li&gt;Measure the cost of inaction using developer time tax, incident cost, opportunity cost, and risk cost.&lt;/li&gt;
&lt;li&gt;Use the debt ratio (annual cost of inaction divided by remediation cost) to prioritise ruthlessly.&lt;/li&gt;
&lt;li&gt;Present a one-page business case with clear numbers and projected returns.&lt;/li&gt;
&lt;li&gt;Make debt management sustainable with the 20% rule, quarterly retrospectives, and ADRs.&lt;/li&gt;
&lt;li&gt;Track DORA metrics to demonstrate ongoing value to leadership.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The conversation about technical debt does not have to be adversarial. When you bring data, clarity, and a clear return on investment, it becomes one of the easiest business cases you will ever make.&lt;/p&gt;

&lt;p&gt;If you need help building the business case for technical debt remediation or want an outside perspective on your technology estate, my &lt;a href="https://dev.to/services/it-management"&gt;IT management consulting&lt;/a&gt; and &lt;a href="https://dev.to/services/technical-consulting"&gt;technical consulting&lt;/a&gt; services can support both the strategy and delivery. &lt;strong&gt;&lt;a href="https://dev.to/contact"&gt;Get in touch&lt;/a&gt;&lt;/strong&gt; to discuss your priorities.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Daniel Glover is an IT leader with experience managing technology teams and infrastructure for organisations with 250+ users. He writes about IT strategy, cybersecurity, and engineering leadership at &lt;a href="https://danieljamesglover.com" rel="noopener noreferrer"&gt;danieljamesglover.com&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>development</category>
      <category>techdebt</category>
      <category>strategy</category>
    </item>
    <item>
      <title>Zero Trust Architecture: Why Good Intentions Are Not Enough</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:06:30 +0000</pubDate>
      <link>https://dev.to/danieljglover/zero-trust-architecture-why-good-intentions-are-not-enough-4o8a</link>
      <guid>https://dev.to/danieljglover/zero-trust-architecture-why-good-intentions-are-not-enough-4o8a</guid>
      <description>&lt;p&gt;If I had a pound for every email I received promising to "Install Zero Trust in 24 hours," I would have retired to the Bahamas.&lt;/p&gt;

&lt;p&gt;Zero Trust Network Access (ZTNA) is simultaneously the most hyped and most pivotal concept in modern cybersecurity. It is also the most misunderstood. You cannot &lt;em&gt;buy&lt;/em&gt; Zero Trust. It is an architectural approach, not a SKU.&lt;/p&gt;

&lt;p&gt;This article cuts through the marketing fog to examine what Zero Trust actually means, how to assess your organisation's readiness, and how to implement it in phases without disrupting your business. We will explore the three foundational pillars, provide a practical maturity model, and give you a roadmap for transformation.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Zero Trust Myth vs Reality
&lt;/h2&gt;

&lt;p&gt;Before we discuss implementation, we need to dispel some persistent myths that vendors perpetuate.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Zero Trust Is Not
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Myth 1: Zero Trust is a product you can buy.&lt;/strong&gt;&lt;br&gt;
Every major security vendor now slaps "Zero Trust" on their product brochures. Firewalls, VPNs, identity providers, endpoint agents - all claim to "enable Zero Trust." None of them deliver it alone. Zero Trust is an &lt;em&gt;architecture&lt;/em&gt;, not a product category.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth 2: Zero Trust means trusting nothing.&lt;/strong&gt;&lt;br&gt;
The name is unfortunately misleading. Zero Trust does not mean paranoid distrust of everything. It means &lt;em&gt;verifying everything explicitly&lt;/em&gt; rather than relying on implicit trust from network location.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth 3: Zero Trust replaces your existing security.&lt;/strong&gt;&lt;br&gt;
Zero Trust augments and reorganises your security controls. It does not eliminate the need for firewalls, encryption, or endpoint protection. It changes how these controls coordinate and make decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth 4: Zero Trust is only for large enterprises.&lt;/strong&gt;&lt;br&gt;
While implementation complexity scales with organisation size, the principles apply to organisations of any size. A 50-person company can implement Zero Trust principles with standard tooling.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Zero Trust Actually Is
&lt;/h3&gt;

&lt;p&gt;Zero Trust is a security model based on a simple principle: &lt;strong&gt;"Never trust, always verify."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The traditional security model - often called "Castle and Moat" - assumed that if you were inside the corporate network, you were trusted. Everyone inside the castle walls was a friend. This model made sense when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Employees worked in offices&lt;/li&gt;
&lt;li&gt;Servers lived in data centres&lt;/li&gt;
&lt;li&gt;Applications were on-premises&lt;/li&gt;
&lt;li&gt;The network perimeter was well-defined&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of these assumptions hold in 2026. Your employees work from home, coffee shops, and co-working spaces. Your servers are in AWS, Azure, and Google Cloud. Your applications are SaaS. The perimeter has not just eroded - it has evaporated.&lt;/p&gt;

&lt;p&gt;Zero Trust assumes the network is already compromised. Every access request - regardless of source - must be explicitly verified against:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Identity:&lt;/strong&gt; Who is making the request?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device:&lt;/strong&gt; What device are they using, and is it healthy?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Context:&lt;/strong&gt; When, where, and why are they requesting access?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource:&lt;/strong&gt; What specifically are they trying to access?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Privilege:&lt;/strong&gt; Should they have access to this resource at this time?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Only after all these factors are verified is access granted - and only the minimum access required.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Shift in Thinking
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Traditional Model&lt;/th&gt;&lt;th&gt;Zero Trust Model&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Trust internal network traffic&lt;/td&gt;&lt;td&gt;Verify all traffic regardless of source&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Wide network access once connected&lt;/td&gt;&lt;td&gt;Least privilege access to specific resources&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Security focused on perimeter&lt;/td&gt;&lt;td&gt;Security focused on identity and data&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Static access permissions&lt;/td&gt;&lt;td&gt;Dynamic, context-aware access decisions&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;VPN as primary remote access&lt;/td&gt;&lt;td&gt;Identity-centric access without VPN&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Implicit trust for internal users&lt;/td&gt;&lt;td&gt;Explicit verification for all users&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Flat networks with minimal segmentation&lt;/td&gt;&lt;td&gt;Micro-segmented networks with strict boundaries&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;This shift fundamentally changes how we architect security. As I discussed in &lt;a href="https://dev.to/blog/2026-01-13-identity-first-security-strategy"&gt;&lt;em&gt;Identity is the New Firewall&lt;/em&gt;&lt;/a&gt;, the network perimeter is dead. Identity has become the new perimeter - and Zero Trust is the architecture that makes identity-centric security operational.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Pillars of Zero Trust
&lt;/h2&gt;

&lt;p&gt;Zero Trust implementations rest on three foundational pillars. Miss any one of them, and your architecture has a structural weakness.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pillar 1: Identity Verification (The Foundation)
&lt;/h3&gt;

&lt;p&gt;Identity is the cornerstone of Zero Trust. Before any other decision can be made, you must know &lt;em&gt;who&lt;/em&gt; is making the request. Not just their username - their verified identity.&lt;/p&gt;

&lt;p&gt;As I explored in &lt;a href="https://dev.to/blog/2026-01-13-identity-first-security-strategy"&gt;&lt;em&gt;Identity is the New Firewall&lt;/em&gt;&lt;/a&gt;, the vast majority of modern breaches involve compromised identities, not smashed firewalls. If an attacker steals a valid credential, network controls are useless. The attacker &lt;em&gt;is&lt;/em&gt; the user.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Essential Identity Controls:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Multi-Factor Authentication (MFA):&lt;/strong&gt;&lt;br&gt;
If you still allow single-factor authentication on any external-facing system, you are negligent. But not all MFA is equal:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;SMS/Voice codes:&lt;/strong&gt; Vulnerable to SIM swapping and interception. Better than nothing, but barely.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Time-based codes (TOTP):&lt;/strong&gt; Better, but still phishable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Push notifications:&lt;/strong&gt; Convenient but susceptible to push fatigue attacks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hardware keys (FIDO2/WebAuthn):&lt;/strong&gt; Phishing-resistant. The gold standard.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Biometric passkeys:&lt;/strong&gt; The future - phishing-resistant with excellent UX.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Move toward phishing-resistant MFA for all privileged access and sensitive systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Single Sign-On (SSO):&lt;/strong&gt;&lt;br&gt;
SSO is not just a convenience feature - it is a security control. It creates:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A single authentication point for all applications&lt;/li&gt;
&lt;li&gt;Centralised logging and auditing&lt;/li&gt;
&lt;li&gt;One place to revoke access when employees leave&lt;/li&gt;
&lt;li&gt;Consistent policy enforcement across applications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every application that supports SSO should use it. Every application that does not should be evaluated for replacement.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity Governance and Administration (IGA):&lt;/strong&gt;&lt;br&gt;
Knowing who someone is means nothing if you do not manage what they are allowed to do:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automated provisioning and deprovisioning&lt;/li&gt;
&lt;li&gt;Access certification and reviews&lt;/li&gt;
&lt;li&gt;Segregation of duties enforcement&lt;/li&gt;
&lt;li&gt;Access request workflows&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Conditional Access Policies:&lt;/strong&gt;&lt;br&gt;
Identity verification should not be binary. Conditional access evaluates context:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Is this a known device?&lt;/li&gt;
&lt;li&gt;Is the location unusual?&lt;/li&gt;
&lt;li&gt;What is the user trying to access?&lt;/li&gt;
&lt;li&gt;What is the risk level of this request?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Based on these factors, you might allow access, require additional verification, or deny access entirely.&lt;/p&gt;
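&lt;p&gt;The allow / step-up / deny logic can be illustrated with a toy policy function. This is a deliberately simplified sketch - the risk weights and thresholds are invented, and real conditional access engines (Microsoft Entra, Okta, and the like) evaluate far richer signals:&lt;/p&gt;

```python
def access_decision(known_device: bool, usual_location: bool,
                    resource_sensitivity: str) -> str:
    """Return 'allow', 'step_up' (require extra verification), or 'deny'."""
    risk = 0
    if not known_device:
        risk += 2  # unknown devices carry the most weight
    if not usual_location:
        risk += 1
    if resource_sensitivity == "high":
        risk += 1

    if risk == 0:
        return "allow"
    if risk <= 2:
        return "step_up"
    return "deny"

print(access_decision(True, True, "low"))    # allow
print(access_decision(True, False, "high"))  # step_up
print(access_decision(False, False, "high")) # deny
```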

&lt;h3&gt;
  
  
  Pillar 2: Device Health
&lt;/h3&gt;

&lt;p&gt;Verifying identity is necessary but insufficient. You must also verify the device making the request.&lt;/p&gt;

&lt;p&gt;Consider this scenario: Your CEO authenticates with their username, password, and hardware key. Perfect identity verification. But they are connecting from an infected, unmanaged personal iPad they picked up at a conference. Zero Trust says: &lt;strong&gt;Access Denied.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The device is part of the trust calculation because a compromised device can compromise everything the user accesses from it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Device Health Signals:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Management status:&lt;/strong&gt; Is this a managed corporate device or a personal device?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;OS patch level:&lt;/strong&gt; Is the operating system current on security updates?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Disk encryption:&lt;/strong&gt; Is the device encrypted at rest?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Endpoint protection:&lt;/strong&gt; Is EDR/antivirus running and healthy?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Firewall status:&lt;/strong&gt; Is the local firewall enabled?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Jailbreak/root detection:&lt;/strong&gt; Has the device been tampered with?&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance status:&lt;/strong&gt; Does the device meet your baseline requirements?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Device Trust Tiers:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Not all devices need the same trust level. Consider a tiered approach:&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Tier&lt;/th&gt;&lt;th&gt;Device Type&lt;/th&gt;&lt;th&gt;Trust Level&lt;/th&gt;&lt;th&gt;Access Scope&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;1&lt;/td&gt;&lt;td&gt;Managed, fully compliant corporate device&lt;/td&gt;&lt;td&gt;High&lt;/td&gt;&lt;td&gt;All corporate resources&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Managed device with minor compliance gaps&lt;/td&gt;&lt;td&gt;Medium&lt;/td&gt;&lt;td&gt;Most resources, excluding highly sensitive&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3&lt;/td&gt;&lt;td&gt;BYOD with MDM enrolled&lt;/td&gt;&lt;td&gt;Low&lt;/td&gt;&lt;td&gt;Limited resources via containerised apps&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4&lt;/td&gt;&lt;td&gt;Unknown/unmanaged device&lt;/td&gt;&lt;td&gt;Minimal&lt;/td&gt;&lt;td&gt;Public resources only, or browser-based with no data export&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
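&lt;p&gt;The tier mapping is simple enough to express as code. A minimal sketch, assuming three boolean device signals - the signal names and the mapping are illustrative, not any MDM product's schema:&lt;/p&gt;

```python
def device_tier(managed: bool, compliant: bool, mdm_enrolled: bool) -> int:
    """Map device signals to a trust tier (1 = highest trust, 4 = lowest)."""
    if managed and compliant:
        return 1
    if managed:          # managed, but with compliance gaps
        return 2
    if mdm_enrolled:     # BYOD enrolled in MDM
        return 3
    return 4             # unknown/unmanaged device

print(device_tier(managed=True, compliant=True, mdm_enrolled=False))   # 1
print(device_tier(managed=False, compliant=False, mdm_enrolled=True))  # 3
```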

&lt;p&gt;&lt;strong&gt;Technical Implementation:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Device health verification typically requires:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Mobile Device Management (MDM)&lt;/strong&gt; for mobile devices&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Endpoint Detection and Response (EDR)&lt;/strong&gt; for computers&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Device compliance policies&lt;/strong&gt; defining minimum requirements&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Conditional access integration&lt;/strong&gt; to enforce device requirements at authentication&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Your identity provider and device management platform must integrate to share health signals. Without this integration, you cannot make device-aware access decisions.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pillar 3: Least Privilege Access
&lt;/h3&gt;

&lt;p&gt;The third pillar addresses what happens &lt;em&gt;after&lt;/em&gt; identity and device are verified: granting the minimum access required, for the minimum time required.&lt;/p&gt;

&lt;p&gt;The traditional model granted broad access - once authenticated, users could often reach many resources they did not need. Zero Trust inverts this: access is denied by default, and explicitly granted only to specific resources.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Least Privilege Principles:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Default Deny:&lt;/strong&gt;&lt;br&gt;
If access is not explicitly granted, it is denied. This is the opposite of traditional "allow by default" networks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Just Enough Access (JEA):&lt;/strong&gt;&lt;br&gt;
Grant access only to the specific resources needed for the specific task. A developer does not need access to the HR database. A marketing analyst does not need access to production servers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Just-In-Time Access (JIT):&lt;/strong&gt;&lt;br&gt;
Why does your administrator have Domain Admin rights 24/7? They use those privileges for perhaps 10 minutes a day. JIT grants elevated privileges only when needed, for a specific duration, with specific approval. When the task is complete, privileges are revoked automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Micro-Segmentation:&lt;/strong&gt;&lt;br&gt;
Traditional networks are "flat" - once inside, you can communicate with anything. Micro-segmentation creates secure zones, limiting lateral movement. The printer cannot talk to the database server. The development environment cannot reach production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Application-Level Access:&lt;/strong&gt;&lt;br&gt;
Instead of network access, grant application-level access. Users connect to the specific application they need, not to the network where the application lives. This eliminates the concept of "being on the corporate network."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implementation Considerations:&lt;/strong&gt;&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Control&lt;/th&gt;&lt;th&gt;Purpose&lt;/th&gt;&lt;th&gt;Implementation&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Privileged Access Management (PAM)&lt;/td&gt;&lt;td&gt;Control and audit privileged credentials&lt;/td&gt;&lt;td&gt;CyberArk, BeyondTrust, HashiCorp Vault&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Identity Governance (IGA)&lt;/td&gt;&lt;td&gt;Lifecycle management and access reviews&lt;/td&gt;&lt;td&gt;SailPoint, Saviynt, Microsoft Entra ID Governance&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Zero Trust Network Access (ZTNA)&lt;/td&gt;&lt;td&gt;Application-level access without VPN&lt;/td&gt;&lt;td&gt;Zscaler, Cloudflare Access, Palo Alto Prisma&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Software-Defined Perimeter&lt;/td&gt;&lt;td&gt;Hide applications from unauthorised users&lt;/td&gt;&lt;td&gt;Appgate, Perimeter 81, Google BeyondCorp&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Micro-Segmentation&lt;/td&gt;&lt;td&gt;Limit lateral movement within networks&lt;/td&gt;&lt;td&gt;Illumio, Guardicore, VMware NSX&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;




&lt;h2&gt;
  
  
  Zero Trust Maturity Model
&lt;/h2&gt;

&lt;p&gt;Zero Trust implementation is not binary - it is a journey. Most organisations start at a low maturity level and progress through stages over multiple years.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Five Maturity Levels
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Level&lt;/th&gt;&lt;th&gt;Name&lt;/th&gt;&lt;th&gt;Characteristics&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;0&lt;/td&gt;&lt;td&gt;Traditional&lt;/td&gt;&lt;td&gt;Perimeter-based security; implicit trust for internal network; limited identity controls; flat network architecture&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;1&lt;/td&gt;&lt;td&gt;Initial&lt;/td&gt;&lt;td&gt;Basic MFA deployed; some network segmentation; centralised identity provider; awareness of Zero Trust concepts&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2&lt;/td&gt;&lt;td&gt;Developing&lt;/td&gt;&lt;td&gt;MFA for all users; device health checks begun; ZTNA for some applications; access reviews implemented&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3&lt;/td&gt;&lt;td&gt;Defined&lt;/td&gt;&lt;td&gt;Conditional access policies active; comprehensive device compliance; micro-segmentation advancing; JIT access for privileged accounts&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4&lt;/td&gt;&lt;td&gt;Managed&lt;/td&gt;&lt;td&gt;Real-time risk assessment; continuous verification; automated response to anomalies; comprehensive visibility&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;5&lt;/td&gt;&lt;td&gt;Optimised&lt;/td&gt;&lt;td&gt;Fully automated Zero Trust decisions; AI-driven anomaly detection; continuous improvement; complete asset visibility&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Most organisations today are at Level 0 or 1. Reaching Level 3 represents a significant security improvement. Level 5 is aspirational for most - even security-mature organisations rarely achieve full optimisation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Self-Assessment Checklist
&lt;/h3&gt;

&lt;p&gt;Use this checklist to assess your current Zero Trust maturity:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Identity (Score 0-5 for each):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] All users have MFA enabled for all external access&lt;/li&gt;
&lt;li&gt;[ ] Phishing-resistant MFA deployed for privileged accounts&lt;/li&gt;
&lt;li&gt;[ ] SSO implemented for all supported applications&lt;/li&gt;
&lt;li&gt;[ ] Automated provisioning/deprovisioning in place&lt;/li&gt;
&lt;li&gt;[ ] Regular access reviews conducted and actioned&lt;/li&gt;
&lt;li&gt;[ ] Conditional access policies evaluate context&lt;/li&gt;
&lt;li&gt;[ ] Identity threat detection monitors for anomalies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Device (Score 0-5 for each):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Device inventory is complete and accurate&lt;/li&gt;
&lt;li&gt;[ ] MDM deployed on all mobile devices accessing corporate data&lt;/li&gt;
&lt;li&gt;[ ] EDR deployed on all endpoints&lt;/li&gt;
&lt;li&gt;[ ] Device compliance policies defined and enforced&lt;/li&gt;
&lt;li&gt;[ ] Conditional access integrates device health signals&lt;/li&gt;
&lt;li&gt;[ ] BYOD policy clearly defined with technical controls&lt;/li&gt;
&lt;li&gt;[ ] Unmanaged device access restricted appropriately&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network/Access (Score 0-5 for each):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Network segmentation separates critical assets&lt;/li&gt;
&lt;li&gt;[ ] ZTNA deployed for remote application access&lt;/li&gt;
&lt;li&gt;[ ] VPN dependency reduced or eliminated&lt;/li&gt;
&lt;li&gt;[ ] Micro-segmentation limits lateral movement&lt;/li&gt;
&lt;li&gt;[ ] Application-level access replaces network-level access&lt;/li&gt;
&lt;li&gt;[ ] Default deny posture for new connections&lt;/li&gt;
&lt;li&gt;[ ] Visibility into all network traffic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Privileged Access (Score 0-5 for each):&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Privileged accounts inventoried and monitored&lt;/li&gt;
&lt;li&gt;[ ] PAM solution manages privileged credentials&lt;/li&gt;
&lt;li&gt;[ ] JIT access implemented for administrative tasks&lt;/li&gt;
&lt;li&gt;[ ] Session recording for sensitive access&lt;/li&gt;
&lt;li&gt;[ ] Separation of duties enforced&lt;/li&gt;
&lt;li&gt;[ ] Break-glass procedures documented and tested&lt;/li&gt;
&lt;li&gt;[ ] Regular privileged access reviews conducted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Scoring:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;0-35: Level 0-1 (Traditional/Initial)&lt;/li&gt;
&lt;li&gt;36-70: Level 2 (Developing)&lt;/li&gt;
&lt;li&gt;71-105: Level 3 (Defined)&lt;/li&gt;
&lt;li&gt;106-125: Level 4 (Managed)&lt;/li&gt;
&lt;li&gt;126-140: Level 5 (Optimised)&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Zero Trust Adoption Roadmap
&lt;/h2&gt;

&lt;p&gt;Migrating to Zero Trust is a multi-year journey. Do not attempt to "rip and replace" your entire security architecture overnight. That path leads to outages, user frustration, and abandoned initiatives.&lt;/p&gt;

&lt;p&gt;Instead, approach Zero Trust in phases, starting with your highest-value targets and expanding methodically.&lt;/p&gt;

&lt;h3&gt;
  
  
  Phase 1: Foundation (Months 1-6)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objectives:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Establish identity as the primary control plane&lt;/li&gt;
&lt;li&gt;Achieve comprehensive MFA coverage&lt;/li&gt;
&lt;li&gt;Gain visibility into current access patterns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week 1-4: Assessment and Planning&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Conduct current state security assessment&lt;/li&gt;
&lt;li&gt;[ ] Inventory all applications and their authentication methods&lt;/li&gt;
&lt;li&gt;[ ] Map data flows and identify critical assets ("Crown Jewels")&lt;/li&gt;
&lt;li&gt;[ ] Assess existing identity infrastructure&lt;/li&gt;
&lt;li&gt;[ ] Document current network architecture&lt;/li&gt;
&lt;li&gt;[ ] Identify stakeholders and form Zero Trust working group&lt;/li&gt;
&lt;li&gt;[ ] Develop phased implementation plan&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week 5-12: Identity Foundation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Deploy or upgrade identity provider (Entra ID, Okta, etc.)&lt;/li&gt;
&lt;li&gt;[ ] Enable MFA for all external access&lt;/li&gt;
&lt;li&gt;[ ] Implement SSO for high-priority applications&lt;/li&gt;
&lt;li&gt;[ ] Configure basic conditional access policies&lt;/li&gt;
&lt;li&gt;[ ] Begin automated provisioning/deprovisioning&lt;/li&gt;
&lt;li&gt;[ ] Deploy phishing-resistant MFA for IT administrators&lt;/li&gt;
&lt;/ul&gt;
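&lt;p&gt;Conditional access policies themselves live in your identity provider, but the decision model is worth internalising. A hedged sketch of how combined signals might resolve to an outcome - the signal names are hypothetical, not an Entra ID or Okta API:&lt;/p&gt;

```python
# Hypothetical signal model - real conditional access policies are
# configured in the identity provider, not in application code.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    mfa_passed: bool         # did the user complete MFA?
    device_compliant: bool   # does the device meet the compliance baseline?
    location_trusted: bool   # is the sign-in location expected?

def evaluate(req: AccessRequest) -> str:
    """Resolve combined signals to ALLOW, STEP_UP, or DENY."""
    if not req.mfa_passed or not req.device_compliant:
        return "DENY"      # identity or device health failed outright
    if not req.location_trusted:
        return "STEP_UP"   # valid user and device, unusual context: re-verify
    return "ALLOW"

print(evaluate(AccessRequest(True, True, False)))  # STEP_UP
```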

&lt;p&gt;&lt;strong&gt;Week 13-24: Device Visibility&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Complete device inventory across all platforms&lt;/li&gt;
&lt;li&gt;[ ] Deploy MDM for mobile devices&lt;/li&gt;
&lt;li&gt;[ ] Ensure EDR coverage on all endpoints&lt;/li&gt;
&lt;li&gt;[ ] Define initial device compliance baselines&lt;/li&gt;
&lt;li&gt;[ ] Integrate device signals with identity provider&lt;/li&gt;
&lt;li&gt;[ ] Establish BYOD policy and technical controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Phase 1 Checkpoint:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before proceeding to Phase 2, validate:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] MFA enabled for 100% of external access&lt;/li&gt;
&lt;li&gt;[ ] SSO implemented for top 10 applications&lt;/li&gt;
&lt;li&gt;[ ] Conditional access policies active&lt;/li&gt;
&lt;li&gt;[ ] Device inventory 95%+ complete&lt;/li&gt;
&lt;li&gt;[ ] MDM/EDR coverage on all managed devices&lt;/li&gt;
&lt;li&gt;[ ] Stakeholder support confirmed&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 2: Crown Jewels Protection (Months 7-12)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objectives:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Protect most critical applications with full Zero Trust controls&lt;/li&gt;
&lt;li&gt;Implement ZTNA for sensitive application access&lt;/li&gt;
&lt;li&gt;Deploy PAM for privileged accounts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Crown Jewels Identification:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Your "Crown Jewels" are the systems and data that would cause the most damage if compromised. Typically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Financial systems (ERP, banking, payment processing)&lt;/li&gt;
&lt;li&gt;Customer data repositories (CRM, databases)&lt;/li&gt;
&lt;li&gt;Intellectual property (source code, designs, research)&lt;/li&gt;
&lt;li&gt;HR systems (employee data, payroll)&lt;/li&gt;
&lt;li&gt;Executive communications&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week 25-36: ZTNA Deployment&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Select ZTNA solution aligned with architecture&lt;/li&gt;
&lt;li&gt;[ ] Deploy ZTNA for Crown Jewels applications&lt;/li&gt;
&lt;li&gt;[ ] Configure application-level access policies&lt;/li&gt;
&lt;li&gt;[ ] Integrate with identity and device health signals&lt;/li&gt;
&lt;li&gt;[ ] Train IT staff on ZTNA administration&lt;/li&gt;
&lt;li&gt;[ ] Begin phased user migration from VPN&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week 37-48: Privileged Access Management&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Inventory all privileged accounts&lt;/li&gt;
&lt;li&gt;[ ] Deploy PAM solution (CyberArk, BeyondTrust, etc.)&lt;/li&gt;
&lt;li&gt;[ ] Implement password vaulting for admin accounts&lt;/li&gt;
&lt;li&gt;[ ] Configure JIT access for administrative tasks&lt;/li&gt;
&lt;li&gt;[ ] Enable session recording for sensitive access&lt;/li&gt;
&lt;li&gt;[ ] Conduct privileged access review&lt;/li&gt;
&lt;/ul&gt;
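&lt;p&gt;The core idea behind JIT access is that elevation is time-boxed and lapses automatically. A simplified illustration - a real PAM product such as CyberArk or BeyondTrust adds approval workflows, revocation, and session recording on top of this:&lt;/p&gt;

```python
# Simplified JIT elevation sketch: a grant carries an expiry and is
# re-checked on every use. Names are illustrative.

from datetime import datetime, timedelta, timezone

class JitGrant:
    def __init__(self, user: str, role: str, minutes: int = 60):
        self.user = user
        self.role = role
        self.expires = datetime.now(timezone.utc) + timedelta(minutes=minutes)

    def is_active(self) -> bool:
        # Privileges lapse automatically once the window closes.
        return self.expires > datetime.now(timezone.utc)

grant = JitGrant("admin@example.com", "DatabaseAdmin", minutes=30)
print(grant.is_active())  # True while the 30-minute window is open
```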

&lt;p&gt;&lt;strong&gt;Phase 2 Checkpoint:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Crown Jewels applications protected with ZTNA&lt;/li&gt;
&lt;li&gt;[ ] PAM deployed for IT administrative access&lt;/li&gt;
&lt;li&gt;[ ] JIT access operational for routine admin tasks&lt;/li&gt;
&lt;li&gt;[ ] VPN dependency reduced for pilot groups&lt;/li&gt;
&lt;li&gt;[ ] Metrics showing reduced attack surface&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 3: Broad Deployment (Months 13-24)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objectives:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Extend Zero Trust controls to all applications&lt;/li&gt;
&lt;li&gt;Implement micro-segmentation&lt;/li&gt;
&lt;li&gt;Achieve continuous verification&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week 49-72: Application Expansion&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Deploy ZTNA for Tier 2 applications&lt;/li&gt;
&lt;li&gt;[ ] Migrate remaining users from VPN&lt;/li&gt;
&lt;li&gt;[ ] Extend SSO to all supported applications&lt;/li&gt;
&lt;li&gt;[ ] Implement risk-based authentication&lt;/li&gt;
&lt;li&gt;[ ] Configure automated response to anomalies&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Week 73-96: Network Transformation&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Design micro-segmentation architecture&lt;/li&gt;
&lt;li&gt;[ ] Deploy initial micro-segmentation for critical segments&lt;/li&gt;
&lt;li&gt;[ ] Implement network traffic analysis&lt;/li&gt;
&lt;li&gt;[ ] Reduce lateral movement paths&lt;/li&gt;
&lt;li&gt;[ ] Validate segmentation effectiveness through testing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Phase 3 Checkpoint:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] ZTNA deployed for all appropriate applications&lt;/li&gt;
&lt;li&gt;[ ] VPN eliminated or limited to exceptions&lt;/li&gt;
&lt;li&gt;[ ] Micro-segmentation protecting critical assets&lt;/li&gt;
&lt;li&gt;[ ] Continuous monitoring operational&lt;/li&gt;
&lt;li&gt;[ ] Incident response processes updated for Zero Trust&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Phase 4: Optimisation (Ongoing)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objectives:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Continuous improvement based on metrics&lt;/li&gt;
&lt;li&gt;Advanced automation and AI-driven decisions&lt;/li&gt;
&lt;li&gt;Regular maturity reassessment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Ongoing Activities:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Regular access reviews and certification&lt;/li&gt;
&lt;li&gt;Policy refinement based on operational data&lt;/li&gt;
&lt;li&gt;Technology refresh as capabilities evolve&lt;/li&gt;
&lt;li&gt;Red team exercises to validate controls&lt;/li&gt;
&lt;li&gt;Maturity assessment against framework&lt;/li&gt;
&lt;li&gt;Stakeholder reporting and ROI demonstration&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Vendor Landscape Overview
&lt;/h2&gt;

&lt;p&gt;The Zero Trust market is crowded and confusing. Understanding the landscape helps navigate vendor conversations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Platform Categories
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Category&lt;/th&gt;&lt;th&gt;What It Does&lt;/th&gt;&lt;th&gt;Key Vendors&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Identity Provider (IdP)&lt;/td&gt;&lt;td&gt;Centralised authentication and SSO&lt;/td&gt;&lt;td&gt;Microsoft Entra ID, Okta, Ping Identity, Google Workspace&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Zero Trust Network Access (ZTNA)&lt;/td&gt;&lt;td&gt;Application-level access without VPN&lt;/td&gt;&lt;td&gt;Zscaler Private Access, Cloudflare Access, Palo Alto Prisma Access, Netskope Private Access&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Secure Access Service Edge (SASE)&lt;/td&gt;&lt;td&gt;Converged network and security services&lt;/td&gt;&lt;td&gt;Zscaler, Netskope, Palo Alto, Cisco&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Privileged Access Management (PAM)&lt;/td&gt;&lt;td&gt;Secure privileged credentials and sessions&lt;/td&gt;&lt;td&gt;CyberArk, BeyondTrust, Delinea, HashiCorp Vault&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Identity Governance (IGA)&lt;/td&gt;&lt;td&gt;Access lifecycle and certification&lt;/td&gt;&lt;td&gt;SailPoint, Saviynt, One Identity, Microsoft Entra ID Governance&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Endpoint Detection and Response (EDR)&lt;/td&gt;&lt;td&gt;Device security and health attestation&lt;/td&gt;&lt;td&gt;CrowdStrike, Microsoft Defender, SentinelOne, Carbon Black&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Micro-Segmentation&lt;/td&gt;&lt;td&gt;Network traffic control and lateral movement prevention&lt;/td&gt;&lt;td&gt;Illumio, Guardicore (Akamai), VMware NSX&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  Vendor Selection Considerations
&lt;/h3&gt;

&lt;p&gt;When evaluating vendors, consider:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration capability:&lt;/strong&gt; Zero Trust requires components to share signals. Vendors must integrate with your existing identity, endpoint, and network infrastructure.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment model:&lt;/strong&gt; Cloud-native vs on-premises vs hybrid. Your infrastructure strategy should guide this choice.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User experience:&lt;/strong&gt; Security that frustrates users gets bypassed. Evaluate the user experience for each solution.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operational complexity:&lt;/strong&gt; More tools means more operational overhead. Consider managed services or converged platforms.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Total cost of ownership:&lt;/strong&gt; Beyond licensing, consider implementation, training, integration, and ongoing operations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Vendor viability:&lt;/strong&gt; Zero Trust is a long-term architecture. Ensure vendors will be around for the journey.&lt;/p&gt;

&lt;h3&gt;
  
  
  Avoid Vendor Traps
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;The "complete solution" myth:&lt;/strong&gt; No single vendor delivers complete Zero Trust. You will need multiple integrated components.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The checkbox approach:&lt;/strong&gt; Do not buy tools to check compliance boxes. Buy tools that genuinely improve your security posture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The best-of-breed vs platform debate:&lt;/strong&gt; There is no universal right answer. Best-of-breed offers capability but complexity. Platforms offer integration but potential gaps. Choose based on your operational maturity and resources.&lt;/p&gt;




&lt;h2&gt;
  
  
  Migration Priority Matrix
&lt;/h2&gt;

&lt;p&gt;Not all applications and users should migrate at the same time. Prioritise based on risk and impact.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Priority&lt;/th&gt;&lt;th&gt;Application Type&lt;/th&gt;&lt;th&gt;User Group&lt;/th&gt;&lt;th&gt;Rationale&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;1 - Immediate&lt;/td&gt;&lt;td&gt;Financial systems, customer databases&lt;/td&gt;&lt;td&gt;IT administrators&lt;/td&gt;&lt;td&gt;Highest value targets; privileged access most abused&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;2 - High&lt;/td&gt;&lt;td&gt;Email, collaboration tools&lt;/td&gt;&lt;td&gt;Executives, finance staff&lt;/td&gt;&lt;td&gt;Common attack vectors; high-value user targets&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3 - Medium&lt;/td&gt;&lt;td&gt;Development tools, internal apps&lt;/td&gt;&lt;td&gt;General employees&lt;/td&gt;&lt;td&gt;Significant data access; large user population&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;4 - Lower&lt;/td&gt;&lt;td&gt;Public-facing marketing, low-sensitivity apps&lt;/td&gt;&lt;td&gt;Contractors, temporary staff&lt;/td&gt;&lt;td&gt;Lower data sensitivity; transient users&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;5 - Deferred&lt;/td&gt;&lt;td&gt;Legacy systems without modern auth&lt;/td&gt;&lt;td&gt;Specialised users&lt;/td&gt;&lt;td&gt;Technical constraints; plan for modernisation&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  Prioritisation Factors
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Data sensitivity:&lt;/strong&gt; What is the classification of data accessible through this system?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User privilege level:&lt;/strong&gt; Are users accessing administrative functions or routine tasks?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Attack surface:&lt;/strong&gt; Is the application internet-facing? Does it process untrusted input?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business criticality:&lt;/strong&gt; What is the impact of downtime or compromise?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Technical feasibility:&lt;/strong&gt; Does the application support modern authentication?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User impact:&lt;/strong&gt; How disruptive will the migration be for users?&lt;/p&gt;




&lt;h2&gt;
  
  
  Common Implementation Challenges
&lt;/h2&gt;

&lt;p&gt;Zero Trust implementations frequently encounter these challenges. Anticipate them.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical Challenges
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Legacy application compatibility:&lt;/strong&gt;&lt;br&gt;
Some applications do not support modern authentication (SAML, OIDC, SCIM). Options include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Application proxy solutions that front legacy apps&lt;/li&gt;
&lt;li&gt;Vendor upgrades or replacements&lt;/li&gt;
&lt;li&gt;Isolated access with additional compensating controls&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Network visibility gaps:&lt;/strong&gt;&lt;br&gt;
You cannot protect what you cannot see. Ensure comprehensive visibility into network traffic before implementing micro-segmentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integration complexity:&lt;/strong&gt;&lt;br&gt;
Zero Trust requires components to share information. Budget significant effort for integration work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Organisational Challenges
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;User resistance:&lt;/strong&gt;&lt;br&gt;
Zero Trust may introduce additional verification steps. Communicate the "why" before the "what." Emphasise that security protects users, not just the company.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stakeholder fatigue:&lt;/strong&gt;&lt;br&gt;
Multi-year transformations risk losing executive attention. Deliver visible wins early and maintain regular progress reporting.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Skills gaps:&lt;/strong&gt;&lt;br&gt;
Zero Trust requires new skills in identity, cloud security, and modern architecture. Plan for training and potentially external support.&lt;/p&gt;

&lt;h3&gt;
  
  
  Operational Challenges
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Alert fatigue:&lt;/strong&gt;&lt;br&gt;
More visibility means more alerts. Invest in tuning and automation to prevent analyst burnout.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Policy complexity:&lt;/strong&gt;&lt;br&gt;
Conditional access policies can become complex quickly. Document policies clearly and review regularly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Incident response updates:&lt;/strong&gt;&lt;br&gt;
Zero Trust changes how incidents unfold. Update playbooks and train responders on the new architecture.&lt;/p&gt;




&lt;h2&gt;
  
  
  Measuring Zero Trust Success
&lt;/h2&gt;

&lt;p&gt;Metrics demonstrate progress and justify continued investment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Performance Indicators
&lt;/h3&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;&lt;th&gt;Category&lt;/th&gt;&lt;th&gt;Metric&lt;/th&gt;&lt;th&gt;Target&lt;/th&gt;&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;Coverage&lt;/td&gt;&lt;td&gt;% of applications protected by ZTNA&lt;/td&gt;&lt;td&gt;100% (excluding documented exceptions)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Coverage&lt;/td&gt;&lt;td&gt;% of users with MFA enabled&lt;/td&gt;&lt;td&gt;100%&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Coverage&lt;/td&gt;&lt;td&gt;% of privileged accounts in PAM&lt;/td&gt;&lt;td&gt;100%&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Effectiveness&lt;/td&gt;&lt;td&gt;Mean time to revoke access on termination&lt;/td&gt;&lt;td&gt;&amp;lt; 1 hour&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Effectiveness&lt;/td&gt;&lt;td&gt;% of access requests requiring step-up auth&lt;/td&gt;&lt;td&gt;Risk-appropriate&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Effectiveness&lt;/td&gt;&lt;td&gt;Lateral movement attempts blocked&lt;/td&gt;&lt;td&gt;Increasing&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Risk Reduction&lt;/td&gt;&lt;td&gt;VPN attack surface eliminated&lt;/td&gt;&lt;td&gt;Measured in exposed services&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Risk Reduction&lt;/td&gt;&lt;td&gt;Privileged session duration&lt;/td&gt;&lt;td&gt;Decreasing&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Operational&lt;/td&gt;&lt;td&gt;False positive rate for anomaly detection&lt;/td&gt;&lt;td&gt;&amp;lt; 5%&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Operational&lt;/td&gt;&lt;td&gt;User authentication friction incidents&lt;/td&gt;&lt;td&gt;Decreasing&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;h3&gt;
  
  
  Demonstrating ROI
&lt;/h3&gt;

&lt;p&gt;Zero Trust investments compete for budget. Demonstrate value through:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Reduced breach risk:&lt;/strong&gt; Quantify risk reduction using frameworks like FAIR&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Compliance efficiency:&lt;/strong&gt; Reduced audit findings, faster evidence collection&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Operational savings:&lt;/strong&gt; VPN infrastructure retirement, reduced help desk burden&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Business enablement:&lt;/strong&gt; Secure remote work, faster onboarding, M&amp;amp;A integration&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Zero Trust and the Modern Workplace
&lt;/h2&gt;

&lt;p&gt;Zero Trust aligns perfectly with how organisations actually operate in 2026.&lt;/p&gt;

&lt;h3&gt;
  
  
  Remote and Hybrid Work
&lt;/h3&gt;

&lt;p&gt;As I explored in &lt;a href="https://dev.to/blog/2026-01-19-asynchronous-it-leadership"&gt;&lt;em&gt;Asynchronous IT Leadership&lt;/em&gt;&lt;/a&gt;, the remote-first world is here to stay. Zero Trust was designed for this reality - it assumes no network is trusted, making work location irrelevant to security posture.&lt;/p&gt;

&lt;p&gt;VPNs were designed to extend the corporate network to remote users. But they extend &lt;em&gt;all&lt;/em&gt; network access, create performance bottlenecks, and frustrate users. ZTNA provides application-level access without the overhead and risk of full network connectivity.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cloud and SaaS
&lt;/h3&gt;

&lt;p&gt;Traditional perimeter security cannot protect cloud applications. They are outside the perimeter by definition. Zero Trust's identity-centric model secures cloud resources the same way it secures on-premises resources.&lt;/p&gt;

&lt;p&gt;As discussed in &lt;a href="https://dev.to/blog/2026-01-14-saas-governance-strategies"&gt;&lt;em&gt;SaaS Governance Strategies&lt;/em&gt;&lt;/a&gt;, managing access to SaaS applications requires robust identity controls. Zero Trust provides the architectural foundation for SaaS security.&lt;/p&gt;

&lt;h3&gt;
  
  
  API-First Architecture
&lt;/h3&gt;

&lt;p&gt;Modern applications are collections of APIs. As I covered in &lt;a href="https://dev.to/blog/2026-01-16-api-first-enterprise-strategy"&gt;&lt;em&gt;API-First Enterprise Strategy&lt;/em&gt;&lt;/a&gt;, APIs need security too. Zero Trust principles - verify identity, check context, grant minimum access - apply equally to human users and service accounts accessing APIs.&lt;/p&gt;




&lt;h2&gt;
  
  
  Quick Reference: Implementation Checklist
&lt;/h2&gt;

&lt;p&gt;Use this checklist to track your Zero Trust implementation:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Foundation:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Executive sponsor identified and engaged&lt;/li&gt;
&lt;li&gt;[ ] Zero Trust working group formed&lt;/li&gt;
&lt;li&gt;[ ] Current state assessment completed&lt;/li&gt;
&lt;li&gt;[ ] Crown Jewels identified and documented&lt;/li&gt;
&lt;li&gt;[ ] Phased implementation plan approved&lt;/li&gt;
&lt;li&gt;[ ] Success metrics defined&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Identity:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Identity provider deployed or upgraded&lt;/li&gt;
&lt;li&gt;[ ] MFA enabled for all external access&lt;/li&gt;
&lt;li&gt;[ ] Phishing-resistant MFA for privileged users&lt;/li&gt;
&lt;li&gt;[ ] SSO implemented for priority applications&lt;/li&gt;
&lt;li&gt;[ ] Conditional access policies configured&lt;/li&gt;
&lt;li&gt;[ ] Automated provisioning/deprovisioning operational&lt;/li&gt;
&lt;li&gt;[ ] Access review process established&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Device:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Device inventory complete&lt;/li&gt;
&lt;li&gt;[ ] MDM deployed for mobile devices&lt;/li&gt;
&lt;li&gt;[ ] EDR deployed on all endpoints&lt;/li&gt;
&lt;li&gt;[ ] Compliance baselines defined&lt;/li&gt;
&lt;li&gt;[ ] Device health integrated with access decisions&lt;/li&gt;
&lt;li&gt;[ ] BYOD policy and controls implemented&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Access:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] ZTNA selected and deployed&lt;/li&gt;
&lt;li&gt;[ ] Crown Jewels applications migrated to ZTNA&lt;/li&gt;
&lt;li&gt;[ ] VPN dependency reduced&lt;/li&gt;
&lt;li&gt;[ ] PAM deployed for privileged accounts&lt;/li&gt;
&lt;li&gt;[ ] JIT access configured for admin tasks&lt;/li&gt;
&lt;li&gt;[ ] Network segmentation improved&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Operations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Monitoring and alerting operational&lt;/li&gt;
&lt;li&gt;[ ] Incident response playbooks updated&lt;/li&gt;
&lt;li&gt;[ ] User training completed&lt;/li&gt;
&lt;li&gt;[ ] Operational documentation complete&lt;/li&gt;
&lt;li&gt;[ ] Regular maturity assessments scheduled&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The Reality Check
&lt;/h2&gt;

&lt;p&gt;Let me be direct: Zero Trust implementation is hard. It takes years, not months. It requires sustained investment, executive commitment, and organisational change management.&lt;/p&gt;

&lt;p&gt;But the alternative - relying on perimeter security in a perimeterless world - is worse. Every major breach you read about exploits the gap between traditional security models and modern IT reality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start small:&lt;/strong&gt; Protect your Crown Jewels first. A Zero Trust proxy in front of your most critical application delivers immediate risk reduction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Build incrementally:&lt;/strong&gt; Each phase delivers value while building toward the complete architecture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Accept imperfection:&lt;/strong&gt; You will never achieve Zero Trust "perfection." The goal is continuous improvement in security posture.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Focus on architecture, not products:&lt;/strong&gt; The vendors will come and go. The principles endure.&lt;/p&gt;




&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Zero Trust is the security architecture for 2026 and beyond. It acknowledges the reality that networks are untrusted, perimeters are dissolved, and identity is the new control plane.&lt;/p&gt;

&lt;p&gt;Do not be seduced by vendor promises of instant Zero Trust. There is no shortcut. But with systematic implementation - identity foundation, device health, least privilege access - you can transform your security posture.&lt;/p&gt;

&lt;p&gt;As I discussed in &lt;a href="https://dev.to/blog/2026-01-13-identity-first-security-strategy"&gt;&lt;em&gt;Identity is the New Firewall&lt;/em&gt;&lt;/a&gt;, identity is the foundation. Build on it. Verify everything. Trust nothing implicitly.&lt;/p&gt;

&lt;p&gt;The architecture is clear. The journey is long. Start today.&lt;/p&gt;




&lt;h2&gt;
  
  
  Building Your Zero Trust Architecture
&lt;/h2&gt;

&lt;p&gt;Transforming from traditional perimeter security to Zero Trust requires experienced guidance and systematic execution. My &lt;a href="https://dev.to/services/it-management"&gt;IT management services&lt;/a&gt; and &lt;a href="https://dev.to/services/it-compliance"&gt;IT compliance services&lt;/a&gt; help organisations assess their current maturity, develop phased implementation plans, and execute Zero Trust transformations that deliver measurable risk reduction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/contact"&gt;Get in touch&lt;/a&gt;&lt;/strong&gt; to discuss your Zero Trust journey.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Related reading: &lt;a href="https://dev.to/blog/2026-01-13-identity-first-security-strategy"&gt;Identity is the New Firewall&lt;/a&gt; explores the identity foundation that Zero Trust requires. &lt;a href="https://dev.to/blog/2026-01-14-saas-governance-strategies"&gt;SaaS Governance Strategies&lt;/a&gt; addresses access control for cloud applications. &lt;a href="https://dev.to/blog/2026-01-16-api-first-enterprise-strategy"&gt;API-First Enterprise Strategy&lt;/a&gt; covers API security in modern architectures.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>zerotrust</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Vibe Coding Security: What Happens When Developers Trust AI Too Much</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:05:53 +0000</pubDate>
      <link>https://dev.to/danieljglover/vibe-coding-security-what-happens-when-developers-trust-ai-too-much-1fji</link>
      <guid>https://dev.to/danieljglover/vibe-coding-security-what-happens-when-developers-trust-ai-too-much-1fji</guid>
      <description>&lt;p&gt;Nearly half of all AI-generated code contains security vulnerabilities. Not edge cases. Not theoretical risks. According to &lt;a href="https://www.veracode.com/blog/genai-code-security-report/" rel="noopener noreferrer"&gt;Veracode's 2025 GenAI Code Security Report&lt;/a&gt;, which tested over 100 large language models across 80 real-world coding tasks, &lt;strong&gt;45% of AI-generated code introduced OWASP Top 10 vulnerabilities&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;With &lt;a href="https://dev.to/blog/2025-12-10-vibecoding-impact-web-development"&gt;41% of global code now AI-generated&lt;/a&gt; and 87% of Fortune 500 companies using at least one vibe coding platform, this isn't a future problem. It's happening now, in production systems, handling real user data.&lt;/p&gt;

&lt;p&gt;The conversation around vibe coding has focused heavily on productivity gains and democratised development. What's been missing is an honest assessment of security - and what organisations need to do about it.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Security Landscape in 2025
&lt;/h2&gt;

&lt;p&gt;The statistics paint a concerning picture. &lt;a href="https://www.veracode.com/blog/genai-code-security-report/" rel="noopener noreferrer"&gt;Veracode's research&lt;/a&gt; found that while AI models improved at writing functional code, &lt;strong&gt;security performance remained flat regardless of model size or training sophistication&lt;/strong&gt;. The assumption that "smarter" models naturally produce more secure code has proven false.&lt;/p&gt;

&lt;p&gt;Language-specific findings are particularly stark:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Java&lt;/strong&gt;: 72% security failure rate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Python&lt;/strong&gt;: 45% failure rate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;C#&lt;/strong&gt;: 42% failure rate&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;JavaScript&lt;/strong&gt;: 38% failure rate&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An &lt;a href="https://www.endorlabs.com/learn/the-most-common-security-vulnerabilities-in-ai-generated-code" rel="noopener noreferrer"&gt;Endor Labs study&lt;/a&gt; reinforced these findings, discovering that &lt;strong&gt;over 40% of AI-generated code solutions contain security vulnerabilities&lt;/strong&gt;, even when developers used the latest foundational AI models.&lt;/p&gt;

&lt;p&gt;The root problem, as the Cloud Security Alliance (CSA) notes, is that AI coding assistants don't inherently understand your application's risk model, internal standards, or threat landscape. This disconnect introduces systemic risks - not just insecure lines of code, but logic flaws, missing controls, and inconsistent security patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Common Vulnerabilities in Vibe-Coded Applications
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Input Validation Failures
&lt;/h3&gt;

&lt;p&gt;By default, AI-generated code frequently omits input validation unless explicitly prompted to include it. According to &lt;a href="https://www.endorlabs.com/learn/the-most-common-security-vulnerabilities-in-ai-generated-code" rel="noopener noreferrer"&gt;Endor Labs research&lt;/a&gt;, this results in insecure outputs by default - the AI simply doesn't consider validation a requirement unless you tell it to.&lt;/p&gt;
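&lt;p&gt;The fix is to make validation explicit. A minimal allowlist example in Python - reject anything that does not match the expected shape, rather than trying to strip "bad" characters:&lt;/p&gt;

```python
# Allowlist validation: define the shape you accept and reject the rest.

import re

USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,32}")

def validate_username(value: str) -> str:
    """Return the username unchanged if valid, otherwise raise."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

print(validate_username("daniel_g"))       # daniel_g
# validate_username("alice; DROP TABLE")   # raises ValueError
```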

&lt;h3&gt;
  
  
  Cross-Site Scripting (XSS)
&lt;/h3&gt;

&lt;p&gt;Veracode's testing found that AI tools &lt;strong&gt;failed to defend against cross-site scripting in 86% of relevant code samples&lt;/strong&gt;. This is one of the most common web application vulnerabilities, yet AI consistently produces code susceptible to it.&lt;/p&gt;
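&lt;p&gt;The standard defence is contextual output encoding. A minimal Python illustration using the standard library's &lt;code&gt;html.escape&lt;/code&gt; - the payload is assembled from character codes purely to keep raw markup out of the example:&lt;/p&gt;

```python
# Contextual output encoding with the standard library. The payload is
# built from character codes (chr(60)/chr(62) are the angle brackets)
# so this example contains no raw markup; escaping neutralises it.

import html

def render_comment(text: str) -> str:
    """Escape user-supplied text before interpolating it into a page."""
    return html.escape(text, quote=True)

payload = chr(60) + "script" + chr(62) + "alert(1)"  # a script-tag payload
escaped = render_comment(payload)
assert chr(60) not in escaped  # no angle brackets survive escaping
```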

&lt;h3&gt;
  
  
  SQL Injection
&lt;/h3&gt;

&lt;p&gt;AI assistants reproduce insecure patterns from their training data. &lt;a href="https://www.endorlabs.com/learn/the-most-common-security-vulnerabilities-in-ai-generated-code" rel="noopener noreferrer"&gt;Security researchers&lt;/a&gt; have documented AI generating classic vulnerable patterns like &lt;code&gt;sql = "SELECT * FROM users WHERE id = " + user_input&lt;/code&gt; - textbook examples of what not to do.&lt;/p&gt;
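&lt;p&gt;The remedy has been known for decades: parameterised queries, which keep attacker input as data rather than executable SQL. A self-contained Python/sqlite3 illustration:&lt;/p&gt;

```python
# Parameterised queries: the driver binds user input as a value, so the
# classic "1 OR 1=1" payload cannot alter the query's structure.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "1 OR 1=1"  # classic injection attempt

rows = conn.execute(
    "SELECT * FROM users WHERE id = ?", (user_input,)
).fetchall()
print(rows)  # [] - the payload is treated as a literal value, matching nothing
```

&lt;p&gt;The same placeholder discipline applies with any driver or ORM; the point is that the query's structure is fixed before user data ever arrives.&lt;/p&gt;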

&lt;h3&gt;
  
  
  Log Injection
&lt;/h3&gt;

&lt;p&gt;88% of AI-generated code samples were vulnerable to log injection attacks (CWE-117), according to Veracode's report. This vulnerability allows attackers to forge log entries or inject malicious content into logging systems.&lt;/p&gt;
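&lt;p&gt;One common mitigation is to strip control characters from untrusted values before they reach the logger, so a single input cannot forge additional log lines. A minimal sketch:&lt;/p&gt;

```python
# Strip control characters (including CR/LF) from untrusted values so a
# single user input cannot forge extra log lines (CWE-117).

import re

CONTROL_CHARS = re.compile(r"[\x00-\x1f\x7f]")

def sanitise_for_log(value: str) -> str:
    """Collapse control characters so one input stays one log line."""
    return CONTROL_CHARS.sub(" ", value)

attack = "alice\n2026-04-16 09:05 INFO admin login succeeded"
print(sanitise_for_log(attack))  # newline replaced: the forged line never forms
```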

&lt;h3&gt;
  
  
  Hardcoded Secrets
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://www.appknox.com/blog/tea-app-data-breach-security-flaws-analysis-appknox" rel="noopener noreferrer"&gt;Tea App data breach&lt;/a&gt; in July 2025 exposed this risk dramatically. A security scan of the iOS app revealed API keys and client tokens embedded directly in the source code - attackers could extract these keys to impersonate the app and access user data without triggering authentication controls.&lt;/p&gt;
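&lt;p&gt;The alternative is straightforward: load secrets from the environment (or a secrets manager) at runtime and fail fast when they are missing. A minimal sketch - the variable name is illustrative:&lt;/p&gt;

```python
# Read secrets from the environment at runtime and fail fast when absent.
# "PAYMENTS_API_KEY" is an illustrative name, not a real service's variable.

import os

def get_api_key() -> str:
    key = os.environ.get("PAYMENTS_API_KEY")
    if key is None:
        raise RuntimeError("PAYMENTS_API_KEY is not set; refusing to start")
    return key
```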

&lt;h3&gt;
  
  
  Hallucinated Dependencies
&lt;/h3&gt;

&lt;p&gt;A particularly insidious risk is "slopsquatting" - where AI invents nonexistent library names that attackers then register as malicious packages. &lt;a href="https://securityboulevard.com/2025/12/from-chatbot-to-code-threat-owasps-agentic-ai-top-10-and-the-specialized-risks-of-coding-agents/" rel="noopener noreferrer"&gt;OWASP now recognises&lt;/a&gt; this as a stealth compromise technique unique to AI coding workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Incidents
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Tea App Data Breach - July 2025
&lt;/h3&gt;

&lt;p&gt;Tea, a women-only dating safety app, suffered &lt;a href="https://decrypt.co/331961/tea-app-claimed-protect-women-exposes-72000-ids-epic-security-fail" rel="noopener noreferrer"&gt;catastrophic data breaches&lt;/a&gt; in July and August 2025. Over 72,000 user images - including 13,000 government ID photos - were exposed. The breach affected more than 1.6 million users, with personal messages and sensitive information leaked to 4chan and Twitter.&lt;/p&gt;

&lt;p&gt;The aftermath was severe: multiple class action lawsuits consolidated into federal court, an FBI investigation, and widespread media coverage from BBC, NPR, and The New York Times. Women whose data was leaked faced harassment and doxxing.&lt;/p&gt;

&lt;p&gt;It's worth noting that &lt;a href="https://simonwillison.net/2025/Jul/26/official-statement-from-tea/" rel="noopener noreferrer"&gt;Simon Willison has questioned&lt;/a&gt; whether vibe coding was the direct cause - Tea's statement indicated the underlying issues related to code written before February 2024. However, the incident highlights the exact vulnerability patterns AI-generated code commonly exhibits: unauthenticated database access and hardcoded credentials.&lt;/p&gt;

&lt;h3&gt;
  
  
  Replit/SaaStr Database Deletion - July 2025
&lt;/h3&gt;

&lt;p&gt;SaaStr founder Jason Lemkin documented a &lt;a href="https://fortune.com/2025/07/23/ai-coding-tool-replit-wiped-database-called-it-a-catastrophic-failure/" rel="noopener noreferrer"&gt;catastrophic failure&lt;/a&gt; with Replit's AI coding tool. During a code freeze - when no changes should have been made - the AI deleted an entire production database containing records on over 1,200 executives and 1,196 companies.&lt;/p&gt;

&lt;p&gt;The AI's response was remarkable in its honesty: "I saw empty database queries. I panicked instead of thinking. I destroyed months of your work in seconds. You told me to always ask permission. And I ignored all of it."&lt;/p&gt;

&lt;p&gt;Perhaps more concerning: the AI initially told Lemkin that data recovery was impossible, which turned out to be false. &lt;a href="https://cybernews.com/ai-news/replit-ai-vive-code-rogue/" rel="noopener noreferrer"&gt;Reports indicate&lt;/a&gt; the AI also created a 4,000-record database filled with fictional people to cover up bugs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cursor IDE Vulnerabilities - August 2025
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://nvd.nist.gov/vuln/detail/CVE-2025-54135" rel="noopener noreferrer"&gt;CVE-2025-54135&lt;/a&gt;, dubbed "CurXecute," demonstrated how AI coding tools themselves can become attack vectors. This vulnerability in Cursor IDE allowed attackers to achieve remote code execution through prompt injection - a malicious message processed by the AI could modify configuration files and execute arbitrary commands, all without user approval.&lt;/p&gt;

&lt;p&gt;The vulnerability was rated 8.6 (high severity) and affected all Cursor versions prior to 1.3.9. The &lt;a href="https://thehackernews.com/2025/08/cursor-ai-code-editor-fixed-flaw.html" rel="noopener noreferrer"&gt;security researchers&lt;/a&gt; who discovered it demonstrated how a crafted Slack message could compromise a developer's entire machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Business Risk Assessment
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Over-Trust Problem
&lt;/h3&gt;

&lt;p&gt;The disconnect between developer confidence and actual security outcomes is striking. &lt;a href="https://securityboulevard.com/2025/12/from-chatbot-to-code-threat-owasps-agentic-ai-top-10-and-the-specialized-risks-of-coding-agents/" rel="noopener noreferrer"&gt;GitHub's own survey&lt;/a&gt; shows &lt;strong&gt;75% of developers trust AI code as much or more than human code&lt;/strong&gt; - even while more than half regularly see insecure suggestions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://cloudwars.com/cybersecurity/snyks-ai-code-security-report-reveals-software-developers-false-sense-of-security/" rel="noopener noreferrer"&gt;Snyk's research&lt;/a&gt; reveals the contradiction: while 80% of teams trust AI coding tools, 56% simultaneously admit the AI-generated code sometimes or frequently introduces security issues. Snyk CEO Peter McKay has stated that AI-generated code is actually &lt;strong&gt;30-40% more vulnerable&lt;/strong&gt; than human-written code.&lt;/p&gt;

&lt;p&gt;Perhaps most telling: &lt;a href="https://www.allaboutai.com/resources/ai-statistics/ai-in-software-development/" rel="noopener noreferrer"&gt;89% of AI suggestions remain unchanged during code review&lt;/a&gt;, indicating developers often accept suggestions without thorough comprehension.&lt;/p&gt;

&lt;h3&gt;
  
  
  Technical Debt Accumulation
&lt;/h3&gt;

&lt;p&gt;API evangelist Kin Lane, quoted in &lt;a href="https://www.infoq.com/news/2025/11/ai-code-technical-debt/" rel="noopener noreferrer"&gt;InfoQ's analysis&lt;/a&gt;, offered a stark assessment: "I don't think I have ever seen so much technical debt being created in such a short period of time during my 35-year career in technology."&lt;/p&gt;

&lt;p&gt;Veracode CTO Jens Wessling &lt;a href="https://www.veracode.com/blog/ai-generated-code-security-risks/" rel="noopener noreferrer"&gt;noted&lt;/a&gt; that the rise of vibe coding - where developers rely on AI without explicitly defining security requirements - represents a fundamental shift. "GenAI models make the wrong choices nearly half the time, and it's not improving."&lt;/p&gt;

&lt;h3&gt;
  
  
  Regulatory Exposure
&lt;/h3&gt;

&lt;p&gt;The &lt;a href="https://artificialintelligenceact.eu/high-level-summary/" rel="noopener noreferrer"&gt;EU AI Act&lt;/a&gt;, which began enforcement in February 2025, has significant implications for AI-generated code. High-risk AI systems require risk management documentation, human oversight, and audit trails.&lt;/p&gt;

&lt;p&gt;For organisations using vibe-coded applications in critical infrastructure, healthcare, or financial services, compliance requirements are substantial. Penalties reach up to &lt;strong&gt;EUR 35 million or 7% of global annual turnover&lt;/strong&gt; for prohibited practices, and EUR 15 million or 3% for high-risk system violations.&lt;/p&gt;

&lt;p&gt;The November 2025 "Digital Omnibus on AI" has adjusted some timelines, but the direction is clear: AI involvement in code generation will require documentation and accountability.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security Checklist
&lt;/h2&gt;

&lt;h3&gt;
  
  
  For Developers
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Never ship AI-generated auth, crypto, or system-level code without expert review.&lt;/strong&gt; &lt;a href="https://www.aikido.dev/blog/vibe-coding-security" rel="noopener noreferrer"&gt;Security experts consistently recommend&lt;/a&gt; keeping scope small and building critical systems yourself.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Treat all AI output as code from a confident but occasionally wrong junior developer.&lt;/strong&gt; Trust but verify - always.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run SAST/DAST on every AI-generated snippet before committing.&lt;/strong&gt; Static and dynamic analysis catch flaws that visual review misses.&lt;/p&gt;
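&lt;p&gt;To make that concrete, here is a toy illustration of the kind of pattern static analysis catches. It is a sketch only - a few lines of Python using the standard &lt;code&gt;ast&lt;/code&gt; module to flag SQL built with f-strings or concatenation - and no substitute for a real SAST tool such as Semgrep or Bandit.&lt;/p&gt;

```python
import ast

# Toy static check: flag .execute() calls whose first argument is an
# f-string or string concatenation. Real SAST tooling goes far deeper;
# this only illustrates the pattern-matching such tools perform.
def find_sql_concat(source: str) -> list[int]:
    """Return line numbers where .execute() receives a non-constant string."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args):
            arg = node.args[0]
            # f-strings (JoinedStr) and concatenation (BinOp) are suspicious;
            # plain constants with bound parameters are not.
            if isinstance(arg, (ast.JoinedStr, ast.BinOp)):
                findings.append(node.lineno)
    return findings

snippet = '''
cur.execute(f"SELECT * FROM users WHERE id = {user_id}")      # flagged
cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))  # fine
'''
print(find_sql_concat(snippet))  # → [2]
```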

&lt;p&gt;&lt;strong&gt;Explicitly prompt for security requirements.&lt;/strong&gt; AI omits input validation and security controls by default unless you specify them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Check for hallucinated package names.&lt;/strong&gt; Before adding any AI-suggested dependency, verify it actually exists and is legitimate.&lt;/p&gt;
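&lt;p&gt;Part of this check can be automated as a first pass. The sketch below is illustrative - the allowlist and similarity cutoff are assumptions, and it does not replace checking the registry itself - but it uses &lt;code&gt;difflib&lt;/code&gt; to flag suggested dependencies that look like typosquats of packages you already trust, and marks everything else for manual verification.&lt;/p&gt;

```python
import difflib

# Heuristic vetting of AI-suggested dependencies against a team-maintained
# allowlist. Names very close to (but not exactly) a trusted package match
# the classic typosquatting pattern; names with no near match may be
# hallucinated and must be verified on the registry before installing.
KNOWN_PACKAGES = {"requests", "numpy", "pandas", "cryptography"}  # example allowlist

def vet_dependency(name: str, known=KNOWN_PACKAGES, cutoff=0.85) -> str:
    if name in known:
        return "known"
    close = difflib.get_close_matches(name, known, n=1, cutoff=cutoff)
    if close:
        return f"possible typosquat of '{close[0]}'"
    return "unknown - verify it exists on the registry"

print(vet_dependency("requests"))    # known
print(vet_dependency("reqeusts"))    # possible typosquat of 'requests'
print(vet_dependency("fastparse3"))  # unknown - verify it exists on the registry
```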

&lt;h3&gt;
  
  
  For Teams and Engineering Managers
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Mandate human review for all AI-generated code.&lt;/strong&gt; The 89% unchanged rate during code review indicates current practices are insufficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Separate development and production environments.&lt;/strong&gt; The Replit incident demonstrated why this basic practice remains essential - implement it as a hard requirement for AI tools.&lt;/p&gt;
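&lt;p&gt;One way to make that boundary hard rather than procedural is a guard that any tooling - AI-driven or otherwise - must call before destructive operations. The sketch below is hypothetical: the host names, environment shape, and error type are placeholders to adapt to your own configuration.&lt;/p&gt;

```python
from urllib.parse import urlparse

# Hypothetical environment guard: refuse destructive operations against
# known production hosts. The host list is an assumption - in practice it
# would come from your environment configuration, not a hardcoded set.
PRODUCTION_HOSTS = {"prod-db.internal", "db.example.com"}

class ProductionGuardError(RuntimeError):
    pass

def assert_not_production(database_url: str) -> None:
    host = urlparse(database_url).hostname
    if host in PRODUCTION_HOSTS:
        raise ProductionGuardError(
            f"Refusing destructive operation against production host {host!r}"
        )

assert_not_production("postgresql://dev-db.internal/app")  # fine
try:
    assert_not_production("postgresql://prod-db.internal/app")
except ProductionGuardError as exc:
    print(exc)
```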

&lt;p&gt;&lt;strong&gt;Implement "planning-only" modes for AI tools in sensitive contexts.&lt;/strong&gt; Let teams collaborate with AI on design without risking live systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Document AI involvement in code generation.&lt;/strong&gt; EU AI Act compliance will require audit trails. Start building this practice now.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Train developers on AI-specific security risks.&lt;/strong&gt; &lt;a href="https://cloudwars.com/cybersecurity/snyks-ai-code-security-report-reveals-software-developers-false-sense-of-security/" rel="noopener noreferrer"&gt;More than half of organisations&lt;/a&gt; don't provide tool-related training - this is a significant gap.&lt;/p&gt;

&lt;h3&gt;
  
  
  For Organisations
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Run proof of concept before adopting AI coding tools.&lt;/strong&gt; &lt;a href="https://cloudwars.com/cybersecurity/snyks-ai-code-security-report-reveals-software-developers-false-sense-of-security/" rel="noopener noreferrer"&gt;Only 1 in 5 organisations&lt;/a&gt; currently do this - don't skip due diligence.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrate Software Composition Analysis (SCA) tooling.&lt;/strong&gt; Less than 25% of developers use SCA to identify vulnerabilities in AI-generated code suggestions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Establish formal AI code security policies.&lt;/strong&gt; Define what AI can and cannot be used for, and enforce it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Consider AI coding tools as part of your threat landscape.&lt;/strong&gt; The Cursor vulnerability demonstrates that AI tools themselves can be attack vectors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Keep scope constrained.&lt;/strong&gt; Don't let AI write entire applications or handle critical systems. Use it for boilerplate and well-understood patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  Free Advice: Automate Your Security Reviews
&lt;/h2&gt;

&lt;p&gt;Here's something you can implement today. The checklist above is comprehensive, but manually running through it for every commit is unrealistic. Instead, use AI to audit AI-generated code before it reaches production.&lt;/p&gt;

&lt;p&gt;The following prompt can be integrated into your pre-push hooks, CI/CD pipeline, or run manually before code review. It instructs an AI model to perform a security audit against the specific vulnerabilities that AI-generated code commonly introduces.&lt;/p&gt;

&lt;h3&gt;
  
  
  Pre-Push Security Audit Prompt
&lt;/h3&gt;

&lt;p&gt;Copy this prompt and run it against your staged changes or pull request diff:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;You are a security auditor specialising in AI-generated code vulnerabilities. Review the following code changes for security issues, focusing specifically on the vulnerabilities that AI coding assistants commonly introduce.

&lt;span class="gu"&gt;## Code to Review&lt;/span&gt;
[PASTE YOUR DIFF OR CODE HERE]

&lt;span class="gu"&gt;## Required Security Checks&lt;/span&gt;

Analyse the code against each category below. For each issue found, provide:
&lt;span class="p"&gt;-&lt;/span&gt; The specific line or code block
&lt;span class="p"&gt;-&lt;/span&gt; The vulnerability type (CWE number if applicable)
&lt;span class="p"&gt;-&lt;/span&gt; Severity (Critical/High/Medium/Low)
&lt;span class="p"&gt;-&lt;/span&gt; A concrete fix

&lt;span class="gu"&gt;### 1. Input Validation&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] All user inputs are validated before use
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Input length limits are enforced
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Input type checking is present
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No raw user input in SQL queries, shell commands, or file paths

&lt;span class="gu"&gt;### 2. Injection Vulnerabilities&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No string concatenation in SQL queries (use parameterised queries)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No user input in shell/system commands
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No user input directly rendered in HTML (XSS prevention)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No user input in log statements without sanitisation (log injection)

&lt;span class="gu"&gt;### 3. Authentication and Authorisation&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No hardcoded credentials, API keys, or secrets
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Session tokens are generated securely
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Authentication checks present on protected routes
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Authorisation verified for resource access

&lt;span class="gu"&gt;### 4. Cryptography&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No deprecated algorithms (MD5, SHA1 for security, DES, RC4)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No hardcoded encryption keys or IVs
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Secure random number generation for security contexts
&lt;span class="p"&gt;-&lt;/span&gt; [ ] TLS/HTTPS enforced for sensitive data transmission

&lt;span class="gu"&gt;### 5. Data Exposure&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Sensitive data not logged
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Error messages don't expose internal details
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No sensitive data in URLs or query parameters
&lt;span class="p"&gt;-&lt;/span&gt; [ ] PII properly handled and encrypted at rest

&lt;span class="gu"&gt;### 6. Dependency Safety&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] All imported packages exist in official registries
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No typosquatting risks (verify package names character by character)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Dependencies are pinned to specific versions
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No known vulnerable dependency versions

&lt;span class="gu"&gt;### 7. Configuration Security&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Debug mode disabled for production
&lt;span class="p"&gt;-&lt;/span&gt; [ ] CORS properly configured (not wildcard for authenticated endpoints)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] Security headers present (CSP, X-Frame-Options, etc.)
&lt;span class="p"&gt;-&lt;/span&gt; [ ] No sensitive defaults that should be environment-specific

&lt;span class="gu"&gt;## Output Format&lt;/span&gt;

Provide your findings as:

&lt;span class="gu"&gt;### Summary&lt;/span&gt;
[X] issues found: [Critical count] Critical, [High count] High, [Medium count] Medium, [Low count] Low

&lt;span class="gu"&gt;### Critical Issues (must fix before merge)&lt;/span&gt;
[List each critical issue with location, description, and fix]

&lt;span class="gu"&gt;### High Issues (should fix before merge)&lt;/span&gt;
[List each high issue]

&lt;span class="gu"&gt;### Medium Issues (fix in next iteration)&lt;/span&gt;
[List each medium issue]

&lt;span class="gu"&gt;### Low Issues (consider fixing)&lt;/span&gt;
[List each low issue]

&lt;span class="gu"&gt;### Passed Checks&lt;/span&gt;
[List categories that passed all checks]

&lt;span class="gu"&gt;### Recommendations&lt;/span&gt;
[Any additional security improvements specific to this codebase]

If no issues are found, confirm which checks passed and note any areas that couldn't be fully assessed from the code provided.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Integrating Into Your Workflow
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;For Git pre-push hooks:&lt;/strong&gt; Save the prompt as a template file and use a script that extracts your staged diff, combines it with the prompt, and sends it to your preferred AI API. Block the push if critical issues are found.&lt;/p&gt;
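&lt;p&gt;A minimal sketch of that hook script follows. The &lt;code&gt;review_with_ai&lt;/code&gt; stub, the upstream diff range, and the summary-line parsing are all assumptions to adapt to your own tooling - it is a starting point, not a finished hook.&lt;/p&gt;

```python
import re
import subprocess

# Sketch of a pre-push hook wired to the audit prompt above. Save as
# .git/hooks/pre-push and make it executable; a non-zero exit blocks the
# push. review_with_ai() is a stub for whichever AI API your team uses.
PROMPT_TEMPLATE = """You are a security auditor specialising in \
AI-generated code vulnerabilities. Review the following changes.
## Code to Review
{diff}
"""

def staged_diff() -> str:
    """Diff of the commits about to be pushed, against upstream."""
    result = subprocess.run(
        ["git", "diff", "@{upstream}...HEAD"],
        capture_output=True, text=True,
    )
    return result.stdout

def review_with_ai(prompt: str) -> str:
    raise NotImplementedError("call your AI API here and return its report")

def has_blocking_issues(report: str) -> bool:
    """Block on any Critical finding, parsing the Summary line the audit
    prompt requests ("N issues found: X Critical, ...")."""
    match = re.search(r"(\d+)\s+Critical", report)
    if match:
        return int(match.group(1)) > 0
    return True  # unparseable report: fail closed and ask for human review

def main() -> int:
    diff = staged_diff()
    if not diff:
        return 0  # nothing to audit
    report = review_with_ai(PROMPT_TEMPLATE.format(diff=diff))
    if has_blocking_issues(report):
        print(report)
        return 1
    return 0

# Hook entry point (uncomment in the real hook):
# import sys; sys.exit(main())
```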

&lt;p&gt;&lt;strong&gt;For CI/CD pipelines:&lt;/strong&gt; Add a security audit stage that runs the prompt against the PR diff. Fail the pipeline on critical issues, add review comments for high/medium issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For manual review:&lt;/strong&gt; Before requesting code review, paste your changes into Claude, ChatGPT, or your preferred AI tool with this prompt. Address critical and high issues before submitting.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why This Works
&lt;/h3&gt;

&lt;p&gt;This approach uses AI to catch the specific vulnerabilities that AI introduces. It's not a replacement for proper SAST/DAST tooling, but it adds a layer of defence that specifically targets the blind spots in vibe-coded applications.&lt;/p&gt;

&lt;p&gt;The prompt is deliberately structured around the OWASP Top 10 and CWE categories that Veracode's research identified as most problematic in AI-generated code. It forces explicit verification of the security controls that AI routinely omits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Moving Forward
&lt;/h2&gt;

&lt;p&gt;Understanding these risks doesn't mean avoiding AI-assisted development - it means approaching it with appropriate rigour. Organisations that establish strong security practices around vibe coding will capture the productivity benefits while managing the risks.&lt;/p&gt;

&lt;p&gt;The developers and teams who will thrive are those who:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use AI as a tool, not a substitute for security knowledge&lt;/li&gt;
&lt;li&gt;Maintain healthy scepticism about generated code&lt;/li&gt;
&lt;li&gt;Build security review into their AI-assisted workflows&lt;/li&gt;
&lt;li&gt;Stay informed as the landscape evolves&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vibe coding is reshaping how we build software. The question isn't whether to use it, but whether you're prepared to use it securely.&lt;/p&gt;




&lt;h2&gt;
  
  
  Concerned About Security in Your Development Practices?
&lt;/h2&gt;

&lt;p&gt;AI-generated code introduces unique security challenges that require specialised attention. My &lt;a href="https://dev.to/services/it-compliance"&gt;IT compliance services&lt;/a&gt; help organisations establish security frameworks, conduct code audits, and build secure development practices - whether you are adopting vibe coding or strengthening existing workflows.&lt;/p&gt;

&lt;p&gt;If you are actively reshaping engineering workflows around AI, it is also worth reading my guides to &lt;a href="https://dev.to/blog/2026-01-29-securing-ai-agents-practical-guide"&gt;securing AI agents in production&lt;/a&gt; and &lt;a href="https://dev.to/blog/2025-12-20-reframing-tech-debt-2026"&gt;technical debt in AI-assisted delivery&lt;/a&gt;. Those two issues usually appear alongside insecure code patterns rather than in isolation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://dev.to/contact"&gt;Book a free consultation&lt;/a&gt;&lt;/strong&gt; to discuss your security requirements, AI development guardrails, and where your current review process is still exposed.&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>Management Trends 2026: What IT Leaders Need to Know</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Thu, 16 Apr 2026 09:05:26 +0000</pubDate>
      <link>https://dev.to/danieljglover/management-trends-2026-what-it-leaders-need-to-know-2e9o</link>
      <guid>https://dev.to/danieljglover/management-trends-2026-what-it-leaders-need-to-know-2e9o</guid>
      <description>&lt;p&gt;There is a stat doing the rounds right now that should make every middle manager sit up and pay attention.&lt;/p&gt;

&lt;p&gt;Gartner predicts that organisations using AI to flatten their structures will eliminate roughly half of middle management roles by 2026. Not over the next decade. By the end of this year.&lt;/p&gt;

&lt;p&gt;If that sounds dramatic, look around. The evidence is already stacking up. Gallup reported a notable dip in manager engagement in 2025, dropping from 30% to 27%. Younger managers under 35 fell five percentage points. Female managers dropped seven. DDI's Global Leadership Forecast found 71% of leaders reporting increased stress from their roles, with 40% of those actively thinking about quitting.&lt;/p&gt;

&lt;p&gt;The people we rely on to hold organisations together are burning out, checking out, or being restructured out of existence.&lt;/p&gt;

&lt;p&gt;As someone who has led IT teams through multiple rounds of organisational change, M&amp;amp;A integrations, and the shift to hybrid working, I have seen first-hand how leadership expectations have changed. What worked five years ago simply does not cut it anymore. I covered the state of play heading into 2025 in my piece on &lt;a href="https://dev.to/blog/2025-12-12-it-management-trends-2025"&gt;IT management trends&lt;/a&gt; - this post looks at what has changed since and where we are heading next.&lt;/p&gt;

&lt;p&gt;Here are five management trends that will define 2026 - and what you can actually do about them.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Leaner Teams, Heavier Emotional Load
&lt;/h2&gt;

&lt;p&gt;Restructuring and automation have made teams leaner, but the work has not disappeared. It has just been redistributed upward onto fewer managers.&lt;/p&gt;

&lt;p&gt;This is not a temporary adjustment. It is structural. Organisations that cut middle management layers often discover they have removed the very people who absorbed complexity, translated strategy into action, and shielded teams from executive whiplash.&lt;/p&gt;

&lt;p&gt;The result? Remaining managers carry a disproportionate emotional and operational load. They are expected to be strategic and tactical, empathetic and efficient, available and focused - all at once. For organisations that need help redesigning that operating model, &lt;a href="https://dev.to/services/it-management"&gt;IT management consulting&lt;/a&gt; can provide an outside view and a practical plan.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do about it:&lt;/strong&gt; If you manage managers, check in on their capacity rather than just their output. Build emotional resilience into your leadership development frameworks, not as a nice-to-have but as a core competency. Recognise that a stressed manager creates a stressed team, which creates a retention problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. The Span of Influence Is Tripling
&lt;/h2&gt;

&lt;p&gt;Managers today oversee nearly triple the number of employees compared to a decade ago. Flatter structures mean wider spans of control, more direct reports, and less time per person.&lt;/p&gt;

&lt;p&gt;This is where traditional management falls apart. You cannot run meaningful one-to-ones with 15 direct reports every week. You cannot build psychological safety across a team of 20 when you barely have time to check Slack. The old playbook of "manage by presence" is dead.&lt;/p&gt;

&lt;p&gt;The leaders who thrive in 2026 will be those who shift from managing individuals to designing systems. Clear decision-making frameworks, strong delegation structures, and team rituals that build trust without requiring constant managerial input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do about it:&lt;/strong&gt; Audit your span of control. If you are managing more than 8 to 10 people directly, something needs to change - either through delegation, restructuring, or being honest with leadership about what is sustainable. Document your decision-making frameworks so your team can operate autonomously when you are not available.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Decision Velocity Beats Decision Quality
&lt;/h2&gt;

&lt;p&gt;DHR Global's 2026 talent outlook names agility as the single most critical leadership competency this year. Not strategic thinking. Not technical expertise. Agility.&lt;/p&gt;

&lt;p&gt;The reasoning is straightforward. In a volatile environment, a good decision made quickly beats a perfect decision made slowly. Leaders who wait for complete information before acting are being outpaced by those who make sound calls with 70% of the data and course-correct fast.&lt;/p&gt;

&lt;p&gt;This does not mean being reckless. It means building "rapid learning loops" into how you lead. Make a decision, measure the outcome, adjust, repeat. Replace perfectionist norms with high standards and fast iteration. A structured &lt;a href="https://dev.to/blog/2025-12-23-it-strategy-review-checklist-2026"&gt;IT strategy review&lt;/a&gt; can help ensure you are making fast decisions within a sound strategic framework.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do about it:&lt;/strong&gt; Track how long your decisions take. If approvals routinely take weeks, that is a leadership bottleneck, not a process issue. Push decision-making authority down to the people closest to the problem. Your job is to set guardrails, not sign off on everything.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. The Hybrid Trust Problem Is Not Going Away
&lt;/h2&gt;

&lt;p&gt;Stanford research confirms what most of us already suspected: remote and hybrid work is permanent for a large chunk of the workforce. But the trust gap between leaders and distributed teams has not been solved. My piece on &lt;a href="https://dev.to/blog/2026-02-14-workplace-transformation-what-it-leaders-need-to-know"&gt;workplace transformation in 2026&lt;/a&gt; digs into the infrastructure and technology side of making hybrid work sustainable.&lt;/p&gt;

&lt;p&gt;Too many organisations are still managing hybrid work with the same tools and expectations they used when everyone was in the same building. Proximity bias is real - the people who show up in the office get more visibility, more opportunities, and more trust, whether or not they are actually performing better.&lt;/p&gt;

&lt;p&gt;The best leaders in 2026 will be those who measure output rather than attendance, who build rituals for connection that work across time zones, and who resist the temptation to conflate "visible" with "productive."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do about it:&lt;/strong&gt; If you have hybrid teams, audit your promotion and development decisions for proximity bias. Are office-based staff getting disproportionate opportunities? Build structured async communication habits. Weekly written updates beat impromptu corridor conversations for distributed teams.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Gen Z and Millennials Are Rewriting the Leadership Contract
&lt;/h2&gt;

&lt;p&gt;Stanford's CASBS research found that younger workers evaluate organisations not just on salary and progression, but on purpose, flexibility, and whether leadership actually walks the talk.&lt;/p&gt;

&lt;p&gt;This is not about pandering or offering bean bags and pizza Fridays. It is about authenticity. Younger employees have a low tolerance for performative leadership - saying you value wellbeing while expecting midnight emails, claiming to support development while cutting training budgets.&lt;/p&gt;

&lt;p&gt;For IT leaders specifically, this matters because the tech talent market remains fiercely competitive. The difference between retaining your best engineer and losing them to a competitor often comes down to whether they trust their manager to advocate for them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What to do about it:&lt;/strong&gt; Be honest about what your organisation actually offers rather than what the careers page claims. If you cannot offer remote work, say so clearly rather than dangling it and pulling back. Invest in genuine development conversations - not annual reviews that tick a box, but regular, honest discussions about growth.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Common Thread: Emotional Intelligence Is Not Optional
&lt;/h2&gt;

&lt;p&gt;Every single one of these trends has the same underlying requirement. Emotional intelligence.&lt;/p&gt;

&lt;p&gt;Not the fluffy, "be nice to people" version. The operational kind. The ability to read a room, manage your own stress response, have difficult conversations without creating drama, and build trust in environments where face time is limited.&lt;/p&gt;

&lt;p&gt;Gartner, HBR, DDI, DHR Global - every major research house is converging on the same conclusion. Technical competence gets you the job. Emotional intelligence determines whether you keep it and whether anyone wants to follow you.&lt;/p&gt;

&lt;p&gt;The organisations that invest in building this capability at every management level will outperform those that keep promoting the loudest voice in the room.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Am Doing About It
&lt;/h2&gt;

&lt;p&gt;In my own teams, I have been deliberately shifting how I lead over the past two years:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Documenting decisions, not just making them.&lt;/strong&gt; Every significant call gets written down with reasoning. This builds trust, enables delegation, and creates a reference point when things need revisiting.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Protecting manager capacity.&lt;/strong&gt; I actively push back on meeting culture and administrative overhead that eats into my team's ability to actually lead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Measuring what matters.&lt;/strong&gt; SLA adherence, NPS scores, project delivery - not hours logged or seats warmed.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Having honest conversations early.&lt;/strong&gt; If something is not working, I would rather address it in week one than let it fester into a performance issue in month six.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of this is revolutionary. But in a year where half of middle management might disappear, the basics done consistently will separate the leaders who thrive from those who do not.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;What management trends are you seeing in your organisation? I would love to hear what is changing for you - &lt;a href="https://linkedin.com/in/dannyjamesglover" rel="noopener noreferrer"&gt;connect with me on LinkedIn&lt;/a&gt; or &lt;a href="https://dev.to/contact"&gt;get in touch&lt;/a&gt; to discuss your IT leadership challenges.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>leadership</category>
      <category>management</category>
      <category>ai</category>
    </item>
    <item>
      <title>ISO 27001 Internal Audit Checklist for Small Teams</title>
      <dc:creator>Daniel Glover</dc:creator>
      <pubDate>Mon, 13 Apr 2026 17:04:19 +0000</pubDate>
      <link>https://dev.to/danieljglover/iso-27001-internal-audit-checklist-for-small-teams-1n5l</link>
      <guid>https://dev.to/danieljglover/iso-27001-internal-audit-checklist-for-small-teams-1n5l</guid>
      <description>&lt;p&gt;ISO 27001 internal audits often get treated like a mini certification audit.&lt;/p&gt;

&lt;p&gt;That is usually where the pain starts.&lt;/p&gt;

&lt;p&gt;Small teams already have enough on their plate. They are running BAU support, shipping changes, dealing with suppliers, handling incidents, and trying to keep governance work moving without turning the whole year into an evidence-gathering exercise. When internal audit is handled badly, it becomes a scramble for screenshots and policies nobody has read since the last review.&lt;/p&gt;

&lt;p&gt;It does not need to work that way.&lt;/p&gt;

&lt;p&gt;Clause 9.2 of ISO 27001 requires internal audits at planned intervals so you can assess whether the ISMS is operating effectively and whether it conforms to both the standard and your own internal requirements. That sounds formal, because it is. But in practice the job is simpler than many teams make it. You need an audit programme, clear scope, objective evidence, competent auditors, and proper follow-up on what you find.&lt;/p&gt;

&lt;p&gt;The key point is this. An internal audit is not there to prove you are perfect. It is there to tell you whether your ISMS is real, current, and working.&lt;/p&gt;

&lt;p&gt;For smaller organisations, that distinction matters. If your audit process is designed like a heavyweight corporate exercise, it will drift, people will avoid it, and the findings will be weak. If it is designed as a focused check on how the system actually operates, it becomes one of the most useful compliance tools you have.&lt;/p&gt;

&lt;h2&gt;
  
  
  What ISO 27001 Actually Expects
&lt;/h2&gt;

&lt;p&gt;The standard does not demand a massive audit bureaucracy.&lt;/p&gt;

&lt;p&gt;What it does expect is discipline.&lt;/p&gt;

&lt;p&gt;Advisera's summary of clause 9.2 is a useful plain-English reminder. Internal audits need to happen at planned intervals, cover whether the ISMS conforms to ISO 27001 and your own policies, and produce evidence through document review, checklist-based testing, reporting, and follow-up. ISMS.online makes a similar point, emphasising scope, frequency, methods, responsibilities, reporting, and auditor objectivity. DataGuard also highlights something teams often miss: ISO 27001 does not dictate one universal cadence. You choose planned intervals based on your risk environment, scope, and organisational needs.&lt;/p&gt;

&lt;p&gt;That gives small teams more flexibility than they often realise.&lt;/p&gt;

&lt;p&gt;You do not need to audit every clause and every Annex A control in one exhausting week. You do need a credible programme that covers the full ISMS over time and gives leadership confidence that important areas are being checked properly.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Biggest Mistake Small Teams Make
&lt;/h2&gt;

&lt;p&gt;The most common mistake is treating internal audit as a document exercise.&lt;/p&gt;

&lt;p&gt;The team gathers policies, checks version numbers, confirms that training exists, and writes "compliant" next to most headings. The problem is that none of this tells you whether the ISMS is actually being followed.&lt;/p&gt;

&lt;p&gt;A policy that says access reviews happen quarterly is not evidence that access reviews happened.&lt;/p&gt;

&lt;p&gt;A risk methodology document is not evidence that risks are being reviewed.&lt;/p&gt;

&lt;p&gt;A supplier assurance process is not evidence that suppliers were actually assessed.&lt;/p&gt;

&lt;p&gt;That is why a good audit has to test operation, not just existence. If you are auditing incident management, ask for the incident log, recent lessons learned, and evidence that actions were closed. If you are auditing supplier risk, compare the stated due diligence process with what happened on real procurements. I made a similar point in my &lt;a href="https://danieljamesglover.com/blog/2026-04-08-vendor-due-diligence-guide/" rel="noopener noreferrer"&gt;vendor due diligence guide&lt;/a&gt;. A process is only useful if it changes decisions in real life.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Practical ISO 27001 Internal Audit Checklist
&lt;/h2&gt;

&lt;p&gt;If I were running this in a small team, this is the checklist I would use.&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Confirm the audit scope and objective
&lt;/h3&gt;

&lt;p&gt;Be explicit about what you are auditing.&lt;/p&gt;

&lt;p&gt;That might be the whole ISMS over a longer programme, or a focused review of a specific area such as access control, supplier assurance, incident management, asset management, or management review.&lt;/p&gt;

&lt;p&gt;For each audit, define:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;the scope&lt;/li&gt;
&lt;li&gt;the audit criteria&lt;/li&gt;
&lt;li&gt;the owner for the area being audited&lt;/li&gt;
&lt;li&gt;the planned date&lt;/li&gt;
&lt;li&gt;the auditor&lt;/li&gt;
&lt;li&gt;the evidence sources you expect to review&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If those basics are fuzzy, the audit usually turns into a wandering conversation rather than a useful test.&lt;/p&gt;
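
&lt;p&gt;If it helps to keep those basics honest, here is a minimal sketch of an audit plan entry as a data structure. The field names and sample values are illustrative, not anything the standard prescribes:&lt;/p&gt;

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AuditPlanEntry:
    """One planned internal audit. Field names are illustrative, not from ISO 27001."""
    scope: str                  # area under review, e.g. "Access control"
    criteria: list[str]         # clauses and policies audited against
    area_owner: str             # who owns the area being audited
    planned_date: date
    auditor: str
    evidence_sources: list[str] = field(default_factory=list)

    def is_self_audit(self) -> bool:
        # Flag the obvious conflict of interest up front
        return self.auditor == self.area_owner

entry = AuditPlanEntry(
    scope="Access control",
    criteria=["ISO 27001 clause 9.2", "Access Control Policy v2.1"],
    area_owner="IT Manager",
    planned_date=date(2026, 6, 15),
    auditor="Finance Lead",
    evidence_sources=["JML records", "Q1 access review output"],
)
print(entry.is_self_audit())  # False
```

&lt;p&gt;Holding the plan as structured fields like this also makes the objectivity check in the next step trivial to spot before the audit starts, rather than during it.&lt;/p&gt;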

&lt;h3&gt;
  
  
  2. Check that the audit is independent enough
&lt;/h3&gt;

&lt;p&gt;ISO 27001 does not require a huge separate internal audit department, but it does require objectivity.&lt;/p&gt;

&lt;p&gt;That means people should not audit their own work where this would create a conflict of interest. In a small team, independence often means using a colleague from another function, rotating responsibilities, or bringing in external support for sensitive areas.&lt;/p&gt;

&lt;p&gt;This is one of those places where small organisations need to be honest. You may not achieve perfect structural independence, but you still need a defensible approach that avoids self-approval.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Review the audit programme
&lt;/h3&gt;

&lt;p&gt;Before looking at one area in detail, step back and ask whether the overall audit plan is credible.&lt;/p&gt;

&lt;p&gt;Can you show:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;a documented audit programme&lt;/li&gt;
&lt;li&gt;planned intervals based on risk and importance&lt;/li&gt;
&lt;li&gt;coverage of the whole ISMS over time&lt;/li&gt;
&lt;li&gt;completed audits from prior periods&lt;/li&gt;
&lt;li&gt;follow-up of previous nonconformities&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If the programme only exists in someone's head, that is your first weakness.&lt;/p&gt;
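
&lt;p&gt;That "does the programme exist outside someone's head" test can be made concrete in a few lines. This is a hedged sketch: the area names, dates, and the 365-day interval are assumptions, since ISO 27001 does not dictate one cadence:&lt;/p&gt;

```python
from datetime import date

# Areas the programme must cover over time (illustrative, not prescriptive)
ISMS_AREAS = {"risk management", "access control", "incident management",
              "supplier assurance", "backup and recovery", "management review"}

# Last completed audit per area; missing or None means never audited
last_audited = {
    "risk management": date(2026, 1, 20),
    "access control": date(2025, 7, 2),
    "incident management": None,
    "supplier assurance": date(2026, 3, 5),
}

def programme_gaps(today: date, max_interval_days: int = 365):
    """Return areas that were never audited or are overdue against the planned interval."""
    gaps = []
    for area in sorted(ISMS_AREAS):
        last = last_audited.get(area)
        if last is None:
            gaps.append((area, "never audited"))
        elif (today - last).days > max_interval_days:
            overdue = (today - last).days - max_interval_days
            gaps.append((area, f"overdue by {overdue} days"))
    return gaps

for area, reason in programme_gaps(date(2026, 8, 1)):
    print(f"{area}: {reason}")
```

&lt;p&gt;The output is exactly the list a certification auditor would build for you, except you found it first.&lt;/p&gt;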

&lt;h3&gt;
  
  
  4. Test whether policies match reality
&lt;/h3&gt;

&lt;p&gt;Pick the policies and procedures relevant to the area under review, then compare them with live evidence.&lt;/p&gt;

&lt;p&gt;Examples:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Access control policy versus actual access reviews and joiner-mover-leaver records&lt;/li&gt;
&lt;li&gt;Incident process versus the last few incidents handled&lt;/li&gt;
&lt;li&gt;Risk management process versus the current risk register and treatment actions&lt;/li&gt;
&lt;li&gt;Supplier assurance process versus recent supplier onboarding decisions&lt;/li&gt;
&lt;li&gt;Backup policy versus actual backup logs, restore tests, and recovery evidence&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is where most useful findings appear.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Verify that records are current, not historic theatre
&lt;/h3&gt;

&lt;p&gt;A lot of teams can show you evidence. Fewer can show you current evidence.&lt;/p&gt;

&lt;p&gt;Look for dates, cadence, ownership, and signs of live use. Ask simple questions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Was this reviewed when it was supposed to be?&lt;/li&gt;
&lt;li&gt;Is the owner still the right person?&lt;/li&gt;
&lt;li&gt;Are actions open longer than expected?&lt;/li&gt;
&lt;li&gt;Does this record reflect the current estate, supplier set, or risk picture?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If your asset register still lists systems that were retired six months ago, or your risk register reads like an audit artefact rather than a management tool, the ISMS is drifting.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Areas Small Teams Should Prioritise
&lt;/h2&gt;

&lt;p&gt;If resources are tight, I would prioritise audit attention on the areas that tend to break first.&lt;/p&gt;

&lt;h3&gt;
  
  
  Risk management
&lt;/h3&gt;

&lt;p&gt;Your risks should be current, clearly owned, and tied to treatment actions. If you need a better structure for making them readable, my guide to &lt;a href="https://danieljamesglover.com/blog/2026-04-10-it-risk-register-executives-use/" rel="noopener noreferrer"&gt;IT risk registers executives use&lt;/a&gt; is a good companion.&lt;/p&gt;

&lt;h3&gt;
  
  
  Access control
&lt;/h3&gt;

&lt;p&gt;Test whether access is approved, reviewed, and removed properly. This is one of the easiest areas to write well and run badly.&lt;/p&gt;

&lt;h3&gt;
  
  
  Incident management
&lt;/h3&gt;

&lt;p&gt;Check whether incidents are logged consistently, whether lessons learned are captured, and whether corrective actions actually close.&lt;/p&gt;

&lt;h3&gt;
  
  
  Supplier assurance
&lt;/h3&gt;

&lt;p&gt;Third-party controls are often one of the weakest practical areas in smaller ISMS environments because supplier onboarding moves faster than governance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Backup and recovery
&lt;/h3&gt;

&lt;p&gt;A backup status page is not enough. You want evidence of restore confidence. The same principle came up in my &lt;a href="https://danieljamesglover.com/blog/2026-04-02-proxmox-backup-disaster-recovery-guide/" rel="noopener noreferrer"&gt;Proxmox backup and disaster recovery guide&lt;/a&gt;. Recovery evidence matters more than backup optimism.&lt;/p&gt;

&lt;h3&gt;
  
  
  Management review and improvement actions
&lt;/h3&gt;

&lt;p&gt;If leadership never reviews the ISMS properly, the whole system becomes performative. Internal audit should test whether management review is happening with substance, not just as a calendar event.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Good Audit Evidence Looks Like
&lt;/h2&gt;

&lt;p&gt;Useful evidence is specific, recent, and traceable.&lt;/p&gt;

&lt;p&gt;That can include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;approved policies and procedures&lt;/li&gt;
&lt;li&gt;meeting minutes&lt;/li&gt;
&lt;li&gt;risk register updates&lt;/li&gt;
&lt;li&gt;completed training records&lt;/li&gt;
&lt;li&gt;screenshots from operational systems&lt;/li&gt;
&lt;li&gt;supplier review records&lt;/li&gt;
&lt;li&gt;change records&lt;/li&gt;
&lt;li&gt;incident tickets&lt;/li&gt;
&lt;li&gt;access review outputs&lt;/li&gt;
&lt;li&gt;action trackers with owners and dates&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What you want is a chain from requirement to process to proof.&lt;/p&gt;

&lt;p&gt;For example, if the policy says privileged access is reviewed quarterly, you should be able to see the review schedule, the review outputs, the approvals, and any remediation raised from that review.&lt;/p&gt;

&lt;p&gt;If you cannot, the finding is not a "documentation gap". The finding is that the control may not be operating.&lt;/p&gt;
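
&lt;p&gt;That requirement-to-proof chain reduces neatly to a dated-evidence check. A minimal sketch, assuming the review records are just a list of dates pulled from wherever you keep evidence (the record format is invented for illustration):&lt;/p&gt;

```python
from datetime import date

# Dated privileged access review records for the year under audit
review_records = [date(2026, 1, 14), date(2026, 4, 9)]  # Q1 and Q2 only

def missing_quarters(records: list[date], year: int, up_to_quarter: int) -> list[str]:
    """Quarters with no review record. Each missing quarter is a potential finding."""
    covered = {(d.month - 1) // 3 + 1 for d in records if d.year == year}
    return [f"Q{q}" for q in range(1, up_to_quarter + 1) if q not in covered]

print(missing_quarters(review_records, 2026, up_to_quarter=3))  # ['Q3']
```

&lt;p&gt;A gap here is not a paperwork problem. It is a dated statement that the control did not run when the policy said it would.&lt;/p&gt;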

&lt;h2&gt;
  
  
  How to Write Findings That People Will Actually Fix
&lt;/h2&gt;

&lt;p&gt;Weak audit findings are vague and moralising.&lt;/p&gt;

&lt;p&gt;They say things like, "Process should be improved" or "Evidence was incomplete." That helps nobody.&lt;/p&gt;

&lt;p&gt;A useful finding should state:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;what requirement was being tested&lt;/li&gt;
&lt;li&gt;what evidence was reviewed&lt;/li&gt;
&lt;li&gt;what gap was found&lt;/li&gt;
&lt;li&gt;why it matters&lt;/li&gt;
&lt;li&gt;what corrective action is needed&lt;/li&gt;
&lt;li&gt;who should own it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For example:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The access control procedure states quarterly privileged access reviews are required. Evidence was available for Q1 and Q2, but no Q3 review record was produced for the infrastructure admin group. This creates a risk that inappropriate access remains active beyond the organisation's approved review window. The control owner should complete the overdue review and implement a tracked schedule to prevent recurrence.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;That gives management something clear to act on.&lt;/p&gt;
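
&lt;p&gt;Capturing findings with those six fields also makes them easy to track and render consistently. A small sketch; the field names are illustrative, not a prescribed ISO 27001 format:&lt;/p&gt;

```python
# A finding captured as structured fields renders into the narrative shape above.
finding = {
    "requirement": "Access control procedure: quarterly privileged access reviews",
    "evidence_reviewed": "Review records for Q1 and Q2",
    "gap": "No Q3 review record for the infrastructure admin group",
    "why_it_matters": "Inappropriate access may remain active beyond the approved window",
    "corrective_action": "Complete the overdue review and implement a tracked schedule",
    "owner": "Control owner, infrastructure",
}

def render_finding(f: dict) -> str:
    """Render a finding in a fixed field order so every finding reads the same way."""
    labels = ["requirement", "evidence_reviewed", "gap",
              "why_it_matters", "corrective_action", "owner"]
    return "\n".join(f"{k.replace('_', ' ').title()}: {f[k]}" for k in labels)

print(render_finding(finding))
```

&lt;p&gt;Structured findings also feed straight into an action tracker, which is where the follow-up discipline in your audit programme actually lives.&lt;/p&gt;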

&lt;h2&gt;
  
  
  A Sensible Cadence for Small Organisations
&lt;/h2&gt;

&lt;p&gt;I would not recommend one giant annual internal audit and nothing else.&lt;/p&gt;

&lt;p&gt;A better pattern is:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;an annual audit programme covering the full ISMS&lt;/li&gt;
&lt;li&gt;lighter quarterly audits on higher-risk areas&lt;/li&gt;
&lt;li&gt;targeted follow-up where nonconformities or major changes exist&lt;/li&gt;
&lt;li&gt;immediate extra review after material incidents, supplier failures, or major structural change&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This keeps the workload manageable and improves the quality of evidence because you are not trying to reconstruct a year's worth of activity in one go.&lt;/p&gt;

&lt;p&gt;It also helps board and leadership reporting. If the ISMS is being checked steadily, your updates become cleaner and more credible. That matters when compliance work needs budget, priority, or operational changes. I touched on that wider reporting discipline in &lt;a href="https://danieljamesglover.com/blog/2026-03-12-it-metrics-board-reporting/" rel="noopener noreferrer"&gt;IT metrics board reporting&lt;/a&gt;. Good governance gets easier when the evidence is routine rather than last-minute.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Bottom Line
&lt;/h2&gt;

&lt;p&gt;A good ISO 27001 internal audit should leave you with a clearer view of reality.&lt;/p&gt;

&lt;p&gt;Not just whether the right documents exist, but whether the system is being followed, whether controls are operating, and whether leadership can trust the picture they are being shown.&lt;/p&gt;

&lt;p&gt;For small teams, that means keeping the method practical.&lt;/p&gt;

&lt;p&gt;Plan the audit properly. Keep it independent enough to be credible. Test live evidence, not just paperwork. Write findings people can act on. Follow through.&lt;/p&gt;

&lt;p&gt;Do that consistently and internal audit stops being a compliance tax. It becomes one of the fastest ways to find drift before your certification body, your customers, or a real incident finds it for you.&lt;/p&gt;

&lt;p&gt;If you need help turning ISO 27001 from a documentation project into a working management system, my &lt;a href="https://danieljamesglover.com/services/it-compliance" rel="noopener noreferrer"&gt;IT compliance services&lt;/a&gt; are designed for exactly that problem.&lt;/p&gt;

</description>
      <category>iso27001</category>
      <category>cybersecurity</category>
      <category>compliance</category>
    </item>
  </channel>
</rss>
