The promise was simple: AI coding tools would make developers faster, more productive, and capable of shipping at unprecedented speed. That part came true. What nobody advertised was the other side of the equation: AI-generated code is now creating security vulnerabilities faster than any team can remediate them.
Veracode's 2026 State of Software Security report, analysing 1.6 million applications, confirms what many of us in IT leadership have suspected for months. Security debt, defined as known vulnerabilities left unresolved for more than a year, now affects 82 percent of companies, up from 74 percent just twelve months ago. High-risk vulnerabilities have jumped from 8.3 percent to 11.3 percent.
The report's conclusion is stark: "The velocity of development in the AI era makes comprehensive security unattainable."
As an IT leader, that sentence should stop you in your tracks.
The Remediation Gap Is Real
The core problem is not that AI writes bad code. Sometimes it does, sometimes it does not. The real issue is throughput asymmetry. AI tools allow developers to generate and ship code at a pace that completely outstrips the capacity of security teams to review, test, and remediate.
Think of it like a factory that tripled its production output without adding any quality inspectors. The defect rate per unit might stay the same or even improve slightly, but the absolute number of defects reaching customers explodes.
Veracode's data shows this pattern clearly. While open source vulnerabilities actually decreased from 70 percent to 62 percent of applications, and overall flaw prevalence dipped marginally from 80 to 78 percent, the sheer volume of new code means more total vulnerabilities are being introduced than resolved. The backlog grows every sprint.
This is not a theoretical risk. It is a compounding debt with interest.
Why Traditional AppSec Cannot Keep Up
Most application security programmes were designed for a world where human developers wrote code at human speed. The typical workflow involves static analysis scans in CI/CD pipelines, periodic penetration testing, and security reviews for major releases.
That model assumed certain constraints:
- Developers produced a finite, reviewable volume of code per sprint
- Security teams could triage findings within a reasonable timeframe
- Major architectural changes happened infrequently enough to review thoroughly
AI-assisted development breaks every one of these assumptions. When a single developer can generate thousands of lines of code in an afternoon, the review bottleneck shifts entirely to the security function, which has not scaled proportionally.
I have seen this firsthand in teams adopting AI coding assistants. Pull request volume doubles or triples. Each PR might be individually reasonable, but the cumulative review burden overwhelms security reviewers. Findings pile up in the backlog. Critical vulnerabilities get lost in the noise.
The False Promise of AI Fixing AI
The technology industry's reflexive answer to this problem is predictably circular: use more AI to find and fix the vulnerabilities that AI created. Automated remediation tools, AI-powered code review, intelligent triage systems. There is a certain logic to fighting fire with fire.
But the evidence does not support the optimism. Despite widespread adoption of AI-assisted security tools, the remediation gap has widened, not narrowed, over the past year. The same Veracode report notes that AI tools generate false positives at rates that create additional burden for human reviewers, sometimes making the problem worse.
This does not mean AI security tools are useless. They have genuine value for catching common vulnerability patterns and reducing mean time to detection. But positioning them as a complete solution to a problem partly caused by AI is wishful thinking.
What IT Leaders Should Actually Do
If you are responsible for technology strategy, the answer is not to ban AI coding tools. That ship has sailed, and the productivity benefits are real. Instead, the challenge is building security frameworks that match the new development velocity.
Redefine Security Metrics for the AI Era
Stop measuring security posture solely by vulnerability count. In an era of exponentially increasing code output, absolute numbers will always look worse. Instead, track:
- Remediation velocity - how quickly are critical vulnerabilities being resolved?
- Security debt ratio - what percentage of known vulnerabilities are older than 30, 60, 90 days?
- Risk-adjusted backlog - are high-severity findings being prioritised over noise?
These metrics tell you whether your security programme is keeping pace with development, which is what actually matters.
Shift Security Left, Then Shift It Earlier
The traditional "shift left" advice of integrating security into the development pipeline is necessary but insufficient. When AI generates code, security controls need to operate at the point of generation, not just at commit time.
This means:
- Configuring AI coding assistants with security-aware prompts and guardrails
- Implementing pre-commit hooks that catch high-severity patterns before code enters the repository
- Using AI-specific static analysis rules that understand common patterns in generated code
The goal is to prevent security debt from accumulating rather than trying to pay it down after the fact.
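As a sketch of the pre-commit idea, a hook can scan the lines being staged for a small set of high-severity patterns and block the commit on a match. The pattern set below is a toy example invented for illustration, not a vetted ruleset; in practice you would wire a maintained scanner into the hook.

```python
"""Illustrative pre-commit check: flag newly staged lines that match
high-severity patterns. To use, call sys.exit(main()) from a
.git/hooks/pre-commit script; a nonzero exit blocks the commit."""
import re
import subprocess
import sys

# Toy examples of high-severity patterns; a real hook would use a
# maintained ruleset rather than a hand-rolled regex list.
PATTERNS = {
    "hardcoded secret": re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]"),
    "shell injection risk": re.compile(r"shell\s*=\s*True"),
    "weak hash": re.compile(r"hashlib\.(md5|sha1)\("),
}


def flag_lines(lines):
    """Return (pattern_name, line) pairs for every matching line."""
    return [(name, line)
            for line in lines
            for name, pat in PATTERNS.items() if pat.search(line)]


def staged_added_lines():
    """Lines added in the staged diff, with the leading '+' stripped."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [l[1:] for l in diff.splitlines()
            if l.startswith("+") and not l.startswith("+++")]


def main() -> int:
    hits = flag_lines(staged_added_lines())
    for name, line in hits:
        print(f"blocked ({name}): {line.strip()}", file=sys.stderr)
    return 1 if hits else 0
```

Because the check runs before code enters the repository, findings never reach the backlog at all, which is exactly the debt-prevention posture described above.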
Invest in Developer Security Literacy
A recent Economist Impact study found that only 16 percent of organisations provide structured internal security training, despite nearly every executive claiming they are developing AI skills. This gap between rhetoric and investment is dangerous.
Developers using AI coding tools need specific training on:
- Recognising insecure patterns in AI-generated output
- Understanding when AI suggestions introduce dependency risks
- Knowing which types of code should never be generated without manual review (authentication, cryptography, access control)
This is not about making developers into security experts. It is about giving them enough context to be effective first-line defenders when AI accelerates their output.
Establish AI Code Governance
If your organisation does not have a governance framework for AI-generated code, you are flying blind. At minimum, this should include:
- Provenance tracking - knowing which code was AI-generated versus human-written
- Review requirements - mandatory security review for AI-generated code in sensitive areas
- Approved tool policies - which AI coding assistants are sanctioned and how they are configured
- Audit trails - the ability to trace a vulnerability back to its generation method
Without governance, you cannot measure the problem, and you certainly cannot manage it.
Right-Size Your Security Investment
Here is the uncomfortable truth: if your development velocity has doubled or tripled thanks to AI, but your security budget and headcount have stayed flat, you have implicitly decided to accept more risk. That might be a reasonable business decision, but it should be an explicit one.
The ratio of security investment to development velocity needs recalibrating. This does not necessarily mean doubling your AppSec team. It might mean investing in better tooling, automating triage of low-risk findings, or embedding security champions within development squads. But the investment needs to scale in proportion to code output.
The Board Conversation
This is ultimately a risk management discussion, which means it belongs in the boardroom. Shadow AI usage makes the challenge even more complex, as developers adopting unapproved AI tools create security exposure that leadership cannot even see.
The framing should be straightforward: AI coding tools deliver genuine productivity gains. Those gains come with a measurable increase in security risk. Managing that risk requires proportional investment. The alternative is accumulating security debt that compounds until it triggers a breach, a compliance failure, or both.
IT leaders who can articulate this trade-off clearly, with data, will earn the trust and budget needed to address it. Those who cannot will find themselves explaining after the fact why the remediation gap became a security incident.
Looking Forward
The Veracode report describes the current situation as requiring "transformational change" rather than incremental improvement. I agree with the diagnosis, even if the prescription remains unclear.
What is clear is that the organisations navigating this successfully will be those that treat AI-generated code security as a strategic priority rather than a tactical afterthought. They will invest in governance, training, and tooling in proportion to their adoption of AI development tools. They will measure security outcomes, not just security activities.
The velocity genie is not going back in the bottle. The question for every IT leader is whether your security programme can evolve as fast as your development pipeline already has.
The data suggests most organisations have not made that leap yet. The 82 percent security debt figure is evidence enough. But recognising the problem is the first step toward solving it, and the window for proactive action is narrowing with every AI-generated commit.