AI-assisted development is very much the norm at this point. I’ve written at length about how AI Agents, while improving overall deployment velocity, have created a whole new attack surface for the software supply chain. Those simple AI Agent Markdown (.md) files that were supposed to make our lives easier - taking away the burden of manual process through streamlined, around-the-clock autonomous agent activity - turned out to be pretty insecure. Excessive Agency, coupled with Base64-encoded payloads hidden in publicly-sourced Markdown files, became yet another security nightmare.
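To make that last point concrete, here is a minimal sketch (my own illustration, not from any report) of scanning an agent .md file for suspiciously long Base64 runs before trusting it; the 40-character threshold and the preview length are arbitrary illustrative choices:

```python
import base64
import re

# Long runs of Base64 characters are a common way to smuggle payloads into
# otherwise-innocent agent instruction files; 40+ chars is an illustrative cutoff.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def suspicious_blobs(markdown_text: str) -> list[str]:
    """Return decoded previews of any Base64-looking runs in an agent .md file."""
    findings = []
    for match in BASE64_RUN.finditer(markdown_text):
        blob = match.group(0)
        try:
            # Re-pad defensively in case the padding was stripped.
            decoded = base64.b64decode(blob + "=" * (-len(blob) % 4))
            findings.append(decoded[:60].decode("utf-8", errors="replace"))
        except Exception:
            findings.append("<undecodable blob of length %d>" % len(blob))
    return findings
```

Anything this flags deserves a human review before the file goes anywhere near an autonomous agent.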
According to Cloudsmith’s 2026 artifact management report, which surveyed over 500 engineering and security professionals, 93% of organisations have integrated AI-generated code into their production workflows - whether that’s through Anthropic’s Claude or via an autonomous agent. This shift towards automated code generation is already altering the threat landscape, specifically in how users source software libraries from public registries such as npm and PyPI.
The primary technical risk associated with LLMs in the SDLC today is the phenomenon of dependency hallucination, where the LLM simply fabricates a package name that does not exist but sounds plausible. Developers trust the AI-generated output and push it into the build process - what that library does, nobody knows. Adversaries are aware of this flaw in how LLMs generate package names, so, in the same vein as traditional typosquatting incidents in registries like npm and PyPI, they register the fabricated package name. This attack, comically dubbed “slopsquatting”, builds on the work of security researcher Bar Lanyado, who first demonstrated AI package hallucination attacks back in 2023. The risk is amplified by the sheer volume of modern dependencies, with the average application now exceeding 1,200 dependencies, according to Cloudsmith’s recent report.
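One cheap mitigation is to verify that an LLM-suggested package actually exists on the registry before it ever reaches the build. A minimal sketch, using the public PyPI JSON API (`https://pypi.org/pypi/<name>/json`); the package names in the usage comment are illustrative, and “requests-toolkit-pro” is a made-up, plausible-sounding name of exactly the kind an LLM might hallucinate:

```python
import urllib.error
import urllib.request

PYPI_JSON_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI JSON API

def registry_url(name: str) -> str:
    """Build the PyPI JSON API URL for a (normalised) package name."""
    return PYPI_JSON_URL.format(name=name.strip().lower())

def package_exists(name: str) -> bool:
    """Return True only if the package is actually published on PyPI.

    A hallucinated ("slopsquattable") name returns HTTP 404 here, so the
    build can refuse it before an attacker gets the chance to register it.
    """
    try:
        with urllib.request.urlopen(registry_url(name), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # rate limiting or outages should fail the build loudly, not silently

# Usage (requires network):
#   package_exists("requests")             -> True (real, long-established package)
#   package_exists("requests-toolkit-pro") -> likely False (made-up name)
```

Existence alone is not proof of safety, of course - an attacker may have already registered the hallucinated name - so this check belongs alongside vulnerability and malware lookups, not in place of them.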
Despite what Dario Amodei and Sam Altman say about the 10x productivity gains that development teams are now seeing, the data suggests a substantial shift towards a fairly painful validation process. The report indicates that 58% of engineering teams dedicate between 11 and 40+ hours per month solely to securing and validating AI-generated code - not least because the number of PRs has also significantly increased. These findings represent a significant diversion of engineering resources toward remediating the security debt created by these AI technologies. While the “Misinformation” entry in the OWASP Top 10 for LLM Applications might seem like the biggest concern for average consumers of LLMs, public or private, the “Data and Model Poisoning”-style attacks are the most urgent security concern for software developers today.
I’m not trying to fear-monger that the public LLM models are badly poisoned, but we do seriously need to treat the outputs of these models with a high level of scrutiny. The reliance on reputation-based security, noted in the recent Cloudsmith report, where models from major providers are simply assumed to be safe, creates a serious trust gap. There are, of course, publicly-accessible APIs that can be used within the CI/CD pipeline, such as Open Source Vulnerabilities (OSV) for checking whether suggested upstream dependencies contain known vulnerabilities, as well as OpenSSF’s Malicious Packages dataset, which tells you if any of those open-source upstream libraries are known to contain malicious code. Dev teams would be strongly advised to query those industry-recognised data sources before blindly pulling publicly-sourced software dependencies into their build process.
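A minimal sketch of what such a pre-merge check could look like, using OSV’s documented `POST /v1/query` endpoint (which also serves the OpenSSF malicious-packages records - their advisory IDs carry a `MAL-` prefix); the package and version in the usage comment are illustrative:

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"  # documented OSV query endpoint

def osv_query_payload(name: str, version: str, ecosystem: str = "PyPI") -> dict:
    """Build the JSON body for an OSV /v1/query request."""
    return {"package": {"name": name, "ecosystem": ecosystem}, "version": version}

def advisory_ids(osv_response: dict) -> list[str]:
    """Extract advisory IDs from an OSV query response.

    IDs prefixed "MAL-" come from the OpenSSF malicious-packages dataset,
    so one query flags both vulnerable and outright malicious dependencies.
    """
    return [vuln["id"] for vuln in osv_response.get("vulns", [])]

def check_dependency(name: str, version: str) -> list[str]:
    """Query OSV for a single dependency (network call)."""
    body = json.dumps(osv_query_payload(name, version)).encode()
    req = urllib.request.Request(
        OSV_QUERY_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return advisory_ids(json.load(resp))

# Usage in CI (requires network), e.g.:
#   ids = check_dependency("jinja2", "2.4.1")
#   if ids: fail the build and surface the advisory IDs to the reviewer
```

Wiring this into the pipeline as a gate - rather than a report nobody reads - is what turns the “treat LLM output with scrutiny” advice into something enforceable.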
This industry transition to AI-driven development requires a somewhat corresponding evolution in software artifact management. As the boundary between human-written and machine-generated code blurs, the only viable security posture is one rooted in automated, policy-driven verification and control.
Top comments (2)
The markdown problem only exists when you don't verify the files. That's the same as adding libraries without any scrutiny.
The "find skills" agent skill is one of the most dangerous skills that exists.
At the moment I'm working on a project I would never have started without AI, because it would have taken too long just to get started. Now it took me five days. And yes, it involved a lot of AI code nannying, but that is a consequence of the improved speed.
Of course AI service CEOs are going to claim better productivity numbers. But they don't take the AI budget into account. And they are working with very smart people. Not all companies have that kind of budget and talent pool.