There has always been something uniquely harsh about smart contracts.
The obvious reason is that mistakes are extremely costly. Once deployed, bugs are not just bugs: they are financial vulnerabilities, and they hurt, badly.
Smart contracts operate in one of the most hostile environments in software engineering. From the moment they go live, they are exposed to a global pool of adversaries with unlimited time and strong incentives to break them. Every line of code is continuously tested by attackers. There is no gradual feedback loop. You do not get gentle warnings. You only find out when it is too late.
This creates a form of evolutionary pressure on software. Contracts that survive in the wild, unexploited over time, gain credibility and a bit of fame (think Uniswap contracts). Those that fail disappear quickly. In that sense, they follow something close to a security Lindy effect: survival becomes a proxy for correctness.
It is a brutal way to build reliable systems, but it works.
What is changing now is that this kind of pressure is no longer limited to smart contracts or a small subset of critical open-source systems.
With the rise of AI, the attack surface of software is being fundamentally transformed. Systems are no longer probed only by human actors. Instead, they are increasingly exposed to automated, persistent, and scalable analysis. Autonomous agents can scan codebases, test APIs, explore edge cases, and search for vulnerabilities continuously. What used to require time, skill, and coordination can now be executed at scale.
This effectively turns most software into something closer to a smart contract environment. Constant exposure. Continuous probing. No safe assumptions.
This shift understandably creates concern. It increases the likelihood that weak systems will be discovered and exploited faster. It raises the cost of building and maintaining secure software. It removes the illusion that systems can remain partially broken and still function indefinitely.
However, there is another side to this transformation.
The same capabilities that enable offensive pressure can also be used defensively. AI does not only scale attacks; it also scales review, analysis, and improvement. It allows teams to continuously inspect their own systems with a level of persistence and breadth that was previously unrealistic.
In practice, this means that hardening can become an ongoing process rather than a one-time effort.
In my own workflow, I already treat AI as a defensive layer. Each night, using the remaining quota of daily tokens, I run a set of autonomous agents over our codebase. These agents scan for bugs, inconsistencies, and potential security vulnerabilities. They also compare the implementation against known best practices and architectural patterns, and generate concrete suggestions for improvement. The output is collected in a dedicated location, effectively creating a continuous stream of recommendations for hardening the system.
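A workflow like this can be sketched in a few lines. The code below is a minimal, hypothetical skeleton, not the actual setup: it walks a repository, hands each source file to a review agent, and collects the findings in a dated reports directory. The `run_review_agent` function is a stand-in for whatever LLM or agent tooling a team actually uses; here it only emits a stub finding so the scaffolding is runnable.

```python
"""Sketch of a nightly hardening pass: scan source files with an
automated review agent and collect findings in one place.
Assumptions: `run_review_agent` is a placeholder for a real agent call."""
import datetime
import pathlib

SOURCE_EXTS = {".py", ".js", ".ts", ".go", ".rs"}  # assumed file types of interest


def run_review_agent(path: pathlib.Path) -> list[str]:
    """Placeholder for the real agent invocation (e.g. an LLM API call
    that reviews the file for bugs and security issues)."""
    return [f"TODO: review {path.name} for bugs and vulnerabilities"]


def nightly_scan(repo: pathlib.Path, reports_root: pathlib.Path) -> pathlib.Path:
    """Run the agent over every source file; write one report per night."""
    report_dir = reports_root / datetime.date.today().isoformat()
    report_dir.mkdir(parents=True, exist_ok=True)

    findings: list[str] = []
    for path in sorted(repo.rglob("*")):
        if path.is_file() and path.suffix in SOURCE_EXTS:
            findings.extend(run_review_agent(path))

    (report_dir / "findings.txt").write_text("\n".join(findings))
    return report_dir
```

In practice the loop would be triggered by a scheduler (cron or similar) outside working hours, and the agent call would be rate-limited against whatever token quota remains for the day.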
This is not perfect, and it does not replace human judgment. But it changes the baseline. Instead of occasional reviews, the system is subjected to constant internal pressure that mirrors, in a controlled way, the external pressure it will face in production.
Seen from this perspective, what we are entering is not only a more dangerous environment, but also a more honest one.
Weak systems will fail faster. Strong systems will emerge more clearly. Over time, this leads to a kind of large-scale hardening of software. Poor assumptions, fragile architectures, and hidden inconsistencies are less likely to survive prolonged exposure.
Smart contracts and open source are the first widely adopted systems forced to operate under continuous adversarial pressure. They showed that this pressure, while costly, leads to a different standard of thinking about correctness and security.
AI is now extending that pressure to the rest of the software world—surfacing vulnerabilities and bugs that have been hidden in plain sight, sometimes for decades.
The result will not be a collapse in security. The more probable scenario is a period of painful hardening. On the other side of it? Possibly better software.