Bernát Kalló


Creating an AI policy, part II

A lot has changed since I wrote about my thoughts on creating an AI policy. Now I write lots of code with coding agents, and I'm building many more AI integrations into apps. AI coding agents have become quite standardized, AI slop is prevalent, and hobbyists run rogue AI agents with hardly any oversight.

How should we define an AI policy in the early 2026 era?

How about something we could summarize as:
Nothing changes with AI.

What I mean by this is: Software is still software, and prompts + LLMs are also software (albeit nondeterministic).

If we wanted 98% of our users to be happy before, we should want the same now, even if we have AI features.

If we had to ship software that works correctly 99.9% of the time, we should strive for the same now. Even if a quarter of our code is prompts. And even if half of our code is written by a coding agent.

Having a nondeterministic automated system write parts of the code is an additional hazard compared to the old days. So we need to take more care with the steps that ensure the quality and correctness of the code: planning, architectural design, refactoring, testing, code review, static analysis, and so on, to get back to that 99.9% (or whatever figure applies in our field).

If we believe in unit tests, we should have LLM evals or something similar for the stochastic parts of our codebase.
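As a rough illustration (everything here is hypothetical: the model call is a stub, and a real eval would call your actual provider), an LLM eval can look like a unit test that tolerates nondeterminism by measuring a pass rate over repeated trials instead of demanding 100%:

```python
def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; stubbed so this
    sketch runs on its own. Replace with your provider's client."""
    canned = {
        "Classify the sentiment: 'I love this product!'": "positive",
        "Classify the sentiment: 'This is terrible.'": "negative",
    }
    return canned.get(prompt, "neutral")

# Eval cases pair a prompt with the answer we expect, like test fixtures.
EVAL_CASES = [
    ("Classify the sentiment: 'I love this product!'", "positive"),
    ("Classify the sentiment: 'This is terrible.'", "negative"),
]

def run_evals(cases, trials: int = 5, threshold: float = 0.9):
    """Run each case several times (the model is nondeterministic) and
    pass if the overall success rate clears a threshold."""
    passed = total = 0
    for prompt, expected in cases:
        for _ in range(trials):
            total += 1
            if call_model(prompt).strip().lower() == expected:
                passed += 1
    rate = passed / total
    return rate, rate >= threshold

if __name__ == "__main__":
    rate, ok = run_evals(EVAL_CASES)
    print(f"pass rate: {rate:.0%}, ok: {ok}")
```

The threshold is the eval-world analogue of that 99.9% figure: you pick the reliability bar for your field and gate releases on it, just as you would gate on a failing unit test.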

Nothing has changed: software must still be made for humans. And made by humans, even if with AI tools. And humans must still take responsibility for their software, and for its behavior.

This piece of wisdom from an ancient law book seems relevant here:
"When you build a new house, be sure to install a railing around your roof, so that you won't be held guilty if someone dies falling from it." (Deuteronomy, FBV translation)
