DEV Community

Doug Arcuri
Claude's Law

I came across a post on Hacker News titled Laws of Software Engineering. In it, Dr. Milan Milanović outlines 56 laws of software engineering practice, many of which originated with veteran engineers affectionately known as "greybeards."

These laws are familiar to software engineers; they are regularly discussed and battle-tested. But there is now a weird, law-free gap in coding practice: managing agents with context files and skills, accepting edits from the genie, trusting what it spits out, and only sometimes paying attention to its changes.

So, I posted to the HN thread. I had been reflecting on a recent addition to a codebase, written by my executor agent and approved by my evaluator agent, in which two goto statements were committed. It felt wrong: another "greybeard," Edsger Dijkstra, had instilled my allergic reaction with a pseudo-law of his own, Go To Statement Considered Harmful.

I relented, writing in my HN comment:

Today, I was presented with Claude's decision to include numerous goto statements in a new implementation. I thought deeply about their manual removal; years of software laws went against what I saw. But then, I realized it wouldn't matter anymore. I committed the code and let the second AI review it. It too had no problem with the gotos.

A short time later, voiceofunreason posted a terse follow-up:

Render therefore unto Caesar the things which are Caesar's; and unto Claude the things that are Claude's.

This gave me an idea. I updated my comment and invented the 57th law:

Claude's Law: The code that is written by the agent is the most correct way to write it.

Dr. Milan Milanović, if you see this, please consider adding it to your law library. And yes, I updated my style context file so the agents will never do that again.
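For the curious, that style-file fix can be a single rule. A hypothetical snippet, assuming a Claude Code-style CLAUDE.md context file (the wording is mine, not a prescribed format):

```markdown
<!-- CLAUDE.md (hypothetical style context file) -->
## Code style
- Do not use `goto`. Prefer early returns, or extract cleanup into a
  helper function, so control flow stays fully structured.
```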

Top comments (1)

PEACEBINFLOW

There's something almost theological buried in that "render unto Claude" line, and I don't think it's just a joke.

I've noticed a pattern in my own work: the moment I start editing AI-generated code line-by-line, I enter a weird cognitive sinkhole where I'm no longer thinking about the problem—I'm just negotiating with the output. The agent wrote a thing, I disagree with a stylistic choice, I fix it, and then twenty minutes later I've lost the thread of what I was actually trying to accomplish. The stylistic "correction" cost me more attention than the architectural decision that actually mattered.

So there's a pragmatic reading of Claude's Law that isn't about deference to the model at all. It's about attention economics. Every time you override the agent on something that doesn't affect correctness or maintainability in any measurable way, you're spending a finite cognitive resource—your ability to care about details—on the wrong thing. The goto is fine. The weird variable name is fine. The slightly-too-nested closure is fine. Save the scrutiny for the things the agent can't judge: whether this code belongs in the system at all, whether the abstraction boundary makes sense, whether the test actually tests the right behavior.

The twist at the end—"I updated my style context file so the agents wouldn't do that ever again"—suggests you don't fully buy your own law, and that's the interesting part. You rendered unto Claude this time, but you changed the rules so there wouldn't be a next time. So maybe the real law is a corollary: render unto Claude what Claude wrote, but only if you haven't already told Claude what you want.

Where do you draw the line between "this is fine, commit it" and "I need to fix my context files so this stops happening"? Or do you find the line keeps moving?