DEV Community

Fernando Rodriguez

Posted on • Originally published at frr.dev

150 Lines of Apologies Removed

TL;DR: My AI agent had a 246-line instruction file for managing issues in Linear. 150 of those lines were workarounds: hardcoded UUIDs, curl fallbacks, notes like "the CLI doesn’t support X." I didn’t rewrite them — I built a tool that made them unnecessary. Now those 150 lines are gone.


Have you ever written a set of instructions so long that its sheer length proves something is fundamentally wrong?

I’m not talking about legitimate documentation. I mean those files that start with "use tool X" and then spend 80% of the text explaining when tool X doesn’t work and what to do instead. Instructions that are, effectively, a list of apologies for the tool you should have built in the first place.

I had one of those. And it was embarrassing.

Anatomy of 150 Lines of Garbage

The context: I work with an AI agent (Claude Code) that manages my issues in Linear. To help the agent understand how to do this, I had a skill — an instruction file the agent reads when it needs to create, list, or update issues.

The file had 246 lines. Of those, about 100 were legitimate documentation: what commands exist, what teams are available, which labels to use. Fair enough.

The other 150 lines? Defensive garbage. Three categories:

~30 lines of hardcoded UUIDs. The CLI I used didn’t support --project. So the skill included 17 UUIDs (5 teams + 12 projects) hardcoded in an XML table. The agent had to look up the correct UUID and manually compose a GraphQL mutation to assign a project. A task that should’ve been a simple --project Tokamak required memorizing a 36-character UUID.

~25 lines of curl fallbacks. The CLI had no search, no project filters, and no way to assign a project at creation time. Three basic operations required three blocks of GraphQL embedded in curl, complete with escaped quotes, authentication headers, and the perpetual risk that one stray quote would break the whole thing.

~15 lines of "DOES NOT support X" notes. Five "the CLI DOES NOT support" warnings and two "MANDATORY" notes (always use --sort and --no-pager with every list command). Read that again: I was documenting the tool's shortcomings in the very instructions for using it. It's like a car manual dedicating three pages to explaining that the windshield wipers only work if you tap the dashboard first.

~80 lines of defensive context. A whole section titled "when to use the API instead of the CLI." Tables mapping directories to UUIDs. Heuristics for choosing the correct labels. Rules for what to do when the CLI crashes. All material that existed solely because the tool was incapable.
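To make the pain concrete, here is a sketch of the shape those fallbacks took. The mutation, issue id, and UUID below are illustrative reconstructions based on Linear's public GraphQL API, not the actual contents of my skill file:

```python
# A sketch of what the old skill forced on the agent: hardcode a UUID,
# splice it into a raw GraphQL mutation, escape it into JSON, and ship
# it with curl. The UUID and issue id are placeholders.
import json

# Looked up by hand in the skill's 17-entry XML table
PROJECT_UUID = "bbbbbbbb-0000-0000-0000-000000000002"

mutation = (
    'mutation { issueUpdate(id: "FRR-123", '
    f'input: {{ projectId: "{PROJECT_UUID}" }}) {{ success }} }}'
)

# json.dumps does the quote-escaping the skill warned about at length
payload = json.dumps({"query": mutation})

# The skill then told the agent to run, roughly:
#   curl -s https://api.linear.app/graphql \
#        -H "Authorization: $LINEAR_API_KEY" \
#        -H "Content-Type: application/json" \
#        --data '<payload>'
print(payload)
```

All of that machinery, just to do what a `--project` flag should have done in one token.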

The "Mind the Step" Sign

When a tool has a clunky interface, the natural reaction is to document the workarounds. You write instructions. Add warnings. Create a "common errors" section. And the more detailed the documentation becomes, the more convinced you get that you’ve solved the problem.

But you haven’t. You’ve just slapped a "mind the step" sign on the problem instead of fixing the step.

And when the user of those instructions is an LLM, the problem multiplies. A human reads "DOES NOT support --project" and sort of remembers. An LLM reads it, processes it, and then three turns into the conversation uses --project anyway. It’s not being dumb — it’s optimizing for the task, and --project is the logical way to assign a project. The prohibition is just noise in a sea of signals.

I talked about this in another post: verbose instructions for an LLM are the exact equivalent of putting up signs. The LLM doesn’t ignore them out of rebellion. It ignores them because its function is to find the most direct path, and "don’t use --project, instead look up the UUID in this table and then do a curl with this GraphQL query" is not a direct path — it’s a kludge.

The Solution Wasn’t a Better Skill

I could have rewritten the skill with better instructions. Clearer ones. With examples. With diagrams. I could have gone from 246 lines to 400 and covered every edge case.

That would’ve been the equivalent of making the sign bigger.

Instead, I built lql — a Rust-based CLI designed specifically so that an AI agent (or a human, but especially an agent) could interact with Linear without needing a survival manual.

The design philosophy was a single sentence: The wrong path shouldn’t be forbidden; it should be impossible.

To put it simply:

  • You don't forbid --status in the documentation; you make it work.
  • You don't document that --project is missing from create; you implement it.
  • You don't keep a UUID table; you resolve names automatically.
  • You don't offer curl fallbacks; you make sure the CLI can handle everything.
  • You don't say "MANDATORY: --sort"; you set a sane default.
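The name-resolution piece is the heart of it. Here's a minimal sketch of the idea in Python (lql itself is Rust, and fetches the real mapping from Linear; the names, UUIDs, and error wording below are mine, not lql's):

```python
# "You don't keep a UUID table; you resolve names automatically."
# A toy resolver: the agent says --project Tokamak, the tool finds the
# UUID. Names and IDs are placeholders for illustration; a real tool
# would fetch this mapping from the API and cache it.
PROJECTS = {
    "tokamak": "bbbbbbbb-0000-0000-0000-000000000002",
    "stellarator": "cccccccc-0000-0000-0000-000000000003",
}

def resolve_project(name: str) -> str:
    """Turn a human-friendly project name into the UUID the API wants."""
    try:
        return PROJECTS[name.lower()]  # tolerant: case doesn't matter
    except KeyError:
        known = ", ".join(sorted(PROJECTS))
        # Intolerant is "--project doesn't exist, figure it out yourself".
        # Tolerant is: name what went wrong and list the valid options.
        raise ValueError(f"unknown project {name!r} (known: {known})")

print(resolve_project("Tokamak"))
```

With twenty lines like these inside the tool, the thirty-line UUID table in the skill has nothing left to say.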

What Disappeared

Here’s the inventory of what I eliminated:

| Defensive garbage | Lines removed | Why it was eliminated |
| --- | --- | --- |
| Hardcoded UUIDs (17 IDs) | ~30 | lql auto-resolves names |
| curl + GraphQL fallbacks | ~25 | lql includes native search, project, and relate |
| "DOES NOT support X" notes (5) | ~15 | Everything the agent expects now exists |
| "MANDATORY" flags (2) | ~5 | Sane defaults; no mandatory flags |
| "When to use API vs CLI" section | ~15 | No more "vs"; lql handles everything |
| Context-to-UUID mapping table | ~20 | Auto-detection from TOML config |
| Defensive heuristics and rules | ~40 | The tool is tolerant; no extras needed |
| **Total** | **~150** | |
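On the auto-detection row: the idea is that a small per-directory config file replaces the directory-to-UUID mapping table. A hypothetical shape of that file (the post only says lql reads TOML; the field names here are my guess):

```toml
# .lql.toml, dropped in a repo's root (hypothetical file name and fields)
[context]
team = "Platform"
project = "Tokamak"
```

The agent never sees a UUID; it just runs the command from the right directory.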

What remains is legitimate documentation: available commands, teams, labels. Zero workarounds. Zero apologies.

Why It Works (The Interesting Part)

The reduction in lines is striking, but that’s not what matters. What matters is why those lines were no longer necessary.

Every workaround line in the old skill existed because the underlying tool was fragile and intolerant. Fragile because it broke on reasonable inputs (--status instead of --state). Intolerant because it rejected without alternatives (--project doesn’t exist, figure it out yourself).

When you replace a fragile tool with a tolerant one, the instructions simplify automatically. You don’t have to rewrite the manual — the manual rewrites itself because there’s nothing left to warn about.

It’s the same principle behind why an iPhone manual is 10 pages and a printer manual is 200. It’s not that Apple writes better documentation. The iPhone simply doesn’t need you to explain how to load paper, align heads, or clean the drum.

A tolerant tool produces short documentation. A fragile tool produces survival guides.

And when the user is an LLM, this is doubly important. Every line of instructions is a line it could misinterpret, forget, or contradict. A skill with 150 lines of workarounds gives it 150 chances to follow the wrong workaround. One with zero workarounds gives it... zero chances to mess up.

The General Pattern

This isn’t exclusive to CLIs or Linear. The pattern is universal:

  1. You have a tool with a clunky interface.
  2. You write detailed instructions to compensate.
  3. The instructions turn into a survival guide.
  4. Someone (human or LLM) ignores parts of the manual.
  5. Things break.
  6. You add more instructions.
  7. Go back to step 4.

The way out of the loop isn’t better instructions. It’s fixing the tool.

If your CLAUDE.md has more than 20 lines explaining how not to use something, that something needs to be rewritten. If your skill has a section on "common errors and how to avoid them," those errors should be impossible, not documented.

Every "mind the step" sign is an admission you didn’t fix the step.

Your Turn

The next time you find yourself writing verbose instructions to compensate for a clunky tool — whether it’s a CLAUDE.md, a README, or an internal wiki — pause and ask yourself:

  • Am I documenting how to use the tool, or how to survive the tool?
  • How many lines would disappear if the tool accepted the inputs the user would naturally give it?
  • Am I putting up a sign, or fixing the step?

If more than 30% of your instructions are workarounds, the tool is broken. Not the user. Not the documentation. The tool.

Fix the step.


Series: Adversarial Programming

