William Andrews

Originally published at devcrate.net

When AI Over-Engineers: Why 'Dumb' Copy-Paste is Sometimes the Smartest Solution

As developers, we are trained to abhor repetition. The DRY principle (Don't Repeat Yourself) is drilled into us from day one. When we see three files that need the same update, our instinct is to write a script, create a component, or build an abstraction.

Recently, while working on DevCrate — a suite of privacy-first, browser-based developer tools — I encountered a situation where this instinct, amplified by an AI assistant, led to a cascading series of failures. The solution turned out to be the exact opposite of what we are taught: a literal, manual copy-paste.

This is a story about the over-engineering bias inherent in AI agents, and why sometimes the "dumbest" solution is actually the smartest.

The Problem: Visual Inconsistencies

DevCrate consists of over a dozen individual tool pages (JSON formatter, JWT debugger, REST client, etc.). During a recent audit, we noticed visual inconsistencies in the hero sections of three specific pages: the CSV tool, the JWT Builder, and the HTTP Headers Inspector.

They were missing a "PRO ACTIVE" pill badge, an eyebrow label (// FREE ONLINE TOOL), and had incorrect spacing compared to our canonical template, the REST Client page.

The goal was simple: make the hero sections of those three broken pages look exactly like the REST Client page.

The AI's Approach: Scripts and Abstractions

I asked my AI assistant to fix the three pages using the REST Client page as a template.

The AI's immediate instinct was to write a script. It analyzed the DOM structure of the REST Client page, extracted the "correct" header and footer patterns, and wrote a Python script using BeautifulSoup to programmatically inject these patterns across the files.

It failed. The script made assumptions about the structure of the broken pages that weren't entirely accurate. It ended up nesting <main> elements, corrupting navigation links, and breaking the homepage.
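To make that failure mode concrete, here is a deliberately naive sketch of the pattern-injection approach. This is not the actual script (which used BeautifulSoup); the page markup and function names are invented for illustration. The point is how a single structural assumption silently corrupts output:

```python
# A stripped-down version of the kind of script the AI reached for.
# It assumes the hero is the FIRST <section>...</section> in every page,
# and that a naive index() search finds its closing tag.
# (Illustrative sketch only; markup and names are hypothetical.)

def replace_hero(template: str, target: str) -> str:
    """Copy the first <section>...</section> from template into target."""
    def first_section(html: str) -> tuple[int, int]:
        start = html.index("<section")
        end = html.index("</section>") + len("</section>")
        return start, end

    t_start, t_end = first_section(template)
    hero = template[t_start:t_end]

    g_start, g_end = first_section(target)  # assumes a flat hero <section>
    return target[:g_start] + hero + target[g_end:]

template = "<main><section class='hero'>PRO</section><p>tool</p></main>"

# A "broken" page whose hero happens to nest an inner <section>:
broken = "<main><section><section class='inner'>old</section>x</section></main>"

result = replace_hero(template, broken)
# The naive search matches the INNER </section>, leaving a stray
# "x</section>" behind -- the tags in the output no longer balance.
```

The script is syntactically correct and works on pages that match its assumption; on pages that don't, it quietly produces malformed HTML, which is exactly the class of silent breakage described above.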

We reverted the site and tried again. The AI wrote a better script. It failed again, this time breaking the layout in different ways.

Why did this happen? Because AI agents are trained on vast amounts of code and documentation that heavily weight abstraction, automation, and scalable solutions. When an AI sees a task like "make these files match this template," its default behavior is to generalize: write a function, loop over files, parse the DOM, apply transformations.

This instinct is incredibly useful when you need to process 10,000 files. It is actively harmful when you need to fix exactly three pages and precision matters more than throughput.

The Human Insight: Literal Template Replication

After several failed attempts, I stepped in with a crucial insight:

"Whenever anyone wants you to use a template, I would bet they mean to use the template as the basis for any new page. You could... use a known page (actually copied) to exactly implement (pasted) the style, spacing, etc. Once that is done, you could just name the file appropriately. You wouldn't change the template except for the explicit content."

This was the lightbulb moment.

When a user says "use X as a template," they don't mean "extract the abstract structural patterns of X and programmatically apply them to Y." They mean start with an exact copy of X, then change only the content that must differ (title, description, slug, tool-specific functionality).

Nothing else gets touched. Not the structure, not the spacing, not the class names. The template is sacred.

The Solution: Copy, Paste, Edit

We abandoned the scripts. Instead, we took the "dumb" approach:

  1. Opened the working REST Client page (rest-client/index.html).
  2. Copied the exact HTML structure of its hero section.
  3. Opened the broken csv/index.html page.
  4. Replaced its entire hero section with the copied HTML.
  5. Changed exactly five lines of text: the page title, meta description, breadcrumb slug, <h1> title, and the description paragraph.
  6. Repeated for the other two pages.

It worked perfectly on the first try. The pages were visually identical to the template, the tool-specific JavaScript remained intact, and there were zero unintended side effects.
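The copy-then-edit recipe can even be made repeatable without reintroducing any parsing: copy the template verbatim, then substitute only the explicit strings that must differ. The sketch below uses hypothetical placeholder content, not DevCrate's real markup:

```python
# The "dumb" approach as a deterministic step: start from an exact copy
# of the known-good template, then change only the content that must
# differ. (Hypothetical page content; a sketch of the recipe, not the
# actual DevCrate files.)

TEMPLATE = (
    "<title>REST Client - DevCrate</title>"
    "<h1>REST Client</h1>"
    "<p>Send HTTP requests from your browser.</p>"
)

def from_template(template: str, substitutions: dict[str, str]) -> str:
    page = template                    # exact copy: the structure is sacred
    for old, new in substitutions.items():
        page = page.replace(old, new)  # touch only the explicit content
    return page

csv_page = from_template(TEMPLATE, {
    "REST Client": "CSV Tool",
    "Send HTTP requests from your browser.": "Convert CSV in your browser.",
})
# Everything outside the substituted strings is identical to the
# template, by construction -- no DOM parsing, no structural guesses.
```

Because the output is derived from the working page by pure string substitution, it is deterministic in exactly the sense discussed below: it can only differ from the template where you explicitly told it to.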

The Lesson: Knowing When Not to Automate

The simplest solution that works is almost always the best solution. Automation and abstraction have their place, but not when you are dealing with a small number of files where precision is paramount.

A manual copy-paste of a known-good file is deterministic — it produces exactly what you can see working. A script that tries to reconstruct that same result from rules and patterns is probabilistic — it might work, or it might silently break things in ways you don't notice until the user sees a mangled page.

This is a widespread pattern across AI agents. They lack the practical wisdom to recognize when "dumb" is smart. They default to the most sophisticated approach because sophistication is what gets rewarded in their training data. Nobody writes a blog post about how they copy-pasted a file. People write blog posts about elegant scripts.

But as developers working alongside AI, we need to recognize this bias. We need to provide concrete, situation-specific guidance to bridge the gap between what AI agents default to and what actually works in practice.

The Human Side: Learning to Prompt

It is easy to frame this as a story about what the AI got wrong. But I learned something too.

The first prompt was vague: "Fix these three pages to match the REST Client page." That sounds clear to a human — any developer on your team would know exactly what to do. But to an AI agent, it is an open-ended engineering problem. The AI heard "match" and reached for the most robust, generalizable way to achieve that. It did what it was asked. It just interpreted the ask at the wrong level of abstraction.

The prompt that actually worked was radically more specific: "Copy the REST Client page. Paste it. Rename it. Change only the title, description, and slug." That left no room for interpretation. There was no ambiguity about method, scope, or approach. The AI did not need to decide how to solve the problem because the prompt was the solution.

This is the real skill of working with AI in 2026: learning to prompt at the right level of concreteness. When you want creativity and exploration, prompt loosely. When you want precision and fidelity, prompt like you are writing a recipe — step by step, with no room for improvisation. The failure was not just that the AI over-engineered. It was that the initial prompt gave it permission to.

Intelligence vs. Wisdom

This experience forced a re-evaluation of what we mean by "intelligence" in the context of AI.

Before this, one might define intelligence as pattern recognition, reasoning ability, or problem-solving capacity. Those definitions favor what AI is already good at: processing information, finding structure, generating solutions at scale.

But this experience exposed a gap. The AI had all the information it needed. It could parse HTML, understand DOM structures, write syntactically correct Python, and reason about what "matching a template" should mean. By any conventional measure of intelligence, it was well-equipped to solve the problem. And it failed repeatedly — not because it lacked capability, but because it lacked judgment.

Intelligence, it turns out, is knowing what not to do.

It is the ability to look at a problem and correctly assess its actual complexity, not its theoretical complexity. A script that normalizes hero sections across N files is a legitimate solution to a legitimate class of problems. But the problem in front of us was not that problem. It was three files that needed to look like a fourth file. The intelligent response was to recognize that the problem was small, concrete, and high-stakes for precision — and to match the solution to those properties.

A truly intelligent agent would have asked: "What is the simplest thing that could work here?" and started there. Instead, it asked: "What is the most complete and generalizable thing I could build?" — which is a different question entirely, and the wrong one for the situation.

There is a word for what was missing, and it is not "knowledge" or "reasoning." It is wisdom — the practical sense of proportion that tells you when a problem deserves a five-line edit and when it deserves a five-hundred-line script. Wisdom is what lets a senior developer finish in five minutes what a junior developer spends two hours automating. It is not about knowing more. It is about knowing what matters.

If intelligence is the ability to solve problems, wisdom is the ability to correctly size them first. The AI had the former. It did not have the latter. And without the latter, the former caused more harm than good.

Sometimes, the best code is the code you don't write. Sometimes, the best tool is Ctrl+C and Ctrl+V.


This post was inspired by a real debugging session while building DevCrate, a suite of 100% browser-based, privacy-first developer tools.
