<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: KKK Dev</title>
    <description>The latest articles on DEV Community by KKK Dev (@kkk_dev_1b0a00f5047cb4de6).</description>
    <link>https://dev.to/kkk_dev_1b0a00f5047cb4de6</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3934194%2F4b110b1d-9c8a-4623-8618-ebafff1e17f7.png</url>
      <title>DEV Community: KKK Dev</title>
      <link>https://dev.to/kkk_dev_1b0a00f5047cb4de6</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kkk_dev_1b0a00f5047cb4de6"/>
    <language>en</language>
    <item>
      <title>Why AI Coding Tools Over-engineer Your MVP — And the One Fix</title>
      <dc:creator>KKK Dev</dc:creator>
      <pubDate>Sat, 16 May 2026 07:01:53 +0000</pubDate>
      <link>https://dev.to/kkk_dev_1b0a00f5047cb4de6/why-ai-coding-tools-over-engineer-your-mvp-and-the-one-fix-p11</link>
      <guid>https://dev.to/kkk_dev_1b0a00f5047cb4de6/why-ai-coding-tools-over-engineer-your-mvp-and-the-one-fix-p11</guid>
      <description>&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt; — For &lt;strong&gt;reversible, stage-sensitive&lt;/strong&gt; engineering decisions, AI assistants default to production-grade advice unless you specify business context. This isn't a model intelligence problem you can wait out. It's an objective-function problem you can fix in the next prompt. Below: the mechanism (with appropriate hedging), a before/after example, and a concrete taxonomy of what "context" actually means.&lt;/p&gt;




&lt;h2&gt;
  
  
  1. A Scene You've Probably Seen
&lt;/h2&gt;

&lt;p&gt;Fifty users. MVP stage. Hypothesis validation is the only thing that matters.&lt;/p&gt;

&lt;p&gt;You ask Claude Code to do a security review. You get back:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;"Move the database into a separate VPC and use VPC Peering or PrivateLink."&lt;/li&gt;
&lt;li&gt;"Wrap every external call in mTLS with automated cert rotation."&lt;/li&gt;
&lt;li&gt;"Stream audit logs to a separate AWS account for SOC2 readiness."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;None of it is &lt;em&gt;wrong&lt;/em&gt;. But if you do all of it at this stage, you'll burn your runway on infra migration before validating whether anyone wants the product.&lt;/p&gt;

&lt;p&gt;The same pattern shows up outside security:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Redis cluster with read replicas recommended for a 30 RPS service&lt;/li&gt;
&lt;li&gt;Hexagonal architecture proposed for a 100-line script&lt;/li&gt;
&lt;li&gt;GitOps + ArgoCD + Terraform module separation recommended for a 3-person team&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The usual conclusion follows:&lt;br&gt;
&lt;strong&gt;"AI can't do trade-offs. Humans need to decide."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The direction is right. The diagnosis is too vague to act on — so let's narrow it.&lt;/p&gt;


&lt;h2&gt;
  
  
  2. Scoping the Claim
&lt;/h2&gt;

&lt;p&gt;This article is about a &lt;strong&gt;specific&lt;/strong&gt; class of decisions: &lt;strong&gt;reversible&lt;/strong&gt; and &lt;strong&gt;stage-sensitive&lt;/strong&gt;.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;em&gt;Reversible&lt;/em&gt;: you can undo it without losing data, breaking contracts with users, or rewriting half the codebase. (Adding Redis is reversible. Changing your primary key strategy is not.)&lt;/li&gt;
&lt;li&gt;
&lt;em&gt;Stage-sensitive&lt;/em&gt;: the right answer depends on where you are (MVP / growth / scale), not on universal best practice. (Caching layers, auth hardening depth, observability depth, infra topology.)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For &lt;strong&gt;irreversible&lt;/strong&gt; or &lt;strong&gt;stage-insensitive&lt;/strong&gt; decisions — DB engine, public API contracts, auth model, multi-tenancy boundaries, anything touching PII or payments — AI's conservative reflex is closer to right. That's covered in Section 8.&lt;/p&gt;

&lt;p&gt;Within scope, the claim is:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;AI assistants default to production-grade advice. You can change that, but only by stating your stage, scale, and trade-off weights explicitly.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;


&lt;h2&gt;
  
  
  3. Why the Common Diagnosis Falls Short
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;"AI only has the codebase as context, so it can't reason about trade-offs."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Three problems with this framing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(a) Trade-offs are value judgments, not capability tests.&lt;/strong&gt;&lt;br&gt;
Risk preference, time preference, capital allocation — these are &lt;strong&gt;objective-function definitions&lt;/strong&gt;, not things a model can derive from code alone. Two senior engineers reading the same code can reach opposite conclusions. The CTO says "ship," the security lead says "no" — neither is smarter. They optimize different functions. Asking AI to "make the trade-off" without naming the objective is asking it to optimize without a loss function.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(b) You're not unable to give it context. You're choosing not to.&lt;/strong&gt;&lt;br&gt;
Every major coding assistant has a context-injection slot: &lt;code&gt;CLAUDE.md&lt;/code&gt;, &lt;code&gt;.cursorrules&lt;/code&gt;, &lt;code&gt;AGENTS.md&lt;/code&gt;. Most "AI gave a bad recommendation" stories are at least partially "the user didn't specify context" stories — not all of them (models also hallucinate, miss repo state, or carry generic safety bias), but more than people admit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;(c) Humans over-engineer too.&lt;/strong&gt;&lt;br&gt;
Senior engineers carry trauma from past outages and pre-armor their code. AI's over-engineering and human over-engineering have &lt;em&gt;different failure modes&lt;/em&gt; — AI tends toward boilerplate hardening, humans toward sticky abstractions — but neither is automatically easier to undo. The real axis isn't &lt;em&gt;AI vs. human&lt;/em&gt;. It's &lt;strong&gt;context-aware vs. context-blind decisions&lt;/strong&gt;.&lt;/p&gt;


&lt;h2&gt;
  
  
  4. A Hypothesis: The Production-Mature Prior
&lt;/h2&gt;

&lt;p&gt;Here I have to hedge, because nobody outside the labs knows training mixes for sure. But the publicly visible candidates for &lt;em&gt;what feeds these models&lt;/em&gt; are skewed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;High-star GitHub repos (already scaled, already hardened)&lt;/li&gt;
&lt;li&gt;Vendor docs (AWS, GCP, k8s) — best-practice prose, not MVP code&lt;/li&gt;
&lt;li&gt;High-vote Stack Overflow answers (often "the robust way")&lt;/li&gt;
&lt;li&gt;Tech blog post-mortems ("here's what we should have done")&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What these share: they document code that &lt;strong&gt;survived long enough to need scale, reliability, and compliance vocabulary&lt;/strong&gt;. MVP-shaped code — single-file Flask apps, a &lt;code&gt;docker-compose.yml&lt;/code&gt; with ten environment variables, a single RDS instance, hand-rolled session cookies — exists in training data too, but the &lt;em&gt;advice prose&lt;/em&gt; attached to it is rare. The model has read a lot more "you should harden this" than "this is fine for now."&lt;/p&gt;

&lt;p&gt;This is a &lt;strong&gt;hypothesis&lt;/strong&gt; about a contributing factor, not a proven mechanism. Outputs also reflect instruction tuning, RLHF, system prompts, and safety policies — any of which can independently push toward caution. But it lines up with the observed behavior, and it's testable: try the before/after in Section 5 yourself.&lt;/p&gt;

&lt;p&gt;Security review amplifies the effect. Threat catalogs (OWASP, CVE) are by construction lists of &lt;em&gt;things that went wrong&lt;/em&gt;. A model trained heavily on those, when asked "is this secure?" without a threat model, hedges toward more findings — false positives increase when the model can't price the cost of being wrong in either direction.&lt;/p&gt;


&lt;h2&gt;
  
  
  5. Before / After — Same Model, Different Context
&lt;/h2&gt;

&lt;p&gt;Here is the actual move. Same model (Claude Sonnet 4.6), same prompt, same code. The only difference is whether business context was supplied.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code under review:&lt;/strong&gt; a 60-line Express endpoint that accepts a JSON payload, looks up a user by email in Postgres, and returns a JWT.&lt;/p&gt;
&lt;h3&gt;
  
  
  Before — no context
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Prompt:&lt;/em&gt; "Review the security of this endpoint."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Response (paraphrased, typical shape):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Add rate limiting with Redis token bucket&lt;/li&gt;
&lt;li&gt;Move JWT secret to AWS Secrets Manager&lt;/li&gt;
&lt;li&gt;Enforce mTLS between service and database&lt;/li&gt;
&lt;li&gt;Audit-log every auth attempt to a separate AWS account&lt;/li&gt;
&lt;li&gt;Add a WAF in front of the load balancer&lt;/li&gt;
&lt;li&gt;Implement refresh token rotation with revocation list&lt;/li&gt;
&lt;li&gt;Add CAPTCHA on repeated failures&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Seven recommendations. None wrong in absolute terms. All sized for a company past Series A.&lt;/p&gt;
&lt;h3&gt;
  
  
  After — with context
&lt;/h3&gt;

&lt;p&gt;Add this paragraph to the prompt (or to &lt;code&gt;CLAUDE.md&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Stage: MVP, 50 users, solo dev, $200/month infra budget, 8-month runway.
Threat model in scope: of the OWASP Top 10, prioritize Injection (A03),
Broken Authentication (A07), and Sensitive Data Exposure (A02). Out of scope:
nation-state attackers, insider threats, denial-of-service.
Anti-goals: do NOT recommend mTLS, WAF, separate AWS accounts, or anything
requiring a dedicated infra hire.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Response (paraphrased, typical shape):&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Confirm parameterized queries (looks fine)&lt;/li&gt;
&lt;li&gt;Hash passwords with bcrypt cost ≥12 — current code uses cost 8, raise it&lt;/li&gt;
&lt;li&gt;Validate JWT signing alg explicitly; reject &lt;code&gt;alg: none&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Put the JWT signing secret in an env var; document a rotation procedure (manual is fine at this stage)&lt;/li&gt;
&lt;li&gt;Log auth failures to existing application logs; add a counter for "review at 5k MAU"&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Five items, all things a solo dev can do this afternoon. Same model, same code.&lt;/p&gt;

&lt;p&gt;That's the entire claim of this article, demonstrated. Try it on your own code; the shape of the answer changes.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. The Reframe
&lt;/h2&gt;

&lt;p&gt;The loose version:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"AI can't do trade-offs. Humans must decide."&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The sharper version:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;For reversible, stage-sensitive decisions, AI defaults to production-grade advice. The intervention point is supplying business context — stage, scale, trade-off weights, anti-goals.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This framing is more useful than the original because:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;It points to an action.&lt;/strong&gt; The responsibility moves from "AI is limited" to "I haven't told it where I am." Whether or not that's the &lt;em&gt;whole&lt;/em&gt; story, it's the part you can fix in the next prompt.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It's durable, but not "permanently true."&lt;/strong&gt; Future tooling will surely improve at inferring stage from repo shape, asking clarifying questions, and pulling org context from product telemetry. But humans will remain &lt;em&gt;accountable&lt;/em&gt; for the objective function, even when they delegate parts of stating it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;It admits scope.&lt;/strong&gt; It's a claim about reversible × stage-sensitive decisions, not all decisions.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  7. What "Context" Actually Means — a Taxonomy
&lt;/h2&gt;

&lt;p&gt;"Give the AI more context" is vague advice. Useful context has four layers:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Layer&lt;/th&gt;
&lt;th&gt;What it answers&lt;/th&gt;
&lt;th&gt;Where it lives&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Stage&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;MVP / growth / scale. Reversibility budget.&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;CLAUDE.md&lt;/code&gt; top section&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Constraints&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Runway, team size, infra budget, latency targets&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;CLAUDE.md&lt;/code&gt; or per-prompt&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Trade-off weights&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Ship speed vs. quality vs. scalability ordering&lt;/td&gt;
&lt;td&gt;&lt;code&gt;CLAUDE.md&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Anti-goals&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Explicit list of recommendations to skip&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;CLAUDE.md&lt;/code&gt; "do NOT" list&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;A working &lt;code&gt;CLAUDE.md&lt;/code&gt; snippet for an MVP:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gu"&gt;## Project Context&lt;/span&gt;
&lt;span class="p"&gt;
-&lt;/span&gt; &lt;span class="gs"&gt;**Stage**&lt;/span&gt;: MVP — validating hypothesis. 50 users, target 200 MAU.
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Team**&lt;/span&gt;: Solo developer. No ops headcount.
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Constraints**&lt;/span&gt;: 8-month runway. Infra budget under $200/month.
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Trade-off weights**&lt;/span&gt; (highest to lowest): ship speed, code clarity,
  scalability. Latency p95 under 1s is fine.
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Security scope**&lt;/span&gt;: OWASP Top 10, prioritized — Injection, Broken Auth,
  Sensitive Data Exposure. Out of scope: nation-state, insider threats, DoS.
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Anti-goals — do NOT recommend**&lt;/span&gt;:
&lt;span class="p"&gt;  -&lt;/span&gt; Microservices, k8s, service mesh, mTLS
&lt;span class="p"&gt;  -&lt;/span&gt; VPC Peering / PrivateLink / separate AWS accounts
&lt;span class="p"&gt;  -&lt;/span&gt; Architecture patterns for files under 200 LOC
&lt;span class="p"&gt;  -&lt;/span&gt; Caching, queues, or workers for features without measured load
&lt;span class="p"&gt;-&lt;/span&gt; &lt;span class="gs"&gt;**Re-evaluation trigger**&lt;/span&gt;: revisit these weights at 5,000 MAU or when
  payments ship.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The anti-goals list is the unusual part. Most people skip it. It's the highest-leverage line in the file: it removes a class of recommendations the model would otherwise default to.&lt;/p&gt;




&lt;h2&gt;
  
  
  8. When AI's Default Is Actually Right
&lt;/h2&gt;

&lt;p&gt;The thesis is scoped to reversible × stage-sensitive decisions. Outside that scope, AI's conservative bias is an &lt;em&gt;asset&lt;/em&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Non-recoverable downside&lt;/strong&gt; — anything touching PII, payment data, health data, customer secrets, auth tokens. Also: tenant isolation, key management, backup/restore, audit logs, retention/deletion compliance, breach notification readiness, secrets handling, vendor and supply-chain risk.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Regulated industries&lt;/strong&gt; — finance, healthcare, government, ed-tech with minors. The default prior may even &lt;em&gt;underestimate&lt;/em&gt; what's required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Expensive-to-reverse decisions&lt;/strong&gt; — primary DB engine, auth model, multi-tenancy boundaries, public API contracts, event schemas, ID strategy, billing model, permission model, observability foundations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For these, lean into the conservative recommendation. Override only with a written reason.&lt;/p&gt;

&lt;p&gt;The honest summary: this article is a heuristic for one specific quadrant of decisions, not a universal law.&lt;/p&gt;




&lt;h2&gt;
  
  
  9. What Would Change This?
&lt;/h2&gt;

&lt;p&gt;It's tempting to add "at least for now" at the end and move on. The question is worth a beat instead.&lt;/p&gt;

&lt;p&gt;Things that would partially close the gap:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Stage inference from repo shape&lt;/strong&gt; — a meta-layer that looks at commit cadence, test coverage, observability stack, and recommends differently for "solo founder Express app" vs. "Series B microservice fleet."&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Org-aware agents&lt;/strong&gt; — read access to product metrics, infra spend, roadmap, risk policy. So the model can reason about cost the way a senior engineer does.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Policy profiles&lt;/strong&gt; — "MVP mode" / "scale mode" / "compliance mode" as first-class settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What won't change: humans remain accountable for the objective function. Even if the tool &lt;em&gt;infers&lt;/em&gt; your stage, you still own the decision of whether to accept the inference. So the practical claim — &lt;em&gt;supply context, or don't be surprised by the defaults&lt;/em&gt; — survives most plausible improvements.&lt;/p&gt;




&lt;h2&gt;
  
  
  10. Summary
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Loose framing&lt;/th&gt;
&lt;th&gt;Sharper framing&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;AI can't do trade-offs&lt;/td&gt;
&lt;td&gt;AI optimizes the objective you give it; default objective is production-grade&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Humans must decide&lt;/td&gt;
&lt;td&gt;Humans must specify stage, constraints, trade-off weights, anti-goals&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;"AI's limitation"&lt;/td&gt;
&lt;td&gt;"Missing intervention at the context layer"&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Time-bounded ("for now")&lt;/td&gt;
&lt;td&gt;Humans stay accountable for objectives regardless of model progress&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;&lt;strong&gt;One actionable rule&lt;/strong&gt;: before your next "review this" prompt, write four lines — &lt;em&gt;stage, constraints, trade-off weights, anti-goals&lt;/em&gt; — into the prompt or into &lt;code&gt;CLAUDE.md&lt;/code&gt;. Re-run. If the recommendations don't shift, your context is probably still too thin or you've hit a different failure mode (the model ignoring the file, generic safety bias, or a genuine capability limit). At that point, you have a real problem to debug instead of a vague "AI gave bad advice."&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Open to feedback. Especially curious: (a) does the before/after replicate cleanly on your codebase? (b) where does the "Production-Mature Prior" hypothesis break — concrete counter-examples wanted. (c) is the same pattern visible in adjacent tooling — data engineering, MLOps, security scanners?&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>claude</category>
      <category>productivity</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Why every Claude Code-built site looks the same — and the image layer that breaks it</title>
      <dc:creator>KKK Dev</dc:creator>
      <pubDate>Sat, 16 May 2026 05:18:06 +0000</pubDate>
      <link>https://dev.to/kkk_dev_1b0a00f5047cb4de6/why-every-claude-code-built-site-looks-the-same-and-the-image-layer-that-breaks-it-37jp</link>
      <guid>https://dev.to/kkk_dev_1b0a00f5047cb4de6/why-every-claude-code-built-site-looks-the-same-and-the-image-layer-that-breaks-it-37jp</guid>
      <description>&lt;h2&gt;
  
  
  TL;DR:
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI-built sites look uncannily similar because they share the same defaults — Tailwind + shadcn/ui + Lucide + the same gradients. It's not a placeholder problem; it's a visual-stack problem. Real, project-specific images are the cheapest way out.&lt;/li&gt;
&lt;li&gt;I wrote a small Claude Code skill that wraps Codex CLI's &lt;code&gt;gpt-image-2&lt;/code&gt; and triggers on natural-language asks. Drop a &lt;code&gt;DESIGN.md&lt;/code&gt; at the project root, tell Claude to insert images, and you get a coherent, on-brand set across the site.&lt;/li&gt;
&lt;li&gt;Biggest win for solo developers shipping without a designer. Repo: &lt;a href="https://github.com/JunSeo99/claude-skill-codex-imagegen" rel="noopener noreferrer"&gt;github.com/JunSeo99/claude-skill-codex-imagegen&lt;/a&gt; — install takes 30 seconds (or just hand the URL to Claude Code itself).&lt;/li&gt;
&lt;/ul&gt;




&lt;p&gt;Claude Code can build a working site in one session. The structure, the routing, the component library — it all comes together fine on the first pass. The problem is more subtle: most of these sites end up looking like each other.&lt;/p&gt;

&lt;p&gt;The reason is the stack. Claude reaches for the same defaults every time — Tailwind, shadcn/ui, Lucide icons, a slate-or-zinc palette, a hero with a soft purple-to-blue gradient, cards with a 1px border and &lt;code&gt;rounded-2xl&lt;/code&gt; corners, an abstract SVG blob somewhere in the header. None of those choices are bad. But across hundreds of vibe-coded sites, the cumulative effect is that someone landing on one feels like they've been on this site before — even when they haven't. Visitors don't say "this is shadcn." They say "this feels AI-generated." And the surface they're reacting to is mostly visual: the same component library, the same icon language, the same illustration-less spaces.&lt;/p&gt;

&lt;p&gt;The cheapest way out of that uniformity, I've found, is real images. Not stock. Not Unsplash. Project-specific, style-consistent images generated to match a brand voice. Three or four of them placed where default vibe-coded sites would have left a Lucide icon over a gradient, and the "feels AI-generated" reaction collapses. The site stops reading as a template.&lt;/p&gt;

&lt;p&gt;I wanted that to stop being a manual step.&lt;/p&gt;

&lt;p&gt;In April, OpenAI shipped gpt-image-2 and bundled an &lt;code&gt;$imagegen&lt;/code&gt; skill into Codex CLI. That gave me what I needed: a real image model I could shell out to from inside Claude Code. So I wrote a Claude Code skill that triggers on natural-language asks like "make a hero image for this landing page" and dispatches the actual generation to Codex.&lt;/p&gt;

&lt;p&gt;Then I spent a weekend learning why nobody had a clean solution yet.&lt;/p&gt;

&lt;h2&gt;
  
  
  gpt-image-2 has three sharp edges and none of them are documented loudly
&lt;/h2&gt;

&lt;p&gt;These are the things I hit, in order, on the first day:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Size requests are advisory, not enforced.&lt;/strong&gt; I asked for 256×256. Got 1254×1254. Asked for 1024×1024 — also 1254×1254. The model picks its own dimensions based on what it thinks the prompt needs. If you actually need a specific size for a CSS slot, you resize &lt;em&gt;after&lt;/em&gt;, not before. You can't prompt your way out of it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transparent PNGs aren't supported.&lt;/strong&gt; gpt-image-2 will not emit alpha. Only &lt;code&gt;gpt-image-1.5&lt;/code&gt; does. This is buried in the OpenAI image-generation guide. The first time I asked for an icon "on transparent background," I got a perfectly nice icon sitting on a solid white square. The workaround is to generate on a flat removable background — green or pure white — and chroma-key it out locally. Fine, but you need to know that going in.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The PNG doesn't land where you asked.&lt;/strong&gt; It lands at &lt;code&gt;~/.codex/generated_images/&amp;lt;session-uuid&amp;gt;/ig_*.png&lt;/code&gt;. Telling Codex "save to &lt;code&gt;assets/hero.png&lt;/code&gt;" doesn't move the file there. You move it yourself afterwards.&lt;/p&gt;

&lt;p&gt;Each of those is a 20-minute debug session if you don't know them. Stacked, they make image generation feel "kind of broken" when it's actually working as designed, just badly documented.&lt;/p&gt;
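&lt;p&gt;Two of these edges (the size and the landing path) are mechanical enough to script. Here is a sketch of the post-generation cleanup; the &lt;code&gt;land_image&lt;/code&gt; name and interface are mine, not part of Codex or the skill:&lt;/p&gt;

```shell
# Post-generation cleanup for gpt-image-2 output: Codex leaves the PNG
# under ~/.codex/generated_images/, and the model picks its own dimensions,
# so both the move and the resize happen after the fact.
# land_image SRC DEST [WIDTH HEIGHT] -- name and interface are illustrative.
land_image() {
  src="$1"; dest="$2"; width="$3"; height="$4"
  mkdir -p "$(dirname "$dest")"
  cp "$src" "$dest"   # asking Codex to "save to assets/hero.png" does nothing
  if [ -n "$width" ]; then
    if command -v sips 1>/dev/null 2>/dev/null; then
      sips -z "$height" "$width" "$dest" 1>/dev/null        # macOS
    elif command -v convert 1>/dev/null 2>/dev/null; then
      convert "$dest" -resize "${width}x${height}!" "$dest" # ImageMagick
    fi
  fi
}

# For the transparency edge, chroma-key the flat background out locally, e.g.:
#   convert assets/icon.png -fuzz 10% -transparent white assets/icon.png
# Usage, assuming Codex printed the generated path on its last stdout line:
#   land_image "$HOME/.codex/generated_images/SESSION/ig_hero.png" assets/hero.png 1600 900
```
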

&lt;h2&gt;
  
  
  And then there's the prompt itself
&lt;/h2&gt;

&lt;p&gt;Even if you handle all three edges above, your output is only as good as your prompt. And gpt-image-2 punishes keyword soup.&lt;/p&gt;

&lt;p&gt;The "stunning cinematic 8K masterpiece volumetric lighting" energy that worked on Midjourney v5 produces visibly worse output here. The OpenAI cookbook recommends a five-part structure — &lt;code&gt;Scene → Subject → Details → Use case → Constraints&lt;/code&gt; — and front-loading the first 50 words because the model weights the opening more heavily. This is real. I A/B'd it. The five-part one wins every time.&lt;/p&gt;

&lt;p&gt;For text in images (logos, banners, posters), wrap the literal text in double quotes or ALL CAPS so the model knows what's literal vs. descriptive. gpt-image-2 is genuinely strong here — short labels, signs, and UI mockups land at near-perfect spelling across Latin and CJK scripts, which is a meaningful jump from older models. Where it still wobbles is (a) long multi-line paragraphs baked into the image, (b) brand names and uncommon spellings, and (c) very small text inside dense layouts. For brand names, the OpenAI prompting guide recommends spelling the tricky word out letter-by-letter in the prompt ("the word ACME spelled A-C-M-E"). For paragraph-length text, render it as an HTML/CSS overlay over the generated image instead of asking the model to bake it in — that's the workflow gpt-image-2's own docs recommend.&lt;/p&gt;

&lt;p&gt;For edits, the trick is "change only X, keep everything else identical." Elements you leave unmentioned are preserved only loosely; elements you explicitly tell it to keep are preserved very well. Spell out everything that must not change.&lt;/p&gt;

&lt;p&gt;None of this lives in the recipes that just say "run &lt;code&gt;codex exec&lt;/code&gt; and you're done." So I baked all of it into the skill's playbook.&lt;/p&gt;

&lt;h2&gt;
  
  
  What the skill actually does
&lt;/h2&gt;

&lt;p&gt;One &lt;code&gt;SKILL.md&lt;/code&gt; plus two reference files (&lt;code&gt;prompting-guide.md&lt;/code&gt;, &lt;code&gt;cli-reference.md&lt;/code&gt;) that Claude Code auto-loads from &lt;code&gt;~/.claude/skills/codex-imagegen/&lt;/code&gt;. No Node, no install step beyond &lt;code&gt;git clone &amp;amp;&amp;amp; ln -s&lt;/code&gt;.&lt;/p&gt;
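&lt;p&gt;Concretely, the install is a clone plus a symlink into the skills directory. A sketch, with the caveat that the &lt;code&gt;install_skill&lt;/code&gt; wrapper is mine; the two commands inside it are the whole install:&lt;/p&gt;

```shell
# Clone the repo anywhere, then symlink it into the directory Claude Code
# scans for skills (~/.claude/skills/). The function wrapper is illustrative.
install_skill() {
  repo_dir="$1"   # path to your local clone of claude-skill-codex-imagegen
  mkdir -p "$HOME/.claude/skills"
  ln -sfn "$repo_dir" "$HOME/.claude/skills/codex-imagegen"
}

# Usage:
#   git clone https://github.com/JunSeo99/claude-skill-codex-imagegen
#   install_skill "$(pwd)/claude-skill-codex-imagegen"
```
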

&lt;p&gt;When you say something like "make a hero image of an origami crane for the landing page, save to &lt;code&gt;assets/hero.png&lt;/code&gt; at 1600×900," the skill:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Rewrites your request into the five-part structure (Scene → Subject → Details → Use case → Constraints), front-loaded.&lt;/li&gt;
&lt;li&gt;Runs &lt;code&gt;codex exec --sandbox workspace-write '$imagegen &amp;lt;prompt&amp;gt;. Print only the absolute path on the last line.'&lt;/code&gt; — Codex generates, doesn't move.&lt;/li&gt;
&lt;li&gt;Parses the path from stdout. Runs &lt;code&gt;cp&lt;/code&gt; and &lt;code&gt;sips -z 900 1600&lt;/code&gt; (macOS) or &lt;code&gt;convert -resize 1600x900&lt;/code&gt; (Linux) to land the file where you actually asked.&lt;/li&gt;
&lt;li&gt;Prints the final path. Done.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The natural-language trigger is the part that matters most to my actual goal. I want Claude Code, mid-build, to decide &lt;em&gt;on its own&lt;/em&gt; that this &lt;code&gt;&amp;lt;section&amp;gt;&lt;/code&gt; needs a hero image, and just generate one. Not "user types a special slash command." The skill fires from phrases like "generate an image," "make an icon," "create a banner," "OG image," "hero illustration." Claude calls it the same way it calls anything else in its toolkit.&lt;/p&gt;

&lt;p&gt;That's the whole point. The site shouldn't end up looking like every other vibe-coded site because the agent never broke out of its default visual stack. The agent building the site should be reaching for project-specific imagery on its own.&lt;/p&gt;

&lt;h2&gt;
  
  
  The trick that changes everything: DESIGN.md
&lt;/h2&gt;

&lt;p&gt;Here's the bit I didn't expect to matter as much as it does.&lt;/p&gt;

&lt;p&gt;If you drop a &lt;code&gt;DESIGN.md&lt;/code&gt; at the root of your project — palette, type, illustration style, tone — and then ask Claude Code:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Using DESIGN.md as the style reference, insert images that fit the site.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;…it just works. Really well.&lt;/p&gt;

&lt;p&gt;Claude reads DESIGN.md, decides which slots in the codebase need imagery, writes prompts that incorporate the palette and tone, calls the skill, and inserts the resulting paths into the right &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; tags. The hero image, the empty-state illustration, the OG card, and the favicon all end up looking like they belong to the same product. Without DESIGN.md it still works, but each image drifts a little — palette, mood, lighting are all slightly off across slots, and you can feel it even if you can't immediately name what's wrong.&lt;/p&gt;

&lt;p&gt;DESIGN.md doesn't have to be fancy. Here's a trimmed version of one I'm using right now:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight markdown"&gt;&lt;code&gt;&lt;span class="gh"&gt;# Design&lt;/span&gt;

&lt;span class="gu"&gt;## Concept&lt;/span&gt;
Calm, considered, modern. The kind of feel that gets out of the user's
way instead of demanding attention.

&lt;span class="gu"&gt;## Palette&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Surface (main):  #F4F1ED  — warm off-white
&lt;span class="p"&gt;-&lt;/span&gt; Surface (cards): #FFFFFF
&lt;span class="p"&gt;-&lt;/span&gt; Text:            #1A1A1A  — near-black, not pure
&lt;span class="p"&gt;-&lt;/span&gt; Accent / CTA:    #C46A4E  — soft terracotta, used sparingly

&lt;span class="gu"&gt;## Typography&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Inter, system-ui sans-serif

&lt;span class="gu"&gt;## Illustration style&lt;/span&gt;
&lt;span class="p"&gt;-&lt;/span&gt; Single subject, plenty of whitespace, no busy backgrounds
&lt;span class="p"&gt;-&lt;/span&gt; Soft natural light from upper left, gentle shadows
&lt;span class="p"&gt;-&lt;/span&gt; Hand-folded paper / origami feel where applicable
&lt;span class="p"&gt;-&lt;/span&gt; No text inside images unless explicitly asked
&lt;span class="p"&gt;-&lt;/span&gt; Avoid stock-photo vibes and over-saturated colors
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's 20-ish lines. But Claude treats it as a hard constraint when writing prompts, and the visual consistency across a 4–5 page site is night and day vs. asking for each image cold. The "Illustration style" block is doing about 80% of the work — palette obviously matters, but the qualitative instructions ("hand-folded paper feel," "no busy backgrounds") are what stop each image from feeling like it came from a different stock-image library.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why this is the year vibe-coded sites stop looking vibe-coded
&lt;/h2&gt;

&lt;p&gt;A year ago this would have been a different post. Back then, even if you wanted to break out of the shadcn-default look, generated images weren't the answer. The available models produced output that screamed AI louder than the layout did — slightly melted typography, off-axis lighting, the same handful of obvious tells. So the fastest path was usually "just don't add an image," and the result was a sea of sites that all leaned on the same component library to do all the visual work.&lt;/p&gt;

&lt;p&gt;gpt-image-2 changes the math. With a tight DESIGN.md and a five-part prompt, generated images now look like they came from a brand, not from a model. Text is spelled correctly. Light angles agree across slots. Subject framing is intentional. They're not hand-crafted illustrations from an agency, but they no longer carry the "AI tell" that earlier generations did. And once those images sit alongside the shadcn cards and the Lucide icons, they shift where the eye lands. A visitor reads the hero illustration, the OG card, the empty-state graphic — slots that on a default vibe-coded site were either missing or generic — and the site registers as a product instead of a template.&lt;/p&gt;

&lt;p&gt;The interesting part isn't any individual image. It's that the gap between "site built by a small team with a designer on call" and "site built solo with Claude Code overnight" is mostly carried by image quality and visual specificity. The structure is solved. The components are solved. What's left, and what was carrying most of the "feels AI-generated" signal, was the image layer — and that's the slot this skill fills.&lt;/p&gt;

&lt;p&gt;If you ship as a solo developer — no designer on call, no illustration budget, no Figma file from a teammate — this is the part of the workflow that used to force a compromise. Either you paid for stock images that didn't quite match the rest of the site, or you pulled an SVG from Heroicons and called it a hero. With gpt-image-2 plus a DESIGN.md, that compromise mostly goes away. The same person who writes the code can produce custom, on-brand visuals in the same session, without leaving the editor and without commissioning anyone. That's the audience I built this skill for, and the audience it changes the most for. Designers will always have an edge on intentional taste — I'm not pretending otherwise — but for the long tail of side projects, landing pages, and internal tools that were never going to get a designer in the first place, the bar just moved.&lt;/p&gt;

&lt;p&gt;Once you have this loop — Claude builds the site, reads DESIGN.md, decides where images belong, generates them with consistent style, drops them in place — visitors stop registering that AI built the site. Which is the bar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Caveats
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;A gpt-image-2 turn burns 3–5× as much of your Codex usage limit as a plain text turn. If you're iterating a lot, set &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; and switch to per-image API billing.&lt;/li&gt;
&lt;li&gt;macOS is primary. Linux works via ImageMagick. Windows is not on the roadmap.&lt;/li&gt;
&lt;li&gt;The skill is around 200 lines of markdown plus a small shell helper. If you don't like a default, edit it. There's no framework to wrestle.&lt;/li&gt;
&lt;li&gt;For small text or dense multi-font layouts, bump quality to medium or high — gpt-image-2 is honest about which slots benefit from extra compute.&lt;/li&gt;
&lt;/ul&gt;
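&lt;p&gt;The first caveat is easy to wire into a script. A hedged sketch — the function below is hypothetical, not part of the skill — that prefers per-image API billing whenever &lt;code&gt;OPENAI_API_KEY&lt;/code&gt; is set, and falls back to subscription usage otherwise:&lt;/p&gt;

```shell
# Hypothetical helper, not part of the skill: pick the billing mode
# the caveat above recommends.
billing_mode() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "api"           # flat per-image API billing
  else
    echo "subscription"  # image turns burn 3-5x the usage of a text turn
  fi
}
```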

&lt;h2&gt;
  
  
  Repo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/JunSeo99/claude-skill-codex-imagegen" rel="noopener noreferrer"&gt;github.com/JunSeo99/claude-skill-codex-imagegen&lt;/a&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git clone https://github.com/JunSeo99/claude-skill-codex-imagegen &lt;span class="se"&gt;\&lt;/span&gt;
  ~/.claude/skills/codex-imagegen
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If even that feels like effort, just hand the repo URL to Claude Code itself and tell it to install the skill — something like &lt;em&gt;"install this Claude Code skill: &lt;a href="https://github.com/JunSeo99/claude-skill-codex-imagegen" rel="noopener noreferrer"&gt;https://github.com/JunSeo99/claude-skill-codex-imagegen&lt;/a&gt;"&lt;/em&gt;. It'll read the README, run the clone-and-symlink, and the next session will just have it. Mildly recursive — using Claude Code to install something Claude Code is going to use — but it works, and honestly it's how I install most of my own skills these days.&lt;/p&gt;

&lt;p&gt;Once it's installed, restart Claude Code. Drop a DESIGN.md at the root of your project. Build your site. Then say: &lt;em&gt;"Using DESIGN.md as the style reference, insert images that fit the site."&lt;/em&gt;&lt;/p&gt;
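&lt;p&gt;As a script-shaped recap of those steps — the DESIGN.md content here is a throwaway placeholder, and the prompts are shown as comments because they're typed inside Claude Code, not in the shell:&lt;/p&gt;

```shell
# 1. Install the skill (see the Repo section above), then restart Claude Code.
# 2. Drop a minimal DESIGN.md at the project root:
printf '%s\n' \
  '# Design' \
  '' \
  '## Concept' \
  'Calm, considered, modern.' \
  '' \
  '## Illustration style' \
  '- Single subject, plenty of whitespace' > DESIGN.md
# 3. Build the site in Claude Code, then ask:
#    "Using DESIGN.md as the style reference, insert images that fit the site."
```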




&lt;p&gt;Curious if anyone else is doing the DESIGN.md-as-style-anchor pattern for AI-generated assets — I'd love to compare notes on which fields actually move the needle and which are noise. The "Illustration style" block is doing 80% of the work in my setup, but I haven't tested it across enough projects to call it.&lt;/p&gt;

&lt;p&gt;And feedback on the skill itself is genuinely welcome — issues, PRs, "this default is wrong," "this caveat is missing," "this prompt pattern didn't work for me." It's still early, and I plan to keep iterating on it as people actually run it in their own projects. If you try it and something breaks or feels off, please tell me — that's the fastest way I'll make it better.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>showdev</category>
      <category>claude</category>
      <category>codex</category>
    </item>
  </channel>
</rss>
