Raunak Kathuria

Posted on • Originally published at raunakkathuria.substack.com

The AI model is a commodity. Taste isn't.

When everyone has access to the same AI, your edge is the judgment behind the output.

The first comment on my Reddit post wasn't a question.

"I don't think I've ever seen so many stupid ideas and em-dashes in such a short amount of text."

No counter-argument. No nuance. Just a precise observation about punctuation I couldn't shake, because they were right. Not about the em-dashes themselves, but about the generic text they punctuated.

The post had been written with AI assistance. I'd used em-dashes in almost every sentence, in a rhythm that wasn't mine. The Reddit commenter knew something was off. That's the thing about taste — you recognise the absence of it before you can name it.

Everyone has the same AI

Here's the uncomfortable truth: the model is no longer the differentiator.

Give ten capable people a reasonable prompt and you'll get ten tidy versions of the same answer. Same sentence balance. Same transitions. Same polished neutrality.

The AI optimises for readability, and readable turns out to mean forgettable.

What nobody's discussing: AI was trained to be useful to everyone. That means it has no taste. It has capability without judgment, fluency without instinct. It can write anything, which means it sounds like nothing in particular.

Taste is judgment you can teach

Everyone who's spent years thinking about something develops a taste for it. Not just preferences: a finely tuned sense of what belongs and what doesn't. What to say. What to leave out. What sounds true and what sounds performed.

That taste isn't in the model. It's in you.

Taste is the personalisation you bring to AI — the part that can't be trained on someone else's data.

A doctor has it in diagnosis, the pattern recognition from thousands of cases that flags something wrong before the tests confirm it. A great editor has it for sentences, whether a line is doing the work it thinks it's doing.

AI can generate the words. It can't generate the judgment behind them.

The question is whether you can give it yours.

What I built to do exactly that

After the Reddit comment, I didn't just edit the post. I tried to fix the root cause.

The problem wasn't the output. It was that I'd given the AI no real constraint to work with. "Write like me" is not a constraint. It's a wish. The AI interpreted it as: write clearly, write professionally, write in a way that nobody will object to. And nobody would object to it, because it had no point of view.

So I spent a few sessions trying to articulate what my voice actually was. Not in general terms, specific ones.

I asked the agent to read through my Substack posts, my LinkedIn writing, our conversation history. Identify the patterns. Push back on what it got wrong. Refine until the description felt accurate enough to be useful.

The output was a file I called TASTE.md.

It's not a style guide. Style guides tell you how to format things. This was closer to a brief for what kind of mind should be speaking: what it cares about, what it would never say, the things that would sound wrong even if they were technically correct.
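To make that concrete, here is what a file in that spirit might look like. This is an illustrative sketch, not the author's actual TASTE.md; every entry below is invented for the example.

```markdown
# TASTE.md (illustrative sketch)

## What this voice cares about
- Specifics over abstractions: a number, a named failure, a real trade-off.
- One idea per sentence. If a sentence needs two clauses, it's two sentences.

## What it would never say
- "Leverage", "delve", "in today's fast-paced world".
- A balanced both-sides paragraph when the author actually has a position.

## What sounds wrong even when technically correct
- Em-dashes in every sentence.
- Openings that summarise the conclusion before earning it.
```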

The difference in the writing was immediate. Not perfect: the first draft with TASTE.md loaded still scored 62% on AI detection, down from 78%, and it took three rounds of a proper audit loop to reach 11%. But the direction was right from the start.
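The audit loop itself is simple enough to sketch in code. This is a hedged sketch, not the author's actual tooling: `ai_likeness_score` is a toy stand-in (it just measures em-dash density), where a real setup would call a detection service, and `revise` in practice would be the model rewriting the draft against TASTE.md.

```python
def ai_likeness_score(text: str) -> float:
    """Toy proxy score in [0, 100]: em-dashes per sentence. A real loop
    would call an AI-detection service here instead."""
    sentences = [s for s in text.replace("\n", " ").split(".") if s.strip()]
    if not sentences:
        return 0.0
    dashes = text.count("\u2014")  # count em-dash characters
    return min(100.0, 100.0 * dashes / len(sentences))

def audit_loop(draft: str, revise, threshold: float = 20.0, max_rounds: int = 3):
    """Score the draft; keep revising until it passes or rounds run out.
    Returns (final_draft, final_score, rounds_used)."""
    for round_no in range(1, max_rounds + 1):
        score = ai_likeness_score(draft)
        if score <= threshold:
            return draft, score, round_no - 1
        draft = revise(draft)  # in practice: model rewrite against TASTE.md
    return draft, ai_likeness_score(draft), max_rounds

# Demo revision step: strip em-dashes. Purely illustrative.
cleaned, score, rounds = audit_loop(
    "Good ideas \u2014 stated plainly \u2014 need no decoration.",
    revise=lambda text: text.replace(" \u2014 ", " "),
)
```

The shape matters more than the scoring function: a fixed round budget stops you from chasing the detector forever, and the loop returns how many rounds it took, which is the number worth tracking over time.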

Why this matters beyond writing

Writing is just the most visible place where taste shows up.

In code review, taste is your sense of what good architecture feels like in your specific system: not in the benchmark, not in general, in yours. In product decisions, it's the feel for which trade-off your team can absorb right now and which one will cost you a quarter.

These aren't prompting techniques. They're the accumulated judgment you've built over years, finally expressible in a form something else can use.

The teams that get the most out of AI aren't the ones with the best prompts. They're the ones who've written their judgment down, something like TASTE.md but for their domain, specific enough to hand to something that has no beliefs of its own.

The inversion worth sitting with

AI is trained to be a generalist, and treating it as one is what holds people back.

The same model with your specific taste, the things that would sound wrong to you even if they're technically correct, produces something different from the same model without it. Each piece builds on the last. The output starts to sound like it came from someone.

And the part I didn't expect: building TASTE.md didn't just make the AI more useful. It made me clearer about what I actually believe. You can't articulate your taste to something else without first understanding it yourself.

The Reddit comment that stung on a Sunday evening was useful. Not because of the em-dashes, but because the commenter was right about the generic text behind them.

Everyone has access to the same AI. What differentiates you is what you bring to it.

How clearly have you articulated your taste?


This is part one of a two-part series. Part two covers the exact process: how to build your own TASTE.md, the audit loop I use, and what the before/after looks like.
