DEV Community

Raunak Kathuria

Posted on • Originally published at raunakkathuria.substack.com

How to make AI sound more like you, not more like AI

In the first post, I made a simple point: The AI model is a commodity. Taste isn’t.

https://dev.to/raunakkathuria/the-ai-model-is-a-commodity-taste-isnt-5177

A few people then asked the obvious next question: “If taste is the real differentiator, how do you actually build it into an AI workflow?”

This post is my answer to that. Not in theory. In practice.

This is the exact process I used:

  • the prompt I used to build my TASTE.md
  • the questions that mattered most
  • the audit loop I now use to catch AI-sounding writing
  • and why this worked better than just saying “write in my voice”

I started doing this because I kept running into the same problem. I could get AI to produce clean writing. Sometimes even impressive writing. But it still did not quite feel like me. That was the frustrating part.

It was often clear, structured, and polished. But something in it felt slightly off. Too smooth. Too balanced. A bit too eager to sound like “good writing.”

For a while, I thought this was mainly a prompting problem from my side.

Maybe I needed better instructions:

  • “Write like me.”
  • “Match my tone.”
  • “Be more natural.”

That helped a little. But not enough. It still felt like I was describing the surface of my writing, not the thing underneath it.

That was the shift for me.

AI does not need more adjectives about your style. It needs a better understanding of your taste.

The goal was not to sound like me. It was to think closer to me.

At first, I assumed I mainly needed AI to copy my tone, sentence style, or structure. But that is only 10% of it.

What actually matters more is:

  • what I notice
  • what I simplify
  • what I reject
  • what I find engaging
  • what I never want to sound like

In other words, taste. That is what makes something feel like you even before it becomes stylish.

It is also why two pieces can say similar things, but only one feels like it came from you.

The first prompt I used

I started broad on purpose.

I wanted the model to look at my public writing, my older blog posts, and the way I think in conversations before trying to define my voice.

Here is the prompt I used:

I want you to go through my Substack and LinkedIn
- https://www.linkedin.com/in/raunakkathuria/
- https://raunakkathuria.substack.com/
- https://raunakkathuria.github.io/blog/ (though a little old now)

And also analyze all the chat history with me to develop a taste that can define my taste for AI to write and behave like me.

You need to create a taste of me based on the above information, ask me questions that will be needed to create this and ask till you are not satisfied that we have 95% of information to write the taste for defining me, my writing style etc that I will use to give it to my agent AI so that it can sound and write like me

Also read https://alisabelmas.substack.com/p/the-age-of-good-taste that explains how the taste has evolved over the centuries

Two things about this were useful.

First, it forced the model to start from evidence instead of guessing.

Second, it made the process iterative. Not “generate a style guide in one shot,” but “keep asking until the signal is strong enough.”

That part matters more than it sounds. Most people stop too early.

They show the model two or three pieces of writing, ask for a style guide, and then wonder why the result sounds generic.

A generic input usually produces a generic version of you.

The next step was contrast, not description

Once the model had enough material, the most useful questions were not “what is my tone?”

They were questions like:

  • what writing do I admire?
  • what writing do I dislike, even if it is popular?
  • what should AI never do when writing like me?
  • how opinionated should it be?
  • what should readers feel?
  • what kind of openings feel natural to me?
  • what kind of endings feel earned?

That is where the real shape started to emerge.

I shared references I genuinely like:

  • how Karpathy presents information
  • how Arthur Hayes can be engaging
  • how Anthropic Engineering explains technical ideas in a way that is easy to digest

And I was also explicit about the emotional shape I wanted:

  • calm clarity first
  • playful insight second
  • not loud
  • not boastful
  • not spammy

That narrowed the space quickly.

Because good taste is not just preference. It is also rejection.

Sometimes the clearest signal is not what you love. It is what you instantly do not want to sound like.

The most useful realization was what I did not want

This turned out to be as important as what I liked.

I did not want the writing to sound:

  • like consultant-speak
  • like LinkedIn self-congratulation
  • like a Twitter-thread guru
  • like dramatic storytelling trying too hard to land
  • like polished AI neatness

That gave the model sharper guardrails.

A lot of weak AI writing does not fail because it is wrong. It fails because it feels slightly manufactured.

  • Too polished
  • Too balanced
  • Too eager to sound like “good writing”

That was the part I kept reacting to.

Not that the draft was poor. Just that it felt constructed.

At some point, the process gave me a simple filter that ended up being useful everywhere after that:

not writing that grabs attention but writing that earns attention

That is still one of the best tests I have found.

What my TASTE.md ended up looking like

The final TASTE.md was not really a list of tone adjectives.

It was closer to a decision system.

It encoded things like:

  • make complex ideas easy to understand
  • use simple language with technical precision when needed
  • prefer logic first, then story, then examples
  • be engaging, but never performative
  • use subtle humor, not forced wit
  • allow paradox when it reveals something true
  • stay low-ego
  • avoid sounding like a personal brand machine

That was the real difference.

I was no longer asking AI to imitate my writing.

I was giving it a better model of my judgment.

And that is a much stronger foundation.
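Once a TASTE.md exists, wiring it into an agent is mostly plumbing. Here is a minimal sketch of one way to do it, assuming an OpenAI-style chat message format; the file name and the helper function are my own illustration, not part of any library:

```python
from pathlib import Path

def build_messages(taste_path: str, request: str) -> list[dict]:
    """Compose chat messages that put the taste file ahead of the task.

    The taste definition is sent as the system message, so every draft
    is generated with the same judgment baked in, regardless of the
    individual request.
    """
    taste = Path(taste_path).read_text(encoding="utf-8")
    return [
        {"role": "system",
         "content": f"Write according to this taste definition:\n\n{taste}"},
        {"role": "user", "content": request},
    ]
```

The point of the sketch is only that the taste file is loaded once and reused for every draft, instead of being re-described in each prompt.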

The second part of the system: audit, then humanize

Once I had the base taste defined, a different problem showed up.

Even when the idea was right, some drafts still sounded a bit too smooth. Again, not bad. Just slightly synthetic.

So I started using a separate audit prompt after drafting. That separation helped a lot.

Writing and critiquing are different jobs. A model that can produce a smooth paragraph is not always good at noticing where that same paragraph sounds artificial.

You have to ask for that explicitly.

Here is the audit prompt I use.

Universal AI-audit-check prompt

Audit this draft for AI-sounding writing.

Check for:
- predictable phrasing
- generic abstractions
- over-balanced sentence rhythm
- repeated syntactic patterns
- too many rhetorical devices
- over-explaining
- vague claims without concrete grounding
- polished-but-forgettable wording
- anything that sounds constructed instead of true

Return:
1. AI-sound risk score (0–100)
2. Top 5 flagged lines
3. Why each line feels artificial
4. A tighter alternative for each
5. Final verdict: pass / revise / rewrite

Important:
- preserve my meaning
- preserve my calm, grounded voice
- do not make it more dramatic
- do not add fake personality
- do not add emojis, hype, or cleverness for its own sake

The key thing here is that I am not asking it, “make this better.”

That is too vague.

I am asking it to tell me where the writing sounds artificial, and why.

That usually leads to much better edits.
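The audit prompt does the real work, but a crude mechanical pre-check can catch the most obvious tells before you spend a model call. This is a toy heuristic of my own, not part of the workflow above — it only counts a handful of stock phrases relative to text length:

```python
import re

# A few stock phrases that tend to mark machine-polished prose.
# Illustrative only, not an exhaustive or authoritative list.
STOCK_PHRASES = [
    "delve into", "in today's fast-paced world", "it's important to note",
    "at the end of the day", "game-changer", "unlock the power",
    "in conclusion", "moreover",
]

def ai_sound_risk(text: str) -> int:
    """Return a rough 0-100 risk score based on stock-phrase density."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in STOCK_PHRASES)
    words = max(len(re.findall(r"\w+", text)), 1)
    # Scale so that one stock phrase per 50 words maxes out the score.
    return min(100, int(hits * 50 * 100 / words))
```

A heuristic like this can only flag the loudest offenders; the subtler problems (over-balanced rhythm, polished-but-forgettable wording) still need the model-driven audit.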

Universal humanizer prompt

Then, only if the audit flags real issues, I use this:

Humanize this draft without changing the idea.

Goals:
- make it sound more natural, specific, and human
- reduce visible AI smoothness
- keep the writing calm, clear, grounded, and low-ego
- preserve my original judgment and intent

Rules:
- edit only where needed
- prefer concrete words over abstract ones
- remove over-signposting
- vary sentence rhythm naturally
- keep one sharp line if it feels earned
- do not force anecdotes
- do not add fake struggle or fake vulnerability
- do not make it sound like a LinkedIn guru
- do not over-polish

Return:
1. revised version
2. 3 biggest changes made
3. why those changes improved the voice

This matters too. A lot of “humanizing” prompts make writing worse because they add fake personality.

That is not what I want. I do not want the writing to sound louder, more dramatic, or more quirky than it needs to be.

I just want it to stop sounding like polished autocomplete.

The actual loop I use

In practice, the workflow is simple.

1. Build the base taste

Use examples, dislikes, audience, tone, openings, endings, and constraints to create a real TASTE.md.

2. Draft with that taste

Have the model write with the shared taste as the base.

3. Audit the draft

Use a separate prompt to identify where it sounds AI-generated.

4. Humanize only if needed

Do not “improve” everything. Fix only what is flattening the voice.

5. Keep refining the taste file

Every time something feels off, the fix is often not in the draft. It is in the taste definition.

That last point is probably the most important.

A weak prompt can produce a weak output once. A weak taste file produces weak outputs repeatedly.
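The five steps above can be sketched as a small control loop. The function names and the risk threshold are placeholders of mine — each `*_fn` stands for one model call using the corresponding prompt. The only real logic is that the humanize pass runs conditionally, driven by the audit:

```python
def run_writing_loop(request, draft_fn, audit_fn, humanize_fn,
                     risk_threshold=40):
    """Draft -> audit -> humanize only if the audit flags real issues.

    draft_fn writes with the taste file as its base, audit_fn returns a
    0-100 AI-sound risk score, and humanize_fn makes targeted edits.
    """
    draft = draft_fn(request)
    score = audit_fn(draft)            # from the audit prompt
    if score >= risk_threshold:
        return humanize_fn(draft)      # fix only what flattens the voice
    return draft                       # low risk: ship the draft as-is
```

The conditional is the part that matters: most drafts should pass the audit untouched, and a rising failure rate is a signal to fix the taste file, not the drafts.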

Before and after

Before this process, I was asking AI to imitate my writing. After this process, I was giving AI a better model of my judgment.

That changed the quality of the output more than any clever prompt did.

If you want to build your own

Start with three things.

1. Give it your real data

Not just one post. Give it enough signal:

  • blogs
  • notes
  • old writing
  • chats
  • public posts

2. Give it contrast

Tell it what you like and what you never want to sound like. This is where a lot of the real taste signal comes from.

3. Separate writing from auditing

  • Use one pass to generate.
  • Use another pass to critique.
  • Use a third pass, only when needed, to humanize.

Do not merge all three into one vague instruction and expect the result to be sharp.

Final thought

In the first post, I argued that taste will matter more as models become abundant.

This is what that looks like in practice.

The easiest mistake with AI writing is to focus on output too early.

You keep editing sentences when the real issue is that the model does not yet understand your judgment.

That is why I think TASTE.md matters.

It is not just a style file. It is a way of making your preferences reusable.

And once that exists, the writing gets better because the decisions underneath it get better.

AI gets cheaper every year. Your judgment does not.

That is why the real asset is not the model. It is the taste you train it to follow.
