Andrew Eddie

Teaching AI to Write Like Me

Every time you ask an AI to help you write, it sounds like an AI. Not because the AI is bad at writing, but because it has no idea who you are. It defaults to training data: a blend of every Medium post, technical blog, and corporate copywriter it has ever ingested. The result is competent, generic, and nothing like you.

And here's the thing. You already have the solution. It's sitting in your git history, your old blog posts, your archived articles. Years of writing that is unmistakably, provably yours.

I recently sat down with Claude and shared 169 of my own blog posts spanning 2007 to 2014. The goal was to build a detailed voice profile that I could include in any writing project to override the AI's default voice. Not a vague "write in a casual tone" instruction. A binding, specific, line-by-line reference document that tells the AI exactly how I write, and exactly how I don't.

The result was effective. And the process is something anyone with a writing history can replicate.

What I Learned About My Own Voice

It turns out that my voice was pretty consistent across seven years and very different content types (tutorials, opinion pieces, product announcements, conference talks, and community commentary). The AI identified solid and consistent patterns:

  • I almost always explain why before how
  • My sentence rhythm follows a short-medium-medium-short pattern
  • I use "we" when teaching and "I" when taking a position
  • I reach for developer-native metaphors over literary ones
  • My enthusiasm shows through depth and detail, never through superlatives
  • I have signature phrases I use without thinking ("Let's have a look at...", "Fair question.", "And here's the thing")

What was absent? No exclamation marks. No "amazing" or "incredible". No "dive deep" or "leverage". No corporate jargon. These are all things AI defaults to constantly, and none of them are me.

The Voice Evolved

I also compared my 2007-2014 writing against something I'd written recently: a formal retrospective document from 2026. The voice had clearly matured.

The playful parenthetical asides had mostly disappeared. "I have to be honest and say..." had become "My experience has been that..." The collaborative "we" had shifted toward a more authoritative "I" that carried accountability rather than just anecdote. The humour hadn't gone away, but it had gone quieter: structural irony instead of wordplay.

Same person. Same core directness. But the texture had changed. This matters because a voice profile that only captures how you wrote ten years ago will sound dated.

Testing Against AI-Assisted Writing

The test was comparing the profile against four blog posts I'd already published on dev.to, posts that were heavily 'vibe-written' with AI assistance. The question: does my voice come through, and where does the AI leak?

The AI fingerprints were easy to spot once I knew what to look for:

  • "Now go build something cool" as a closing (appeared in multiple posts, not my voice at all)
  • "Chef's kiss" (internet-speak I'd never use unprompted)
  • TL;DR blocks at the top (a dev.to convention, not my opening style)
  • Over-parallel scaffolding where every section followed the exact same template
  • Grandiose closings like "For the first time in history..."

But there were also moments where my real voice punched through the AI assistance. An Australian minced oath ("Pluck a duck"). A dry observation about "re-explaining your life story to a goldfish with a very expensive education". Punctuation used as percussion ("Just. Diving. Please."). These were the gems, the moments no AI would generate unprompted.

The Document

The voice profile ended up as a 22-section markdown file. For my profile, it covers:

  • Australian English spelling rules (non-negotiable)
  • Sentence construction patterns with specific word counts
  • A forbidden words list of 20+ terms that LLMs overuse
  • Signature phrases catalogued by context
  • Structural patterns for different content types
  • Two independent calibration dials: Formality (1-5, from stakeholder retro to personal blog) and Depth (A-D, from practical to transcendent/philosophical)
  • An LLM co-writing hygiene checklist for catching AI leakage before publishing
  • Title guidelines (no clickbait, ever)

The two-dial system turned out to be the most useful part. My voice doesn't change between a formal incident report and a dev.to blog post. The core is the same. What changes is how much personality, warmth, and philosophical reach comes through. The dials let me specify that precisely.
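In practice, the dial settings travel with the brief. As a purely hypothetical illustration (the full level descriptions live in the profile itself), a request might read:

Write this as a dev.to blog post. Formality: personal blog. Depth: practical. Treat the attached voice profile as binding instructions.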

How to Build Your Own

If you have a body of writing (blog posts, articles, documentation, even long-form emails or Slack messages you've saved), you can do this. Here's the process distilled into a prompt you can adapt.

The Super-Prompt

Give this to Claude (or your AI of choice) along with as many of your writing samples as you can fit in the context. More samples across more content types give better results.

Analyse the following writing samples thoroughly. They are all by the same author. I need you to build a detailed voice profile that captures HOW this person writes, not WHAT they write about. The profile must be specific enough to override your default writing tendencies when used as a reference.

For each of these dimensions, provide specific observations with quoted examples:

  1. Sentence construction: Average length, rhythm patterns, characteristic openers, how they vary sentence structure
  2. Vocabulary: Register level, words they favour, words they never use, technical vs plain language balance
  3. Spelling and dialect: Which English variant (American, British, Australian, etc.)
  4. Tone: Where they sit on the formal/informal spectrum, how they handle warmth vs authority
  5. Reader address: How they use "I", "we", "you". When each pronoun appears and why
  6. Teaching patterns: How they introduce concepts, handle code/examples, transition between explanation and demonstration
  7. Opinion and argument: How they express disagreement, signal opinion vs fact, handle nuance
  8. Humour: What kind they use, what kind they avoid, how much and when
  9. Enthusiasm: How they show excitement (superlatives? depth? detail?)
  10. Openings and closings: How they start pieces, how they end them, patterns across content types
  11. Punctuation habits: Em dashes, parentheses, semicolons, exclamation marks, rhetorical questions
  12. Signature phrases: Recurring expressions, verbal tics, characteristic transitions
  13. What's absent: Words, patterns, or constructions this author never uses, especially ones that AI commonly defaults to

After the analysis, produce a complete voice profile document in markdown that I can include in future writing projects. The document should:

  • State rules directly ("Never use..." not "The author tends to avoid...")
  • Include a forbidden words/phrases list
  • Provide correct and incorrect examples for key patterns
  • Be structured so an AI can follow it as binding instructions
  • Include a calibration checklist for verifying output matches the voice
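If your archive lives as files on disk, a short script can bundle the samples into one document to paste in alongside the prompt. This is a minimal sketch, not part of the method itself; the posts/ directory and output filename are assumptions you'll need to adapt.

```python
# A minimal sketch for bundling writing samples before pasting them into the chat.
# Assumptions to adapt: posts live as markdown files under a local "posts/" folder,
# and the bundle is written to "voice-samples.md".
from pathlib import Path

SAMPLES_DIR = Path("posts")
OUTPUT_FILE = Path("voice-samples.md")

parts = []
for path in sorted(SAMPLES_DIR.glob("**/*.md")):
    text = path.read_text(encoding="utf-8").strip()
    # Label each sample so the AI can tell where one piece ends and the next begins.
    parts.append(f"--- SAMPLE: {path.name} ---\n\n{text}")

bundle = "\n\n".join(parts)
OUTPUT_FILE.write_text(bundle, encoding="utf-8")

# Rough size check only: ~4 characters per token is a common rule of thumb,
# enough to tell you whether the bundle will fit in the context window.
print(f"{len(parts)} samples, roughly {len(bundle) // 4:,} tokens")
```

The token estimate is deliberately crude. It's only there to warn you when the bundle is about to blow the context window, at which point you trim or batch the samples.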

Tips for Better Results

Volume matters. I was lucky enough to have 169 posts lying around. You don't need that many, but more is better. 20-30 substantial pieces across different content types should give solid results.

Variety matters more. A mix of tutorials, opinion pieces, announcements, and informal writing reveals different facets of the same voice. If you only analyse one type, the profile will be incomplete.

If your oldest samples are years old, include something recent. Your voice evolves. The profile should capture who you are now, not just who you were. Mine shifted noticeably between 2007 and 2026: less playful, more precise, same directness.

Once you have a first draft of the profile, test it. If you've already published AI-assisted content, compare the profile against it. Where does your voice come through? Where does the AI leak? Add what you find to the profile. The first draft of mine included em dashes everywhere because my old writing used them heavily. I don't anymore. Tell the AI your current preferences and update accordingly. It's a living document.
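If you want a concrete starting point for that comparison, the review prompt can be as blunt as this (the wording is mine; adapt it):

Here is my voice profile and a draft written with AI assistance. Compare the draft against the profile and list every place the draft breaks it: forbidden words, phrasing I wouldn't use, structure that doesn't match my patterns. Quote each offending passage and suggest a replacement in my voice. Also note anything in the draft that sounds like me but isn't captured in the profile yet, so I can add it.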

You might also consider adding audience calibration. A single fixed voice profile works, but a dial system works better. Most people write differently for a stakeholder report than for a blog post. The core voice stays the same. The texture changes.

Why This Matters

LLMs are trained on billions of words, and those billions of words have a voice. It's polished, helpful, and completely anonymous. Every "let's dive in", every "powerful and robust", every "in today's rapidly evolving landscape" is the training data talking, not you.

The only way to push back against that gravitational pull is to be specific. Not "write casually" or "match my tone". Specific. "Never use em dashes. Always use Oxford commas. Australian English. No profanity. Reach for developer-native metaphors. Show enthusiasm through detail, not superlatives. When expressing opinion, anchor it in professional experience, not assertion."

Your voice is the product of every piece you've ever written. It's already documented. It's sitting in your git history, your blog archive, your old articles. The AI just needs you to show it.

Fine Tuning the Experiment

This article was drafted by AI according to my voice profile and the history in this now very, very long context window. It's a summary of a conversation. It's insight into the back-and-forth I had with a learned colleague.

We, and I say that deliberately, found things the voice missed. Patterns that could be tightened further. The first draft of this article used "surprisingly effective" where I would just say "effective". It used "the first surprise was" where I would say "it turns out that". It used "the AI found patterns I'd never consciously noticed" where I would say "the AI identified solid and consistent patterns". Each one was a small moment where the AI's instinct to dramatise overrode my instinct to flatten.

So part of this protocol involves iteration. Read the output carefully. Adjust the voice profile. Then do another pass.

This article went through that loop. I opened a fresh conversation, shared the article and the voice profile, and asked for a review of what the previous session overlooked. It caught more to fix: an idiom that wouldn't land for ESL readers, a sentence that treated the AI as a machine instead of a colleague, and over-parallel scaffolding in the tips section. Small things. The kind you miss when you've stared at the screen for too long.

But there were other small things. The kind you only notice after pairing with an AI over months and years. The kind you can't codify. Human review is still a necessary part of the process.

In the end, think of it as code review for prose. A second or third or fourth pass catches what the previous one normalised.

Enjoy!
