Two years of daily Claude + ChatGPT. They've seen probably a million tokens of my writing. Every response still opens with "Certainly!" or "Great question!" and closes with "In conclusion…".
Nobody writes like that. The model has no idea who you are — you're just another session.
So I built chatlectify. Point it at your exported chat history (Claude / ChatGPT / Gemini JSON, or a folder of your own writing — blog posts, emails, notes). It outputs a SKILL.md + system_prompt.txt that makes the model write like you.
How it works
- Extracts ~20 stylometric features from your messages — sentence-length distribution, contraction rate, bullet usage, hedge words, typo rate, punctuation histograms, question-vs-imperative ratio, top sentence starters
- Picks a stratified sample of your messages across length buckets as exemplars
- One LLM call distills it all into a portable style file
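For a feel of what the feature-extraction step involves, here's a minimal sketch of a few of those stylometrics (sentence-length stats, contraction rate, hedge rate, question ratio), assuming messages come in as plain strings. The function and the word lists are my own illustration, not chatlectify's actual code:

```python
import re
from statistics import mean, pstdev

# Illustrative hedge list; the real tool presumably uses a larger one.
HEDGES = {"maybe", "probably", "perhaps", "might", "somewhat", "arguably"}
CONTRACTION = re.compile(r"\b\w+'(?:t|s|re|ve|ll|d|m)\b", re.IGNORECASE)

def style_features(messages):
    """Compute a handful of the stylometric features described above."""
    text = " ".join(messages)
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = text.lower().split()
    n_words = max(len(words), 1)
    return {
        "mean_sentence_len": mean(lengths) if lengths else 0.0,
        "sentence_len_sd": pstdev(lengths) if len(lengths) > 1 else 0.0,
        "contraction_rate": len(CONTRACTION.findall(text)) / n_words,
        "hedge_rate": sum(w.strip(".,") in HEDGES for w in words) / n_words,
        "question_ratio": text.count("?") / max(len(sentences), 1),
    }
```

The other features (bullet usage, punctuation histograms, top sentence starters) follow the same shape: cheap counts over the raw text, no model needed until the final synth step.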
Privacy
Runs locally. Exactly one outbound LLM call to your configured model — the synth step that writes the style file. That call includes your feature summary and ~40 exemplar messages (the stratified sample). Nothing else leaves your machine. No telemetry, no cloud backend, no account.
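To make the privacy claim concrete, this is roughly the shape of that one outbound payload: the feature summary plus the exemplar sample, and nothing else. The function name and prompt wording are my own illustration, not chatlectify's actual code:

```python
import json

def build_synth_prompt(features, exemplars, max_exemplars=40):
    """Assemble the single outbound payload: a stylometric feature
    summary plus the stratified exemplar sample. Raw history stays local."""
    return (
        "Distill a reusable writing-style guide from the data below.\n\n"
        f"Stylometric features:\n{json.dumps(features, indent=2)}\n\n"
        "Exemplar messages:\n"
        + "\n---\n".join(exemplars[:max_exemplars])
    )
```

Everything before this point is plain local computation, so the only trust boundary is whatever model you point the synth step at.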
Usage
pip install chatlectify
chatlectify all ./conversations.json --out-dir ./my_skill
Drop the folder into ~/.claude/skills/ or paste system_prompt.txt into any model that accepts a system prompt.
Repo: https://github.com/0x1Adi/chatlectify
Curious what people think. Also — which export formats should I add next? Slack, iMessage, email, Discord, Obsidian vault?