
Mohamed Abdallah

Posted on • Originally published at Medium

Claude rewrote my resume and I couldn’t send it, so I built unslop.

I was updating my CV. New experience, new role, the usual. I'd written it myself a few years back. I asked Claude to update the wording with my latest projects.

I read the output. It was AI.

You know the feeling. You read a paragraph, and the first thing your brain says is this is written with AI. It doesn't matter how careful the person was. It doesn't matter that they reviewed every line. The moment the reader thinks "AI", the work gets discounted before they finish the sentence. The effort doesn't transfer.

I'm a developer. I'm not the best at writing in English. I never was. Email, PR comments, Jira tickets — I lean on AI to help me get the wording right because prose isn't the part of the job I'm good at. That was the deal AI was supposed to give me. Help the engineers who can't write.

The deal is broken. Because the output reads as AI. And the moment your reader thinks "AI", everything you wrote loses weight.

There’s a paper on this. Liang et al., 2023 (arXiv:2304.02819). They ran AI detectors against essays written by non-native English speakers and against essays written by US 8th-graders. The detectors consistently flagged the non-native writing as AI. The native writing went through clean. So the tool meant to level the field for ESL writers is actively penalizing them. AI for everyone, except the people who need it most.

That's the detector side. But the same fingerprint that triggers a detector triggers a human reader. We've all gotten good at this. Em dashes everywhere. "It's worth noting that." "Delve." Sycophancy openers. Hedging stacks. Tricolons stacked three deep. You see one or two of them and the brain switches modes.

So I took it on myself. No $9.99 a month for some "AI humanizer" SaaS. Something open-source that runs in my terminal, in my editor, inside my Claude Code session. And that actually works.

I called it unslop.

pip install unslop

Repo: github.com/MohamedAbdallah-14/unslop


What it actually does


I read 38 papers to build this

The full reading list is in docs/RESEARCH_AND_TECH.md. Five things I learned that ended up in the code:

  1. Warmth and reliability trade off. Ibrahim et al. (arXiv:2507.21919, 2025) trained models to be warmer and more empathetic. The result: error rates went up by 10–30 percentage points across safety-critical tasks. Warm-trained models were significantly more likely to validate the user’s incorrect beliefs, especially when the user sounded sad. So the “be friendlier” finetuning isn’t neutral. It’s making the output less reliable. unslop subtracts. It never adds.

  2. Sycophancy is the loudest tell. SycEval (arXiv:2502.08177) measured sycophantic agreement in 58.19% of cases across ChatGPT-4o, Claude-Sonnet, and Gemini-1.5-Pro. Gemini was worst at 62.47%, ChatGPT best at 56.71%. None of them under 50%. The first thing readers notice. The first thing unslop strips.

  3. Detectors read variance, not vocabulary. DivEye (TMLR 2026) shows modern detectors look at intra-document surprisal variance. Even if you swap synonyms, the variance fingerprint persists. So the rewrite has to engineer burstiness — mix sentence lengths across the paragraph — not just swap words.

  4. Verbal uncertainty beats numeric confidence. Tao et al. (arXiv:2505.23854, 2025) found that linguistic verbal uncertainty (“I think”, “probably”, “seems”) consistently outperforms token-probability and numeric-confidence methods on both calibration and discrimination. So unslop preserves real uncertainty in human form, not as confidence intervals.

  5. Voice imitation is harder than it looks. EMNLP 2025 Findings (arXiv:2509.14543) tested LLMs on imitating personal writing styles. They can approximate structured formats like news and email. They struggle on the nuanced, informal voice you’d find in blogs or forums. unslop’s voice-match mode is honest about being a best-effort approximation, not a clone.
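The subtractive move in point 2 can be sketched in a few lines: detect sycophantic openers and delete them rather than rephrase them. The function name and phrase list below are my own illustration of the idea, not unslop's actual API (its real rule set lives in the repo):

```python
import re

# Illustrative opener patterns; unslop's real rule set lives in the repo.
SYCOPHANTIC_OPENERS = [
    r"great question[!.]?",
    r"i'd be happy to help( with that)?[!.]?",
    r"that's a (great|fantastic) point[!.]?",
]

def strip_sycophancy(text: str) -> str:
    """Remove sycophantic openers from the start of a passage.
    Pure subtraction: nothing is rephrased, nothing is added."""
    for pattern in SYCOPHANTIC_OPENERS:
        text = re.sub(rf"^\s*{pattern}\s*", "", text, flags=re.IGNORECASE)
    return text
```

Text without an opener passes through untouched, which is the point: the tool only ever takes away.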
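Point 3 says a rewrite has to engineer burstiness, not just swap words. DivEye's real signal is surprisal variance from a language model; a crude, self-contained proxy is the variation in sentence lengths. This sketch is my own, not code from the repo, but it shows the kind of statistic a variance-aware rewrite tries to raise:

```python
import re
from statistics import mean, pstdev

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths in words.
    Near 0 means a uniform, AI-typical rhythm; human prose mixes
    short and long sentences and scores higher. A rough stand-in
    for the surprisal-variance fingerprint DivEye describes."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
mixed = "Short. This sentence, by contrast, runs on for quite a while before stopping. Done."
```

A synonym-swapped rewrite leaves this number unchanged, which is exactly why swapping words alone doesn't beat variance-based detectors.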

There are 33 more papers, and each one shaped a specific decision in the code. The point isn't to flex the bibliography. The point is that the standard "make AI sound human" advice you find online — add emoji, add warmth, add filler — is the opposite of what the research says works.

You subtract. You don't add.


Who this is for

If you're a developer who uses LLMs for emails, tickets, PR descriptions, comments — and you've felt that twinge when the output reads as AI — this is for you.

If you're an ESL writer who's tired of having your writing flagged as AI by detectors that can't tell the difference between you and a chatbot.

If you ship a lot of prose with Claude Code and you don't want every response to start with "Great question!"

If you want to humanize your output without paying $9.99/month to a SaaS site that's a worse wrapper around the same models you're already paying for.


The honest close

It's 2 AM as I'm finishing this post. I'm a solo developer with no following. There's a real chance this dies in the GitHub graveyard. There's another chance it takes years to find an audience.

I checked the repo this morning. Three people starred it yesterday. That made my week.

If you read this far and any part of it landed, here's the ask:

pip install unslop

Star the repo: github.com/MohamedAbdallah-14/unslop

If it helps you, tell one person.

That's the whole pitch.
