I kept writing tweets, posting them, and getting 200 views. Same effort, wildly different outcomes. So I went to twitter/the-algorithm on GitHub to find out why.
Turns out X published exactly how they rank content. Replies are worth 27x a like. Your own reply to your own tweet? 150x. Bookmarks? 20x. External links? -50% reach. It's all in the source code.
I extracted 36 scoring rules from the algorithm and built a Chrome extension that grades your tweets in real time as you type.
## What it does
You open X.com. Start typing a tweet. A small overlay appears:
- Score: 72/100 (updating live as you type)
- Predicted reach: ~14,200 people
- Remove the link → 21,600
- Add an image → 19,600
- Both → 34,400
That's it. Know your reach before you post.
## The 36 rules
Every tweet is scored across 5 categories:
| Category | Rules | What it checks |
|---|---|---|
| Hook | 12 | Opening strength, open loops, contrarian claims, story openers, pattern interrupts |
| Structure | 5 | Length, hashtag/emoji spam, thread length, line breaks |
| Engagement | 2 | CTA presence, bookmark-worthy formats |
| Penalties | 9 | Engagement bait, AI slop, hedging, external links, combative tone, grammar |
| Bonuses | 7 | First-person voice, media, questions, sentiment, readability, surprise |
These aren't vibes. They're derived from the actual algorithm weights:
- Reply → 27x a like (twitter/the-algorithm)
- Self-reply → 150x a like (twitter/the-algorithm)
- Bookmark → 20x a like (twitter/the-algorithm)
- Media → 2x Earlybird boost
- External link → -30% to -50% reach
- 3+ hashtags → ~40% engagement drop
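Those weights can be read as a single weighted engagement sum. A minimal sketch, assuming a hypothetical `engagementScore` helper — the weight values are the ones quoted above from twitter/the-algorithm, but the code shape is illustrative, not the extension's actual implementation:

```typescript
// Relative weights from twitter/the-algorithm (a like = 1).
const WEIGHTS = {
  like: 1,
  reply: 27,      // a reply is worth 27x a like
  selfReply: 150, // the author replying to their own tweet
  bookmark: 20,
} as const;

interface EngagementCounts {
  likes: number;
  replies: number;
  selfReplies: number;
  bookmarks: number;
}

// Hypothetical helper: weighted sum of engagement signals.
function engagementScore(c: EngagementCounts): number {
  return (
    c.likes * WEIGHTS.like +
    c.replies * WEIGHTS.reply +
    c.selfReplies * WEIGHTS.selfReply +
    c.bookmarks * WEIGHTS.bookmark
  );
}
```

One self-reply outweighs a hundred likes under these weights, which is why the extension nudges you toward replying to your own tweets.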
## The reach prediction formula
```
predictedReach = baseReach
  * contentMultiplier  (score/50, so score 75 = 1.5x)
  * timeMultiplier     (peak = 1.25x, off-peak = 0.85x)
  * trendMultiplier    (matches trending = 1.15x)
  * mediaMultiplier    (image/video = 1.38x)
  * linkPenalty        (external link = 0.55x)
  * healthMultiplier   (account health 0.6-1.3x)
  * calibrationFactor  (auto-corrects from your own data)
```
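The formula translates directly into a small function. The multiplier constants below are the example values from the formula; the `ReachInputs` shape and function itself are a sketch, not the extension's actual code:

```typescript
interface ReachInputs {
  baseReach: number;
  score: number;          // 0-100 rule-based content score
  peakTime: boolean;
  matchesTrend: boolean;
  hasMedia: boolean;
  hasExternalLink: boolean;
  accountHealth: number;  // 0.6-1.3
  calibration: number;    // learned correction factor, starts at 1.0
}

function predictedReach(r: ReachInputs): number {
  return Math.round(
    r.baseReach *
    (r.score / 50) *                   // score 75 → 1.5x
    (r.peakTime ? 1.25 : 0.85) *
    (r.matchesTrend ? 1.15 : 1.0) *
    (r.hasMedia ? 1.38 : 1.0) *
    (r.hasExternalLink ? 0.55 : 1.0) *
    r.accountHealth *
    r.calibration
  );
}
```

Because everything is multiplicative, a single external link (0.55x) can wipe out the gain from adding media (1.38x) and then some.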
The calibration factor is the interesting part. After you post, ReachOS fetches your real metrics at 15-minute intervals, compares predicted vs actual, and adjusts the model. It gets more accurate the more you use it.
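One plausible way to implement that correction is an exponential moving average toward the actual/predicted ratio. This is my sketch of the idea, with an assumed learning rate — the real model may use a different update rule:

```typescript
// Nudge the calibration factor toward actual/predicted by `alpha` per
// observation; the real model may clamp outliers or weight by recency.
function updateCalibration(
  current: number,   // current calibration factor (starts at 1.0)
  predicted: number, // reach the model predicted for a tweet
  actual: number,    // reach observed from the metrics cron
  alpha = 0.2,       // learning rate; assumed value
): number {
  if (predicted <= 0) return current;
  const ratio = actual / predicted;
  return current * ((1 - alpha) + alpha * ratio);
}
```

If the model consistently overpredicts, the factor drifts below 1.0 and future predictions shrink accordingly; underprediction pushes it the other way.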
## X-Ray mode
Toggle it on and every tweet in your timeline gets a color-coded score pill. Red through purple. Scroll your feed and immediately see which tweets the algorithm would push.
I use this more than the composer scoring. You start to internalize the patterns fast.
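The score-to-pill mapping is straightforward. The red-through-purple scale is from the post; the exact thresholds here are my assumptions:

```typescript
// Hypothetical thresholds for the timeline score pills.
function pillColor(score: number): string {
  if (score < 30) return "red";
  if (score < 50) return "orange";
  if (score < 70) return "yellow";
  if (score < 85) return "green";
  return "purple";
}
```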
## AI features (optional, BYOK)
The core scoring works entirely client-side with zero API calls. But if you bring your own Anthropic API key, you get:
- Slop detection — 28 weighted patterns that flag AI-sounding language, plus Claude verification
- Hook analysis — 6-dimension assessment of your opening line
- Auto-optimize — 5 rounds of iterative rewriting, keeps the best version
- Self-reply generator — Creates a reply to your own tweet (that 150x algorithm boost)
No keys required for the base experience. No account needed. No data leaves your browser unless you opt in.
## Architecture
The extension watches the X.com DOM for the tweet composer. On every keystroke (debounced), the rules engine runs locally and updates the overlay. After 2 seconds of idle, it optionally calls the API for AI-powered suggestions.
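The keystroke handling is a standard debounce. This is a generic sketch of the pattern, not the extension's code:

```typescript
// Collapse a burst of calls into one call after `waitMs` of quiet --
// e.g. rerun the rules engine only once the user pauses typing.
function debounce<T extends unknown[]>(
  fn: (...args: T) => void,
  waitMs: number,
): (...args: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

The same mechanism with a longer wait (2 seconds instead of a keystroke-scale delay) would cover the idle trigger for the optional API call.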
The API is a Next.js app deployable on Vercel with a Neon PostgreSQL database. Four cron jobs handle metric fetching, weight learning, batch optimization, and forecast calibration.
```
Chrome Extension
├── Composer Detector (DOM watch)
├── Rules Engine (36 rules, instant, client-side)
├── Score Overlay (React)
└── X-Ray Mode (timeline pills)
        │
        ▼ (optional, after 2s idle)
Next.js API (Vercel)
├── /analyze (AI delta scoring)
├── /suggest (hook/CTA rewrites)
├── /account-health (X profile scoring)
└── Cron jobs (metrics, learning, calibration)
```
## Quick start
No server needed for basic usage:
```bash
git clone https://github.com/AytuncYildizli/reach-optimizer.git
cd reach-optimizer
pnpm install
pnpm --filter @reach/extension build
```
Load the `apps/extension/dist/` folder as an unpacked extension. Go to X.com and start typing.
For the full experience with AI and tracking, copy `.env.example`, add your keys, and run `pnpm dev`.
## What I learned building this
The algorithm is surprisingly transparent. X published the weights. Most people just never read them.
Links are reach killers. Everyone knows this intuitively, but seeing "-50% reach" quantified changes behavior fast. Put links in replies.
Self-replies are broken good. 150x a like is insane. The extension generates self-replies for you.
AI detection is the new spam filter. Negative sentiment toward AI-sounding content is a real penalty. The slop detector catches phrases like "delve into", "it's worth noting", "game-changer" before you post them.
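A minimal sketch of pattern-based slop detection: the three phrases are the examples from above, but the weights, the tiny pattern list, and the `slopScore` helper are illustrative (the actual detector uses 28 weighted patterns):

```typescript
// Each entry pairs a phrase pattern with a penalty weight.
const SLOP_PATTERNS: Array<[RegExp, number]> = [
  [/\bdelve into\b/i, 3],
  [/\bit'?s worth noting\b/i, 2],
  [/\bgame-?changer\b/i, 2],
];

// Sum the weights of every pattern that matches the draft.
function slopScore(text: string): number {
  return SLOP_PATTERNS.reduce(
    (sum, [re, weight]) => sum + (re.test(text) ? weight : 0),
    0,
  );
}
```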
Calibration matters more than the model. The base formula is rough. But after ~50 tweets of calibration data, predictions get within 20% of actual reach.
## Try it
GitHub: AytuncYildizli/reach-optimizer
Star it if it's useful. Issues and PRs welcome. The rules engine is in `packages/rules-engine` if you want to add rules or adjust weights.
Built this because I was mass-deleting draft tweets that flopped. Now I delete them before posting.