DEV Community

ckmtools
I Analyzed the Readability of 10 Popular Developer Documentation Sites

Good documentation is worth nothing if developers can't read it. I ran 10 popular developer docs pages through standard readability formulas—Flesch-Kincaid, Flesch Reading Ease, Gunning Fog—to see which ones actually write at a level humans can parse. The results were more consistent than I expected, with one glaring outlier.

The Methodology

Three formulas, all derived from word length and sentence length:

  • Flesch Reading Ease: 0–100 scale, higher is easier. 60–70 is considered standard/plain English. Anything below 30 is classified as very difficult (think academic journals or legal contracts).
  • Flesch-Kincaid Grade Level: Maps to US school grade levels. Grade 8 means an 8th grader can read it. Grade 12+ starts getting into college territory.
  • Gunning Fog Index: Similar grade-level metric, but also accounts for complex words (3+ syllables). Higher Fog = more jargon-dense text.
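All three formulas reduce to two ratios: words per sentence and syllables per word (plus a complex-word count for Fog). Here is a rough sketch of the standard published formulas, using a naive vowel-group syllable counter; textstat's syllable counting is more careful, so its scores will differ slightly:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: each run of vowels counts as one syllable.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = sum(1 for w in words if count_syllables(w) >= 3)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    return {
        # Flesch Reading Ease: higher = easier
        "flesch_reading_ease": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid: maps to US school grade
        "fk_grade": 0.39 * wps + 11.8 * spw - 15.59,
        # Gunning Fog: grade level, penalizes 3+ syllable words
        "gunning_fog": 0.4 * (wps + 100 * complex_words / len(words)),
    }
```

Short, monosyllabic sentences score above 100 on Reading Ease, which is why terse "getting started" prose tends to outperform conceptual reference pages.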

I fetched each page with requests, stripped code blocks, navigation, headers, footers, and sidebars with BeautifulSoup, then ran the remaining prose through Python's textstat library. Pages with fewer than 200 extractable words were skipped (two pages—Prisma and Express—fell into this category because their content is rendered client-side or split across tabs).
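For illustration, here is a dependency-free sketch of that extraction step. My actual run used requests + BeautifulSoup + textstat; this stdlib version just shows the idea — drop non-prose elements, join what remains, and skip pages under the 200-word threshold:

```python
from html.parser import HTMLParser
from typing import Optional

# Elements whose text should not count as prose.
SKIP_TAGS = {"pre", "code", "nav", "header", "footer", "aside", "script", "style"}

class ProseExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0        # >0 while inside a skipped element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def extract_prose(html: str, min_words: int = 200) -> Optional[str]:
    """Return the page's prose, or None if under the word threshold."""
    parser = ProseExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    return text if len(text.split()) >= min_words else None
```

A page that returns None here is what landed in the "N/A" rows below — the fetch succeeded, but there wasn't enough server-rendered prose to score.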

The Results

| Documentation | Flesch Reading Ease | FK Grade Level | Gunning Fog | Assessment |
| --- | --- | --- | --- | --- |
| GitHub REST API Docs | 56.2 | 8.8 | 11.0 | Fairly difficult |
| Stripe API Docs | 54.0 | 9.4 | 11.3 | Fairly difficult |
| Fastify Server Docs | 53.3 | 9.2 | 10.9 | Fairly difficult |
| Next.js Installation | 52.9 | 8.6 | 10.5 | Fairly difficult |
| Vercel Getting Started | 52.0 | 10.0 | 10.9 | Fairly difficult |
| Supabase Getting Started | 49.3 | 11.0 | 12.9 | Difficult |
| PlanetScale Docs | 44.6 | 11.9 | 14.0 | Difficult |
| Turso Docs | 19.0 | 18.7 | 21.1 | Very difficult |
| Prisma Getting Started | N/A | N/A | N/A | Skipped: insufficient extractable text (client-side rendered) |
| Express Hello World | N/A | N/A | N/A | Skipped: insufficient extractable text (<200 words) |

Key Findings

  • The cluster is tight at the top. GitHub, Stripe, Fastify, Next.js, and Vercel all scored within 4 points of each other on Flesch Reading Ease (52–56). That's not a coincidence—mature, well-funded projects converge on similar writing patterns over time.
  • None of them hit the 60–70 "standard" range. Every site I measured scored "fairly difficult" or worse. Developer docs as a category skew harder than general web writing, likely because of technical terminology dragging up syllable counts.
  • Turso is in a different league. A Flesch Reading Ease of 19.0 and a Gunning Fog of 21.1 puts it solidly in the "very difficult" category—closer to academic papers than product documentation. The page analyzed (turso.tech/docs) appears to be a hub page with dense navigation text and product terminology rather than explanatory prose.
  • PlanetScale's conceptual docs are the most jargon-heavy of the narrative pages. A Gunning Fog of 14.0 suggests heavy use of multi-syllable technical terms throughout. The "What is PlanetScale" page covers database branching, non-blocking schema changes, and connection pooling—terminology that drives the Fog score up.

What Surprised Me

I expected GitHub's docs to score near the top—they have a dedicated technical writing team and it shows. But I didn't expect Stripe to be that close behind. Stripe's API reference page pulls in a lot of domain-specific vocabulary (idempotency, webhooks, pagination cursors), yet the sentence structure is short and direct enough to offset the complexity.

Turso's score surprised me in the other direction. A 19.0 is genuinely unusual for a product landing/docs page. Some of it is measurement artifact—the page is thin on prose and heavy on navigation labels and feature names—but even accounting for that, it's an outlier. If Turso's conceptual overview pages score similarly, that's a usability gap worth addressing.

Why This Matters

Readability scores don't measure accuracy or completeness—you can have perfectly readable docs that are still wrong. But they do correlate with time-to-first-success: if your docs require a college reading level and your target developer is working late on a deadline, every increment of sentence complexity adds friction. The tight cluster at 52–56 among the top-tier tools suggests that's roughly where developer docs naturally land when written by experienced technical writers under editorial review.

The bigger gap is between that cluster (52–56) and the ideal range (60–70). No one in this sample hit plain-English territory. That's a realistic target for "getting started" and tutorial content, where the goal is onboarding rather than reference.


If you need to run this kind of analysis at scale—across docs sites, product pages, or user-generated content—the TextLens API (currently in early access waitlist at ckmtools.dev) handles text extraction and readability scoring via a single endpoint.
