We're a small startup. And documentation is literally our product — we build tools to help other teams keep their docs accurate. So if our own docs have broken links, inconsistent terminology, or sloppy formatting, that's not just embarrassing. It's the kind of credibility problem that makes potential customers close the tab.
Here's the thing nobody tells you about running a docs-focused company: you become absurdly paranoid about your own documentation. Every broken link feels personal. Every style inconsistency keeps you up at night. But we're a small team — we can't spend our days playing copy editor on every PR.
So we automated the stuff that shouldn't require a human brain. Here are the five GitHub Actions that run on every docs PR in our repo. They're all free for open source, and together they catch the vast majority of issues that used to eat up our review cycles.
## 1. Catch Broken Links Before They Go Live
The problem nobody wants to admit: You write docs, link to other pages, API references, external resources. You feel good about it. Three months later, half those links are dead. Someone moved a page, a third-party site decided to restructure their entire URL scheme (thanks, I love re-doing work), or you linked to a branch that got merged and deleted.
Broken links are the fastest way to make users distrust your documentation. One dead link and they start wondering what else is wrong. It's like finding a cockroach in a restaurant — you know there are more.
The tool: Lychee — a Rust-based link checker that's fast enough to run on every PR without making you want to throw your laptop.
```yaml
name: Check Links

on:
  pull_request:
    paths:
      - 'docs/**'
      - '*.md'

jobs:
  link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check links
        uses: lycheeverse/lychee-action@v2
        with:
          args: >-
            --no-progress
            --exclude-mail
            'docs/**/*.md'
          fail: true
```
Why Lychee and not the Node-based alternatives: Speed. Lychee is written in Rust and checks links concurrently. On our docs repo (~200 pages), it finishes in under 30 seconds. We tried markdown-link-check first — 3+ minutes. When you're running this on every PR, that difference matters. Nobody wants to wait for CI when they just fixed a typo.
We learned this the hard way: Add a .lychee.toml config to exclude URLs that are expected to fail in CI. We spent a very confusing afternoon debugging "broken" links that were just localhost examples in code snippets:
```toml
exclude = [
  "localhost",
  "127.0.0.1",
  "example.com"
]
```
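Depending on your lychee version, a dotfile-named config may not be picked up automatically, so it can be safer to point the action at it explicitly. A sketch of the same step with an explicit `--config` flag (the flag is part of lychee's CLI; the path matches the file above):

```yaml
# Same link-check step, but passing the config file explicitly
# so the exclude list is guaranteed to apply in CI.
- name: Check links
  uses: lycheeverse/lychee-action@v2
  with:
    args: >-
      --config .lychee.toml
      --no-progress
      --exclude-mail
      'docs/**/*.md'
    fail: true
```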
Time saved: About 2 hours per week of "why does this link 404" tickets. Which, honestly, were the most soul-crushing tickets to deal with.
## 2. Enforce Your Style Guide Automatically
The problem: Your style guide says "use second person" and "prefer active voice." Your docs say "the configuration can be completed by the user." Everyone knows the rules. Nobody catches everything. Including you — especially at 4pm on a Friday.
Style guide violations are like dishes in the sink. Nobody notices one. But after a month, your kitchen is a health hazard and nobody wants to cook. After a year, your docs read like they were written by 20 different people — because they were.
The tool: Vale — a prose linter with presets for Google, Microsoft, and other style guides. Think ESLint, but for words.
```yaml
name: Prose Lint

on:
  pull_request:
    paths:
      - 'docs/**'

jobs:
  vale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Vale lint
        uses: errata-ai/vale-action@reviewdog
        with:
          files: docs/
          reporter: github-pr-review
          vale_flags: "--minAlertLevel=warning"
```
The key configuration is the .vale.ini file in your repo root:
```ini
StylesPath = .vale/styles
MinAlertLevel = warning

[docs/*.md]
BasedOnStyles = Google
```
This uses Google's style guide out of the box. Want Microsoft's instead? Change Google to Microsoft. Want your own custom rules? Create YAML files in .vale/styles/YourCompany/. Fair warning: you will go down a rabbit hole. It's kind of fun, though.
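As a taste of what a custom rule looks like, here's a minimal sketch of a Vale `substitution` rule. The `YourCompany` style name and the word swaps are made up for illustration:

```yaml
# .vale/styles/YourCompany/Terminology.yml
# A Vale "substitution" rule: flags the left-hand terms
# and suggests the right-hand replacements inline.
extends: substitution
message: "Use '%s' instead of '%s'."
level: warning
ignorecase: true
swap:
  repo: repository
  sign on: sign in
```

To enable it, add the style alongside the preset in `.vale.ini`: `BasedOnStyles = Google, YourCompany`.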
Honest caveat: Vale is powerful but it has a learning curve. Writing custom rules is its own skill, and it took us a few iterations to get our config dialed in without drowning in false positives. If you want something that works out of the box with less yak-shaving, that's where tools like EkLine come in (see #5). But if you're the type who enjoys fine-grained control and building your own system, Vale is the gold standard. No question.
Time saved: About 1-2 hours per week of leaving "please change this to active voice" comments. My least favorite type of review comment to leave, and probably everyone's least favorite to receive.
## 3. Kill Typos Without a Human Proofreader
The problem: Traditional spell checkers and technical writing are mortal enemies. They flag kubectl, OAuth, GraphQL, and every product name as errors. So people turn them off. And then actual typos ship. And then you're the company whose security doc says "authetication."
(Don't laugh. We caught that one in a PR. Twice.)
Documentation with typos reads as careless. And in code examples, a typo isn't just embarrassing — it means the example literally doesn't work. A developer copies your snippet, gets an error, and blames your product. Not the typo. Your product.
The tool: CSpell — a spell checker that actually understands technical terminology, so it won't flag every third word in your API docs.
```yaml
name: Spell Check

on:
  pull_request:
    paths:
      - 'docs/**'

jobs:
  cspell:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Spell check
        uses: streetsidesoftware/cspell-action@v6
        with:
          files: 'docs/**/*.md'
          incremental_files_only: true
```
The magic is in cspell.json, where you define your product dictionary:
```json
{
  "version": "0.2",
  "language": "en",
  "dictionaries": ["tech-terms"],
  "dictionaryDefinitions": [
    {
      "name": "tech-terms",
      "path": ".cspell/tech-terms.txt",
      "addWords": true
    }
  ],
  "words": [
    "EkLine", "ekline",
    "Docusaurus", "Mintlify", "GitBook",
    "devops", "cicd"
  ],
  "ignorePaths": [
    "node_modules/**",
    "*.min.js"
  ]
}
```
Do this or you'll hate yourself: Use incremental_files_only: true to only check changed files. We made the mistake of running it on everything the first time. Hundreds of findings. Nobody wanted to touch it. It sat in a ticket for three weeks before we wised up and switched to incremental-only. Build the dictionary over time — don't try to boil the ocean.
Time saved: About 30 minutes per week, plus the priceless benefit of never shipping "authetication" in a security doc. Again.
## 4. Stop Docs PRs From Going Stale
The problem: Someone opens a docs PR. Reviewer is busy. A week passes. The contributor moves on to something else. The PR sits for two months, accumulates merge conflicts, and eventually gets closed with a sad little "closing due to inactivity" comment.
We've all been on both sides of this. It's demoralizing for contributors, and for small teams it creates this invisible drag — every time you open the PR list, you have to mentally re-process "is this still relevant?" for PRs that should have been closed weeks ago.
The tool: Stale — GitHub's official action for managing inactive issues and PRs. Simple, effective, slightly passive-aggressive in the best way.
```yaml
name: Stale Docs PRs

on:
  schedule:
    - cron: '0 9 * * MON' # Every Monday at 9am UTC

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-pr-message: >
            This PR has been inactive for 14 days.
            If the changes are still needed, please rebase
            and request a re-review. Otherwise, it will be
            closed in 7 days.
          stale-pr-label: 'stale'
          days-before-pr-stale: 14
          days-before-pr-close: 7 # counted from when the stale label is applied
          only-labels: 'documentation'
```
Why this matters more than it sounds: Before we added this, our PR list was a graveyard. Eight open PRs, four of them from two months ago. Every Monday's standup included someone asking "should we close these?" and nobody committing to it. Now the bot handles it. The mental overhead just... disappeared.
One thing we got wrong at first: Set only-labels: 'documentation' so this only affects docs PRs. Code PRs have different lifecycle expectations. We didn't scope it initially and accidentally auto-closed a feature PR that was waiting on a design review. That was a fun conversation.
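If some docs PRs legitimately need to wait (say, on a design or legal review), the action can be told to skip them. A sketch using `actions/stale`'s `exempt-pr-labels` input; the label names here are just examples:

```yaml
with:
  # PRs carrying any of these labels are never marked stale,
  # even if they sit past the inactivity threshold.
  exempt-pr-labels: 'blocked,awaiting-design'
  only-labels: 'documentation'
```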
Time saved: About 1 hour per week of PR grooming and "is this still needed?" purgatory.
## 5. Automated Docs Review With Inline PR Comments
The problem: Actions #1-3 each solve one piece of the puzzle. But running three separate actions means three separate configurations, three sets of output to parse, and no single view of "how's this PR's documentation quality, overall?"
What if one action could check your style guide, validate links, flag terminology issues, and give you a quality score — all as inline PR comments, right on the lines where the issues are?
The tool: EkLine — full disclosure, this is ours. I'll keep this brief and honest.
```yaml
name: Documentation Review

on:
  pull_request:
    paths:
      - 'docs/**'
      - '*.md'

jobs:
  docs-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: EkLine Docs Review
        uses: ekline-io/ekline-github-action@v6
        with:
          content_dir: ./docs
          ek_token: ${{ secrets.EKLINE_TOKEN }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
          reporter: github-pr-review
```
It analyzes changed files and leaves inline review comments directly on the PR — style guide compliance, link validity, terminology consistency, quality score per file. Not a wall of CI output. Comments on the exact lines where the issues are.
When to use EkLine vs. the DIY stack:
| | DIY Stack | EkLine |
|---|---|---|
| Setup time | Hours (configure each tool) | Minutes (one action, managed rules) |
| Customization | Full control over every rule | Presets + custom config |
| Cost | Free (all open source) | Free for public repos, paid for private |
| Output | Separate CI logs per tool | Unified inline PR comments |
| Maintenance | You maintain configs + updates | Managed |
| Best for | Teams who want granular control | Teams who want it working now |
We obviously eat our own dogfood — EkLine runs on every PR we open. But the honest truth is: if you enjoy building your own toolchain and you want total control, the DIY stack (Vale + Lychee + CSpell) is excellent. If you're a small team and you'd rather spend your time writing docs than configuring linters, that's what we built EkLine for.
Time saved: About 3-4 hours per week of mechanical review work.
## The Combined Workflow
You don't have to pick just one. Here's a minimal combined workflow:
```yaml
name: Docs Quality

on:
  pull_request:
    paths:
      - 'docs/**'
      - '*.md'

jobs:
  spell-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: streetsidesoftware/cspell-action@v6
        with:
          files: 'docs/**/*.md'
          incremental_files_only: true

  link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: lycheeverse/lychee-action@v2
        with:
          args: --no-progress --exclude-mail 'docs/**/*.md'
          fail: true

  docs-review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ekline-io/ekline-github-action@v6
        with:
          content_dir: ./docs
          ek_token: ${{ secrets.EKLINE_TOKEN }}
          github_token: ${{ secrets.GITHUB_TOKEN }}
          reporter: github-pr-review
```
The three jobs run in parallel. Most PR builds finish in under a minute. Contributors get feedback before a human reviewer even opens the PR.
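One optional refinement: if contributors push several times in quick succession, GitHub Actions' built-in `concurrency` setting can cancel superseded runs instead of queueing them (the group name here is arbitrary):

```yaml
# Add at the top level of the workflow file.
# Cancels in-flight runs for the same branch/PR when a new push arrives.
concurrency:
  group: docs-quality-${{ github.ref }}
  cancel-in-progress: true
```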
## What This Actually Looks Like in Practice
Before we set this up, our docs PR review went like this:
- Contributor opens PR
- Waits 1-3 days for reviewer (we're a small team, we're busy)
- Reviewer leaves 8 comments — 3 about style, 2 about links, 1 typo, 2 about actual content
- Contributor fixes, pushes again
- Reviewer re-reviews (another day)
- Merged on day 5
Now:
- Contributor opens PR
- Automated checks run in 60 seconds
- Contributor sees inline feedback, fixes mechanical issues themselves
- Reviewer sees clean PR, focuses on one question: "Is this content accurate and helpful?"
- Merged same day
The biggest shift isn't time saved — it's who fixes what. Contributors fix their own mechanical issues because the feedback is instant and specific. It's not a person telling them "you got this wrong." It's a bot, and somehow that stings less. Reviewers stop being the grammar police and start being the content experts they were hired to be.
That's the real win. Not the hours. The sanity.
## Start With One, Add More Later
If you're starting from zero, don't set up all five on day one. You'll spend more time configuring tools than writing docs, and that defeats the entire point.
Here's the order I'd recommend:
1. Start with Lychee (link checking). Broken links are the most common and most damaging docs issue. Basically zero configuration. Immediate value.
2. Add CSpell once you've built a product dictionary. This takes about a week of "add to dictionary" clicks before it stops being annoying and starts being useful.
3. Add Vale or EkLine when you have a style guide you actually want to enforce. If you don't have a style guide yet, pick Google's and customize from there.
4. Add Stale when your PR backlog grows beyond what you can track in your head. For us, that was about PR number six.
Each one is useful on its own. Together, they're a system. But a system you build up over time — not one you configure in a weekend and then never touch again (because that system will rot just like your docs).
What's in your docs CI pipeline? We're always looking for new tools to try. Drop a comment or reach out — we genuinely want to know what's working for other teams.
About the author: Bipin works on documentation tooling at EkLine — a small team building documentation quality tools. He's interested in how teams scale documentation quality without scaling review burden. Check out our public repos at github.com/ekline-io/ekline-github-action.