Bipin Rimal

Docs-as-Code: The Complete CI/CD Workflow (From Git to Production)

You know what docs-as-code is. I'm not going to explain it for the 900th time. Store docs in Git, write in Markdown, review through PRs, deploy through CI/CD. Got it. Moving on.

What nobody seems to write about is what a working pipeline actually looks like. The real config files, the real tradeoffs, the stuff that took us three iterations to get right. Every "docs-as-code guide" I've read either stops at the philosophy or gives you a toy example that falls apart the moment you have more than five pages.

This is the actual pipeline. Config files you can copy. Decisions explained so you understand why, not just what. Opinions included at no extra charge.

Repository Structure

Start with a layout that scales. Here's what works:

your-product/
├── docs/
│   ├── getting-started/
│   │   ├── quickstart.md
│   │   ├── installation.md
│   │   └── first-project.md
│   ├── guides/
│   │   ├── authentication.md
│   │   ├── configuration.md
│   │   └── deployment.md
│   ├── api-reference/
│   │   ├── endpoints.md
│   │   └── error-codes.md
│   ├── assets/
│   │   └── images/
│   └── index.md
├── .github/
│   └── workflows/
│       └── docs.yml
├── .vale.ini              # Style linting config
├── .lychee.toml           # Link checker config
├── cspell.json            # Spell checker config
└── .ekline/
    └── config.yaml        # EkLine config (if using)

Now, before you blindly copy this, let me explain the decisions, because they matter more than the structure.

Docs live in the same repo as code. Not a separate repo. This is the hill I will die on. When an engineer changes the authentication flow, the docs for authentication should be right there, in the same PR. Separate repos create a gap between "code changed" and "docs should change". That gap is where documentation debt lives. Every guide that tells you to use a separate docs repo is optimizing for the docs team's convenience at the expense of accuracy.

If your docs are already in a separate repo, don't panic. Everything in this guide still applies. But if you're starting fresh, co-locate them. You'll thank me when someone changes an API endpoint and the docs PR is part of the same review.

Group by user task, not by content type. getting-started/ is a user journey. api-reference/ is a content type. I'd love to tell you I'm perfectly consistent about this, but the truth is most real doc structures are a hybrid. The important thing: a new user should be able to look at the folder names and know where to go. If they can't, your IA needs work.

Config files live at the root. Not in a .config/ folder, not inside docs/. Root level. Why? Because every tool expects them there by default, and fighting tool defaults is a waste of your finite time on earth.


The Writing Workflow

Branching

Treat docs PRs like code PRs. Same workflow, same rigor:

git checkout -b docs/update-auth-guide
# write your changes
git add docs/guides/authentication.md
git commit -m "Update auth guide for OAuth2 PKCE flow"
git push origin docs/update-auth-guide

Use a docs/ prefix for documentation branches. Two reasons: it makes filtering your PR list trivial, and you can apply branch-specific CI rules (like running docs checks only on docs/* branches). Small thing, surprisingly useful.
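For example, a workflow can be scoped to those branches with a `branches` filter. A minimal sketch (the workflow name and job body are placeholders):

```yaml
# Runs only on pushes to docs/* branches (illustrative example)
name: Docs branch checks
on:
  push:
    branches:
      - 'docs/**'
jobs:
  quick-lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...docs-only checks go here
```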

PR Template

Create .github/PULL_REQUEST_TEMPLATE/docs.md:

## What changed
<!-- Brief description of documentation changes -->

## Why
<!-- What triggered this update? Feature change, user feedback, bug fix? -->

## Pages affected
<!-- List the doc pages modified -->

## Checklist
- [ ] Tested all code examples
- [ ] Verified all links work
- [ ] Screenshots are current (if applicable)
- [ ] Reviewed on mobile/responsive view

The "Why" field is the most important part of this template, and the one people skip most often. "Updated the auth docs" tells a reviewer nothing. "Updated auth docs because we switched from API keys to OAuth2 in v3.2" tells them everything. It connects the docs change to the trigger, and six months from now when someone asks "why did we rewrite this page," the answer is right there in the PR.


Automated Quality Checks

This is where docs-as-code stops being a philosophy and starts being a system. Every PR that touches documentation triggers a pipeline of automated checks. Here's each stage, with the configs we actually use.

Stage 1: Spell Checking

The least glamorous check and the one that catches the most embarrassing mistakes:

# .github/workflows/docs.yml (spell check job)
spell-check:
  runs-on: ubuntu-latest
  # No per-job `if` needed here: the workflow-level `on.pull_request.paths`
  # filter (see the full pipeline below) already scopes runs to docs changes.
  # Note: github.event.pull_request.changed_files is a count, not a list,
  # so it can't be used to test which paths changed.
  steps:
    - uses: actions/checkout@v4
    - uses: streetsidesoftware/cspell-action@v6
      with:
        files: 'docs/**/*.md'
        incremental_files_only: true

Your cspell.json needs a product-specific dictionary. Without one, CSpell will flag every product name, every technical term, and every acronym, and your engineers will disable the check within a week:

{
  "version": "0.2",
  "language": "en",
  "words": [
    "your-product-name",
    "OAuth", "PKCE", "webhook", "webhooks",
    "authn", "authz", "kubectl"
  ],
  "ignorePaths": ["docs/assets/**"]
}

Important: Start with incremental_files_only: true. I cannot stress this enough. Running spell check on all files the first time will generate hundreds of findings. Nobody will fix them. The check will become noise. Check only changed files, build the dictionary organically, and clean up old files as you touch them.

Stage 2: Link Validation

Broken links are the cockroaches of documentation. For every one you see, there are ten you don't:

# Link check job
link-check:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: lycheeverse/lychee-action@v2
      with:
        args: >-
          --no-progress
          --exclude-mail
          --exclude 'localhost'
          --exclude '127.0.0.1'
          --exclude 'example.com'
          'docs/**/*.md'
        fail: true

Lychee is fast (Rust-based, concurrent) and handles most edge cases. Your .lychee.toml config handles the rest:

exclude = ["localhost", "127.0.0.1", "example.com"]
accept = [200, 204, 301, 429]
timeout = 10
max_retries = 2

Accept 429 (rate limited) as OK. Some sites aggressively rate-limit automated checkers. Without this, you'll get false failures from GitHub, npm, and basically any popular site, and you'll chase ghosts for an hour before you figure out it's not your links that are broken; it's the target servers telling you to slow down. Ask me how I know.

Stage 3: Style Guide Enforcement

This is where consistency happens at scale. Two options, both legitimate:

Option A: Vale (DIY, full control)

# Style lint job
style-lint:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: errata-ai/vale-action@reviewdog
      with:
        files: docs/
        reporter: github-pr-review

Option B: EkLine (managed — includes link checking and style in one action)

# Docs review job
docs-review:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: ekline-io/ekline-github-action@v6
      with:
        content_dir: ./docs
        ek_token: ${{ secrets.EKLINE_TOKEN }}
        github_token: ${{ secrets.GITHUB_TOKEN }}
        reporter: github-pr-review

Full disclosure: EkLine is ours. The honest difference: Vale gives you granular control over every rule and costs nothing. EkLine bundles style, links, and terminology checking into one action and manages the rules for you. If you use EkLine, skip the separate link check job. That is already included. If you use Vale, keep Lychee for links.

Pick based on your situation, not my sales pitch.


The Full Pipeline

Here's the complete workflow file. Copy this, adjust the paths, and you have a working docs-as-code pipeline:

name: Documentation Quality

on:
  pull_request:
    paths:
      - 'docs/**'
      - '*.md'
      - '!README.md'

jobs:
  spell-check:
    name: Spell Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: streetsidesoftware/cspell-action@v6
        with:
          files: 'docs/**/*.md'
          incremental_files_only: true

  link-check:
    name: Link Validation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: lycheeverse/lychee-action@v2
        with:
          args: --no-progress --exclude-mail 'docs/**/*.md'
          fail: true

  style-lint:
    name: Style Guide
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: errata-ai/vale-action@reviewdog
        with:
          files: docs/
          reporter: github-pr-review

  # Optional: build and preview
  build:
    name: Build Preview
    runs-on: ubuntu-latest
    needs: [spell-check, link-check, style-lint]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - run: npm run build

All three quality jobs run in parallel, which is the point. The build step waits for all checks to pass before generating a preview. On most repos, the whole thing finishes in under a minute.


The Human Review Process

Here's the part that trips people up: automated checks don't replace human review. They change what humans review.

What the machine handles:

  • Spelling errors
  • Broken links
  • Style guide violations (headings, terminology, voice, tone)
  • Formatting consistency

What humans handle:

  • Is the content actually accurate?
  • Is the explanation clear to someone who doesn't already understand it?
  • Are the code examples correct and tested? (Machines can check syntax. They can't check if your example actually solves the problem it claims to.)
  • Is anything missing?
  • Does the information architecture make sense?

This separation is the real value of docs-as-code. Not "docs in Git". That's just storage. The value is this: when a reviewer opens a PR and the automated checks have already passed, they know the formatting is clean, the links work, and the style guide is followed. They can focus entirely on whether the content is right.

That's a fundamentally different (and much more useful) review conversation.


Deployment

Once a PR merges, deploy automatically. I'll keep this section short because deployment is well-documented elsewhere and the choices are straightforward:

GitHub Pages (simplest, free):

deploy:
  runs-on: ubuntu-latest
  if: github.ref == 'refs/heads/main'
  needs: [build]
  steps:
    - uses: actions/checkout@v4
    - run: npm run build
    - uses: peaceiris/actions-gh-pages@v3
      with:
        github_token: ${{ secrets.GITHUB_TOKEN }}
        publish_dir: ./build

Vercel/Netlify: Connect your repo, get preview URLs on every PR, auto-deploy on merge. Both auto-detect Docusaurus, Next.js, Astro, and most doc frameworks. Setup takes about five minutes. The docs for both platforms cover this better than I can.

Docs platforms (Mintlify, GitBook, ReadMe): They have their own deployment pipelines: push to a branch, they rebuild. Your CI checks still run; they just validate before the platform builds.

Pick one and move on. The deployment step is the least interesting part of this pipeline.


Common Mistakes (The Part You Actually Came Here For)

If you've set up a docs-as-code pipeline before, you probably scrolled straight to this section. I respect that. Here's what goes wrong:

Running all checks on all files from day one. You will generate 500 findings on existing docs. Nobody will fix them. The checks will become noise that everyone ignores. This is the single most common mistake, and I've watched teams make it repeatedly. Start incrementally: check only changed files. Clean up existing docs gradually, file by file, as you touch them for other reasons.

Blocking PRs on warnings. Set style violations to warning for the first month. Let people see them, get used to the feedback loop, understand why the rules exist. Once the team accepts the system, upgrade critical rules to error. If you start with error, you'll get a revolt. I promise.
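With Vale, this staged rollout is mostly a one-line setting. A sketch (the styles path and the commented rule name are illustrative):

```ini
# .vale.ini -- permissive first month, then tighten
StylesPath = styles
MinAlertLevel = warning        ; everything surfaces as a warning initially

[docs/**/*.md]
BasedOnStyles = Vale
; Later, promote the rules the team has accepted:
; Vale.Spelling = error
```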

Separate docs repo without a trigger. If docs live in a separate repo from code, you need a way to signal "the product changed, docs might need updating." A webhook, a GitHub Action that runs on product repo changes, a Slack notification. Something. Anything. Without this signal, the docs repo slowly diverges from reality and nobody notices until a customer complains. This is how documentation debt starts: not with bad writing, but with missing signals.
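One concrete option is a `repository_dispatch` ping from the product repo to the docs repo. A sketch, assuming GitHub Actions; the repo name and the `DOCS_REPO_TOKEN` secret are placeholders you'd create:

```yaml
# In the product repo: ping the docs repo when source merges to main
name: Notify docs repo
on:
  push:
    branches: [main]
    paths: ['src/**']
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - uses: peter-evans/repository-dispatch@v3
        with:
          token: ${{ secrets.DOCS_REPO_TOKEN }}   # PAT with access to the docs repo
          repository: your-org/your-docs-repo
          event-type: product-changed

# The docs repo then listens with:
#   on:
#     repository_dispatch:
#       types: [product-changed]
```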

No code example testing. The most common docs bug isn't a typo or a broken link. It's a code example that doesn't work. The developer copies your snippet, gets an error, and blames your product. Not the docs. If your docs include code snippets, consider extracting and testing them. Tools like mdx-js or custom scripts can help. At minimum, have someone actually run the examples before publishing. Yes, every time.
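A minimal sketch of the extraction idea: pull fenced `python` blocks out of a Markdown file and at least syntax-check them in CI. This is an assumption about how you might wire it up, not a specific tool's API:

```python
import re

FENCE = "`" * 3  # backtick fence marker, built here to avoid nesting issues
# Matches the body of fenced python code blocks in a Markdown string.
FENCE_RE = re.compile(FENCE + r"python\n(.*?)" + FENCE, re.DOTALL)

def extract_python_snippets(markdown: str) -> list[str]:
    """Return every fenced python snippet found in the text."""
    return [body.strip() for body in FENCE_RE.findall(markdown)]

doc = f"# Quickstart\n\n{FENCE}python\nprint(1 + 1)\n{FENCE}\n"

for snippet in extract_python_snippets(doc):
    # compile() catches syntax errors without executing the snippet;
    # actually running examples belongs in a sandboxed CI job.
    compile(snippet, "<doc-snippet>", "exec")
    print("checked:", snippet)
```

Syntax checking won't tell you the example solves the problem it claims to; for that, you still need to run it.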

Ignoring the contributor experience. If your CI takes 10 minutes to run, contributors won't wait. They'll push, context-switch, forget, and your PR will go stale. Keep the docs pipeline under 2 minutes. Run checks in parallel. Use incremental checking. The fastest way to kill a docs-as-code culture is to make the feedback loop slow.


Start Here

If you're setting up docs-as-code from scratch, don't try to build the whole pipeline in a day. Here's the order that works:

Week 1: Set up the repo structure. Add Lychee for link checking. This catches real problems immediately with zero configuration. You'll find broken links you didn't know existed. It's both satisfying and slightly alarming.

Week 2: Add CSpell. Build your product dictionary as you go. Add words when they're flagged incorrectly. The first few days will be annoying. By day five, it's useful.

Week 3: Add style enforcement (Vale or EkLine). Start with a preset style guide. Warnings only. Do not start with errors. I will say this as many times as it takes.

Week 4: Review the findings from week 3. Tune the rules. Disable noisy ones, tighten important ones. Upgrade key rules from warning to error. Now you have a system.

Ongoing: Clean up existing docs gradually. Don't create a "fix all style issues" ticket. That ticket will sit in your backlog for eternity and make everyone sad. Fix issues as you touch files for other reasons. This is how real documentation quality improves: incrementally, consistently, without heroics.

The pipeline is never "done." It evolves with your docs, your team, and your style guide. The important thing is that it exists. And that it runs on every PR, quietly catching the stuff that used to require a human to notice.

That's what docs-as-code actually means. Not "docs in Git." Docs with the same quality guarantees we give to code.


What does your docs pipeline look like? I'd love to compare setups — especially the weird edge cases. Drop a comment or find me at ekline.io.
